Science.gov

Sample records for algorithm showed good

  1. Why is Boris Algorithm So Good?

    SciTech Connect

Qin, Hong; et al.

    2013-03-03

Due to its excellent long-term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  2. Why is Boris algorithm so good?

    SciTech Connect

Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

Due to its excellent long-term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
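The particle-advance scheme the two records above analyze can be sketched as the standard Boris push: a half electric kick, a magnetic rotation, a second half kick, then a drift. This is a minimal illustration in normalized units (the helper name `boris_push` and the argument layout are assumptions, not from the records); in a pure magnetic field the rotation step conserves kinetic energy exactly, which is the behavior the abstracts highlight.

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One Boris step: half electric kick, magnetic rotation,
    half electric kick, then a position drift (normalized units)."""
    v_minus = v + 0.5 * q_m * dt * E           # first half acceleration
    t = 0.5 * q_m * dt * B                     # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)    # exact-magnitude rotation
    v_new = v_plus + 0.5 * q_m * dt * E        # second half acceleration
    x_new = x + dt * v_new
    return x_new, v_new
```

With E = 0 the speed |v| is unchanged to machine precision, reflecting the phase-space-volume conservation discussed above.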

  3. Can You Show You Are a Good Lecturer?

    ERIC Educational Resources Information Center

    Wood, Leigh N.; Harding, Ansie

    2007-01-01

    Measurement of the quality of teaching activities is becoming increasingly important since universities are rewarding performance in terms of promotion, awards and bonuses and research is no longer the only key performance indicator. Good teaching is not easy to identify and measure. This paper specifically deals with the issue of good teaching in…

  4. Winners show the way to good management in health care.

    PubMed

    Schwefel, D; Pons, M C

    1994-01-01

    To stimulate resourcefulness in the health care services of the Philippines, the German Agency for Technical Cooperation (GTZ) organized a competition to discover and publicize examples of good management. The results provide a rich fund of new ideas. PMID:7999220

  5. Cationorm shows good tolerability on human HCE-2 corneal epithelial cell cultures.

    PubMed

    Kinnunen, Kati; Kauppinen, Anu; Piippo, Niina; Koistinen, Arto; Toropainen, Elisa; Kaarniranta, Kai

    2014-03-01

    mitochondrial metabolism to 73% with Cationorm and 53% with BAK from that of the control cells after 30 min exposure in MTT assay. BAK was the only test compound having clear adverse effects on the cell number and metabolism in CCK-8 assay. The activity of caspase-3 did not show significant differences between the groups. Inflammatory response after exposure to Cationorm was significantly lower than after exposure to BAK. There were no significant differences in NF-κB activity between the groups. Diluted Cationorm and Systane with polyquaternium-1/polidronium chloride 0.001% showed good tolerability on HCE-2 cells and thereby provide a clear improvement when compared to BAK-containing eye drop formulations. PMID:24462278

  6. You Showed Your Whiteness: You Don't Get a "Good" White People's Medal

    ERIC Educational Resources Information Center

    Hayes, Cleveland; Juarez, Brenda G.

    2009-01-01

    The White liberal is a person who finds themselves defined as White, as an oppressor, in short, and retreats in horror from that designation. The desire to be and to be known as a good White person stems from the recognition that Whiteness is problematic, recognition that many White liberals try to escape by being demonstrably different from…

  7. Nonoperatively treated forearm shaft fractures in children show good long-term recovery

    PubMed Central

    Sinikumpu, Juha-Jaakko; Victorzon, Sarita; Antila, Eeva; Pokka, Tytti; Serlo, Willy

    2014-01-01

    Background and purpose — The incidence of forearm shaft fractures in children has increased and operative treatment has increased compared with nonoperative treatment in recent years. We analyzed the long-term results of nonoperative treatment. Patients and methods — We performed a population-based age- and sex-matched case-control study in Vaasa Central Hospital, concerning fractures treated in the period 1995–1999. There were 47 nonoperatively treated both-bone forearm shaft fractures, and the patients all participated in the study. 1 healthy control per case was randomly selected and evaluated for comparison. We analyzed clinical and radiographic outcomes of all fractures at a mean of 11 (9–14) years after the trauma. Results — The main outcome, pronosupination of the forearm, was not decreased in the long term. Grip strength was also equally as good as in the controls. Wrist mobility was similar in flexion (85°) and extension (83°) compared to the contralateral side. The patients were satisfied with the outcome, and pain-free. Radiographally, 4 cases had radio-carpal joint degeneration and 4 had a local bone deformity. Interpretation — The long-term outcome of nonoperatively treated both-bone forearm shaft fractures in children was excellent. PMID:25238437

  8. Oxygen isotopes in tree rings show good coherence between species and sites in Bolivia

    NASA Astrophysics Data System (ADS)

    Baker, Jessica C. A.; Hunt, Sarah F. P.; Clerici, Santiago J.; Newton, Robert J.; Bottrell, Simon H.; Leng, Melanie J.; Heaton, Timothy H. E.; Helle, Gerhard; Argollo, Jaime; Gloor, Manuel; Brienen, Roel J. W.

    2015-10-01

    A tree ring oxygen isotope (δ18OTR) chronology developed from one species (Cedrela odorata) growing in a single site has been shown to be a sensitive proxy for rainfall over the Amazon Basin, thus allowing reconstructions of precipitation in a region where meteorological records are short and scarce. Although these results suggest that there should be large-scale (> 100 km) spatial coherence of δ18OTR records in the Amazon, this has not been tested. Furthermore, it is of interest to investigate whether other, possibly longer-lived, species similarly record interannual variation of Amazon precipitation, and can be used to develop climate sensitive isotope chronologies. In this study, we measured δ18O in tree rings from seven lowland and one highland tree species from Bolivia. We found that cross-dating with δ18OTR gave more accurate tree ring dates than using ring width. Our "isotope cross-dating approach" is confirmed with radiocarbon "bomb-peak" dates, and has the potential to greatly facilitate development of δ18OTR records in the tropics, identify dating errors, and check annual ring formation in tropical trees. Six of the seven lowland species correlated significantly with C. odorata, showing that variation in δ18OTR has a coherent imprint across very different species, most likely arising from a dominant influence of source water δ18O on δ18OTR. In addition we show that δ18OTR series cohere over large distances, within and between species. Comparison of two C. odorata δ18OTR chronologies from sites several hundreds of kilometres apart showed a very strong correlation (r = 0.80, p < 0.001, 1901-2001), and a significant (but weaker) relationship was found between lowland C. odorata trees and a Polylepis tarapacana tree growing in the distant Altiplano (r = 0.39, p < 0.01, 1931-2001). This large-scale coherence of δ18OTR records is probably triggered by a strong spatial coherence in precipitation δ18O due to large-scale controls. These results

  9. An algorithm for the contextual adaption of SURF octave selection with good matching performance: best octaves.

    PubMed

    Ehsan, Shoaib; Kanwal, Nadia; Clark, Adrian F; McDonald-Maier, Klaus D

    2012-01-01

    Speeded-Up Robust Features is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance. PMID:21712160
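The record does not spell out the authors' selection criterion, but the general idea of discarding scale-space octaves based on image properties can be sketched as a greedy trade-off between expected matches and computational cost. Everything here (the function name, the per-octave cost model, the budget) is an illustrative assumption, not the paper's "best octaves" algorithm.

```python
def best_octaves(match_counts, costs, budget):
    """Greedy sketch: keep the octaves with the best expected
    matches-per-unit-cost until the compute budget is exhausted."""
    order = sorted(range(len(match_counts)),
                   key=lambda i: match_counts[i] / costs[i],
                   reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:  # octave fits in the budget
            chosen.append(i)
            spent += costs[i]
    return sorted(chosen)
```

For example, with match counts [100, 80, 10, 5], costs [4, 2, 1, 1], and a budget of 3, the sketch keeps octaves 1 and 2 rather than simply dropping the higher octaves.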

  10. A brief 5-item version of the Neck Disability Index shows good psychometric properties

    PubMed Central

    2013-01-01

    Background The purpose of this secondary analysis of clinical databases of people with neck pain was to use a mixed unique conceptual and statistical approach to develop a brief version of the Neck Disability Index (NDI). Methods An a priori framework of neck-related function based on the International Classification of Functioning, Disability and Health was used to identify items from the original 10-item NDI that do not conceptually fit. Remaining items were subject to Rasch analysis to identify items that did not statistically fit with axioms of quantitative measurement. Finally, approaches drawn from classical test theory were used to compare stability, responsiveness and concurrent validity of the original NDI, the new brief NDI and the linearly-transformed brief NDI. Results Conceptual analysis identified 3 items that did not fit with the construct of self-reported ability to perform activity: pain intensity, headache, and sleeping. These items were removed, and responses to the remaining 7 items drawn from an assembled database of 316 physiotherapy patients with neck pain were subject to Rasch analysis. Two items were removed due to either considerable differential item functioning (reading) or statistical redundancy (lifting). The remaining items were considered the NDI-5. Test-retest reliability, responsiveness, sensitivity to change, and concurrent validity were all comparable across the original NDI, NDI-5 and linearly-transformed NDI-5. Sensitivity to change over a 1-month period of physiotherapy was the notable exception, where the linearly-transformed NDI-5 showed superiority over the other two forms. Conclusions A shortened version of the NDI, the NDI-5, has been constructed that is conceptually and statistically sound. Implications for research and clinical practice are discussed. Comparison with the NDI-8 is provided that suggests overall similar function across the forms, although the latter may be more sensitive to change. PMID:23816395

  11. Good-Enough Brain Model: Challenges, Algorithms, and Discoveries in Multisubject Experiments.

    PubMed

    Papalexakis, Evangelos E; Fyshe, Alona; Sidiropoulos, Nicholas D; Talukdar, Partha Pratim; Mitchell, Tom M; Faloutsos, Christos

    2014-12-01

    Given a simple noun such as apple, and a question such as "Is it edible?," what processes take place in the human brain? More specifically, given the stimulus, what are the interactions between (groups of) neurons (also known as functional connectivity) and how can we automatically infer those interactions, given measurements of the brain activity? Furthermore, how does this connectivity differ across different human subjects? In this work, we show that this problem, even though originating from the field of neuroscience, can benefit from big data techniques; we present a simple, novel good-enough brain model, or GeBM in short, and a novel algorithm Sparse-SysId, which are able to effectively model the dynamics of the neuron interactions and infer the functional connectivity. Moreover, GeBM is able to simulate basic psychological phenomena such as habituation and priming (whose definition we provide in the main text). We evaluate GeBM by using real brain data. GeBM produces brain activity patterns that are strikingly similar to the real ones, where the inferred functional connectivity is able to provide neuroscientific insights toward a better understanding of the way that neurons interact with each other, as well as detect regularities and outliers in multisubject brain activity measurements. PMID:27442756
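The abstract does not detail Sparse-SysId, but a generic sparse linear system-identification sketch in the same spirit is easy to state: fit a linear dynamics matrix by least squares from successive brain-state snapshots, then hard-threshold weak couplings to expose a sparse connectivity graph. The function name and threshold below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def sparse_sysid(X, tau=0.05):
    """Sketch of sparse system identification: fit x_{t+1} ~ A x_t
    by least squares, then prune near-zero couplings.
    X has one state snapshot per column."""
    X0, X1 = X[:, :-1], X[:, 1:]
    A = X1 @ np.linalg.pinv(X0)   # dense least-squares dynamics fit
    A[np.abs(A) < tau] = 0.0      # hard-threshold weak links
    return A
```

On data generated by a known sparse dynamics matrix, this recovers the nonzero couplings and zeroes out the absent ones.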

  12. Validity of reduced radiation dose for localized diffuse large B-cell lymphoma showing a good response to chemotherapy.

    PubMed

    Koiwai, Keiichiro; Sasaki, Shigeru; Yoshizawa, Eriko; Ina, Hironobu; Fukazawa, Ayumu; Sakai, Katsuya; Ozawa, Takesumi; Matsushita, Hirohide; Kadoya, Masumi

    2014-03-01

    To evaluate the validity of a decrease in the radiation dose for patients who were good responders to chemotherapy for localized diffuse large B-cell lymphoma (DLBCL), 91 patients with localized DLBCL who underwent radiotherapy after multi-agent chemotherapy from 1988-2008 were reviewed. Exclusion criteria were as follows: central nervous system or nasal cavity primary site, or Stage II with bulky tumor (≥10 cm). Of these patients, 62 were identified as good responders to chemotherapy. They were divided into two groups receiving either a higher or a lower radiation dose (32-50.4 Gy or 15-30.6 Gy, respectively). There were no statistically significant differences between the lower and higher dose groups in progression-free survival, locoregional progression-free survival or overall survival. Adaptation of decreased radiation dose may be valid for localized DLBCL patients who show a good response to chemotherapy. PMID:24187329

  13. Climatic Associations of British Species Distributions Show Good Transferability in Time but Low Predictive Accuracy for Range Change

    PubMed Central

    Rapacciuolo, Giovanni; Roy, David B.; Gillings, Simon; Fox, Richard; Walker, Kevin; Purvis, Andy

    2012-01-01

    Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs) are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy to predict where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain a high explanatory power when transferred to recent time – due to their accuracy to predict large areas retained by species – but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records – as assessed

  14. Is It that Difficult to Find a Good Preference Order for the Incremental Algorithm?

    ERIC Educational Resources Information Center

    Krahmer, Emiel; Koolen, Ruud; Theune, Mariet

    2012-01-01

    In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order.…

  15. Immunophenotypic and gene expression analysis of monoclonal B-cell lymphocytosis shows biologic characteristics associated with good prognosis CLL.

    PubMed

    Lanasa, M C; Allgood, S D; Slager, S L; Dave, S S; Love, C; Marti, G E; Kay, N E; Hanson, C A; Rabe, K G; Achenbach, S J; Goldin, L R; Camp, N J; Goodman, B K; Vachon, C M; Spector, L G; Rassenti, L Z; Leis, J F; Gockerman, J P; Strom, S S; Call, T G; Glenn, M; Cerhan, J R; Levesque, M C; Weinberg, J B; Caporaso, N E

    2011-09-01

    Monoclonal B-cell lymphocytosis (MBL) is a hematologic condition wherein small B-cell clones can be detected in the blood of asymptomatic individuals. Most MBL have an immunophenotype similar to chronic lymphocytic leukemia (CLL), and 'CLL-like' MBL is a precursor to CLL. We used flow cytometry to identify MBL from unaffected members of CLL kindreds. We identified 101 MBL cases from 622 study subjects; of these, 82 individuals with MBL were further characterized. In all, 91 unique MBL clones were detected: 73 CLL-like MBL (CD5(+)CD20(dim)sIg(dim)), 11 atypical MBL (CD5(+)CD20(+)sIg(+)) and 7 CD5(neg) MBL (CD5(neg)CD20(+)sIg(neg)). Extended immunophenotypic characterization of these MBL subtypes was performed, and significant differences in cell surface expression of CD23, CD49d, CD79b and FMC-7 were observed among the groups. Markers of risk in CLL such as CD38, ZAP70 and CD49d were infrequently expressed in CLL-like MBL, but were expressed in the majority of atypical MBL. Interphase cytogenetics was performed in 35 MBL cases, and del 13q14 was most common (22/30 CLL-like MBL cases). Gene expression analysis using oligonucleotide arrays was performed on seven CLL-like MBL, and showed activation of B-cell receptor associated pathways. Our findings underscore the diversity of MBL subtypes and further clarify the relationship between MBL and other lymphoproliferative disorders. PMID:21617698

  16. Searching good strategies in evolutionary minority game using variable length genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Wei-Song; Wang, Bing-Hong; Wu, Yi-Lin; Xie, Yan-Bo

    2004-08-01

We propose and study a new adaptive minority game for understanding the complex dynamical behavior, characterized by agent interactions, that arises when agents compete for limited resources in many natural and social systems. We liken an agent's strategy in the model to a chromosome in biology. In our model, agents that perform poorly over a given period may modify their strategies via a variable-length genetic algorithm consisting of cut and splice operators, imitating similar processes in biology. We calculate the agents' performance under different parameter conditions and evolution mechanisms. We find that the system may evolve into a much more favorable equilibrium state, implying much stronger cooperation among agents and much more effective utilization of social resources. We also find that the distribution of strategies held by agents tends toward a state concentrated in the small-m region.
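The cut-and-splice recombination described above can be sketched for bitstring strategies: each parent is cut at an independent point and the pieces are swapped, so the offspring have variable lengths. This is a minimal illustration, not the paper's exact implementation.

```python
import random

def cut_and_splice(parent_a, parent_b, rng=random):
    """Variable-length recombination: cut each parent at an
    independent point and splice the pieces crosswise, so the
    two offspring generally differ in length from their parents."""
    i = rng.randrange(1, len(parent_a))  # cut point in parent A
    j = rng.randrange(1, len(parent_b))  # independent cut in parent B
    return parent_a[:i] + parent_b[j:], parent_b[:j] + parent_a[i:]
```

Total genetic material is conserved: the two children's lengths always sum to the parents' combined length, even though each child's length varies.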

  17. Good Show by Today's Students

    ERIC Educational Resources Information Center

    Lowry, W. Kenneth

    1977-01-01

    Investigates whether today's students would score as well as students of the 1930-1950 era on achievement tests. Uses the Progressive Achievement Test, a test widely used in the 1930-1950 era as a barometer of student ability. (RK)

  18. Toxicity assessments of nonsteroidal anti-inflammatory drugs in isolated mitochondria, rat hepatocytes, and zebrafish show good concordance across chemical classes

    SciTech Connect

    Nadanaciva, Sashi; Aleo, Michael D.; Strock, Christopher J.; Stedman, Donald B.; Wang, Huijun; Will, Yvonne

    2013-10-15

To reduce costly late-stage compound attrition, there has been an increased focus on assessing compounds in in vitro assays that predict attributes of human safety liabilities, before preclinical in vivo studies are done. Relevant questions when choosing a panel of assays for predicting toxicity are (a) whether there is general concordance in the data among the assays, and (b) whether, in a retrospective analysis, the rank order of toxicity of compounds in the assays correlates with the known safety profile of the drugs in humans. The aim of our study was to answer these questions using nonsteroidal anti-inflammatory drugs (NSAIDs) as a test set since NSAIDs are generally associated with gastrointestinal injury, hepatotoxicity, and/or cardiovascular risk, with mitochondrial impairment and endoplasmic reticulum stress being possible contributing factors. Eleven NSAIDs, flufenamic acid, tolfenamic acid, mefenamic acid, diclofenac, meloxicam, sudoxicam, piroxicam, diflunisal, acetylsalicylic acid, nimesulide, and sulindac (and its two metabolites, sulindac sulfide and sulindac sulfone), were tested for their effects on (a) the respiration of rat liver mitochondria, (b) a panel of mechanistic endpoints in rat hepatocytes, and (c) the viability and organ morphology of zebrafish. We show good concordance for distinguishing among/between NSAID chemical classes in the observations among the three approaches. Furthermore, the assays were complementary and able to correctly identify “toxic” and “non-toxic” drugs in accordance with their human safety profile, with emphasis on hepatic and gastrointestinal safety. We recommend implementing our multi-assay approach in the drug discovery process to reduce compound attrition. - Highlights: • NSAIDs cause liver and GI toxicity. • Mitochondrial uncoupling contributes to NSAID liver toxicity. • ER stress is a mechanism that contributes to liver toxicity. • Zebrafish and cell based assays are complementary.

  19. Chia Seed Shows Good Protein Quality, Hypoglycemic Effect and Improves the Lipid Profile and Liver and Intestinal Morphology of Wistar Rats.

    PubMed

    da Silva, Bárbara Pereira; Dias, Desirrê Morais; de Castro Moreira, Maria Eliza; Toledo, Renata Celi Lopes; da Matta, Sérgio Luis Pinto; Lucia, Ceres Mattos Della; Martino, Hércia Stampini Duarte; Pinheiro-Sant'Ana, Helena Maria

    2016-09-01

Chia has been consumed by populations worldwide due to its high fiber, lipid and protein content. The objective was to evaluate the protein quality of untreated (seed and flour) and heat-treated (90 °C/20 min) chia, its influence on glucose and lipid homeostasis, and the integrity of liver and intestinal morphology in Wistar rats. Thirty-six weanling male rats were divided into six groups that received, for 14 days, a control diet (casein), a protein-free diet (aproteic), or one of four test diets (chia seed; heat-treated chia seed; chia flour; heat-treated chia flour). The protein efficiency ratio (PER), net protein ratio (NPR) and true digestibility (TD) were evaluated, and the animals' biochemical variables and liver and intestinal morphologies were determined. The values of PER, NPR and TD did not differ among the animals fed chia and were lower than in the control group. The animals fed chia showed lower concentrations of glucose, triacylglycerides, low-density lipoprotein cholesterol and very-low-density lipoprotein, and higher high-density lipoprotein cholesterol, than the control group. The liver weight of animals fed chia was lower than in the control group. Crypt depth and the thickness of the intestinal muscle layers were greater in the groups fed chia. Chia consumption showed good digestibility and a hypoglycemic effect, improved lipid and glycemic profiles, reduced fat deposition in the liver, and promoted changes in intestinal tissue that enhanced its functionality. PMID:27193017

  20. Five Good Reasons to Show "Great Guy" (1936) in Our U.S. History and American Studies Classes (and the Challenges We'll Face)

    ERIC Educational Resources Information Center

    Allocco, Katherine

    2010-01-01

    One of the most versatile and multi-faceted films that an educator can use to illustrate urban America in the 1930s is "Great Guy," a relatively obscure film from 1936 directed by John G. Blystone and starring James Cagney and Mae Clarke. There are some simple practical considerations that make the film such a good fit for an American history or…

  1. Algorithms used in heterogeneous dose calculations show systematic differences as measured with the Radiological Physics Center’s anthropomorphic thorax phantom used for RTOG credentialing

    PubMed Central

    Kry, Stephen F.; Alvarez, Paola; Molineu, Andrea; Amador, Carrie; Galvin, James; Followill, David S.

    2012-01-01

    Purpose To determine the impact of treatment planning algorithm on the accuracy of heterogeneous dose calculations in the Radiological Physics Center (RPC) thorax phantom. Methods and Materials We retrospectively analyzed the results of 304 irradiations of the RPC thorax phantom at 221 different institutions as part of credentialing for RTOG clinical trials; the irradiations were all done using 6-MV beams. Treatment plans included those for intensity-modulated radiation therapy (IMRT) as well as 3D conformal therapy (3D CRT). Heterogeneous plans were developed using Monte Carlo (MC), convolution/superposition (CS) and the anisotropic analytic algorithm (AAA), as well as pencil beam (PB) algorithms. For each plan and delivery, the absolute dose measured in the center of a lung target was compared to the calculated dose, as was the planar dose in 3 orthogonal planes. The difference between measured and calculated dose was examined as a function of planning algorithm as well as use of IMRT. Results PB algorithms overestimated the dose delivered to the center of the target by 4.9% on average. Surprisingly, CS algorithms and AAA also showed a systematic overestimation of the dose to the center of the target, by 3.7% on average. In contrast, the MC algorithm dose calculations agreed with measurement within 0.6% on average. There was no difference observed between IMRT and 3D CRT calculation accuracy. Conclusion Unexpectedly, advanced treatment planning systems (those using CS and AAA algorithms) overestimated the dose that was delivered to the lung target. This issue requires attention in terms of heterogeneity calculations and potentially in terms of clinical practice. PMID:23237006
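The systematic overestimation figures quoted above (e.g., 4.9% for pencil beam) are simple relative differences between calculated and measured target doses, averaged over irradiations. For illustration only (a hypothetical helper, not part of the study's analysis):

```python
def percent_overestimate(calculated, measured):
    """Mean percent by which the planning algorithm's calculated dose
    exceeds the measured dose, one entry per irradiation."""
    diffs = [100.0 * (c - m) / m for c, m in zip(calculated, measured)]
    return sum(diffs) / len(diffs)
```

A positive result means the planning system predicts more dose to the target center than the phantom actually received.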

  2. Azvudine, a novel nucleoside reverse transcriptase inhibitor showed good drug combination features and better inhibition on drug-resistant strains than lamivudine in vitro.

    PubMed

    Wang, Rui-Rui; Yang, Qing-Hua; Luo, Rong-Hua; Peng, You-Mei; Dai, Shao-Xing; Zhang, Xing-Jie; Chen, Huan; Cui, Xue-Qing; Liu, Ya-Juan; Huang, Jing-Fei; Chang, Jun-Biao; Zheng, Yong-Tang

    2014-01-01

Azvudine is a novel nucleoside reverse transcriptase inhibitor with antiviral activity against human immunodeficiency virus, hepatitis B virus and hepatitis C virus. Here we report the in vitro activity of azvudine against HIV-1 and HIV-2, used alone or in combination with other antiretroviral drugs, and its drug-resistance features. Azvudine exerted highly potent inhibition of HIV-1 (EC50 values ranging from 0.03 to 6.92 nM) and HIV-2 (EC50 values ranging from 0.018 to 0.025 nM). It also showed synergism in combination with six approved anti-HIV drugs in both C8166 cells and PBMC. In the combination assays, azvudine was used at concentrations 1000- or 500-fold lower than the other drugs. Azvudine also potently inhibited NRTI-resistant strains (L74V and T69N). Although M184V caused a 250-fold reduction in susceptibility, azvudine remained active in the nanomolar range. In the in vitro resistance-induction assay, the frequency of the M184I mutation increased with induction time, suggesting M184I as the key mutation under azvudine treatment. As a control, lamivudine treatment produced a higher frequency of M184I/V over the same induction time, with M184V occurring more often. Molecular modeling analysis suggests that steric hindrance is more pronounced in the M184I mutant than in M184V because of the azido group of azvudine. The present data demonstrate the potential of azvudine as a complementary drug to current anti-HIV drugs. M184I should be the key mutation; however, azvudine remains active against HIV-1 LAI M184V in the nanomolar range. PMID:25144636

  3. Good Agreements Make Good Friends

    PubMed Central

    Han, The Anh; Pereira, Luís Moniz; Santos, Francisco C.; Lenaerts, Tom

    2013-01-01

    When starting a new collaborative endeavor, it pays to establish upfront how strongly your partner commits to the common goal and what compensation can be expected in case the collaboration is violated. Diverse examples in biological and social contexts have demonstrated the pervasiveness of making prior agreements on posterior compensations, suggesting that this behavior could have been shaped by natural selection. Here, we analyze the evolutionary relevance of such a commitment strategy and relate it to the costly punishment strategy, where no prior agreements are made. We show that when the cost of arranging a commitment deal lies within certain limits, substantial levels of cooperation can be achieved. Moreover, these levels are higher than that achieved by simple costly punishment, especially when one insists on sharing the arrangement cost. Not only do we show that good agreements make good friends, agreements based on shared costs result in even better outcomes. PMID:24045873

  4. Study of 201 Non-Small Cell Lung Cancer Patients Given Stereotactic Ablative Radiation Therapy Shows Local Control Dependence on Dose Calculation Algorithm

    SciTech Connect

    Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J.; Stevens, Craig W.; Kim, Jongphil; Yue, Binglin; DeMarco, MaryLou; Zhang, Geoffrey G.; Moros, Eduardo G.; Feygelman, Vladimir

    2014-04-01

Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with the PB algorithm and 4 planned with the CCC algorithm to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99(GITV) = 7.4 Gy, ΔD99(PTV) = 10.4 Gy, ΔV90(GITV) = 13.7%, ΔV90(PTV) = 37.6%, ΔD95(PTV) = 9.8 Gy, and ΔD(ISO) = 3.4 Gy. GITV = gross internal tumor volume. Conclusions: Local control rates in patients planned to the same nominal dose with the PB and CCC algorithms were statistically significantly different. Possible alternative

  5. Human herpes virus 8-unrelated primary effusion lymphoma-like lymphoma in the pericardium: A case with latency type III Epstein-Barr virus infection showing good prognosis without chemotherapy.

    PubMed

    Nakamura, Harumi; Tsuta, Koji; Nakagawa, Takashi; Hirai, Risen; Ota, Yasunori

    2015-12-01

Primary effusion lymphoma (PEL) is a rare subtype of non-Hodgkin lymphoma that proliferates in body cavities without detectable masses. PEL is universally associated with human herpes virus-8 (HHV-8) infection and carries a poor prognosis. Recently, an HHV-8-unrelated PEL-like lymphoma that usually occurs in elderly individuals and follows a more indolent course has been reported, and it is treated as a disease distinct from PEL. However, its pathogenesis and prognostic factors have not been sufficiently clarified. In PEL-like lymphoma accompanied by Epstein-Barr virus (EBV) infection, latent infection types are not mentioned in the literature. Herein, we report the case of an 85-year-old Japanese man with pericardial PEL-like lymphoma whose condition showed good improvement for 24 months after pericardiocentesis without chemotherapy. Serological test results were positive for EBV capsid antigen and EBV nuclear antigen 2 (EBNA2), but negative for human immunodeficiency virus, hepatitis B virus, and hepatitis C virus. The disease phenotype and EBV infection mechanism were immunohistochemically investigated using a cell block prepared from the pericardial effusion. Atypical cells were positive for CD20, CD30, CD45, BCL2, MUM1, EBNA2, latent membrane protein 1, and EBV-encoded RNA (on in situ hybridization), but negative for CD3, CD5, CD10, CD138, cytokeratin AE1/AE3, and HHV-8. Accordingly, this case was considered to be a B-cell activated phenotype with a type III latent EBV infection. Type III latent EBV infection is unusual in PEL. PMID:26384578

  6. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems: the symmetric and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
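As context for the serial kernel that partitioned solvers like PDD parallelize, here is a sketch of the classic Thomas algorithm for a single tridiagonal system. This is an illustration of the per-partition work only, not the PDD partitioning or its reduced interface system:

```python
# Thomas algorithm: serial O(n) solve of a tridiagonal system Ax = d.
# a: sub-diagonal (length n, a[0] unused), b: main diagonal,
# c: super-diagonal (c[-1] unused), d: right-hand side.
def thomas_solve(a, b, c, d):
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

PDD-style solvers split the system across processors and patch the subdomain solutions together through a small reduced system; each partition's local work is essentially this forward-elimination/back-substitution pass.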

  7. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…

  8. Good Schools.

    ERIC Educational Resources Information Center

    Schoenheimer, Henry P.

    This book contains seventeen thumb-nail sketches of schools in Europe, the United States, Asia, Britain, and Australia, as they appeared in the eye of the author as a professional educator and a journalist while travelling around the world. The author considers the schools described to be good schools, and not necessarily the 17 best schools in…

  9. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the cloud model's excellent characteristics for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism of bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization. PMID:24967425

  10. "Good mothering" or "good citizenship"?

    PubMed

    Porter, Maree; Kerridge, Ian H; Jordens, Christopher F C

    2012-03-01

    Umbilical cord blood banking is one of many biomedical innovations that confront pregnant women with new choices about what they should do to secure their own and their child's best interests. Many mothers can now choose to donate their baby's umbilical cord blood (UCB) to a public cord blood bank or pay to store it in a private cord blood bank. Donation to a public bank is widely regarded as an altruistic act of civic responsibility. Paying to store UCB may be regarded as a "unique opportunity" to provide "insurance" for the child's future. This paper reports findings from a survey of Australian women that investigated the decision to either donate or store UCB. We conclude that mothers are faced with competing discourses that force them to choose between being a "good mother" and fulfilling their role as a "good citizen." We discuss this finding with reference to the concept of value pluralism. PMID:23180199

  11. Cape of Good Hope

    Atmospheric Science Data Center

    2016-08-24

    article title:  Aerosol retrieval over Cape of Good Hope (Enlargement) ... SpectroRadiometer (MISR) image is an enlargement of the  aerosol retrieval over Cape of Good Hope, August 23, 2000 , showing a more ... the incoming energy, so MISR's contribution is not only the aerosol retrieval necessary to do the correction, but the multi-angular ...

  12. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem based on the theory of AFSA. Experiment results show that the HAFSA is a rapid and efficient algorithm for winner determination. Compared with the ant colony optimization algorithm, it performs well and has broad application prospects.

  13. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  14. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
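For illustration, a minimal counter-based ("linear") barrier with sense reversal, one of the basic synchronization schemes the paper compares, can be sketched with Python threads. This is a teaching sketch under stated assumptions, not the authors' Flex/32 implementation:

```python
import threading

# Linear (counter-based) barrier with sense reversal: each arrival increments
# a counter; the last arrival flips the shared sense flag and releases all
# waiters. Depth is linear in the number of processes, unlike tree barriers.
class SenseBarrier:
    def __init__(self, n):
        self.n = n            # number of participating threads
        self.count = 0        # arrivals in the current episode
        self.sense = False    # flips each time the barrier opens
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            my_sense = not self.sense
            self.count += 1
            if self.count == self.n:      # last arrival opens the barrier
                self.count = 0
                self.sense = my_sense
                self.cond.notify_all()
            else:
                while self.sense != my_sense:
                    self.cond.wait()
```

Sense reversal lets the same barrier object be reused across episodes without a second "everyone has left" phase, which is why it appears in most linear-barrier presentations.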

  15. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1989-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.

  16. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

In permanent magnet wigglers, magnetic imperfection of each magnet piece causes field error. This field error can be reduced or compensated by sorting the magnet pieces in proper order. We showed that a genetic algorithm has good properties for this sorting scheme. In this paper, the optimization scheme is applied to the case of permanent magnets that have errors in the field direction. The result shows that the genetic algorithm is superior to other algorithms.

  17. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  18. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  19. A Revision of the NASA Team Sea Ice Algorithm

    NASA Technical Reports Server (NTRS)

    Markus, T.; Cavalieri, Donald J.

    1998-01-01

In a recent paper, two operational algorithms to derive ice concentration from satellite multichannel passive microwave sensors have been compared. Although the results of these, known as the NASA Team algorithm and the Bootstrap algorithm, have been validated and are generally in good agreement, there are areas where the ice concentrations differ by up to 30%. These differences can be explained by shortcomings in one or the other algorithm. Here, we present an algorithm which, in addition to the 19 and 37 GHz channels used by both the Bootstrap and NASA Team algorithms, makes use of the 85 GHz channels as well. Atmospheric effects, particularly at 85 GHz, are reduced by using a forward atmospheric radiative transfer model. Comparisons with the NASA Team and Bootstrap algorithms show that the individual shortcomings of these algorithms are not apparent in this new approach. The results further show better quantitative agreement with ice concentrations derived from NOAA AVHRR infrared data.

  20. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.

  1. Spectrum parameter estimation in Brillouin scattering distributed temperature sensor based on cuckoo search algorithm combined with the improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Yu, Chunjuan; Fu, Xinghu; Liu, Wenzhe; Bi, Weihong

    2015-12-01

In the distributed optical fiber sensing system based on Brillouin scattering, strain and temperature are the main measured parameters, which can be obtained by analyzing the Brillouin center frequency shift. A novel algorithm that combines the cuckoo search (CS) algorithm with the improved differential evolution (IDE) algorithm is proposed for Brillouin scattering parameter estimation. The CS-IDE algorithm is compared with the CS algorithm and analyzed in different situations. The results show that both the CS and CS-IDE algorithms have very good convergence. The analysis reveals that the CS-IDE algorithm can extract the scattering spectrum features with different linear weight ratios, linewidth combinations, and SNRs. Moreover, a BOTDR temperature measuring system based on electron optical frequency shift is set up to verify the effectiveness of the CS-IDE algorithm. Experimental results show that there is a good linear relationship between the Brillouin center frequency shift and temperature changes.

  2. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Safranek, James

    2014-09-01

Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm, and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.

  3. A Hybrid Monkey Search Algorithm for Clustering Analysis

    PubMed Central

    Chen, Xin; Zhou, Yongquan; Luo, Qifang

    2014-01-01

Clustering is a popular data analysis and data mining technique. The k-means clustering algorithm is one of the most commonly used methods. However, it depends highly on the initial solution and easily falls into a local optimum. In view of the disadvantages of the k-means method, this paper proposes a hybrid monkey algorithm based on the search operator of the artificial bee colony algorithm for clustering analysis, and experiments on synthetic and real-life datasets show that the algorithm performs better than the basic monkey algorithm for clustering analysis. PMID:24772039

  4. Good continuation in dot patterns: A quantitative approach based on local symmetry and non-accidentalness.

    PubMed

    Lezama, José; Randall, Gregory; Morel, Jean-Michel; Grompone von Gioi, Rafael

    2016-09-01

    We propose a novel approach to the grouping of dot patterns by the good continuation law. Our model is based on local symmetries, and the non-accidentalness principle to determine perceptually relevant configurations. A quantitative measure of non-accidentalness is proposed, showing a good correlation with the visibility of a curve of dots. A robust, unsupervised and scale-invariant algorithm for the detection of good continuation of dots is derived. The results of the proposed method are illustrated on various datasets, including data from classic psychophysical studies. An online demonstration of the algorithm allows the reader to directly evaluate the method. PMID:26408332

  5. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
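The ERIC record does not reproduce the algorithm itself, but one classic square-root-only scheme fits the description: iterate y ← √(√(x·y)), whose fixed point is the cube root of x. A hedged sketch (this may differ from the authors' exact method):

```python
import math

# Cube roots on a calculator with only a square-root key:
# iterate y <- sqrt(sqrt(x * y)). The fixed point satisfies
# y**4 = x * y, i.e. y = x**(1/3), and each pass shrinks the error
# in log space by a factor of 4, so a few iterations suffice.
def cube_root(x, iterations=25):
    y = 1.0
    for _ in range(iterations):
        y = math.sqrt(math.sqrt(x * y))
    return y
```

On a physical calculator this amounts to: multiply the current estimate by x, press the square-root key twice, and repeat.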

  6. Good Discipline, Good Kids. [Videotape with Guide].

    ERIC Educational Resources Information Center

    Squier, William; Simmons, Susan; Yannes, Michelle; Simmons, Susan; Levine, Beth

    Noting that good parental discipline provides positive, constructive ways to encourage cooperation and good behavior and gives children the skills to regulate themselves, this 42-minute videotape with facilitator's guide comprise a program intended to help parents get past daily power struggles by using effective disciplinary techniques and…

  7. Nios II hardware acceleration of the epsilon quadratic sieve algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Botella, Guillermo; Castillo, Encarnacion; García, Antonio

    2010-04-01

The quadratic sieve (QS) algorithm is one of the most powerful algorithms for factoring the large composite numbers used in RSA cryptographic systems, and hence for breaking RSA. The hardware structure of the QS algorithm seems to be a good fit for FPGA acceleration. Our new ɛ-QS algorithm further simplifies the hardware architecture, making it an even better candidate for C2H acceleration. This paper shows our design results in FPGA resources and performance when implementing very long arithmetic on the Nios microprocessor platform with C2H acceleration for different libraries (GMP, LIP, FLINT, NRMP) and QS architecture choices for factoring 32-2048 bit RSA numbers.

  8. Identification of Traceability Barcode Based on Phase Correlation Algorithm

    NASA Astrophysics Data System (ADS)

    Lang, Liying; Zhang, Xiaofang

In this paper, the phase correlation algorithm based on the Fourier transform, a widely used method of image registration, is applied to traceability barcode identification. A rotation-invariant phase correlation algorithm, which combines a polar coordinate transform with phase correlation, can recognize barcodes that are partly damaged or rotated. The paper provides analysis and simulation of the algorithm using Matlab; the results show that the algorithm has the advantages of good real-time performance and high accuracy, and that optimizing the rotation-invariant phase correlation improves matching precision and reduces computation.
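The underlying Fourier step can be sketched for the translation-only case (the rotation-invariant variant described in the abstract adds a polar-coordinate transform first, omitted here; this is an illustrative sketch, not the paper's Matlab code):

```python
import numpy as np

# Phase correlation: recover the (dy, dx) translation between two images
# from the normalized cross-power spectrum. For a pure cyclic shift the
# inverse FFT of the phase-only spectrum is an impulse at the shift.
def phase_correlate(a, b):
    """Return (dy, dx) such that b ~ np.roll(a, (dy, dx), axis=(0, 1))."""
    cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real           # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                           # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Normalizing away the magnitude is what makes the peak sharp and the method robust to uniform illumination changes, which is why it suits partly degraded barcodes.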

  9. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  10. An ant colony algorithm on continuous searching space

    NASA Astrophysics Data System (ADS)

    Xie, Jing; Cai, Chao

    2015-12-01

The ant colony algorithm is heuristic, bionic, and parallel. Because of its positive feedback, parallelism, and simplicity in cooperating with other methods, it is widely adopted for planning on discrete spaces, but it is still not good at planning on continuous spaces. After a basic introduction to the ant colony algorithm, we propose an ant colony algorithm on continuous space. Our method makes use of the following three tricks. We search for the next nodes of the route with a fixed step size to guarantee the continuity of the solution. When storing pheromone, it discretizes the pheromone field, clusters states, and sums up the pheromone values of these states. When updating pheromone, it lets solutions that score well on relative score functions deposit more pheromone, so that the ant colony algorithm can find a sub-optimal solution in shorter time. The simulated experiment shows that our ant colony algorithm can find a sub-optimal solution in relatively shorter time.

  11. Cape of Good Hope

    Atmospheric Science Data Center

    2013-04-16

    article title:  Aerosol retrieval over Cape of Good Hope   View larger JPEG image ... Imaging SpectroRadiometer (MISR) images of the Cape of Good Hope were acquired on August 23, 2000. This first of two image sets, ...

  12. Good Concrete Activity Is Good Mental Activity

    ERIC Educational Resources Information Center

    McDonough, Andrea

    2016-01-01

    Early years mathematics classrooms can be colourful, exciting, and challenging places of learning. Andrea McDonough and fellow teachers have noticed that some students make good decisions about using materials to assist their problem solving, but this is not always the case. These experiences lead her to ask the following questions: (1) Are…

  13. Good Teaching and Supervision.

    ERIC Educational Resources Information Center

    Zahorik, John A.

    1992-01-01

    Without a definition of good teaching, the supervisor's efforts to help teachers improve will probably be fragmented, and teacher improvement may not occur. This article examines three definitions of good teaching, presents and defends a certain definition, and suggests supervisory applications. Good teachers are proficient in the kinds of…

  14. Education Is Not a Public Good.

    ERIC Educational Resources Information Center

    Pisciotta, John

    The purpose of this essay is to show that education is not a public good, and that in contrast to a public good such as national defense, education can be provided through competitive suppliers in the private sector as well as through government enterprise. A public good differs from a private good in the nature of consumption. A public good…

  15. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a convenient…

  16. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  17. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

Among all the existing segmentation techniques, thresholding is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential algorithm (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to a segmentation algorithm should be a good choice owing to its fast computation. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of already sampled regions. Then, we apply the new DE to the traditional Otsu method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm gives more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
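For reference, the Otsu criterion that such optimizers accelerate can be sketched as an exhaustive single-threshold search; this is the standard textbook formulation, shown only to make the objective concrete (the paper's contribution is replacing the search with DE for multiple thresholds):

```python
import numpy as np

# Otsu's method: choose the threshold t that maximizes the between-class
# variance w0 * w1 * (mu0 - mu1)**2 of the two resulting pixel classes.
# Exhaustive search over 256 gray levels; with k thresholds the search
# space grows combinatorially, motivating population-based optimizers.
def otsu_threshold(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                     # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0         # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

A DE-style optimizer would treat the threshold vector as the individual and this between-class variance as the fitness function.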

  18. Advice on Good Grooming.

    ERIC Educational Resources Information Center

    Tingey, Carol

    1987-01-01

    Suggestions are presented from parents on how to help children with disabilities (with particular focus on Downs Syndrome) learn good grooming habits in such areas as good health, exercise, cleanliness, teeth and hair care, skin care, glasses and other devices, and social behavior. (CB)

  19. "Good Citizen" Program.

    ERIC Educational Resources Information Center

    Placer Hills Union Elementary School District, Meadow Vista, CA.

    THE FOLLOWING IS THE FULL TEXT OF THIS DOCUMENT: The "Good Citizen" Program was developed for many reasons: to keep the campus clean, to reward students for improvement, to reward students for good deeds, to improve the total school climate, to reward students for excellence, and to offer staff members a method of reward for positive…

  20. Productivity and Capital Goods.

    ERIC Educational Resources Information Center

    Zicht, Barbara, Ed.; And Others

    1981-01-01

    Providing teacher background on the concepts of productivity and capital goods, this document presents 3 teaching units about these ideas for different grade levels. The grade K-2 unit, "How Do They Do It?," is designed to provide students with an understanding of how physical capital goods add to productivity. Activities include a field trip to…

  1. How Good Writers Punctuate.

    ERIC Educational Resources Information Center

    Dawkins, John

    The punctuation system presented in this paper has explanatory power insofar as it explains how good writers punctuate. The paper notes that good writers have learned, through reading, the differences among a hierarchy of marks and acquired a sense of independent clauses that allows them to use the hierarchy, along with a reader-sensitive notion…

  2. The Good Work.

    ERIC Educational Resources Information Center

    Csikszentmihalyi, Mihaly

    2003-01-01

    Examines the working lives of geneticists and journalists to place into perspective what lies behind personal ethics and success. Defines "good work" as productive activity that is valued socially and loved by people engaged in it. Asserts that certain cultural values, social controls, and personal standards are necessary to maintain good work and…

  3. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

As a major method for intellectual property protection, digital watermarking techniques have been widely studied and used. But due to the problems of data volume and color shift, watermarking techniques for color images have been less widely studied, although color images are the principal medium in multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark the number of embedded bits is adaptively changed with the complexity of the host image. As to the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-layer wavelet transform. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and some are deleted so as to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks, and has good perceptual quality at the same time.

  4. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and the proposed algorithm is compared with other algorithms, showing that the FPA is efficient and has a good convergence rate. Finally, the importance of further parametric studies and theoretical analysis is highlighted and discussed.
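
    The article's multiobjective extension is not reproduced here, but the basic single-objective FPA update it builds on — global pollination via Lévy flights toward the current best solution, local pollination mixing two random flowers, switched by a probability p — can be sketched as follows (population size, iteration count, bounds, and the sphere test function are illustrative choices):

```python
import numpy as np

def levy(rng, size, beta=1.5):
    # Mantegna's scheme for Levy-flight step lengths.
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def fpa(f, dim=2, n=20, iters=200, p=0.8, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n, dim))
    fit = np.array([f(x) for x in pop])
    i_best = int(fit.argmin())
    best, best_val = pop[i_best].copy(), fit[i_best]
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:      # global pollination: Levy flight toward best
                cand = pop[i] + levy(rng, dim) * (best - pop[i])
            else:                     # local pollination: mix two random flowers
                j, k = rng.choice(n, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            fc = f(cand)
            if fc < fit[i]:           # greedy replacement
                pop[i], fit[i] = cand, fc
                if fc < best_val:
                    best, best_val = cand.copy(), fc
    return best, best_val

best, val = fpa(lambda x: float(np.sum(x ** 2)))
```

    On this 2-D sphere function the best value shrinks steadily toward zero, which is the "good convergence rate" behavior the abstract reports on its benchmarks.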

  5. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    PubMed Central

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply applies DNA subsequence operations (such as elongation, truncation, and deletion), combined with the logistic chaotic map, to scramble both the locations and the values of the image's pixels. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and resists exhaustive and statistical attacks. PMID:23093912
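
    The DNA subsequence operations themselves are not detailed in the abstract, but the role of the logistic chaotic map — scrambling both pixel locations and pixel values under a secret key — can be illustrated with this minimal, hypothetical sketch (the key and the map parameter r are illustrative; the DNA encoding layer is omitted):

```python
import numpy as np

def logistic_stream(x0, n, r=3.99, burn=100):
    # Iterate the chaotic logistic map x <- r*x*(1-x), discarding a burn-in.
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(img, key=0.3456):
    flat = img.ravel()
    chaos = logistic_stream(key, flat.size)
    perm = np.argsort(chaos)                  # key-dependent position scramble
    mask = (chaos * 256).astype(np.uint8)     # key-dependent value scramble
    return (flat[perm] ^ mask).reshape(img.shape)

def decrypt(cipher, key=0.3456):
    chaos = logistic_stream(key, cipher.size)
    perm = np.argsort(chaos)
    mask = (chaos * 256).astype(np.uint8)
    flat = cipher.ravel() ^ mask              # undo the value scramble
    out = np.empty_like(flat)
    out[perm] = flat                          # undo the position scramble
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher = encrypt(img)
```

    The same key regenerates the identical chaotic stream, so decryption inverts the XOR mask and the permutation exactly; key sensitivity follows from the map's chaotic divergence.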

  6. Changes to the COS Extraction Algorithm for Lifetime Position 3

    NASA Astrophysics Data System (ADS)

    Proffitt, Charles R.; Bostroem, K. Azalee; Ely, Justin; Foster, Deatrick; Hernandez, Svea; Hodge, Philip; Jedrzejewski, Robert I.; Lockwood, Sean A.; Massa, Derck; Peeples, Molly S.; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Roman-Duval, Julia; Sana, Hugues; Sahnow, David J.; Sonnentrucker, Paule; Taylor, Joanna M.

    2015-09-01

    The COS FUV Detector Lifetime Position 3 (LP3) has been placed only 2.5" below the original lifetime position (LP1). This is sufficiently close to gain-sagged regions at LP1 that a revised extraction algorithm is needed to ensure good spectral quality. We provide an overview of this new "TWOZONE" extraction algorithm, discuss its strengths and limitations, describe new output columns in the X1D files that show the boundaries of the new extraction regions, and provide some advice on how to manually tune the algorithm for specialized applications.

  7. A novel image encryption algorithm based on DNA subsequence operation.

    PubMed

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply applies DNA subsequence operations (such as elongation, truncation, and deletion), combined with the logistic chaotic map, to scramble both the locations and the values of the image's pixels. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and resists exhaustive and statistical attacks. PMID:23093912

  8. Television Quiz Show Simulation

    ERIC Educational Resources Information Center

    Hill, Jonnie Lynn

    2007-01-01

    This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

  9. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
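
    FLANN's priority search k-means tree is hierarchical with a shared priority queue across branches; the following simplified, single-level sketch conveys the core idea — visit clusters in order of center distance and stop after a fixed number of candidate checks, trading exactness for speed (all names and parameters are illustrative, not FLANN's API):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Naive Lloyd's k-means, enough to build the cluster index.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def approx_nn(X, centers, labels, q, max_checks=50):
    # Visit clusters nearest-center-first (the "priority" order) and stop
    # once max_checks candidate points have been examined.
    order = np.argsort(((centers - q) ** 2).sum(-1))
    best, best_d, checked = -1, np.inf, 0
    for j in order:
        idx = np.flatnonzero(labels == j)
        for i in idx:
            d = ((X[i] - q) ** 2).sum()
            if d < best_d:
                best, best_d = int(i), d
        checked += len(idx)
        if checked >= max_checks:
            break
    return best

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
centers, labels = kmeans(X, k=8)
q = rng.normal(size=8)
nn = approx_nn(X, centers, labels, q)
```

    Raising `max_checks` to the data-set size makes the search exhaustive and exact, which mirrors the accuracy/speed knob the paper's automated configuration procedure tunes.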

  10. Electronics show their age

    NASA Astrophysics Data System (ADS)

    Brown, Alan S.

    1992-10-01

    The paper examines the prevention and prediction of failures in avionics systems caused by persistent corrosion and vibration. Preventive maintenance of redundant avionics elements and subsystems is discussed in terms of corrosion initiated by water intrusion. Measures developed to mitigate electromagnetic interference, such as the introduction of Al flakes into rubber gaskets, can themselves lead to corrosion. Hermetic seals are shown to be good for corrosion prevention under certain conditions, and the limitations of glues and rubbery organics are listed. Solder joints in avionics are shown to be vulnerable to accumulated vibration and shock, and techniques for force and temperature isolation can be used to extend the life of avionics. Simulations of flight vibration and shock are described, demonstrating that resonance is a more serious problem than direct coupling and that vibration can also hasten the onset of overloads.

  11. The Wordpath Show.

    ERIC Educational Resources Information Center

    Anderton, Alice

    The Intertribal Wordpath Society is a nonprofit educational corporation formed to promote the teaching, status, awareness, and use of Oklahoma Indian languages. The Society produces "Wordpath," a weekly 30-minute public access television show about Oklahoma Indian languages and the people who are teaching and preserving them. The show aims to…

  12. Fast Optimal Load Balancing Algorithms for 1D Partitioning

    SciTech Connect

    Pinar, Ali; Aykanat, Cevdet

    2002-12-09

    One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the "chains-on-chains partitioning" problem. Despite extensive research efforts, heuristics are still used in the parallel computing community in the "hope" of good decompositions and under the "myth" that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocode for our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements to these algorithms as well as efficient implementation tips. We also introduce novel algorithms that are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are, on average, 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with the efficient implementations discussed in this paper can effectively replace heuristics.
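
    The paper's own pseudocode is not reproduced here, but a standard exact method for chains-on-chains partitioning of the kind its literature review covers — a greedy feasibility probe combined with bisection over the bottleneck value — can be sketched for integer task weights:

```python
def probe(w, p, b):
    # Can the chain w be split into at most p consecutive parts,
    # each with load (sum of weights) at most b?  Greedy packing is optimal.
    parts, load = 1, 0
    for x in w:
        if x > b:
            return False
        if load + x > b:
            parts, load = parts + 1, x
        else:
            load += x
    return parts <= p

def optimal_bottleneck(w, p):
    # Exact minimum of the maximum part load, by bisecting the
    # integer range [max(w), sum(w)] with the feasibility probe.
    lo, hi = max(w), sum(w)
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(w, p, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

w = [4, 7, 2, 9, 3, 8, 1, 5]
bottleneck = optimal_bottleneck(w, 3)   # = 14: parts [4,7,2], [9,3], [8,1,5]
```

    Each probe is a single linear scan, which is why exact search adds only negligible preprocessing time compared with a heuristic split.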

  13. Public goods and procreation.

    PubMed

    Anomaly, Jonathan

    2014-01-01

    Procreation is the ultimate public goods problem. Each new child affects the welfare of many other people, and some (but not all) children produce uncompensated value that future people will enjoy. This essay addresses challenges that arise if we think of procreation and parenting as public goods. These include whether individual choices are likely to lead to a socially desirable outcome, and whether changes in laws, social norms, or access to genetic engineering and embryo selection might improve the aggregate outcome of our reproductive choices. PMID:25743046

  14. A Holographic Road Show.

    ERIC Educational Resources Information Center

    Kirkpatrick, Larry D.; Rugheimer, Mac

    1979-01-01

    Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)

  15. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
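
    As background, the basic single-channel Sudakov veto algorithm that the paper analyzes can be sketched as follows: scales are proposed using an invertible overestimate g(t) ≥ f(t) and each proposal is accepted with probability f(t)/g(t), so the first accepted scale follows the Sudakov form factor of f. The densities f(t) = a/t and g(t) = b/t and all parameters here are illustrative choices, not the paper's:

```python
import numpy as np

def veto_sample(rng, f, g, g_inv_step, t_max, t_cut):
    # Veto algorithm: propose successively lower scales from the
    # overestimate g; the first accepted proposal is Sudakov-distributed.
    t = t_max
    while True:
        t = g_inv_step(t, rng.random())   # next proposal scale from g
        if t < t_cut:
            return None                   # no emission above the cutoff
        if rng.random() < f(t) / g(t):    # accept with ratio f/g
            return t

a, b = 0.5, 2.0                           # f(t) = a/t, overestimate g(t) = b/t
rng = np.random.default_rng(0)
samples = [veto_sample(rng,
                       lambda t: a / t,
                       lambda t: b / t,
                       lambda t, r: t * r ** (1 / b),  # solves exp(-Int g) = r
                       t_max=1.0, t_cut=1e-3)
           for _ in range(2000)]
emitted = [t for t in samples if t is not None]
```

    A tighter overestimate raises the acceptance ratio f/g and hence the speed, which is the kind of performance difference the paper quantifies across competing algorithm variants.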

  16. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  17. Designing Good Educational Software.

    ERIC Educational Resources Information Center

    Kingman, James C.

    1984-01-01

    Describes eight characteristics of good educational software. They are: (1) educational soundness; (2) ease of use; (3) "bullet" proofing (preventing a program from coming to a premature halt); (4) clear instructions; (5) appropriate language; (6) appropriate frame size; (7) motivation; and (8) evaluation. (JN)

  18. Good-Neighbor Policy

    ERIC Educational Resources Information Center

    Drozdowski, Mark J.

    2007-01-01

    In this article, the author draws on his experience as the director of the Fitchburg State College Foundation in Fitchburg, Massachusetts, to make a distinction between being a good neighbor to local non-profit organizations by sharing strategies and information, and creating conflicts of interest when both the college and its neighbor…

  19. Choosing Good Websites

    ERIC Educational Resources Information Center

    Webber, Nancy

    2004-01-01

    Many art teachers use the Web as an information source. Overall, they look for good content that is clearly written, concise, accurate, and pertinent. A well-designed site gives users what they want quickly, efficiently, and logically, and does not ask them to assemble a puzzle to resolve their search. How can websites with these qualities be…

  20. Reconsidering the "Good Divorce"

    ERIC Educational Resources Information Center

    Amato, Paul R.; Kane, Jennifer B.; James, Spencer

    2011-01-01

    This study attempted to assess the notion that a "good divorce" protects children from the potential negative consequences of marital dissolution. A cluster analysis of data on postdivorce parenting from 944 families resulted in three groups: cooperative coparenting, parallel parenting, and single parenting. Children in the cooperative coparenting…

  1. Restructuring for Good Governance

    ERIC Educational Resources Information Center

    Robert, Stephen; Carey, Russell C.

    2006-01-01

    American higher education has never been more in need of good governance than it is right now. Yet much of the structure many boards have inherited or created tends to stall or impede timely, well-informed, and broadly supported decision making. At many institutions (ours included), layers of governance have been added with each passing year,…

  2. Show What You Know

    ERIC Educational Resources Information Center

    Eccleston, Jeff

    2007-01-01

    Big things come in small packages. This saying came to the mind of the author after he created a simple math review activity for his fourth grade students. Though simple, it has proven to be extremely advantageous in reinforcing math concepts. He uses this activity, which he calls "Show What You Know," often. This activity provides the perfect…

  3. The Ozone Show.

    ERIC Educational Resources Information Center

    Mathieu, Aaron

    2000-01-01

    Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

  4. Honored Teacher Shows Commitment.

    ERIC Educational Resources Information Center

    Ratte, Kathy

    1987-01-01

    Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)

  5. Talk Show Science.

    ERIC Educational Resources Information Center

    Moore, Mitzi Ruth

    1992-01-01

    Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)

  6. Stage a Water Show

    ERIC Educational Resources Information Center

    Frasier, Debra

    2008-01-01

    In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

  7. Showing What They Know

    ERIC Educational Resources Information Center

    Cech, Scott J.

    2008-01-01

    Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. Rhode Island's high school graduating class became…

  8. 'Good palliative care' orders.

    PubMed

    Maddocks, I

    1993-01-01

    A Select Committee of the Parliament of South Australia, considering revisions to legislation governing care of the dying, did not support allowing doctors to assist suicide. They recommended that no liability attach to the provision of reasonable palliative care which happens to shorten life. The Committee affirmed the suggestion that positive open orders to provide 'good palliative care' should replace 'do not resuscitate' orders. PMID:7506978

  9. Doing good & doing well.

    PubMed

    Barnett, K; Pittman, M

    2001-01-01

    Leaders cannot make the "business case" for community benefit in the traditional sense of near-term financial returns on investment. The concept of returns must be expanded to encompass more long-term--yet concrete and measurable--benefits that may be accrued both by nonprofit hospitals and local communities. Hospitals can "do well" economically through a more strategic approach to "doing good." PMID:11372275

  10. An Artificial Immune Univariate Marginal Distribution Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping

    Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, the artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm that combines the UMDA with the principles of the general artificial immune algorithm. Experimental results on the order-3 deceptive function show that the proposed hybrid algorithm can obtain more building blocks (BBs) than the UMDA.
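
    For reference, the plain (non-hybrid) UMDA baseline that the proposed algorithm extends can be sketched on the OneMax function; the immune-inspired diversity and memory mechanisms are omitted here, and all parameters are illustrative:

```python
import numpy as np

def umda_onemax(n_bits=30, pop=100, gens=50, seed=0):
    # Plain UMDA: sample from independent univariate marginals, select
    # the better half, and re-estimate the marginals from the survivors.
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)
    for _ in range(gens):
        X = (rng.random((pop, n_bits)) < p).astype(int)
        fit = X.sum(axis=1)                        # OneMax fitness
        elite = X[np.argsort(fit)[-pop // 2:]]     # truncation selection
        p = elite.mean(axis=0).clip(0.05, 0.95)    # margins preserve diversity
    return p

p = umda_onemax()
```

    Because UMDA models each bit independently, it struggles on deceptive functions whose building blocks span several bits — which is exactly where the hybrid's diversity mechanisms are meant to help.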

  11. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    KMeans clustering-based recommendation algorithms have been proposed claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates as new data arrive, making them unsuitable for dynamic environments. Following this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
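
    The paper's One-Pass algorithm is not specified in the abstract; the classic leader-clustering idea it evokes — a single pass in which each point either joins a nearby cluster or founds a new one, so new data can be absorbed incrementally without offline retraining — can be sketched as follows (the threshold and data are illustrative):

```python
import numpy as np

def one_pass_cluster(stream, threshold):
    # Leader-style one-pass clustering: a point joins the nearest existing
    # cluster if it lies within `threshold`, otherwise it founds a new one.
    # Centers are updated incrementally, so no second pass is needed.
    centers, counts, labels = [], [], []
    for x in stream:
        if centers:
            d = np.array([np.linalg.norm(x - c) for c in centers])
            j = int(d.argmin())
            if d[j] <= threshold:
                counts[j] += 1
                centers[j] = centers[j] + (x - centers[j]) / counts[j]
                labels.append(j)
                continue
        centers.append(np.asarray(x, dtype=float))
        counts.append(1)
        labels.append(len(centers) - 1)
    return centers, labels

pts = [np.array(p, dtype=float)
       for p in [(0, 0), (0.1, 0), (5, 5), (5, 5.1), (0, 0.2)]]
centers, labels = one_pass_cluster(pts, threshold=1.0)
```

    Each point is touched exactly once, which is what makes this style of algorithm suitable for the dynamic, incrementally updated environments the abstract targets.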

  12. Taking in a Show.

    PubMed

    Boden, Timothy W

    2016-01-01

    Many medical practices have cut back on education and staff development expenses, especially those costs associated with conventions and conferences. But there are hard-to-value returns on your investment in these live events--beyond the obvious benefits of acquired knowledge and skills. Major vendors still exhibit their services and wares at many events, and the exhibit hall is a treasure-house of information and resources for the savvy physician or administrator. Make and stick to a purposeful plan to exploit the trade show. You can compare products, gain new insights and ideas, and even negotiate better deals with representatives anxious to realize returns on their exhibition investments. PMID:27249887

  13. Not a "reality" show.

    PubMed

    Wrong, Terence; Baumgart, Erica

    2013-01-01

    The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show. PMID:23631336

  14. Cooperation and the common good.

    PubMed

    Johnstone, Rufus A; Rodrigues, António M M

    2016-02-01

    In this paper, we draw the attention of biologists to a result from the economic literature, which suggests that when individuals are engaged in a communal activity of benefit to all, selection may favour cooperative sharing of resources even among non-relatives. Provided that group members all invest some resources in the public good, they should refrain from conflict over the division of these resources. The reason is that, given diminishing returns on investment in public and private goods, claiming (or ceding) a greater share of total resources only leads to the actor (or its competitors) investing more in the public good, such that the marginal costs and benefits of investment remain in balance. This cancels out any individual benefits of resource competition. We illustrate how this idea may be applied in the context of biparental care, using a sequential game in which parents first compete with one another over resources, and then choose how to allocate the resources they each obtain to care of their joint young (public good) versus their own survival and future reproductive success (private good). We show that when the two parents both invest in care to some extent, they should refrain from any conflict over the division of resources. The same effect can also support asymmetric outcomes in which one parent competes for resources and invests in care, whereas the other does not invest but refrains from competition. The fact that the caring parent gains higher fitness pay-offs at these equilibria suggests that abandoning a partner is not always to the latter's detriment, when the potential for resource competition is taken into account, but may instead be of benefit to the 'abandoned' mate. PMID:26729926

  15. Public medical shows.

    PubMed

    Walusinski, Olivier

    2014-01-01

    In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre. PMID:25273491

  16. The Great Cometary Show

    NASA Astrophysics Data System (ADS)

    2007-01-01

    With its high spatial and spectral resolution, it was possible to zoom into the very heart of this very massive star. In this innermost region, the observations are dominated by the extremely dense stellar wind that totally obscures the underlying central star. The AMBER observations show that this dense stellar wind is not spherically symmetric, but exhibits a clearly elongated structure. Overall, the AMBER observations confirm that the extremely high mass loss of Eta Carinae's massive central star is non-spherical and much stronger along the poles than in the equatorial plane. This is in agreement with theoretical models that predict such an enhanced polar mass loss in the case of rapidly rotating stars. ESO PR Photo 06c/07 RS Ophiuchi in Outburst Several papers from this special feature focus on the later stages in a star's life. One looks at the binary system Gamma 2 Velorum, which contains the closest example of a star known as a Wolf-Rayet. A single AMBER observation allowed the astronomers to separate the spectra of the two components, offering new insights into the modeling of Wolf-Rayet stars, and also made it possible to measure the separation between the two stars. This led to a new determination of the distance of the system, showing that previous estimates were incorrect. The observations also revealed information on the region where the winds from the two stars collide. The famous binary system RS Ophiuchi, an example of a recurrent nova, was observed just 5 days after it was discovered to be in outburst on 12 February 2006, an event that had been expected for 21 years. AMBER was able to detect the extension of the expanding nova emission. These observations show a complex geometry and kinematics, far from the simple interpretation of a spherical fireball in extension. AMBER has detected a high-velocity jet, probably perpendicular to the orbital plane of the binary system, and allowed a precise and careful study of the wind and the shockwave

  17. Two algorithms for compressing noise like signals

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath; Akopian, David

    2005-05-01

    Compression is a technique used to encode data so that it requires less storage or memory space. Compressing random data is vital when we need to preserve data that has low redundancy and whose power spectrum is close to that of noise. The noise-like signals used in various data-hiding schemes have low redundancy and a low energy spectrum, so compressing them with lossy algorithms risks losing that low energy spectrum. And since LSB-plane data has low redundancy, lossless compression algorithms such as run-length, Huffman, and arithmetic coding are ineffective at providing a good compression ratio. These problems motivated the development of a new class of compression algorithms for noise-like signals. In this paper, we introduce two new compression techniques that compress random, noise-like data with reference to a known pseudo-noise sequence generated using a key. In addition, we develop a representation model for digital media using pseudo-noise signals. In our simulations, we compared our methods with existing compression techniques such as run-length coding, showing that run-length coding cannot compress random data while the proposed algorithms can. Furthermore, the proposed algorithms can be extended to all kinds of random data used in various applications.
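
    The paper's exact scheme is not given in the abstract, but the principle it states — compressing noise-like data by referencing a keyed pseudo-noise sequence, so that the residual becomes highly redundant and run-length-compressible — can be illustrated with this hypothetical sketch:

```python
import random

def pn_sequence(key, n):
    # Keyed pseudo-noise byte stream (stdlib PRNG stands in for a PN generator).
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(n))

def rle_encode(data):
    # Byte-oriented run-length coding: (count, value) pairs, runs capped at 255.
    out, i = bytearray(), 0
    while i < len(data):
        j = i
        while j < len(data) and j - i < 255 and data[j] == data[i]:
            j += 1
        out += bytes([j - i, data[i]])
        i = j
    return bytes(out)

def rle_decode(code):
    out = bytearray()
    for k in range(0, len(code), 2):
        out += bytes([code[k + 1]]) * code[k]
    return bytes(out)

def compress(data, key):
    pn = pn_sequence(key, len(data))
    residual = bytes(a ^ b for a, b in zip(data, pn))  # near-zero if data ~ PN
    return rle_encode(residual)

def decompress(code, key, n):
    residual = rle_decode(code)
    pn = pn_sequence(key, n)
    return bytes(a ^ b for a, b in zip(residual, pn))

key, n = 42, 1000
noise_like = pn_sequence(key, n)   # data that matches the keyed PN stream
code = compress(noise_like, key)
```

    Plain run-length coding on the raw noise-like bytes would roughly double them; XORing against the keyed PN stream first collapses matching data to runs of zeros, so the same coder then compresses it drastically.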

  18. Efficient algorithms for survivable virtual network embedding

    NASA Astrophysics Data System (ADS)

    Sun, Gang; Yu, Hongfang; Li, Lemin; Anand, Vishal; di, Hao; Gao, Xiujiao

    2010-12-01

    Network virtualization is serving as an effective method for providing a flexible and highly adaptable shared substrate network to satisfy diverse demands. But the problem of efficiently embedding a Virtual Network (VN) onto a substrate network is intractable, since it is NP-hard, and efficiently guaranteeing the survivability of the embedding is another great challenge. In this paper, we investigate the Survivable Virtual Network Embedding (SVNE) problem and propose two efficient algorithms for solving it. First, we formulate a minimum-cost model of the survivable network virtualization problem as a Mixed Integer Linear Program (MILP). We then devise two efficient relaxation-based algorithms for solving the survivable virtual network embedding problem: (1) a Lagrangian-relaxation-based algorithm, called LR-SVNE in this paper, and (2) a decomposition-based algorithm, called DSVNE in this paper. Simulation results show that both algorithms are time-efficient, and that LR-SVNE can additionally guarantee convergence to the optimal solution on small-scale substrate networks.

  19. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
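
    As a concrete reference point, one Grover iteration (oracle phase flip plus inversion about the mean) is easy to simulate on a state vector; the intermediate states produced below are precisely the kind whose multipartite entanglement such analyses examine. The problem size and marked item are illustrative:

```python
import numpy as np

def grover(n_qubits, marked, iters=None):
    # State-vector simulation of Grover search for one marked item.
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    if iters is None:
        iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):
        psi[marked] *= -1                        # oracle: phase flip
        psi = 2 * psi.mean() - psi               # diffusion: invert about mean
    return psi

psi = grover(4, marked=3)
prob = abs(psi[3]) ** 2                          # success probability
```

    After roughly (π/4)√N iterations nearly all amplitude concentrates on the marked state; in between, the register state is generally not a product state across qubits, which is what a multipartite entanglement analysis quantifies.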

  20. What makes good image composition?

    NASA Astrophysics Data System (ADS)

    Banner, Ron

    2011-03-01

    Some people are born with an intuitive sense of good composition. They do not need to be taught composition, and their work is immediately perceived as well composed by others. In an attempt to help others learn composition, art critics, scientists, and psychologists have analyzed well-composed works in the hope of recognizing patterns and trends that anyone could employ to achieve similar results. Unfortunately, the identified patterns are by no means universal. Moreover, since a compositional rule is useful only as long as it enhances the idea that the artist is trying to express, there is no objective standard by which to judge whether a given composition is "good" or "bad". As a result, the study of composition seems to be full of contradictions. Nevertheless, there are several basic "low-level" rules, supported by physiological studies of visual perception, that artists and photographers intuitively obey. Regardless of image content, a prerequisite for all good images is a balanced composition. In a balanced composition, factors such as shape, direction, location, and color are arranged in a way that is pleasant to the eye. An unbalanced composition looks accidental and transitory, and its elements show a tendency to change place or shape in order to reach a state that better reflects the total structure. Under these conditions, the artistic statement becomes incomprehensible and confusing.

  1. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic one, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real-time operation. The method based on linear-feedback-shift-register (LFSR) pseudorandom number generation is a good compromise: it is feasible in real time and yields reasonably well-distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
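    As a hedged sketch of the LFSR idea described above (all function names and parameters here are invented for illustration, not taken from the paper's implementation), a small Galois LFSR can act as the cheap pseudorandom source that decides, time slot by time slot, whether each pixel emits a spike event in proportion to its intensity:

    ```python
    def lfsr16(state):
        """Advance a 16-bit Galois LFSR (taps 16, 14, 13, 11) by one step."""
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        return state

    def frame_to_events(frame, slots=256, seed=0xACE1):
        """Return (pixel_index, slot) events; brighter pixels fire more often."""
        state = seed
        events = []
        for t in range(slots):
            for i, intensity in enumerate(frame):  # intensity in 0..255
                state = lfsr16(state)
                if (state & 0xFF) < intensity:     # pseudorandom thinning
                    events.append((i, t))
        return events

    # A 3-pixel "frame": a dark, a mid-gray and a bright pixel.
    events = frame_to_events([0, 128, 255], slots=64)
    ```

    The per-pixel event rate then approximates intensity/256 events per slot, which is the defining property of a rate-coded AER stream.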

  2. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results, measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine, show that the simple parallel prefix algorithm is a good algorithm for symmetric and almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.
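    The prefix communication pattern at the heart of SPP-style solvers can be illustrated, in much simplified form, by a recursive-doubling prefix sum. This hedged sketch mirrors only the log2(n) communication rounds, not the paper's actual tridiagonal solve:

    ```python
    def prefix_sum_recursive_doubling(x):
        """Inclusive prefix sum via the Hillis-Steele recursive-doubling pattern."""
        y = list(x)
        n = len(y)
        stride = 1
        while stride < n:
            # One communication round: each element adds the value 'stride'
            # positions to its left (a shifted AXPY on a parallel machine).
            y = [y[i] + (y[i - stride] if i >= stride else 0) for i in range(n)]
            stride *= 2
        return y
    ```

    When the system is diagonally dominant, contributions from distant positions decay, which is why SPP's rounds can be truncated without degrading accuracy.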

  3. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
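    One chaos-tuning strategy of the kind the paper surveys can be sketched as follows: the standard firefly update with the attractiveness coefficient modulated by a logistic chaotic map. All parameter values (beta0, gamma, alpha, the map seed) are illustrative choices for this sketch, not the paper's settings.

    ```python
    import math
    import random

    def chaotic_firefly(objective, dim=2, n=12, iters=150, seed=1):
        """Minimize objective with a firefly algorithm whose attractiveness
        coefficient is driven by a logistic chaotic map."""
        rng = random.Random(seed)
        pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        beta0, gamma, alpha = 1.0, 0.01, 0.2
        c = 0.7                                  # chaotic map state in (0, 1)
        for _ in range(iters):
            c = 4.0 * c * (1.0 - c)              # logistic map replaces a fixed beta0
            fit = [objective(p) for p in pop]
            for i in range(n):
                for j in range(n):
                    if fit[j] < fit[i]:          # move firefly i toward brighter j
                        r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                        beta = beta0 * c * math.exp(-gamma * r2)
                        pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                                  for a, b in zip(pop[i], pop[j])]
                        fit[i] = objective(pop[i])
            alpha *= 0.97                        # cool the random walk
        return min(pop, key=objective)

    best = chaotic_firefly(lambda p: sum(x * x for x in p))  # sphere benchmark
    ```

    Replacing the fixed attractiveness with the chaotic sequence is what gives the variant its extra search mobility: the map keeps perturbing the attraction strength instead of letting it settle.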

  4. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  5. Good Clinical Practice Training

    PubMed Central

    Arango, Jaime; Chuck, Tina; Ellenberg, Susan S.; Foltz, Bridget; Gorman, Colleen; Hinrichs, Heidi; McHale, Susan; Merchant, Kunal; Shapley, Stephanie; Wild, Gretchen

    2016-01-01

    Good Clinical Practice (GCP) is an international standard for the design, conduct, performance, monitoring, auditing, recording, analyses, and reporting of clinical trials. The goal of GCP is to ensure the protection of the rights, integrity, and confidentiality of clinical trial participants and to ensure the credibility and accuracy of data and reported results. In the United States, trial sponsors generally require investigators to complete GCP training prior to participating in each clinical trial to foster GCP and as a method to meet regulatory expectations (ie, sponsor’s responsibility to select qualified investigators per 21 CFR 312.50 and 312.53(a) for drugs and biologics and 21 CFR 812.40 and 812.43(a) for medical devices). This training requirement is often extended to investigative site staff, as deemed relevant by the sponsor, institution, or investigator. Those who participate in multiple clinical trials are often required by sponsors to complete repeated GCP training, which is unnecessarily burdensome. The Clinical Trials Transformation Initiative convened a multidisciplinary project team involving partners from academia, industry, other researchers and research staff, and government to develop recommendations for streamlining current GCP training practices. Recommendations drafted by the project team, including the minimum key training elements, frequency, format, and evidence of training completion, were presented to a broad group of experts to foster discussion of the current issues and to seek consensus on proposed solutions. PMID:27390628

  6. Eggs: good or bad?

    PubMed

    Griffin, Bruce A

    2016-08-01

    Eggs have one of the lowest energy to nutrient density ratios of any food, and contain a quality of protein that is superior to beef steak and similar to dairy. From a nutritional perspective, this must qualify eggs as 'good'. The greater burden of proof has been to establish that eggs are not 'bad', by increasing awareness of the difference between dietary and blood cholesterol, and accumulating sufficient evidence to exonerate eggs from their associations with CVD and diabetes. After 60 years of research, a general consensus has now been reached that dietary cholesterol, chiefly from eggs, exerts a relatively small effect on serum LDL-cholesterol and CVD risk, in comparison with other diet and lifestyle factors. While dietary guidelines have been revised worldwide to reflect this view, associations between egg intake and the incidence of diabetes, and increased CVD risk in diabetes, prevail. These associations may be explained, in part, by residual confounding produced by other dietary components. The strength of evidence that links egg intake to increased CVD risk in diabetes is also complicated by variation in the response of serum LDL-cholesterol to eggs and dietary cholesterol in types 1 and 2 diabetes. On balance, the answer to the question as to whether eggs are 'bad', is probably 'no', but we do need to gain a better understanding of the effects of dietary cholesterol and its association with CVD risk in diabetes. PMID:27126575

  7. An improved filter-u least mean square vibration control algorithm for aircraft framework.

    PubMed

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a hot research topic, and the filter-u least mean square (FULMS) algorithm is one of the key methods. But for practical reasons and technical limitations, vibration reference signal extraction has always been a difficult problem for the FULMS algorithm. To solve the vibration reference signal extraction problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed based on the controller structure and the data in the algorithm process, using a vibration response residual signal extracted directly from the vibration structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical and has good vibration suppression performance. PMID:25273765
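    The adaptive-filter core that LMS-family controllers such as FULMS share can be illustrated with a hedged sketch: plain LMS canceling a synthetic tone, not the paper's controller (which additionally constructs its reference from the residual). The weight update driven by the residual error looks like:

    ```python
    import math

    def lms_cancel(disturbance, taps=8, mu=0.01):
        """Adaptive FIR filter: predict the disturbance and return the residuals."""
        w = [0.0] * taps
        buf = [0.0] * taps
        errors = []
        for d in disturbance:
            buf = [d] + buf[:-1]                              # shift in the reference
            y = sum(wi * xi for wi, xi in zip(w, buf))        # filter output
            e = d - y                                         # residual vibration
            w = [wi + mu * e * xi for wi, xi in zip(w, buf)]  # LMS weight update
            errors.append(e)
        return errors

    # A pure-tone disturbance: the residual should shrink as the filter adapts.
    err = lms_cancel([math.sin(0.2 * k) for k in range(4000)])
    ```

    The residual `e` plays a double role in FULMS-style schemes: it drives the weight update, and (per the paper's idea) it can also be fed back to construct the reference signal when no separate reference sensor is available.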

  8. Making good connections.

    PubMed

    1993-01-01

    Suggestions are made on how best to integrate sexually transmitted disease (STD) screening and education within family planning (FP) programs in the UK. FP programs are in a good position to advise about HIV infections and STDs because most clients are in a vulnerable age group (women aged 15-50 years) and because health personnel are experienced in discussing sexual issues. When FP clinics do not provide STD services, the options are to collaborate on joint referral and training efforts with STD clinics and to train staff to recognize and talk about STDs. Information about STDs can be clearly displayed in the clinics. Health personnel can talk about STD transmission to clients, explain the role of condoms in infection prevention, and demonstrate how to use condoms properly. Examples are given of integrated HIV and STD and FP programs in the US, the Gambia, Zambia, Ghana, and Mexico. In the US, Planned Parenthood of New York City trains staff in prevention and counseling skills and supervises staff until a level of comfort is reached. HIV and AIDS education and risk assessment are part of the initial and annual follow-up visits. The Gambia FP Association helps staff learn to counsel clients about the problems with sexual satisfaction between men and women and with communication between partners, impotence, painful intercourse from female circumcision, STDs and AIDS, infertility, and contraceptive side effects. In Zambia, a women's organization helps women prepare educational skits on condom use for males and helps women learn to talk with spouses about condom use without suffering rejection or charges of infidelity. The Ghana Planned Parenthood Association has a Daddy's Club where men learn about HIV and safe sex with condoms and meet for private counseling. Mexfam in Mexico educates female farm laborers on sex education, FP, reproductive health and pregnancy, child health, water and sanitation, and energy-saving methods. PMID:12287338

  9. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because it requires no image segmentation and little computation, image correlation matching is a basic method of target tracking. This paper mainly studies a gray-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (sum of absolute differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. These fitting algorithms cannot be used in real-time systems because they are too complex. Since target tracking often requires high real-time performance, we put forward a fitting algorithm named the paraboloidal fitting algorithm based on the consideration above; this algorithm is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between the two algorithms is small, less than 0.01 pixel. To research the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken as the camera rotated through different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The result shows that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was researched. Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
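    A hedged one-dimensional sketch of the approach described above (the paper's paraboloidal fit is the two-axis analogue of the parabola used here; all names are illustrative):

    ```python
    def sad(a, b):
        """Sum of absolute differences between two equal-length sequences."""
        return sum(abs(x - y) for x, y in zip(a, b))

    def match_1d(signal, template):
        """Best template shift, refined to sub-pixel precision by fitting a
        parabola through the SAD minimum and its two neighbors."""
        m = len(template)
        scores = [sad(signal[i:i + m], template)
                  for i in range(len(signal) - m + 1)]
        i = min(range(len(scores)), key=scores.__getitem__)  # best integer shift
        if 0 < i < len(scores) - 1:
            l, c, r = scores[i - 1], scores[i], scores[i + 1]
            denom = l - 2 * c + r
            if denom != 0:
                # vertex of the parabola through the three neighboring scores
                return i + 0.5 * (l - r) / denom
        return float(i)

    shift = match_1d([0, 1, 4, 9, 4, 1, 0, 0], [1, 4, 9, 4, 1])
    ```

    The parabola vertex costs only a handful of arithmetic operations per match, which is why this style of fit suits real-time systems better than full surface fitting.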

  10. Shuttle Entry Air Data System (SEADS) - Optimization of preflight algorithms based on flight results

    NASA Technical Reports Server (NTRS)

    Wolf, H.; Henry, M. W.; Siemers, Paul M., III

    1988-01-01

    The SEADS pressure model algorithm results were tested against other sources of air data, in particular the Shuttle Best Estimated Trajectory (BET). The algorithm basis was also tested through a comparison of the flight-measured pressure distribution with the wind tunnel database. It is concluded that the successful flight of SEADS and the subsequent analysis of the data show good agreement between BET and SEADS air data.

  11. Good Enough to Eat

    ERIC Educational Resources Information Center

    Swinbank, Elizabeth

    2004-01-01

    This article shows how the physical testing of food ingredients and products can be used as a starting point for exploring aspects of physics. The three main areas addressed are: mechanical properties of solid materials; liquid flow; optical techniques for measuring sugar concentration. The activities described were originally developed for…

  12. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks

    PubMed Central

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-01-01

    Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high-SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, posing limits to its application in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and less execution time. PMID:26230697
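    The analytic step-size idea can be illustrated with a hedged standalone sketch: along a fixed search direction the leakage is a quartic in the step t, so the best step is a root of the cubic derivative. Here the root is located by simple bisection rather than whatever closed form the paper uses, and the example quartic is invented.

    ```python
    def quartic(coeffs, t):
        """Evaluate a*t^4 + b*t^3 + c*t^2 + d*t + e by Horner's rule."""
        a, b, c, d, e = coeffs
        return (((a * t + b) * t + c) * t + d) * t + e

    def dquartic(coeffs, t):
        """Derivative 4a*t^3 + 3b*t^2 + 2c*t + d."""
        a, b, c, d, _ = coeffs
        return ((4 * a * t + 3 * b) * t + 2 * c) * t + d

    def best_step(coeffs, lo=-10.0, hi=10.0, iters=80):
        """Bisect for the derivative's sign change (assumes one minimum in [lo, hi])."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if dquartic(coeffs, mid) < 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # f(t) = (t - 2)^4 + 1 expands to t^4 - 8t^3 + 24t^2 - 32t + 17; minimum at t = 2.
    t_star = best_step((1, -8, 24, -32, 17))
    ```

    Because the objective along the line is exactly quartic, one such step-size computation replaces many trial evaluations, which is where the DQO speedup over alternating minimization comes from.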

  13. Acoustic design of rotor blades using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Han, A. Y.; Crossley, W. A.

    1995-01-01

    A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading and blade-vortex interaction noise estimates. The paper presents solutions for several variations in the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced, and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced, at some expense to the power requirements.

  14. Optimization of multilayer cylindrical cloaks using genetic algorithms and NEWUOA

    NASA Astrophysics Data System (ADS)

    Sakr, Ahmed A.; Abdelmageed, Alaa K.

    2016-06-01

    The problem of minimizing the scattering from a multilayer cylindrical cloak is studied. Both TM and TE polarizations are considered. A two-stage optimization procedure using genetic algorithms and NEWUOA (new unconstrained optimization algorithm) is adopted for realizing the cloak using homogeneous isotropic layers. The layers are arranged such that they follow a repeated pattern of alternating DPS and DNG materials. The results show that a good level of invisibility can be realized using a reasonable number of layers. Maintaining the cloak performance over a finite range of frequencies without sacrificing the level of invisibility is achieved.

  15. Noise robustness and parallel computation of the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Dai, Xiangjun; He, Xiaoyuan

    2015-08-01

    The inverse compositional Gauss-Newton (IC-GN) algorithm is one of the most popular sub-pixel registration algorithms in digital image correlation (DIC). The IC-GN algorithm, compared with the traditional forward additive Newton-Raphson (FA-NR) algorithm, can achieve the same accuracy in less time. However, there are no clear results regarding the noise robustness of the IC-GN algorithm, and its computational efficiency is still in need of further improvement. In this paper, a theoretical model of the IC-GN algorithm was derived based on the sum of squared differences correlation criterion and linear interpolation. The model indicates that the IC-GN algorithm has better noise robustness than the FA-NR algorithm, and shows no noise-induced bias if the gray gradient operator is chosen properly. Both numerical simulations and experiments show good agreement with the theoretical predictions. Furthermore, a seed point-based parallel method is proposed to improve the calculation speed. Compared with the recently proposed path-independent method, our model is feasible and practical, and it can maximize the computing speed using an improved initial guess. Moreover, we compared the computational efficiency of our method with that of the reliability-guided method using a four-point bending experiment, and the results show that the computational efficiency is greatly improved. This proposed parallel IC-GN algorithm has good noise robustness and is expected to be a practical option for real-time DIC.

  16. And the good news...?

    NASA Astrophysics Data System (ADS)

    1999-11-01

    Along with the increase in the number of young people applying to enter higher education announced back in July, the UK Department for Education and Employment noted that over a thousand more graduates had applied for postgraduate teacher training when compared with the same time in 1998. It appeared that the `Golden hello' programme for new mathematics and science teachers had succeeded in its aim of encouraging applicants in those subjects: an increase of 37% had been witnessed for maths teaching, 33% for physics and 27% for chemistry. Primary teacher training was also well on target with over five applicants seeking each available place. Statistics for UK schools released in August by the DfEE show that 62% of primary schools and 93% of secondary schools are now linked to the Internet (the corresponding figures were 17% and 83% in 1998). On average there is now one computer for every 13 pupils at primary school and one for every eight students in secondary school. The figures show continuing progress towards the Government's target of ensuring that all schools, colleges, universities, libraries and as many community centres as possible should be online (with access to the National Grid for Learning) by 2002.

  17. Good Grubbin': Impact of a TV Cooking Show for College Students Living off Campus

    ERIC Educational Resources Information Center

    Clifford, Dawn; Anderson, Jennifer; Auld, Garry; Champ, Joseph

    2009-01-01

    Objective: To determine if a series of 4 15-minute, theory-driven (Social Cognitive Theory) cooking programs aimed at college students living off campus improved cooking self-efficacy, knowledge, attitudes, and behaviors regarding fruit and vegetable intake. Design: A randomized controlled trial with pre-, post- and follow-up tests. Setting:…

  18. Good News for New Orleans: Early Evidence Shows Reforms Lifting Student Achievement

    ERIC Educational Resources Information Center

    Harris, Douglas N.

    2015-01-01

    What happened to the New Orleans public schools following the tragic levee breeches after Hurricane Katrina is truly unprecedented. Within the span of one year, all public-school employees were fired, the teacher contract expired and was not replaced, and most attendance zones were eliminated. The state took control of almost all public schools…

  19. EDGA: A Population Evolution Direction-Guided Genetic Algorithm for Protein-Ligand Docking.

    PubMed

    Guan, Boxin; Zhang, Changsheng; Ning, Jiaxu

    2016-07-01

    Protein-ligand docking can be formulated as a search algorithm associated with an accurate scoring function. However, most current search algorithms cannot show good performance in docking problems, especially for highly flexible docking. To overcome this drawback, this article presents a novel and robust optimization algorithm (EDGA) based on the Lamarckian genetic algorithm (LGA) for solving flexible protein-ligand docking problems. This method applies a population evolution direction-guided model of genetics, in which the search direction evolves toward the optimum solution. The method is more efficient at finding the lowest energy of protein-ligand docking. We consider four search methods (a traditional genetic algorithm, LGA, SODOCK, and EDGA) and compare their performance on six protein-ligand docking problems. The results show that EDGA is the most stable, reliable, and successful. PMID:26895461

  20. Optimization of composite structures by estimation of distribution algorithms

    NASA Astrophysics Data System (ADS)

    Grosset, Laurent

    The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distributions algorithms (EDA) that build a statistical model of promising regions of the design space based on sets of good points, and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits a faster convergence to the optimum and opens the way for a unified framework for stochastic and directional optimization.
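    The EDA template that DDOA builds on can be sketched in its simplest univariate form (UMDA) on a toy OneMax problem. This is a hedged illustration of the "estimate marginals from good points, then resample" loop only, not DDOA itself, which adds a second distribution over the auxiliary lamination-parameter variables.

    ```python
    import random

    def umda_onemax(n_bits=20, pop_size=50, elite=10, iters=40, seed=3):
        """Univariate marginal distribution algorithm maximizing the number of ones."""
        rng = random.Random(seed)
        p = [0.5] * n_bits                     # marginal P(bit = 1) per position
        for _ in range(iters):
            popn = [[1 if rng.random() < pi else 0 for pi in p]
                    for _ in range(pop_size)]
            popn.sort(key=sum, reverse=True)   # fitness = number of ones (OneMax)
            best = popn[:elite]
            # Re-estimate the marginals from the elite set, clamped so a
            # little exploration always survives.
            p = [min(0.95, max(0.05, sum(ind[j] for ind in best) / elite))
                 for j in range(n_bits)]
        return max(popn, key=sum)

    solution = umda_onemax()
    ```

    The statistical model here is just one independent Bernoulli per variable; DDOA's contribution is to couple such marginals with a physics-motivated distribution over auxiliary variables, producing richer dependencies at low cost.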

  1. A Few Good Barchans

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows several small, dark sand dunes and a small crater (about 1 kilometer in diameter) within a much larger crater (not visible in this image). The floor of the larger crater is rough and has been eroded with time. The floor of the smaller crater contains windblown ripples. The steep faces of the dunes point to the east (right), indicating that the dominant winds blew from the west (left). This scene is located near 38.5 S, 347.1 W, and covers an area approximately 3 km (1.9 mi) wide. Sunlight illuminates the landscape from the upper left. This southern autumn image was acquired on 1 July 2006.

  2. Benchmark graphs for testing community detection algorithms

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Fortunato, Santo; Radicchi, Filippo

    2008-10-01

    Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed but the crucial issue of testing, i.e., the question of how good an algorithm is, with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure, that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs, that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection, modularity optimization, and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.

  3. A graph spectrum based geometric biclustering algorithm.

    PubMed

    Wang, Doris Z; Yan, Hong

    2013-01-21

    Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285

  4. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of the phase diagram, Lyapunov exponent spectrum and complexity. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed. In this algorithm, the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext and chosen-plaintext attacks.
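    As a hedged, much-simplified illustration of why chaotic maps suit fast image ciphers (this uses a 1-D logistic map as a keystream, not the paper's 2D-SIMM map or its CST permutation; all names are invented for the sketch):

    ```python
    def logistic_keystream(x0, n, r=3.99):
        """Generate n keystream bytes by iterating the logistic map from x0."""
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1 - x)              # chaotic iteration
            out.append(int(x * 256) & 0xFF)  # quantize the state to one byte
        return out

    def xor_cipher(pixels, x0=0.654321):
        """XOR pixel values with the chaotic keystream; the key is x0."""
        ks = logistic_keystream(x0, len(pixels))
        return [p ^ k for p, k in zip(pixels, ks)]

    plain = [10, 20, 30, 40, 250]
    cipher = xor_cipher(plain)       # encrypt
    restored = xor_cipher(cipher)    # XOR with the same keystream decrypts
    ```

    Sensitivity to the initial state x0 is what stands in for a large key space: a tiny change in x0 yields a completely different keystream after a few iterations.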

  5. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  6. Straight Talk For Good Health

    MedlinePlus

    ... Home Current Issue Past Issues Straight Talk For Good Health Past Issues / Summer 2009 Table of Contents ... one of the most important aspects of getting good care. Make a List To Find Out More ...

  7. Good Health For the Holidays!

    MedlinePlus

    Skip Navigation Bar Home Current Issue Past Issues Good Health For the Holidays! Past Issues / Fall 2007 ... medical and health question. Healthy families know that good medical information should be a part of everyone's ...

  8. Digital watermarking algorithm based on HVS in wavelet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Qiuhong; Xia, Ping; Liu, Xiaomei

    2013-10-01

    As a new technique used to protect the copyright of digital productions, the digital watermark technique has drawn extensive attention. A digital watermarking algorithm based on the discrete wavelet transform (DWT) was presented according to human visual properties in the paper, and some attack analyses were given. Experimental results show that the watermarking scheme proposed in this paper is invisible and robust to cropping, and also has good robustness to cutting, compression, filtering, and noise addition.

  9. Good pitch memory is widespread.

    PubMed

    Schellenberg, E Glenn; Trehub, Sandra E

    2003-05-01

    Here we show that good pitch memory is widespread among adults with no musical training. We tested unselected college students on their memory for the pitch level of instrumental soundtracks from familiar television programs. Participants heard 5-s excerpts either at the original pitch level or shifted upward or downward by 1 or 2 semitones. They successfully identified the original pitch levels. Other participants who heard comparable excerpts from unfamiliar recordings could not do so. These findings reveal that ordinary listeners retain fine-grained information about pitch level over extended periods. Adults' reportedly poor memory for pitch is likely to be a by-product of their inability to name isolated pitches. PMID:12741751

  10. ETD: an extended time delay algorithm for ventricular fibrillation detection.

    PubMed

    Kim, Jungyoon; Chu, Chao-Hsien

    2014-01-01

    Ventricular fibrillation (VF) is the most serious type of cardiac arrhythmia, requiring quick detection and first aid to improve patients' survival rates. To be most effective in using wearable devices for VF detection, it is vital that the detection algorithms be accurate, robust, reliable and computationally efficient. Previous studies and our experiments both indicate that the time-delay (TD) algorithm has high reliability for separating sinus rhythm (SR) from VF and is resistant to variable factors, such as window size and filtering method. However, it fails to detect some VF cases. In this paper, we propose an extended time-delay (ETD) algorithm for VF detection and conduct experiments comparing the performance of ETD against five good VF detection algorithms, including TD, using the popular Creighton University (CU) database. Our study shows that (1) TD and ETD outperform the other four algorithms considered, and (2) with the same sensitivity setting, ETD improves upon TD in three other quality measures by up to 7.64%, and in terms of aggregate accuracy the ETD algorithm shows an improvement of 2.6% in the area under the curve (AUC) compared to TD. PMID:25571480

  11. Substantial Goodness and Nascent Human Life.

    PubMed

    Floyd, Shawn

    2015-09-01

    Many believe that moral value is--at least to some extent--dependent on the developmental states necessary for supporting rational activity. My paper rejects this view, but does not aim simply to register objections to it. Rather, my essay aims to answer the following question: if a human being's developmental state and occurrent capacities do not bequeath moral standing, what does? The question is intended to prompt careful consideration of what makes human beings objects of moral value, dignity, or (to employ my preferred term) goodness. Not only do I think we can answer this question, I think we can show that nascent human life possesses goodness of precisely this sort. I appeal to Aquinas's metaethics to establish the conclusion that the goodness of a human being--even if that being is an embryo or fetus--resides at the substratum of her existence. If she possesses goodness, it is because human existence is good. PMID:25633227

  12. A variational multi-symplectic particle-in-cell algorithm with smoothing functions for the Vlasov-Maxwell system

    SciTech Connect

    Xiao, Jianyuan; Liu, Jian; Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 ; Yu, Zhi

    2013-10-15

    Smoothing functions are commonly used to reduce numerical noise arising from coarse sampling of particles in particle-in-cell (PIC) plasma simulations. When applying smoothing functions to symplectic algorithms, the conservation of symplectic structure should be guaranteed to preserve good conservation properties. In this paper, we show how to construct a variational multi-symplectic PIC algorithm with smoothing functions for the Vlasov-Maxwell system. The conservation of the multi-symplectic structure and the reduction of numerical noise make this algorithm specifically suitable for simulating long-term dynamics of plasmas, such as those in the steady-state operation or long-pulse discharge of a super-conducting tokamak. The algorithm has been implemented in a 6D large scale PIC code. Numerical examples are given to demonstrate the good conservation properties of the multi-symplectic algorithm and the reduction of the noise due to the application of smoothing functions.

  13. Preconditioned quantum linear system algorithm.

    PubMed

    Clader, B D; Jacobs, B C; Sprouse, C R

    2013-06-21

    We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722

  14. A novel method to design S-box based on chaotic map and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Wong, Kwok-Wo; Li, Changbing; Li, Yang

    2012-01-01

    The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing S-box is transformed to a Traveling Salesman Problem and a method for designing S-box based on chaos and genetic algorithm is proposed. Since the proposed method makes full use of the traits of chaotic map and evolution process, stronger S-box is obtained. The results of performance test show that the presented S-box has good cryptographic properties, which justify that the proposed algorithm is effective in generating strong S-boxes.
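    As a rough illustration of the chaotic-map ingredient mentioned above (the paper's TSP formulation and genetic-algorithm refinement are not reproduced), the sketch below generates a bijective 8-bit S-box by ranking a logistic-map orbit. The map parameters, burn-in length, and ranking step are assumptions for illustration only.

    ```python
    # Illustrative sketch, not the paper's method: iterate the logistic map and
    # rank its orbit to obtain a permutation of 0..255, i.e. a bijective S-box.

    def logistic_sbox(x0=0.4213, r=3.99, size=256, burn_in=1000):
        """Generate a permutation S-box by argsort-ranking a logistic-map orbit."""
        x = x0
        for _ in range(burn_in):              # discard the transient
            x = r * x * (1.0 - x)
        orbit = []
        for _ in range(size):
            x = r * x * (1.0 - x)
            orbit.append(x)
        # The position of each orbit value in sorted order yields a permutation.
        order = sorted(range(size), key=lambda i: orbit[i])
        sbox = [0] * size
        for rank, idx in enumerate(order):
            sbox[idx] = rank
        return sbox

    sbox = logistic_sbox()
    ```

    A GA-based method like the one proposed would then mutate and recombine such candidate boxes, scoring them on cryptographic criteria such as nonlinearity.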

  15. A baseline algorithm for face detection and tracking in video

    NASA Astrophysics Data System (ADS)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics and baseline algorithms have considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a feature set that is Haar-like and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm reflecting the state-of-the-art which makes it an appropriate choice as the baseline algorithm for the problem.
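    The tracking step described above, greedy association of detections in consecutive frames by the distance between bounding-box centroids, can be sketched as follows. The (x, y, w, h) box format and the max_dist gate are assumptions for illustration, not details from the paper.

    ```python
    # Minimal sketch of greedy frame-to-frame association by centroid distance.
    import math

    def centroid(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    def greedy_match(prev_boxes, curr_boxes, max_dist=50.0):
        """Return (prev_index, curr_index) pairs, closest pairs matched first."""
        pairs = []
        for i, p in enumerate(prev_boxes):
            for j, c in enumerate(curr_boxes):
                pairs.append((math.dist(centroid(p), centroid(c)), i, j))
        pairs.sort()  # greedy: consider candidate pairs in order of distance
        matches, used_prev, used_curr = [], set(), set()
        for d, i, j in pairs:
            if d <= max_dist and i not in used_prev and j not in used_curr:
                matches.append((i, j))
                used_prev.add(i)
                used_curr.add(j)
        return matches

    prev = [(0, 0, 10, 10), (100, 0, 10, 10)]
    curr = [(102, 1, 10, 10), (1, 1, 10, 10)]
    matches = greedy_match(prev, curr)
    ```

    Unmatched detections would start new tracks; unmatched previous boxes end theirs.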

  16. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, even the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than one without splitting.

  17. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm of color images based on the quaternion Fourier Transform (QFFT) and improved quantization index algorithm (QIM) is proposed in this paper. The original image is transformed by QFFT, the watermark image is processed by compression and quantization coding, and then the processed watermark image is embedded into the components of the transformed original image. It achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness for the attacks of Gaussian noises, salt and pepper noises, JPEG compression, cropping, filtering and image enhancement than the traditional QIM algorithm.
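    As a hedged sketch of the QIM primitive named above (omitting the quaternion Fourier transform stage and the paper's distortion compensation), a bit can be embedded by quantizing a transform coefficient onto one of two interleaved lattices of step delta; extraction decides which lattice the received value is closer to.

    ```python
    # Simplified quantization index modulation (QIM): illustrative only.

    def qim_embed(x, bit, delta=8.0):
        """Quantize x onto the lattice for bit 0 (multiples of delta) or
        bit 1 (multiples of delta shifted by delta/2)."""
        offset = bit * delta / 2.0
        return delta * round((x - offset) / delta) + offset

    def qim_extract(y, delta=8.0):
        """Return the bit whose lattice lies closest to y."""
        d0 = abs(y - delta * round(y / delta))
        y1 = delta * round((y - delta / 2.0) / delta) + delta / 2.0
        d1 = abs(y - y1)
        return 0 if d0 <= d1 else 1
    ```

    The decoder tolerates any perturbation smaller than delta/4, which is the basic source of robustness against noise and compression.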

  18. Distributed Range-Free Localization Algorithm Based on Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Tinh, Pham Doan; Kawai, Makoto

    In Mobile Ad-Hoc Networks (MANETs), determining the physical location of nodes (localization) is very important for many network services and protocols. This paper proposes a new Distributed Range-free Localization Algorithm Based on Self-Organizing Maps (SOM) to deal with this issue. Our proposed algorithm utilizes only connectivity information to determine the location of nodes. By utilizing the intersection areas between radio coverage of neighboring nodes, the algorithm has maximized the correlation between neighboring nodes in distributed implementation of SOM and reduced the SOM learning time. An implementation of the algorithm on Network Simulator 2 (NS-2) was done with the mobility consideration to verify the performance of the proposed algorithm. From our intensive simulations, the results show that the proposed scheme achieves very good accuracy in most cases.

  19. Algorithm for identifying and separating beats from arterial pulse records

    PubMed Central

    Treo, Ernesto F; Herrera, Myriam C; Valentinuzzi, Max E

    2005-01-01

    Background This project was designed as an epidemiological aid-selecting tool for a small country health center, with the general objective of screening out possible coronary patients. Peripheral artery function can be non-invasively evaluated by impedance plethysmography. Changes in these vessels appear to be good predictors of future coronary behavior. Impedance plethysmography detects volume variations after simple occlusive maneuvers that may show indicative modifications in arterial/venous responses. Averaging of a series of pulses is needed and this, in turn, requires proper determination of the beginning and end of each beat. Thus, the objective here is to describe an algorithm to identify and separate out beats from a plethysmographic record. A secondary objective was to compare the output given by human operators against the algorithm. Methods The identification algorithm detected each beat's onset and end on the basis of the maximum rising phase, the choice of possible ventricular systolic starting points considering cardiac frequency, and the adjustment of some tolerance values to optimize the behavior. Out of 800 patients in the study, 40 occlusive records (supradiastolic-subsystolic) were randomly selected without any preliminary diagnosis. The radial impedance plethysmographic pulse and a standard ECG were recorded, and the data were digitized and stored. Cardiac frequency was estimated with the power density function and, thereafter, the signal was differentiated twice, followed by binarization of the first derivative and rectification of the second derivative. The product of the two latter results led to a weighing signal from which the cycles' onsets and ends were established. Weighing and frequency filters are needed, along with the pre-establishment of their respective tolerances. Out of the 40 records, 30-second strands were randomly chosen to be analyzed by the algorithm and by two operators. Sensitivity and accuracy were calculated by means of the true/false and
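    The derivative-product construction described in the Methods can be sketched roughly as follows; the omission of the weighing and frequency filters and their tolerances is a simplification, not the paper's tuned procedure.

    ```python
    # Sketch: binarized first derivative (rising phase) multiplied by the
    # half-wave-rectified second derivative (positive curvature) highlights
    # the maximum rising phase at the onset of each beat.

    def weighing_signal(signal):
        """Product of binarized first derivative and rectified second derivative."""
        d1 = [b - a for a, b in zip(signal, signal[1:])]       # first derivative
        d2 = [b - a for a, b in zip(d1, d1[1:])]               # second derivative
        binarized = [1 if v > 0 else 0 for v in d1[:len(d2)]]  # rising-phase mask
        rectified = [max(v, 0.0) for v in d2]                  # positive curvature only
        return [b * r for b, r in zip(binarized, rectified)]

    # On a synthetic single pulse, the weighing signal is nonzero only at the
    # beat onset, where the waveform is both rising and curving upward.
    ws = weighing_signal([0, 1, 4, 9, 10, 9, 4, 1, 0])
    ```

    Beat onsets would then be taken where this weighing signal peaks, subject to the cardiac-frequency constraints the paper describes.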

  20. Predicting the performance of a spatial gamut mapping algorithm

    NASA Astrophysics Data System (ADS)

    Bakke, Arne M.; Farup, Ivar; Hardeberg, Jon Y.

    2009-01-01

    Gamut mapping algorithms are currently being developed to take advantage of the spatial information in an image to improve the utilization of the destination gamut. These algorithms try to preserve the spatial information between neighboring pixels in the image, such as edges and gradients, without sacrificing global contrast. Experiments have shown that such algorithms can result in significantly improved reproduction of some images compared with non-spatial methods. However, due to the spatial processing of images, they introduce unwanted artifacts when used on certain types of images. In this paper we perform basic image analysis to predict whether a spatial algorithm is likely to perform better or worse than a good, non-spatial algorithm. Our approach starts by detecting the relative amount of areas in the image that are made up of uniformly colored pixels, as well as the amount of areas that contain details in out-of-gamut areas. A weighted difference is computed from these numbers, and we show that the result has a high correlation with the observed performance of the spatial algorithm in a previously conducted psychophysical experiment.

  1. A new map-making algorithm for CMB polarization experiments

    NASA Astrophysics Data System (ADS)

    Wallis, Christopher G. R.; Bonaldi, A.; Brown, Michael L.; Battye, Richard A.

    2015-10-01

    With the temperature power spectrum of the cosmic microwave background (CMB) at least four orders of magnitude larger than the B-mode polarization power spectrum, any instrumental imperfections that couple temperature to polarization must be carefully controlled and/or removed. Here we present two new map-making algorithms that can create polarization maps that are clean of temperature-to-polarization leakage systematics due to differential gain and pointing between a detector pair. Where a half-wave plate is used, we show that the spin-2 systematic due to differential ellipticity can also be removed using our algorithms. The algorithms require no prior knowledge of the imperfections or temperature sky to remove the temperature leakage. Instead, they calculate the systematic and polarization maps in one step directly from the time-ordered data (TOD). The first algorithm is designed to work with scan strategies that have a good range of crossing angles for each map pixel and the second for scan strategies that have a limited range of crossing angles. The first algorithm can also be used to identify if systematic errors that have a particular spin are present in a TOD. We demonstrate the use of both algorithms and the ability to identify systematics with simulations of TOD with realistic scan strategies and instrumental noise.

  2. Comparative analysis of PSO algorithms for PID controller tuning

    NASA Astrophysics Data System (ADS)

    Štimac, Goranka; Braut, Sanjin; Žigulić, Roberto

    2014-09-01

    The active magnetic bearing (AMB) suspends the rotating shaft and maintains it in a levitated position by applying controlled electromagnetic forces on the rotor in the radial and axial directions. Although the development of various control methods is rapid, the PID control strategy is still the most widely used in many applications, including AMBs. In order to tune the PID controller, a particle swarm optimization (PSO) method is applied. A comparative analysis of two PSO algorithms, namely (1) PSO with linearly decreasing inertia weight (LDW-PSO) and (2) PSO with a constriction factor approach (CFA-PSO), is carried out, with each independently tested for different PID structures. The computer simulations are carried out with the aim of minimizing an objective function defined as the integral of time multiplied by the absolute value of error (ITAE). In order to validate the performance of the analyzed PSO algorithms, one-axis and two-axis radial rotor/active magnetic bearing systems are examined. The results show that the PSO algorithms are effective and easily implemented methods, providing stable convergence and good computational efficiency for different PID structures for the rotor/AMB systems. Moreover, the PSO algorithms prove easy to use for controller tuning in the case of both SISO and MIMO systems, which consider the system delay and the interference between the horizontal and vertical rotor axes.
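    A minimal sketch of the LDW-PSO variant compared above, minimizing a toy sphere function in place of the ITAE cost of the rotor/AMB model (which is not reproduced here); all parameter values are illustrative defaults, not the paper's settings.

    ```python
    # LDW-PSO sketch: inertia weight decreases linearly from w_max to w_min.
    import random

    def ldw_pso(f, dim, bounds, n_particles=20, iters=100,
                w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=1):
        rng = random.Random(seed)
        lo, hi = bounds
        X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        V = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in X]                     # personal bests
        pbest_f = [f(x) for x in X]
        g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
        for t in range(iters):
            w = w_max - (w_max - w_min) * t / (iters - 1)   # linearly decreasing
            for i in range(n_particles):
                for d in range(dim):
                    V[i][d] = (w * V[i][d]
                               + c1 * rng.random() * (pbest[i][d] - X[i][d])
                               + c2 * rng.random() * (g[d] - X[i][d]))
                    X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)  # clamp to bounds
                fx = f(X[i])
                if fx < pbest_f[i]:
                    pbest[i], pbest_f[i] = X[i][:], fx
                    if fx < f(g):
                        g = X[i][:]
        return g, f(g)

    best, val = ldw_pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
    ```

    For PID tuning, the position vector would hold the (Kp, Ki, Kd) gains and f would simulate the closed loop and return the ITAE.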

  3. A Hybrid Ant Colony Algorithm for Loading Pattern Optimization

    NASA Astrophysics Data System (ADS)

    Hoareau, F.

    2014-06-01

    Electricité de France (EDF) operates 58 nuclear power plants (NPPs) of the pressurized water reactor (PWR) type. The loading pattern (LP) optimization of these NPPs is currently done by EDF expert engineers. Within this framework, EDF R&D has developed automatic optimization tools that assist the experts, who can resort, for instance, to loading pattern optimization software based on an ant colony algorithm. This paper presents an analysis of the search space of a few realistic loading pattern optimization problems. This analysis leads us to introduce a hybrid algorithm based on ant colony optimization and a local search method. We then show that this new algorithm is able to generate loading patterns of good quality.

  4. Scheduling Jobs with Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ferrolho, António; Crisóstomo, Manuel

    Most scheduling problems are NP-hard: the time required to solve a problem optimally increases exponentially with its size. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to find relatively good solutions in polynomial time. Recently, genetic algorithms (GAs) have been used successfully to solve scheduling problems, as a growing number of papers shows. GAs are known as one of the most efficient approaches for solving scheduling problems, but when a GA is applied, various crossover and mutation operators are applicable. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators through simulations of job scheduling problems.

  5. Dreaming, Stealing, Dancing, Showing Off.

    ERIC Educational Resources Information Center

    Lavender, Peter; Taylor, Chris

    2002-01-01

    Lessons learned from British projects to delivery literacy, numeracy, and English as a second language through community agencies included the following: (1) innovation and measured risks are required to attract hard-to-reach adults; (2) good practice needs to be shared; and (3) projects worked best when government funds were managed by community…

  6. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm, but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm, but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.

  7. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  8. Retrieval of Sea Surface Temperature Over Poteran Island Water of Indonesia with Landsat 8 Tirs Image: a Preliminary Algorithm

    NASA Astrophysics Data System (ADS)

    Syariz, M. A.; Jaelani, L. M.; Subehi, L.; Pamungkas, A.; Koenhardono, E. S.; Sulisetyono, A.

    2015-10-01

    Sea surface temperature (SST) can be retrieved from satellite data, which can provide SST records over long periods. Since the algorithms that estimate SST from the Landsat 8 thermal bands are site-dependent, an algorithm applicable to Indonesian waters needs to be developed. The aim of this research was to develop SST algorithms for the North Java Island water. The data used were in-situ data measured on April 22, 2015 and brightness temperatures estimated from the Landsat 8 thermal bands (band 10 and band 11). The algorithm was established using 45 data points by relating the measured in-situ data to the estimated brightness temperatures, and was then validated using another 40 points. The results showed good performance of the sea surface temperature algorithm, with a coefficient of determination (R2) of 0.912 and a root mean square error (RMSE) of 0.028.
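    The calibration procedure described above amounts to a linear least-squares fit of in-situ SST against brightness temperature, scored by R2 and RMSE. A sketch with synthetic placeholder data (not the paper's 45 measurement points):

    ```python
    # Ordinary least-squares fit of SST = a * Tb + b, plus R^2 and RMSE scoring.

    def fit_linear(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
        return a, my - a * mx

    def r2_rmse(x, y, a, b):
        pred = [a * xi + b for xi in x]
        my = sum(y) / len(y)
        ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return 1.0 - ss_res / ss_tot, (ss_res / len(y)) ** 0.5

    tb = [290.1, 291.3, 292.0, 293.2, 294.5]    # brightness temperature (K), synthetic
    sst = [302.0, 303.1, 303.9, 305.0, 306.4]   # in-situ SST (K), synthetic
    a, b = fit_linear(tb, sst)
    r2, rmse = r2_rmse(tb, sst, a, b)
    ```

    In practice the fit would be done per band (band 10 and band 11) on the 45 calibration points, then scored on the 40 held-out validation points.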

  9. Defect recognition algorithm based on direction curve in x-ray photos

    NASA Astrophysics Data System (ADS)

    Li, Hua; Liu, Jianguo

    1998-09-01

    A new recognition algorithm using a composite operator and direction curves to automatically detect dreg defects in x-ray photos is proposed in this paper. The algorithm is simple, requiring little computation, and offers fast detection speed, good real-time performance, high detection accuracy, and good adaptability. Experiments show that this algorithm is a good feature detection method.

  10. What Are Good Child Outcomes?

    ERIC Educational Resources Information Center

    Moore, Kristin Anderson; Evans, V. Jeffery; Brooks-Gunn, Jeanne; Roth, Jodie

    This paper considers the question "What are good child outcomes?" from the perspectives of developmental psychology, economics, and sociology. Section 1 of the paper examines good child outcomes as characteristics of stage-salient tasks of development. Section 2 emphasizes the acquisition of "human capital," the development of productive traits…

  11. Straight Talk for Good Health

    MedlinePlus

    ... Feature: Healthcare Communication. Straight Talk For Good Health, Past Issues / Summer 2015 ... Straight talk with your healthcare provider is important. You and your medical team can then make better decisions for your good ...

  12. Enjoyment and the Good Life.

    ERIC Educational Resources Information Center

    Estes, Cheryl; Henderson, Karla

    2003-01-01

    Presents information to update parks and recreation professionals about what recent research says in regard to enjoyment and the good life, noting what applications this research has for practitioners. The article focuses on: the good life and leisure services; happiness, subjective well-being, and intrinsic motivation; leisure, happiness, and…

  13. Memoir of "a good daughter".

    PubMed

    Brown, Carolyn T

    2013-01-01

    This short memoir reflects on the experience of a "good daughter" caring for both parents through their late aging and deaths. The memoir contemplates their personalities as expressed in their aging and the "good daughter's" experience in the death room. Those on a similar journey, whether as travelers, guides, or witnesses, may draw comfort, perhaps reassurance, from this account. PMID:23159687

  14. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    NASA Astrophysics Data System (ADS)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2013-03-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.

  15. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    NASA Astrophysics Data System (ADS)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2012-04-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.

  16. A novel hybrid reconstruction algorithm for first generation incoherent scatter CT (ISCT) of large objects with potential medical imaging applications.

    PubMed

    Alpuche Aviles, Jorge E; Pistorius, Stephen; Gordon, Richard; Elbakri, Idris A

    2011-01-01

    This work presents a first generation incoherent scatter CT (ISCT) hybrid (analytic-iterative) reconstruction algorithm for accurate ρe imaging of objects with clinically relevant sizes. The algorithm reconstructs quantitative images of ρe within a few iterations, avoiding the challenges of optimization-based reconstruction algorithms while addressing the limitations of current analytical algorithms. A 4π detector is conceptualized in order to address the issue of directional dependency and is then replaced with a ring of detectors which detect a constant fraction of the scattered photons. The ISCT algorithm corrects for the attenuation of photons using a limited number of iterations and filtered back projection (FBP) for image reconstruction. This results in a hybrid reconstruction algorithm that was tested with sinograms generated by Monte Carlo (MC) and analytical (AN) simulations. Results show that the ISCT algorithm is weakly dependent on the initial ρe estimate. Simulation results show that the proposed algorithm reconstructs ρe images with a mean error of -1% ± 3% for the AN model and from -6% to -8% for the MC model. Finally, the algorithm is capable of reconstructing qualitatively good images even in the presence of multiple scatter. The proposed algorithm would be suitable for in-vivo medical imaging as long as practical limitations can be addressed. PMID:21422588

  17. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    NASA Astrophysics Data System (ADS)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community, combining different modelling results in order to reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as it makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Within this project, three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI, and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records. Each of these algorithms also provides uncertainty information at the pixel level. In the presented work, an ensemble of the three AATSR algorithms is formed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. Validation against ground-based AERONET measurements shows that the ensemble still correlates well, compared to the single algorithms. Annual mean maps show the global aerosol distribution, based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results of different versions of the ensembles for aerosol optical depth will be presented and discussed. The results are validated against ground-based AERONET measurements. Higher spatial coverage on a daily basis allows better results in annual mean maps. The benefit of using pixel-level uncertainties is analysed.

  18. Good Law, Good Practice, Good Sense: Using Legal Guidelines for Drafting Educational Policies.

    ERIC Educational Resources Information Center

    Bogotch, Ira E.

    1988-01-01

    Suggests how to use legal guidelines for drafting educational policies. Analyzes the political context in which present policymaking and governance initiatives exist. Two assumptions frame this article. First, good law makes for good administrative practice. Second, administrator policymaking is more important than the content of the policy…

  19. The Chopthin Algorithm for Resampling

    NASA Astrophysics Data System (ADS)

    Gandy, Axel; Lau, F. Din-Houn

    2016-08-01

    Resampling is a standard step in particle filters and, more generally, in sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods, the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
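
    As a much-simplified illustration of the chop/thin idea (a sketch, not the exact chopthin algorithm of the paper, which chooses its threshold more carefully), the following splits heavy particles into near-equal pieces and stochastically drops light ones while preserving total weight in expectation; the reference level `t` and the reweighting rule are our own assumptions:

```python
import math
import random

def chop_thin(particles, weights, bound=4.0, rng=random.Random(0)):
    """Illustrative chop/thin resampler: for bound >= 2, the ratio
    between the largest and smallest output weight is at most `bound`,
    and total weight is preserved in expectation."""
    total = sum(weights)
    t = total / len(weights)          # reference weight level
    out_p, out_w = [], []
    for p, w in zip(particles, weights):
        if w > t:
            # chop: split heavy particles into k pieces, each <= t
            k = math.ceil(w / t)
            out_p.extend([p] * k)
            out_w.extend([w / k] * k)
        elif w < t / bound:
            # thin: keep light particles with probability w*bound/t,
            # reweighting survivors so expected weight is unchanged
            if rng.random() < w * bound / t:
                out_p.append(p)
                out_w.append(t / bound)
        else:
            out_p.append(p)
            out_w.append(w)
    return out_p, out_w
```

    Chopping preserves weight exactly, and a thinned particle survives with probability w·bound/t carrying weight t/bound, so its expected weight is still w.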

  20. VIEW SHOWING WEST ELEVATION, EAST SIDE OF MEYER AVENUE. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW SHOWING WEST ELEVATION, EAST SIDE OF MEYER AVENUE. SHOWS 499-501, MUNOZ HOUSE (AZ-73-37) ON FAR RIGHT - Antonio Bustamente House, 485-489 South Meyer Avenue & 186 West Kennedy Street, Tucson, Pima County, AZ

  1. 15. Detail showing lower chord pinconnected to vertical member, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Detail showing lower chord pin-connected to vertical member, showing floor beam riveted to extension of vertical member below pin-connection, and showing brackets supporting cantilevered sidewalk. View to southwest. - Selby Avenue Bridge, Spanning Short Line Railways track at Selby Avenue between Hamline & Snelling Avenues, Saint Paul, Ramsey County, MN

  2. An algorithm on distributed mining association rules

    NASA Astrophysics Data System (ADS)

    Xu, Fan

    2005-12-01

    With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas. It is a critical task to mine association rules in distributed databases. The algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on data partition optimization so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: by the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems which are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm provides an enhancement to the CD algorithm. However, CD and FDM algorithms are both based on net structures and execute on non-shareable resources. In practical applications, however, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extensibility in parallel computation.

  3. Research on registration algorithm for check seal verification

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Liu, Tiegen

    2008-03-01

    Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual seal verification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification, using the theory of image processing and pattern recognition. First of all, the complex characteristics of check seals are analyzed. To eliminate differences in production conditions and the disturbance caused by background and writing in the check image, many methods are used in the pre-processing stage of check seal verification, such as color-component transformation, linear transformation to a gray-scale image, median filtering, Otsu thresholding, morphological closing, and a labeling algorithm from mathematical morphology. After these processes, a good binary seal image can be obtained. On the basis of the traditional registration algorithm, a two-level registration method comprising rough and precise registration is proposed. The deflection angle of the precise registration method can be resolved to 0.1°. This paper introduces the concepts of inside difference and outside difference and uses their percentages to judge whether a seal is genuine or fake. The experimental results on a large number of check seals are satisfactory. They show that the methods and algorithms presented have good robustness to noisy sealing conditions and satisfactory tolerance of within-class differences.
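
    Of the pre-processing steps listed, Otsu's thresholding is the most self-contained. A standard implementation (a generic sketch, not the paper's code) picks the gray level that maximizes the between-class variance of the histogram:

```python
def otsu_threshold(gray):
    """Otsu's method on a list of 8-bit gray values: return the
    threshold t that maximizes between-class variance; pixels <= t
    form one class, pixels > t the other."""
    hist = [0] * 256
    for g in gray:
        hist[g] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = 0          # pixels in the low class
    sum0 = 0.0      # gray-level mass of the low class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    On a bimodal image (e.g. dark background, red seal strokes mapped to bright values) the returned threshold cleanly separates the two populations, giving the binary seal image the abstract refers to.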

  4. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least amount of error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
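
    A generic firefly-algorithm sketch shows the basic mechanics (the paper's dynamically controlled variant and the Bouc-Wen simulation itself are not reproduced here): fireflies move toward brighter, i.e. lower-error, ones, with a decaying random step standing in for the dynamic process control; the line-fit objective is purely illustrative:

```python
import math
import random

def firefly_minimize(objective, dim, bounds, n=20, iters=100,
                     beta0=1.0, gamma=0.01, alpha=0.2,
                     rng=random.Random(1)):
    """Generic firefly algorithm: each firefly moves toward every
    brighter (lower-objective) one, attraction decaying with squared
    distance, plus a shrinking random perturbation."""
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fs = [objective(x) for x in xs]
    for t in range(iters):
        step = alpha * (0.97 ** t)            # decaying randomization
        for i in range(n):
            for j in range(n):
                if fs[j] < fs[i]:             # j is brighter: attract i
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [min(hi, max(lo, a + beta * (b - a)
                                         + step * rng.uniform(-0.5, 0.5)))
                             for a, b in zip(xs[i], xs[j])]
                    fs[i] = objective(xs[i])
    best = min(range(n), key=lambda i: fs[i])
    return xs[best], fs[best]

# Illustrative use: recover slope/intercept of a line from noiseless data
# (standing in for fitting Bouc-Wen parameters to measured hysteresis data).
data = [(x, 2.0 * x - 1.0) for x in range(10)]
sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in data)
params, err = firefly_minimize(sse, dim=2, bounds=(-5.0, 5.0))
```

    In the real identification problem, `objective` would simulate the Bouc-Wen differential equation for a candidate parameter vector and return the squared error against the measured response.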

  5. What makes a life good?

    PubMed

    King, L A; Napa, C K

    1998-07-01

    Two studies examined folk concepts of the good life. Samples of college students (N = 104) and community adults (N = 264) were shown a career survey ostensibly completed by a person rating his or her occupation. After reading the survey, participants judged the desirability and moral goodness of the respondent's life, as a function of the amount of happiness, meaning in life, and wealth experienced. Results revealed significant effects of happiness and meaning on ratings of desirability and moral goodness. In the college sample, individuals high on all 3 independent variables were judged as likely to go to heaven. In the adult sample, wealth was also related to higher desirability. Results suggest a general perception that meaning in life and happiness are essential to the folk concept of the good life, whereas money is relatively unimportant. PMID:9686456

  6. Good Practices for Hood Use.

    ERIC Educational Resources Information Center

    Mikell, William G.; Drinkard, William C.

    1984-01-01

    Describes safety practices for laboratory fume hoods based on certain assumptions of hood design and performance. Also discusses the procedures in preparing to work at a hood. A checklist of good hood practices is included. (JM)

  7. Good News About Childhood Cancer

    MedlinePlus

    ... Past Issues / Spring 2008 ... 85 percent for the most common form of childhood cancer (acute lymphoblastic leukemia or ALL). During the ...

  8. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    ... The estimation strategy has good performance even under degraded adhesion conditions and could be put on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware In the Loop (HIL) test rig, which includes an industrial robot able to replicate the motion of the railway vehicle. Through the generated experimental outputs, the performance of the innovative localisation algorithm has been evaluated: the HIL test rig permitted testing of the proposed algorithm, avoiding expensive (in terms of time and cost) on-track tests and yielding encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performance compared to that obtained with the SCMT algorithms currently in use on the Italian railway network.

  9. Communicating for the Good of Your Child.

    ERIC Educational Resources Information Center

    Lerman, Saf

    1985-01-01

    Parents can help their children feel secure and have a good self-image by communicating these feelings through words and actions. Suggestions for showing respect, building self-esteem, fostering security and success, and talking to children in a positive way are discussed. (DF)

  10. Planning Good Change with Technology and Literacy.

    ERIC Educational Resources Information Center

    McKenzie, Jamie

    This book describes strategies to put information literacy and student learning at the center of technology planning. Filled with stories of success and with models of good planning, the book shows how to clarify purpose, involve important stakeholders, and pace the change process to maximize the daily use of new technologies. The following…

  11. A Good Teaching Technique: WebQuests

    ERIC Educational Resources Information Center

    Halat, Erdogan

    2008-01-01

    In this article, the author first introduces and describes a new teaching tool called WebQuests to practicing teachers. He then provides detailed information about the structure of a good WebQuest. Third, the author shows the strengths and weaknesses of using WebQuests in teaching and learning. Last, he points out the challenges for practicing…

  12. Do good actions inspire good actions in others?

    PubMed Central

    Capraro, Valerio; Marcelletti, Alessandra

    2014-01-01

    Actions such as sharing food and cooperating to reach a common goal have played a fundamental role in the evolution of human societies. Despite the importance of such good actions, little is known about whether and how they can spread from person to person. For instance, does being the recipient of an altruistic act increase your probability of being cooperative with a third party? We have conducted an experiment on Amazon Mechanical Turk to test this mechanism using economic games. We have measured willingness to be cooperative through a standard Prisoner's dilemma and willingness to act altruistically using a binary Dictator game. In the baseline treatments, the endowments needed to play were given by the experimenters, as usual; in the control treatments, they came from a good action made by someone else. Across four different comparisons and a total of 572 subjects, we have never found a significant increase of cooperation or altruism when the endowment came from a good action. We conclude that good actions do not necessarily inspire good actions in others. While this is consistent with the theoretical prediction, it challenges the majority of other experimental studies. PMID:25502617

  13. Depreciation of public goods in spatial public goods games

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Zhuang, Yong; Li, Yu-Jian; Wang, Bing-Hong

    2011-10-01

    In real situations, the value of public goods will be reduced or even lost because of external factors or for intrinsic reasons. In this work, we investigate the evolution of cooperation by considering the effect of depreciation of public goods in spatial public goods games on a square lattice. It is assumed that each individual gains full advantage if the number of the cooperators nc within a group centered on that individual equals or exceeds the critical mass (CM). Otherwise, there is depreciation of the public goods, which is realized by rescaling the multiplication factor r to (nc/CM)r. It is shown that the emergence of cooperation is remarkably promoted for CM > 1 even at small values of r, and a global cooperative level is achieved at an intermediate value of CM = 4 at a small r. We further study the effect of depreciation of public goods on different topologies of a regular lattice, and find that the system always reaches global cooperation at a moderate value of CM = G - 1 regardless of whether or not there exist overlapping triangle structures on the regular lattice, where G is the group size of the associated regular lattice.
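
    The depreciation rule can be stated compactly in code. This sketch (the strategy encoding, contribution cost and group size are our own illustrative choices) computes payoffs for a single group with the rescaled multiplication factor described above:

```python
def group_payoffs(strategies, r, CM, cost=1.0):
    """Payoffs in one public-goods group with depreciation: the
    multiplication factor r is rescaled to (n_c/CM)*r whenever the
    number of cooperators n_c falls below the critical mass CM.
    strategies: list of 1 (cooperate) / 0 (defect)."""
    n = len(strategies)                  # group size G
    nc = sum(strategies)                 # cooperators each contribute `cost`
    r_eff = r if nc >= CM else (nc / CM) * r
    share = r_eff * nc * cost / n        # common pool, shared equally
    return [share - cost if s else share for s in strategies]
```

    With three cooperators in a group of five, r = 3 and CM = 4, the pool is depreciated by the factor 3/4; raising the number of cooperators to the critical mass restores the full multiplication factor.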

  14. SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.

  15. [Drug flow. Good manufacturing practices, good clinical practices].

    PubMed

    Dupin-Spriet, T; Spriet, A

    1991-01-01

    Worldwide, the drug circuit in clinical trials is undergoing a general movement towards improvement in quality. The methods used to achieve this are found at the interface of Good Manufacturing Practices (GMP) and Good Clinical Practices (GCP). They are primarily of two types, for which examples are given here: strengthening of controls (verification of the resemblance of test drugs in double-blind comparisons by a "jury", and computerized systems of drug accountability), and improvement in compliance with therapy at the site of investigation (use of more "intelligent" drug packages and labels). PMID:2020929

  16. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a kind of random search and optimization method based on the natural selection and genetic mechanisms of living beings. In recent years, because of its potential for solving complicated problems and its successful application in the field of industrial projects, the genetic algorithm has received wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model for routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time and with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.
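
    As a toy illustration of the approach (the paper's service model, chromosome encoding and fitness function are not specified in the abstract, so this sketch invents a tiny load-balancing instance): each chromosome selects one pre-computed candidate path per flow, and a plain genetic algorithm with elitism, one-point crossover and mutation minimizes the load on the most heavily used link:

```python
import random

# Hypothetical routing instance: for each of three flows we
# pre-enumerate two candidate paths, each a list of link ids.
CANDIDATES = [
    [["A-B", "B-D"], ["A-C", "C-D"]],
    [["A-B", "B-E"], ["A-C", "C-E"]],
    [["B-D", "D-F"], ["B-E", "E-F"]],
]

def max_link_load(choice):
    """Fitness: load on the busiest link under a path choice per flow."""
    load = {}
    for flow, idx in enumerate(choice):
        for link in CANDIDATES[flow][idx]:
            load[link] = load.get(link, 0) + 1
    return max(load.values())

def ga_route(pop_size=20, gens=30, rng=random.Random(2)):
    """Plain GA: chromosomes are per-flow path indices."""
    pop = [[rng.randrange(len(c)) for c in CANDIDATES]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=max_link_load)
        next_pop = pop[:2]                       # elitism
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)     # truncation selection
            cut = rng.randrange(1, len(CANDIDATES))
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < 0.2:               # mutation
                g = rng.randrange(len(CANDIDATES))
                child[g] = rng.randrange(len(CANDIDATES[g]))
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=max_link_load)
```

    On this instance the GA finds an assignment in which no link carries more than one flow; a QoS-aware variant would replace `max_link_load` with a delay- or bandwidth-based fitness.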

  17. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  18. 28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  19. 8. Detail showing concrete abutment, showing substructure of bridge, specifically ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. Detail showing concrete abutment, showing substructure of bridge, specifically west side of arch and substructure. - Presumpscot Falls Bridge, Spanning Presumptscot River at Allen Avenue extension, 0.75 mile west of U.S. Interstate 95, Falmouth, Cumberland County, ME

  20. ICESat-2 / ATLAS Flight Science Receiver Algorithms

    NASA Astrophysics Data System (ADS)

    Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.

    2013-12-01

    ... This Simulator makes it possible to check all logic paths that could be encountered by the Algorithms on orbit. In addition, the NASA airborne instrument MABEL is collecting data with characteristics similar to what ATLAS will see. MABEL data is being used to test the ATLAS Receiver Algorithms. Further verification will be performed during Integration and Testing of the ATLAS instrument and during Environmental Testing of the full ATLAS instrument. Results from testing to date show the Receiver Algorithms have the ability to handle a wide range of signal and noise levels with very good sensitivity at relatively low signal-to-noise ratios. In addition, preliminary tests have demonstrated, using the ICESat-2 Science Team's selected land ice and sea ice test cases, the capability of the Algorithms to successfully find and telemeter the surface echoes. In this presentation we will describe the ATLAS Flight Science Receiver Algorithms and the Software Simulator, and will present results of the testing to date. The onboard databases (DEM, DRM and the Surface Reference Mask) are being developed at the University of Texas at Austin as part of the ATLAS Flight Science Receiver Algorithms. Verification of the onboard databases is being performed by ATLAS Receiver Algorithms team members Claudia Carabajal and Jack Saba.

  1. 8 CFR 274a.4 - Good faith defense.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 8 Aliens and Nationality 1 2014-01-01 2014-01-01 false Good faith defense. 274a.4 Section 274a.4 Aliens and Nationality DEPARTMENT OF HOMELAND SECURITY IMMIGRATION REGULATIONS CONTROL OF EMPLOYMENT OF ALIENS Employer Requirements § 274a.4 Good faith defense. An employer or a recruiter or referrer for a fee for employment who shows good...

  2. A Pretty Good Paper about Pretty Good Privacy.

    ERIC Educational Resources Information Center

    McCollum, Roy

    With today's growth in the use of electronic information systems for e-mail, data development and research, and the relative ease of access to such resources, protecting one's data and correspondence has become a great concern. "Pretty Good Privacy" (PGP), an encryption program developed by Phil Zimmermann, may be the software tool that will…

  3. Good Practices in Free-energy Calculations

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher

    2013-01-01

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices, are followed. For the most part, the theory upon which these good practices rely has been known for many years, but is often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed, demonstrating that, at little to no additional cost, free-energy estimates can be markedly improved and bounded by meaningful error estimates. In free-energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway, and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, can markedly improve the efficiency and accuracy of free-energy calculations without incurring any additional computational expense.

  4. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling with regard to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan. PMID:26644937
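
    Our reading of the steps described above can be sketched as follows (an interpretation of the abstract, not the authors' code; the completion-time model and tie-breaking are assumptions):

```python
def sort_mid_schedule(exec_times):
    """Sketch of the Sort-Mid idea: exec_times[t][m] is the run time
    of task t on machine m. Repeatedly pick the pending task with the
    largest average completion time and place it on the machine where
    it finishes earliest."""
    n_tasks = len(exec_times)
    n_mach = len(exec_times[0])
    ready = [0.0] * n_mach               # machine availability times
    unassigned = set(range(n_tasks))
    schedule = {}
    while unassigned:
        # completion time of each pending task on each machine
        ct = {t: [ready[m] + exec_times[t][m] for m in range(n_mach)]
              for t in unassigned}
        # pick the task whose average completion time is largest ...
        task = max(unassigned, key=lambda t: sum(ct[t]) / n_mach)
        # ... and allocate it to its earliest-finishing machine
        mach = min(range(n_mach), key=lambda m: ct[task][m])
        schedule[task] = mach
        ready[mach] = ct[task][mach]
        unassigned.remove(task)
    return schedule, max(ready)          # assignment and makespan
```

    On a small instance with three tasks and two machines, the "expensive everywhere" task is placed first, which is what lets the heuristic balance load across the machines.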

  5. Nonuniformity correction algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Mou, Xin-gang; Zhang, Gui-lin; Hu, Ruo-lan; Zhou, Xiao

    2011-08-01

    As an important tool for acquiring information about a target scene, infrared detectors are widely used in the imaging guidance field. Because of limits of material and technique, the performance of an infrared imaging system is known to be strongly affected by spatial nonuniformity in the photoresponse of the detectors in the array. The temporal highpass filter (THPF) is a popular adaptive NUC algorithm because of its simplicity and effectiveness. However, there still exists the problem of ghosting artifacts, caused by blind updating of parameters, and performance is noticeably degraded when the method is applied to scenes lacking motion. To tackle this problem, a novel adaptive NUC algorithm based on a Gaussian mixture model (GMM) is put forward, building on the traditional THPF. The drift of the detectors is assumed to obey a single Gaussian distribution, and the update of the parameters is performed selectively based on the scene. The GMM is applied in the new algorithm for background modeling, in which the background is updated selectively so as to avoid the influence of the foreground target on the update of the background, thus eliminating the ghosting artifact. The performance of the proposed algorithm is evaluated with infrared image sequences with simulated and real fixed-pattern noise. The results show more reliable fixed-pattern noise reduction, tracking of the parameter drift, and good adaptability to scene changes.

  6. A Comparison of Three Algorithms for Orion Drogue Parachute Release

    NASA Technical Reports Server (NTRS)

    Matz, Daniel A.; Braun, Robert D.

    2015-01-01

    The Orion Multi-Purpose Crew Vehicle is susceptible to flipping apex forward between drogue parachute release and main parachute inflation. A smart drogue release algorithm is required to select a drogue release condition that will not result in an apex-forward main parachute deployment. The baseline algorithm is simple and elegant, but does not perform as well as desired in drogue failure cases. A simple modification to the baseline algorithm can improve performance, but can also sometimes fail to identify a good release condition. A new algorithm employing simplified rotational dynamics and a numeric predictor to minimize a rotational energy metric is proposed. A Monte Carlo analysis of a drogue failure scenario is used to compare the performance of the algorithms. The numeric predictor prevents more of the cases from flipping apex forward, and also results in an improvement in the capsule attitude at main bag extraction. The sensitivity of the numeric predictor to aerodynamic dispersions, errors in the navigated state, and execution rate is investigated, showing little degradation in performance.

  7. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
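
    The EDF-based discrepancy at the heart of these methods is easy to state in code. This sketch (illustrative, using only the Kolmogorov-Smirnov statistic) computes the objective that would then be minimized over the three Weibull parameters, e.g. by Powell's method:

```python
import math

def weibull3_cdf(x, shape, scale, loc):
    """CDF of a three-parameter Weibull: F(x) = 1 - exp(-((x-loc)/scale)^shape)."""
    if x <= loc:
        return 0.0
    return 1.0 - math.exp(-(((x - loc) / scale) ** shape))

def ks_statistic(data, shape, scale, loc):
    """Kolmogorov-Smirnov discrepancy between the EDF of `data`
    and the candidate Weibull CDF; smaller is a better fit."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = weibull3_cdf(x, shape, scale, loc)
        # the EDF steps from i/n to (i+1)/n at x, so check both sides
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d
```

    With data generated at the exact quantiles of a Weibull(shape = 2, scale = 1, loc = 0), the statistic at the true parameters is the minimal step-function gap 1/(2n), and it grows sharply for a wrong shape parameter, which is what makes it usable as a minimization objective.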

  8. IR and visual image registration based on mutual information and PSO-Powell algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Youwen; Gao, Kun; Miu, Xianghu

    2014-11-01

    Infrared and visual image registration has wide application in the fields of remote sensing and the military. Mutual information (MI) has proved effective and successful in the infrared and visual image registration process. To find the most appropriate registration parameters, optimization algorithms such as the Particle Swarm Optimization (PSO) algorithm or the Powell search method are often used. The PSO algorithm has strong global search ability and its search speed is fast at the beginning, while its weakness is low search performance in the late search stage. In the image registration process, it often spends a lot of time on useless search and the solution's precision is low. The Powell search method has strong local search ability. However, its search performance and time are more sensitive to initial values. In image registration, it is often obstructed by local maxima and gives wrong results. In this paper, a novel hybrid algorithm, which combines the PSO algorithm and the Powell search method, is proposed. It combines both advantages: avoiding obstruction caused by local maxima and achieving higher precision. Firstly, the PSO algorithm obtains a registration parameter which is close to the global optimum. Based on the result of this stage, the Powell search method is used to find a more precise registration parameter. The experimental results show that the algorithm can effectively correct the scale, rotation and translation difference of two images. It can be a good solution for registering infrared and visible images, and has better performance in time and precision than traditional optimization algorithms.
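
    The similarity measure itself is straightforward. This histogram-based MI sketch (the bin count and flattened-pixel-list interface are illustrative choices) is the quantity a PSO-then-Powell search would maximize over candidate transforms:

```python
import math

def mutual_information(img_a, img_b, bins=8, max_val=256):
    """Histogram-based mutual information (in nats) between two
    equally sized grayscale images given as flat pixel lists."""
    n = len(img_a)
    joint = {}
    for a, b in zip(img_a, img_b):
        key = (a * bins // max_val, b * bins // max_val)
        joint[key] = joint.get(key, 0) + 1
    pa, pb = {}, {}                      # marginal bin counts
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij / ((pa[i] / n) * (pb[j] / n)))
    return mi
```

    MI is maximal when the two images are deterministically related (e.g. identical) and zero when one image carries no information about the other, which is why it works across the differing intensity characteristics of infrared and visible imagery.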

  9. Improved artificial bee colony algorithm for wavefront sensor-less system in free space optical communication

    NASA Astrophysics Data System (ADS)

    Niu, Chaojun; Han, Xiang'e.

    2015-10-01

    Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony algorithm (ABC) is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness and few parameters to set. In this paper, we simulate the application of the improved ABC to correct the distorted wavefront and prove its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation under different turbulence conditions before and after correction. The results show that the ABC algorithm has a much faster correction speed than the DE algorithm and better correction ability for strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but not so effectively in weak turbulence.
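
    A basic (unimproved) ABC sketch shows the employed/onlooker/scout structure; the wavefront-quality objective is replaced here by a toy quadratic, and all parameter values are illustrative:

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, iters=60, limit=10,
                 rng=random.Random(3)):
    """Standard artificial bee colony: employed bees refine each food
    source, onlookers revisit sources in proportion to fitness, and
    scouts replace sources that have stopped improving."""
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)]
             for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food

    def try_move(i):
        # perturb one coordinate relative to a random partner source
        k = rng.randrange(n_food - 1)
        k = k if k < i else k + 1
        d = rng.randrange(dim)
        x = foods[i][:]
        x[d] = min(hi, max(lo, x[d] + rng.uniform(-1, 1) * (x[d] - foods[k][d])))
        fx = f(x)
        if fx < fit[i]:                       # greedy acceptance
            foods[i], fit[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):               # employed bee phase
            try_move(i)
        probs = [1.0 / (1.0 + v) for v in fit]   # assumes f >= 0
        total = sum(probs)
        for _ in range(n_food):               # onlooker bee phase
            r = rng.random() * total
            i = 0
            while r > probs[i] and i < n_food - 1:
                r -= probs[i]
                i += 1
            try_move(i)
        for i in range(n_food):               # scout bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i] = f(foods[i])
                trials[i] = 0
    b = min(range(n_food), key=lambda i: fit[i])
    return foods[b], fit[b]

# Toy objective standing in for the wavefront-quality metric
# (e.g. negative coupling efficiency) the paper optimizes.
x_best, f_best = abc_minimize(lambda x: sum(v * v for v in x),
                              dim=2, bounds=(-5.0, 5.0))
```

    In the sensor-less AO setting, `f` would drive the deformable-mirror voltages and return a measured image-quality metric rather than an analytic function.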

  10. A Space-Bounded Anytime Algorithm for the Multiple Longest Common Subsequence Problem

    PubMed Central

    Yang, Jiaoyun; Xu, Yun; Shang, Yi; Chen, Guoliang

    2014-01-01

    The multiple longest common subsequence (MLCS) problem, related to the identification of sequence similarity, is an important problem in many fields. As an NP-hard problem, its exact algorithms have difficulty in handling large-scale data and time- and space-efficient algorithms are required in real-world applications. To deal with time constraints, anytime algorithms have been proposed to generate good solutions with a reasonable time. However, there exists little work on space-efficient MLCS algorithms. In this paper, we formulate the MLCS problem into a graph search problem and present two space-efficient anytime MLCS algorithms, SA-MLCS and SLA-MLCS. SA-MLCS uses an iterative beam widening search strategy to reduce space usage during the iterative process of finding better solutions. Based on SA-MLCS, SLA-MLCS, a space-bounded algorithm, is developed to avoid space usage from exceeding available memory. SLA-MLCS uses a replacing strategy when SA-MLCS reaches a given space bound. Experimental results show SA-MLCS and SLA-MLCS use an order of magnitude less space and time than the state-of-the-art approximate algorithm MLCS-APP while finding better solutions. Compared to the state-of-the-art anytime algorithm Pro-MLCS, SA-MLCS and SLA-MLCS can solve an order of magnitude larger size instances. Furthermore, SLA-MLCS can find much better solutions than SA-MLCS on large size instances. PMID:25400485

  11. On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching

    SciTech Connect

    Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh; Manne, Fredrik

    2014-07-01

    We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing the maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a test set composed of a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
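    The multithreaded push-relabel implementation itself is not reproduced here. For reference, the serial augmenting-path baseline such studies compare against can be sketched as follows (Kuhn's algorithm; names are chosen for this sketch):

```python
def max_bipartite_matching(adj, n_right):
    """Serial augmenting-path (Kuhn's) algorithm for maximum cardinality
    bipartite matching; adj[u] lists the right-side neighbours of left
    vertex u."""
    match_r = [-1] * n_right          # right vertex -> matched left vertex

    def try_augment(u, seen):
        # Try to match u, recursively evicting current partners.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, [False] * n_right) for u in range(len(adj)))
```

    Push-relabel replaces these global augmenting searches with local push and relabel operations, which is what makes it more amenable to shared-memory parallelism.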

  12. An Algorithm for Testing the Efficient Market Hypothesis

    PubMed Central

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148
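    The study's evolved trading rules are not given in the abstract, but the indicators it optimizes over are standard. A minimal sketch of EMA and MACD (the conventional 12/26/9 span parameters are an assumption, not taken from the paper):

```python
def ema(prices, span):
    """Exponential moving average with smoothing alpha = 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """MACD line = EMA(fast) - EMA(slow); signal line = EMA of the MACD line."""
    fast_e, slow_e = ema(prices, fast), ema(prices, slow)
    macd_line = [f - s for f, s in zip(fast_e, slow_e)]
    return macd_line, ema(macd_line, signal)
```

    A genetic algorithm like the one described would search over the span parameters (and the analogous RSI and filter thresholds) to maximize in-sample profitability.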

  13. An algorithm for testing the efficient market hypothesis.

    PubMed

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148

  14. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in 3D image reconstruction because they do not need complete projection data. They could be applied to the inspection of BGA solder joints, which is usually done with X-ray laminography and yields poorer reconstructed images, but their convergence speed is low. This paper explores a projection classification based method that separates the object into three parts, i.e. solute, solution and air, and assumes that the reconstruction speed decreases linearly from the solution to the two other parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method; the fewer the projection images, the greater their advantage.

  15. Planning a Successful Tech Show

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2011-01-01

    Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…

  16. Hey Teacher, Your Personality's Showing!

    ERIC Educational Resources Information Center

    Paulsen, James R.

    1977-01-01

    A study of 30 fourth, fifth, and sixth grade teachers and 300 of their students showed that a teacher's age, sex, and years of experience did not relate to students' mathematics achievement, but that more effective teachers showed greater "freedom from defensive behavior" than did less effective teachers. (DT)

  17. What Do Blood Tests Show?

    MedlinePlus

    ... shows the ranges for blood glucose levels after 8 to 12 hours of fasting (not eating). It shows the normal range and the abnormal ranges that are a sign of prediabetes or diabetes. Plasma Glucose Results (mg/dL)* Diagnosis 70 to 99 ...

  18. Computed laminography and reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Que, Jie-Min; Cao, Da-Quan; Zhao, Wei; Tang, Xiao; Sun, Cui-Li; Wang, Yan-Fang; Wei, Cun-Feng; Shi, Rong-Jian; Wei, Long; Yu, Zhong-Qiang; Yan, Yong-Lian

    2012-08-01

    Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL, and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. The results prove that the ART algorithm is a good choice for the CL system.
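    The paper's weighting functions are not given in the abstract. A minimal sketch of the basic ART (Kaczmarz) sweep they modify, with a hypothetical `relax` relaxation parameter:

```python
import numpy as np

def art_reconstruct(A, b, iters=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz iteration): sweep over
    the projection rows, projecting the estimate onto each hyperplane
    a_i . x = b_i with relaxation factor `relax`."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            # Correct x along row a_i proportionally to the residual b_i - a_i.x
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

    In a CL setting, `A` would be the (incomplete) system matrix of ray sums for the chosen scanning geometry and `b` the measured projections.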

  19. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are thus difficult to run in real time on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linearized Bregman algorithm. The linearized Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linearized Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU implementation of the Bregman algorithm, and also with other CS reconstruction algorithms such as the OMP and TwIST algorithms. The results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important to people's fast-growing demand for information technology.
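    A minimal CPU sketch of the linearized Bregman iteration in the Osher/Cai formulation (written in Python rather than the paper's C/CUDA; `mu`, `delta` and the stopping rule are assumptions): each step is just a matrix-vector product plus a component-wise shrink, which is why it maps so well to a GPU.

```python
import numpy as np

def linearized_bregman(A, b, mu=1.0, delta=None, iters=500):
    """Linearized Bregman iteration for min ||u||_1 subject to A u = b:
    a gradient step on the residual followed by soft-thresholding."""
    m, n = A.shape
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from spectral norm
    v = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        v += A.T @ (b - A @ u)                    # accumulate residual gradient
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink
    return u
```

    On a GPU, the matrix-vector products and the shrink are each embarrassingly parallel kernels.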

  20. On mapping systolic algorithms onto the hypercube

    SciTech Connect

    Ibarra, O.H.; Sohn, S.M.

    1990-01-01

    Much effort has been devoted toward developing efficient algorithms for systolic arrays. Here the authors consider the problem of mapping these algorithms into efficient algorithms for a fixed-size hypercube architecture. They describe in detail several optimal implementations of algorithms given for one-way one- and two-dimensional systolic arrays. Since interprocessor communication is many times slower than local computation in parallel computers built to date, the problem of efficient communication is specifically addressed for these mappings. In order to experimentally validate the technique, five systolic algorithms were mapped in various ways onto a 64-node NCUBE/7 MIMD hypercube machine. The algorithms are for the following problems: the shuffle scheduling problem, finite impulse response filtering, linear context-free language recognition, matrix multiplication, and computing the Boolean transitive closure. Experimental evidence indicates that good performance is obtained for the mappings.

  1. Making the Common Good Common

    ERIC Educational Resources Information Center

    Chase, Barbara

    2011-01-01

    How are independent schools to be useful to the wider world? Beyond their common commitment to educate their students for meaningful lives in service of the greater good, can they educate a broader constituency and, thus, share their resources and skills more broadly? Their answers to this question will be shaped by their independence. Any…

  2. Gender Play and Good Governance

    ERIC Educational Resources Information Center

    Powell, Mark

    2008-01-01

    Like good government, thoughtful care of children requires those in power, whether teachers or parents, to recognize when it is appropriate for them to step back from day-to-day decision-making while still working behind the scenes to ensure an organizational structure that supports the independence and equitable development of those they serve.…

  3. Is New Work Good Work?

    ERIC Educational Resources Information Center

    Westwood, Andy

    Some new work is good work. Quality is ultimately defined by the individual. However, these perceptions are inevitably colored by the circumstances in which people find themselves, by the time, place, and wide range of motivations for having to do a particular job in the first place. One person's quality may be another's purgatory and vice versa.…

  4. Practicing Good Habits, Grade 2.

    ERIC Educational Resources Information Center

    Nguyen Van Quan; And Others

    This illustrated primer, designed for second grade students in Vietnam, consists of stories depicting rural family life in Vietnam. The book is divided into the following six chapters: (1) Practicing Good Habits (health, play, helpfulness); (2) Duties at Home (grandparents, father and mother, servants, the extended family); (3) Duties in School…

  5. Practicing Good Habits, Grade 1.

    ERIC Educational Resources Information Center

    Huynh Cong Tu; And Others

    This primer, intended for use during the child's first year in elementary school in Vietnam, relates the story of the daily lives of Hong, age 10, and her brother Lac, age 7, at home and at school. The 64 lessons are divided into four chapters: (1) Good Habits (personal hygiene, grooming, dressing, obedience, truthfulness); (2) At Home: Father and…

  6. Measuring Goodness of Story Narratives

    ERIC Educational Resources Information Center

    Le, Karen; Coelho, Carl; Mozeiko, Jennifer; Grafman, Jordan

    2011-01-01

    Purpose: The purpose of this article was to evaluate a new measure of story narrative performance: story completeness. It was hypothesized that by combining organizational (story grammar) and completeness measures, story "goodness" could be quantified. Method: Discourse samples from 46 typically developing adults were compared with those from 24…

  7. "Good Morning Boys and Girls"

    ERIC Educational Resources Information Center

    Bigler, Rebecca S.

    2005-01-01

    It happens every day across the nation: Teachers welcome their students to class by saying, "Good morning, boys and girls." It is one of countless ways teachers highlight gender with their speech and behavior. Unfortunately, teachers' use of gender to label students and organize the classroom can have negative consequences. New research in the…

  8. Everyone Loves a Good Story

    ERIC Educational Resources Information Center

    Croxall, Kathy C.; Gubler, Rea R.

    2006-01-01

    Everyone loves a good story. Reading brings back pleasant memories of being read to by parents or others. Literacy is encouraged when students are continually exposed to stories and books. Teachers can encourage students to discover their parents' favorite stories and share them with the class. In this article, the authors recommend the use of…

  9. Metrics for Soft Goods Merchandising.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in soft goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  10. Metrics for Hard Goods Merchandising.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in hard goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  11. "What's the Plan?": "Good Management Begins with Good People"

    ERIC Educational Resources Information Center

    Vicars, Dennis

    2008-01-01

    In order for a successful center/school to achieve all it can for its children, staff, and operator, a plan is critical. Good planning begins by looking into the future that one wants for his or her center/school. Be as descriptive as possible in writing down the details of what that future looks like. Next, walk backwards from that future to the…

  12. Satellite Movie Shows Erika Dissipate

    NASA Video Gallery

    This animation of visible and infrared imagery from NOAA's GOES-West satellite from Aug. 27 to 29 shows Tropical Storm Erika move through the Eastern Caribbean Sea and dissipate near eastern Cuba. ...

  13. Performance Analysis of Selective Breeding Algorithm on One Dimensional Bin Packing Problems

    NASA Astrophysics Data System (ADS)

    Sriramya, P.; Parvathavarthini, B.

    2012-12-01

    The bin packing optimization problem packs a set of objects into a set of bins so that the amount of wasted space is minimized. The bin packing problem has many important applications; the objective is to find a feasible assignment of all weights to bins that minimizes the total number of bins used. It models several practical problems in such diverse areas as industrial control, computer systems, machine scheduling and VLSI chip layout. The selective breeding algorithm (SBA) is an iterative procedure that borrows ideas from artificial selection and the breeding process; by simulating artificial evolution in this way, SBA can easily solve complex problems. One-dimensional bin packing benchmark problems are used to evaluate the performance of the SBA. The computational results show that SBA finds optimal solutions for the tested benchmark problems, making it a good problem-solving technique for one-dimensional bin packing problems.
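    SBA itself (artificial selection plus breeding) is not detailed in the abstract. As a point of comparison, the classical first-fit-decreasing heuristic that one-dimensional bin packing results are commonly measured against can be sketched as:

```python
def first_fit_decreasing(weights, capacity):
    """First-fit-decreasing heuristic for 1-D bin packing: sort items by
    decreasing weight, then place each in the first bin with enough room."""
    bins_free = []    # remaining capacity per open bin
    packing = []      # items placed in each bin
    for w in sorted(weights, reverse=True):
        for i, free in enumerate(bins_free):
            if w <= free:
                bins_free[i] -= w
                packing[i].append(w)
                break
        else:                         # no open bin fits: open a new one
            bins_free.append(capacity - w)
            packing.append([w])
    return packing
```

    FFD is guaranteed to use at most roughly 11/9 of the optimal number of bins, which makes it a natural baseline for metaheuristics like SBA.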

  14. Combining algorithms in automatic detection of QRS complexes in ECG signals.

    PubMed

    Meyer, Carsten; Fernández Gavela, José; Harris, Matthew

    2006-07-01

    QRS complex and specifically R-Peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using various methods ranging from filtering and threshold methods, through wavelet methods, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS complex detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters that balance the contribution of the individual algorithms; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms. PMID:16871713
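    The paper's data-driven parameter estimation is not reproduced here. One hedged sketch of the general idea, fusing two detectors' peak lists with a weight that balances their contributions (the function name, the weight `w_a`, and the tolerance of 36 samples, about 0.1 s at the MIT-BIH sampling rate of 360 Hz, are all assumptions for this sketch):

```python
def combine_detections(peaks_a, peaks_b, tol=36, w_a=0.5):
    """Fuse two R-peak lists (sample indices): detections within `tol`
    samples of each other are merged as a weighted average; unmatched
    detections from either list are kept."""
    used_b, fused = set(), []
    for p in sorted(peaks_a):
        # nearest still-unmatched detection from B within the tolerance window
        cand = [q for q in sorted(peaks_b)
                if q not in used_b and abs(q - p) <= tol]
        if cand:
            q = min(cand, key=lambda q: abs(q - p))
            used_b.add(q)
            fused.append(round(w_a * p + (1 - w_a) * q))
        else:
            fused.append(p)
    fused.extend(q for q in peaks_b if q not in used_b)
    return sorted(fused)
```

    In the paper's setting, the balance parameters would be fitted on annotated records rather than fixed by hand.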

  15. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of infrared focal plane array (IRFPA) detectors, high-quality real-time image processing becomes more important in infrared imaging systems. To meet the demand for better visual effect and good performance, FPGA is an ideal hardware choice for realizing image processing algorithms, taking full advantage of its high speed, high reliability and ability to process a great amount of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It works at higher speed than serial processing devices such as CPUs and DSPs. Experiments show that this hardware implementation of the dynamic linear extension algorithm effectively enhances the visual effect of infrared images.
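    The abstract does not specify the range-finding rule. One plausible sketch of a dynamic linear extension (contrast stretch) that finds its input range automatically, using percentile clipping as an assumption (in hardware this would be a histogram pass followed by a per-pixel multiply-shift):

```python
def linear_stretch(img, lo_pct=1.0, hi_pct=99.0, out_max=255):
    """Map the effective input range [lo, hi], found from percentiles of
    the pixel histogram, linearly onto [0, out_max], clamping outliers."""
    flat = sorted(v for row in img for v in row)
    lo = flat[int(len(flat) * lo_pct / 100)]
    hi = flat[min(int(len(flat) * hi_pct / 100), len(flat) - 1)]
    span = max(hi - lo, 1)                     # avoid division by zero
    return [[max(0, min(out_max, (v - lo) * out_max // span)) for v in row]
            for row in img]
```

    Percentile clipping rather than the raw min/max keeps a few hot or dead pixels from collapsing the output range, which matters for raw IRFPA data.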

  16. A simple parallel prefix algorithm for compact finite-difference schemes

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Joslin, Ronald D.

    1993-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is highly efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study was conducted to provide a simple truncation formula. Experimental results were measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for the compact scheme on high-performance computers.
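    The SPP solver itself operates on near-Toeplitz tridiagonal systems and is not reproduced here; the prefix communication pattern it relies on is the standard parallel scan, sketched serially below (each round is fully data-parallel on a SIMD machine such as the MasPar):

```python
def inclusive_scan(xs, op):
    """Logarithmic-depth inclusive scan (Hillis-Steele): after the round
    with offset d, position i holds the combination of elements
    i-2d+1 .. i. Written serially here; every round is one parallel step."""
    xs = list(xs)
    d = 1
    while d < len(xs):
        # Each position combines with the value d slots to its left.
        xs = [xs[i] if i < d else op(xs[i - d], xs[i]) for i in range(len(xs))]
        d *= 2
    return xs
```

    Truncating the number of rounds, as SPP does for diagonally dominant systems, trades a bounded accuracy loss for fewer communication steps.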

  17. An improved distributed routing algorithm for Benes based optical NoC

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Gu, Huaxi; Yang, Yintang

    2010-08-01

    Integrated optical interconnect is believed to be one of the main technologies to replace electrical wires, and the optical network-on-chip (ONoC) has attracted much attention recently. The Benes topology is a good choice for ONoC because of its rearrangeable non-blocking character, multistage feature and easy scalability. The routing algorithm plays an important role in determining the performance of an ONoC, but traditional routing algorithms for Benes networks are not suitable for ONoC communication, so we developed a new distributed routing algorithm for Benes ONoC in this paper. Our algorithm selects the routing path dynamically according to network conditions and enables more path choices for messages traveling in the network. We used OPNET to evaluate the performance of our routing algorithm and compared it with a well-known bit-controlled routing algorithm. End-to-end (ETE) delay and throughput are shown for different packet lengths and network sizes. Simulation results show that our routing algorithm provides better performance for ONoC.

  18. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.

  19. Parameters Identification of Fluxgate Magnetic Core Adopting the Biogeography-Based Optimization Algorithm

    PubMed Central

    Jiang, Wenjuan; Shi, Yunbo; Zhao, Wenjie; Wang, Xiangxin

    2016-01-01

    The main part of the magnetic fluxgate sensor is the magnetic core, the hysteresis characteristic of which affects the performance of the sensor. When the fluxgate sensors are modelled for design purposes, an accurate model of hysteresis characteristic of the cores is necessary to achieve good agreement between modelled and experimental data. The Jiles-Atherton model is simple and can reflect the hysteresis properties of the magnetic material precisely, which makes it widely used in hysteresis modelling and simulation of ferromagnetic materials. However, in practice, it is difficult to determine the parameters accurately owing to the sensitivity of the parameters. In this paper, the Biogeography-Based Optimization (BBO) algorithm is applied to identify the Jiles-Atherton model parameters. To enhance the performances of the BBO algorithm such as global search capability, search accuracy and convergence rate, an improved Biogeography-Based Optimization (IBBO) algorithm is put forward by using Arnold map and mutation strategy of Differential Evolution (DE) algorithm. Simulation results show that IBBO algorithm is superior to Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Differential Evolution algorithm and BBO algorithm in identification accuracy and convergence rate. The IBBO algorithm is applied to identify Jiles-Atherton model parameters of selected permalloy. The simulation hysteresis loop is in high agreement with experimental data. Using permalloy as core of fluxgate probe, the simulation output is consistent with experimental output. The IBBO algorithm can identify the parameters of Jiles-Atherton model accurately, which provides a basis for the precise analysis and design of instruments and equipment with magnetic core. PMID:27347974

  20. Parameters Identification of Fluxgate Magnetic Core Adopting the Biogeography-Based Optimization Algorithm.

    PubMed

    Jiang, Wenjuan; Shi, Yunbo; Zhao, Wenjie; Wang, Xiangxin

    2016-01-01

    The main part of the magnetic fluxgate sensor is the magnetic core, the hysteresis characteristic of which affects the performance of the sensor. When the fluxgate sensors are modelled for design purposes, an accurate model of hysteresis characteristic of the cores is necessary to achieve good agreement between modelled and experimental data. The Jiles-Atherton model is simple and can reflect the hysteresis properties of the magnetic material precisely, which makes it widely used in hysteresis modelling and simulation of ferromagnetic materials. However, in practice, it is difficult to determine the parameters accurately owing to the sensitivity of the parameters. In this paper, the Biogeography-Based Optimization (BBO) algorithm is applied to identify the Jiles-Atherton model parameters. To enhance the performances of the BBO algorithm such as global search capability, search accuracy and convergence rate, an improved Biogeography-Based Optimization (IBBO) algorithm is put forward by using Arnold map and mutation strategy of Differential Evolution (DE) algorithm. Simulation results show that IBBO algorithm is superior to Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Differential Evolution algorithm and BBO algorithm in identification accuracy and convergence rate. The IBBO algorithm is applied to identify Jiles-Atherton model parameters of selected permalloy. The simulation hysteresis loop is in high agreement with experimental data. Using permalloy as core of fluxgate probe, the simulation output is consistent with experimental output. The IBBO algorithm can identify the parameters of Jiles-Atherton model accurately, which provides a basis for the precise analysis and design of instruments and equipment with magnetic core. PMID:27347974

  1. A Multigrid Algorithm for Immersed Interface Problems

    NASA Technical Reports Server (NTRS)

    Adams, Loyce

    1996-01-01

    Many physical problems involve interior interfaces across which the coefficients in the problem, the solution, its derivatives, the flux, or the source term may have jumps. These interior interfaces may or may not align with an underlying Cartesian grid. Zhilin Li, in his dissertation, showed how to discretize such elliptic problems using only a Cartesian grid and the known jump conditions to second order accuracy. In this paper, we describe how to apply the full multigrid algorithm in this context. In particular, the restriction, interpolation, and coarse grid problem will be described. Numerical results for several model problems are given to demonstrate that good rates can be obtained even when jumps in the coefficients are large and do not align with the grid.

  2. National Orange Show Photovoltaic Demonstration

    SciTech Connect

    Dan Jimenez; Sheri Raborn, CPA; Tom Baker

    2008-03-31

    National Orange Show Photovoltaic Demonstration created a 400KW Photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the Photovoltaic program will result in overall savings for the NOS, help reduce the State's energy demands as relating to electrical power consumption, improve quality of life within the affected grid area as well as increase the energy efficiency of buildings at our venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to service its community.

  3. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
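    A minimal sketch of the Soft-Impute iteration described above, using a dense SVD rather than the paper's structured low-rank SVD (`lam` and `iters` are assumed names for the regularization parameter and iteration count):

```python
import numpy as np

def soft_impute(X, mask, lam, iters=100):
    """Soft-Impute sketch: iteratively fill the missing entries with the
    current estimate, then soft-threshold the singular values.
    `mask` is True where X is observed; `lam` is the nuclear-norm penalty."""
    Z = np.where(mask, X, 0.0)
    for _ in range(iters):
        filled = np.where(mask, X, Z)            # observed data + current fill
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)             # soft-threshold spectrum
        Z = (U * s) @ Vt
    return Z
```

    The path of solutions over a decreasing grid of `lam` values is obtained by warm-starting each run from the previous `Z`.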

  4. Public Good Diffusion Limits Microbial Mutualism

    NASA Astrophysics Data System (ADS)

    Menon, Rajita; Korolev, Kirill S.

    2015-04-01

    Standard game theory cannot describe microbial interactions mediated by diffusible molecules. Nevertheless, we show that one can still model microbial dynamics using game theory with parameters renormalized by diffusion. Contrary to expectations, greater sharing of metabolites reduces the strength of cooperation and leads to species extinction via a nonequilibrium phase transition. We report analytic results for the critical diffusivity and the length scale of species intermixing. The species that produces the public good more slowly is favored by selection when fitness saturates with nutrient concentration.

  5. Switch for Good Community Program

    SciTech Connect

    Crawford, Tabitha; Amran, Martha

    2013-11-19

    Switch4Good is an energy-savings program that helps residents reduce consumption from behavior changes; it was co-developed by Balfour Beatty Military Housing Management (BB) and WattzOn in Phase I of this grant. The program was offered at 11 Navy bases. Three customer engagement strategies were evaluated, and it was found that Digital Nudges (a combination of monthly consumption statements with frequent messaging via text or email) was most cost-effective.

  6. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran; Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usually expected central-limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  7. Byte structure variable length coding (BS-VLC): a new specific algorithm applied in the compression of trajectories generated by molecular dynamics

    PubMed

    Melo; Puga; Gentil; Brito; Alves; Ramos

    2000-05-01

    Molecular dynamics is a well-known technique very much used in the study of biomolecular systems. The trajectory files produced by molecular dynamics simulations are extensive, and the classical lossless algorithms give poor efficiencies in their compression. In this work, a new specific algorithm, named byte structure variable length coding (BS-VLC), is introduced. Trajectory files, obtained by molecular dynamics applied to trypsin and a trypsin:pancreatic trypsin inhibitor complex, were compressed using four classical lossless algorithms (Huffman, adaptive Huffman, LZW, and LZ77) as well as the BS-VLC algorithm. The results obtained show that BS-VLC nearly triples the compression efficiency of the best classical lossless algorithm while preserving near-lossless behavior. Compression efficiencies close to 50% can be obtained with a high degree of precision, and the maximum efficiency possible within this algorithm (75%) can be achieved with good precision. PMID:10850759

  8. The New and Computationally Efficient MIL-SOM Algorithm: Potential Benefits for Visualization and Analysis of a Large-Scale High-Dimensional Clinically Acquired Geographic Data

    PubMed Central

    Oyana, Tonny J.; Achenie, Luke E. K.; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are updated using a proportional-integral-derivative (PID) controller. In a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms that converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive updating procedure, better performance, good robustness, and faster training than Kohonen's SOM. PMID:22481977
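    The abstract does not spell out the PID-based update rule, so the following sketch is only one guess at the idea: a standard 1D Kohonen update in which the error (x - w) additionally feeds leaky-integral and derivative terms. The gains, the leaky integrator, and all parameters are illustrative assumptions, not taken from the MIL-SOM paper:

```python
import math
import random

def pid_som(data, n_units=8, epochs=30, kp=0.5, ki=0.01, kd=0.05):
    """1D SOM in which the update treats (x - w) as a PID error signal.
    Gains and the leaky integrator are illustrative choices only."""
    random.seed(1)
    w = [random.random() for _ in range(n_units)]
    integ = [0.0] * n_units   # leaky accumulated error (I term)
    prev = [0.0] * n_units    # previous error (D term)
    for epoch in range(epochs):
        # neighbourhood radius shrinks over training, as in a standard SOM
        sigma = max(0.5, (n_units / 2) * (1 - epoch / epochs))
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(x - w[i]))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                err = x - w[i]
                integ[i] = 0.9 * integ[i] + err
                w[i] += h * (kp * err + ki * integ[i] + kd * (err - prev[i]))
                prev[i] = err
    return w

def quantization_error(data, w):
    """Mean distance from each sample to its nearest unit."""
    return sum(min(abs(x - wi) for wi in w) for x in data) / len(data)
```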

  9. Creating Slide Show Book Reports.

    ERIC Educational Resources Information Center

    Taylor, Harriet G.; Stuhlmann, Janice M.

    1995-01-01

    Describes the use of "Kid Pix 2" software by fourth grade students to develop slide-show book reports. Highlights include collaboration with education majors from Louisiana State University, changes in attitudes of the education major students and elementary students, and problems with navigation and disk space. (LRW)

  10. Producing Talent and Variety Shows.

    ERIC Educational Resources Information Center

    Szabo, Chuck

    1995-01-01

    Identifies key aspects of producing talent shows and outlines helpful hints for avoiding pitfalls and ensuring a smooth production. Presents suggestions concerning publicity, scheduling, and support personnel. Describes types of acts along with special needs and problems specific to each act. Includes a list of resources. (MJP)

  11. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that optimizes coherence, building on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm achieves the best focusing effect, but introduces decoherence in the subsequent interferometric processing. In the algorithm proposed here, SAR echoes are focused with consistent imaging parameters. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are used to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  12. Which fMRI clustering gives good brain parcellations?

    PubMed Central

    Thirion, Bertrand; Varoquaux, Gaël; Dohmatob, Elvis; Poline, Jean-Baptiste

    2014-01-01

    Analysis and interpretation of neuroimaging data often require one to divide the brain into a number of regions, or parcels, with homogeneous characteristics, be these regions defined in the brain volume or on the cortical surface. While predefined brain atlases do not adapt to the signal in the individual subject images, parcellation approaches use brain activity (e.g., found in some functional contrasts of interest) and clustering techniques to define regions with some degree of signal homogeneity. In this work, we address the question of which clustering technique is appropriate and how to optimize the corresponding model. We use two principled criteria: goodness of fit (accuracy), and reproducibility of the parcellation across bootstrap samples. We study these criteria on both simulated and two task-based functional Magnetic Resonance Imaging datasets for the Ward, spectral and k-means clustering algorithms. We show that in general Ward’s clustering performs better than alternative methods with regard to reproducibility and accuracy and that the two criteria diverge regarding the preferred models (reproducibility leading to more conservative solutions), thus deferring the practical decision to a higher level alternative, namely the choice of a trade-off between accuracy and stability. PMID:25071425

  13. Efficient spectral and pseudospectral algorithms for 3D simulations of whistler-mode waves in a plasma

    NASA Astrophysics Data System (ADS)

    Gumerov, Nail A.; Karavaev, Alexey V.; Surjalal Sharma, A.; Shao, Xi; Papadopoulos, Konstantinos D.

    2011-04-01

    Efficient spectral and pseudospectral algorithms for the simulation of linear and nonlinear 3D whistler waves in a cold electron plasma are developed. These algorithms are applied to the simulation of whistler waves generated by loop antennas and of spheromak-like stationary waves of considerable amplitude. The algorithms are linearly stable and show good stability properties in computations of nonlinear waves over tens of thousands of time steps. Additional speedups by factors of 10-20 (comparing a single CPU core with one GPU) are achieved by using graphics processors (GPUs), which enable efficient numerical simulation of the wave propagation on relatively high-resolution meshes (tens of millions of nodes) in a personal computing environment. Comparisons of the numerical results with analytical solutions and experiments show good agreement. The limitations of the codes and the performance of GPU computing are discussed.

  14. Virtual Goods Recommendations in Virtual Worlds

    PubMed Central

    Chen, Kuan-Yu; Liao, Hsiu-Yu; Chen, Jyun-Hung; Liu, Duen-Ren

    2015-01-01

    Virtual worlds (VWs) are computer-simulated environments which allow users to create their own virtual character as an avatar. With the rapidly growing user volume in VWs, platform providers launch virtual goods hastily to increase sales revenue. However, hasty development produces unrelated virtual items that are difficult to remarket. This not only wastes providers' development resources, but also makes it difficult for users to find virtual goods that fit their virtual homes and daily virtual lives. In the VWs, users decorate their houses, visit others' homes, create families, host parties, and so forth. Users establish their social life circles through these activities. This research proposes a novel virtual goods recommendation method based on these social interactions. Contact strength and contact influence result from interactions with social neighbors and affect users' buying intentions. Our research highlights the importance of social interactions in virtual goods recommendation. The experiment's data were retrieved from an online VW platform, and the results show that the proposed method, considering social interactions and the social life circle, performs better than existing recommendation methods. PMID:25834837

  15. Magic Carpet Shows Its Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The upper left image in this display is from the panoramic camera on the Mars Exploration Rover Spirit, showing the 'Magic Carpet' region near the rover at Gusev Crater, Mars, on Sol 7, the seventh martian day of its journey (Jan. 10, 2004). The lower image, also from the panoramic camera, is a monochrome (single filter) image of a rock in the 'Magic Carpet' area. Note that colored portions of the rock correlate with extracted spectra shown in the plot to the side. Four different types of materials are shown: the rock itself, the soil in front of the rock, some brighter soil on top of the rock, and some dust that has collected in small recesses on the rock face ('spots'). Each color on the spectra matches a line on the graph, showing how the panoramic camera's different colored filters are used to broadly assess the varying mineral compositions of martian rocks and soils.

  16. Stitching algorithm of the images acquired from different points of fixation

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Pismenskova, M. M.

    2015-02-01

    Image mosaicing is the act of combining two or more images and is used in many applications in computer vision, image processing, and computer graphics. It aims to combine images such that no obstructive boundaries exist around overlapped regions and to create a mosaic image that exhibits as little distortion as possible from the original images. Most existing algorithms are computationally complex and do not always produce good results when the source images differ in scale, lighting, or viewpoint. In this paper we consider an algorithm which increases the processing speed when stitching high-resolution images. We reduce the computational complexity by using edge analysis and a saliency map on highly detailed areas. For the detected areas we determine rotation angles, scaling factors, color-correction coefficients, and the transformation matrix. We define key points using the SURF detector and discard false correspondences based on correlation analysis. The proposed algorithm can combine images taken from free points of view with different color balances, shutter times, and scales. We perform a comparative study and show that, statistically, the new algorithm delivers good-quality results compared to existing algorithms.

  17. Analysis of the Dryden Wet Bulb Globe Temperature Algorithm for White Sands Missile Range

    NASA Technical Reports Server (NTRS)

    LaQuay, Ryan Matthew

    2011-01-01

    In locations where the workforce is exposed to high relative humidity and light winds, heat stress is a significant concern. Such is the case at the White Sands Missile Range in New Mexico. Heat stress is depicted by the wet bulb globe temperature, which is the official measurement used by the American Conference of Governmental Industrial Hygienists. The wet bulb globe temperature is measured by an instrument that is designed to be portable and requires routine maintenance. As an alternative way of measuring the wet bulb globe temperature, algorithms have been created to calculate it from basic meteorological observations. The algorithms are location dependent; therefore a specific algorithm is usually not suitable for multiple locations. Due to climatological similarities, the algorithm developed for use at the Dryden Flight Research Center was applied to data from the White Sands Missile Range. A study was performed that compared a wet bulb globe instrument to data from two Surface Atmospheric Measurement Systems applied to the Dryden wet bulb globe temperature algorithm. The period of study was from June to September of 2009, with focus applied from 0900 to 1800 local time. Analysis showed that the algorithm worked well, with a few exceptions. The algorithm becomes less accurate when the dew point temperature is over 10 °C. Cloud cover also has a significant effect on the measured wet bulb globe temperature. The algorithm does not capture red and black heat stress flags well, due to the shorter time scales of such events. The results of this study show that the Dryden Flight Research wet bulb globe temperature algorithm is plausibly compatible with the White Sands Missile Range, except when there are increased dew point temperatures and cloud cover or precipitation. During such occasions, the wet bulb globe temperature instrument would be the preferred method of measurement. Out of the 30
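    The Dryden regression itself is not given in the abstract, but the quantity such algorithms approximate is the standard WBGT weighted sum (ISO 7243 / ACGIH convention). A minimal reference implementation:

```python
def wbgt_outdoor(t_nwb, t_g, t_db):
    """Outdoor WBGT: natural wet-bulb (0.7), black globe (0.2), dry-bulb (0.1).
    All temperatures must be in the same units (typically degrees C)."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb, t_g):
    """Indoor/shaded WBGT drops the dry-bulb term."""
    return 0.7 * t_nwb + 0.3 * t_g
```

Site dependence enters through the estimation of the natural wet-bulb and globe temperatures from routine meteorological observations, which is where dew point and cloud cover matter.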

  18. Algorithmic causets

    NASA Astrophysics Data System (ADS)

    Bolognesi, Tommaso

    2011-07-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  19. A 2D vector map watermarking algorithm resistant to simplification attack

    NASA Astrophysics Data System (ADS)

    Wang, Chuanjian; Liang, Bin; Zhao, Qingzhan; Qiu, Zuqi; Peng, Yuwei; Yu, Liang

    2009-12-01

    Vector maps are valuable assets of data producers. How to effectively protect the copyright of vector maps using digital watermarking is a hot research issue. In this paper, we propose a new robust and blind watermarking algorithm resilient to simplification attack. We prove that the spatial topological relation between map objects has the important property of being approximately invariant under simplification. We choose spatial topological relations as the watermark feature domain and embed watermarks by slightly modifying the spatial topological relation between map objects. Experiments show that our algorithm resists simplification attack well and achieves a good trade-off between robustness and data fidelity.

  20. Improvement of characteristic statistic algorithm and its application on equilibrium cycle reloading optimization

    SciTech Connect

    Hu, Y.; Liu, Z.; Shi, X.; Wang, B.

    2006-07-01

    A brief introduction to the characteristic statistic algorithm (CSA) is given in the paper; CSA is a new global optimization algorithm for the problem of PWR in-core fuel management optimization. CSA is modified by the adoption of a back propagation neural network and fast local adjustment. The modified CSA is then applied to PWR equilibrium cycle reloading optimization, and the corresponding optimization code CSA-DYW is developed. CSA-DYW is used to optimize the 18-month equilibrium reloading cycle of the Daya Bay nuclear plant Unit 1 reactor. The results show that CSA-DYW has high efficiency and good global performance on PWR equilibrium cycle reloading optimization. (authors)

  1. A new algorithm for detecting cloud height using OMPS/LP measurements

    NASA Astrophysics Data System (ADS)

    Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.

    2016-03-01

    The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.

  2. ENVITEC shows off air technologies

    SciTech Connect

    McIlvaine, R.W.

    1995-08-01

    The ENVITEC International Trade Fair for Environmental Protection and Waste Management Technologies, held in June in Duesseldorf, Germany, is the largest air pollution exhibition in the world and may be the largest environmental technology show overall. Visitors saw thousands of environmental solutions from 1,318 companies representing 29 countries and occupying roughly 43,000 square meters of exhibit space. Many innovations were displayed under the category "thermal treatment of air pollutants." New technologies include the following: regenerative thermal oxidizers; wet systems for removing pollutants; biological scrubbers; electrostatic precipitators; selective adsorption systems; activated-coke adsorbers; optimization of scrubber systems; and air pollution monitors.

  3. Detection algorithm of big bandwidth chirp signals based on STFT

    NASA Astrophysics Data System (ADS)

    Wang, Jinzhen; Wu, Juhong; Su, Shaoying; Chen, Zengping

    2014-10-01

    Aiming to solve the problem of detecting wideband chirp signals under low signal-to-noise ratio (SNR) conditions, an effective signal detection algorithm based on the Short-Time Fourier Transform (STFT) is proposed. Considering that the noise spectrum is dispersed while the chirp spectrum is concentrated, an STFT with a Gauss window is performed on the chirp signal at a fixed step, and the frequency of the peak spectrum obtained from each STFT corresponds to the time of each stepped window. The frequencies are then binarized, and an approach similar to the mnk method in the time domain is used to detect the chirp pulse signal and determine coarse starting and ending times. Finally, the data segments containing the coarse starting and ending times are subdivided evenly into many segments, on each of which the STFT is applied; this yields precise starting and ending times. Simulations show that when the SNR is higher than -28 dB, the detection probability is at least 99% with zero false alarm probability, and good estimation accuracy of the starting and ending times is obtained. The algorithm is easy to realize and surpasses the FFT in computational efficiency when the STFT window width and step length are selected properly, so the presented algorithm has good engineering value.
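    A minimal sketch of the stepped-STFT peak-tracking idea on a noise-free linear chirp. It omits the binarization, the mnk detection stage, and the refinement pass; the window and step sizes are illustrative, and a Hann window stands in for the Gauss window:

```python
import cmath
import math

def stft_peak_bins(x, win=64, step=32):
    """Slide a Hann-windowed DFT over x; return the peak bin per window."""
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win - 1)) for n in range(win)]
    peaks = []
    for start in range(0, len(x) - win + 1, step):
        seg = [x[start + n] * hann[n] for n in range(win)]
        mags = []
        for k in range(1, win // 2):  # skip DC, stay below Nyquist
            s = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                    for n in range(win))
            mags.append(abs(s))
        peaks.append(1 + max(range(len(mags)), key=mags.__getitem__))
    return peaks

# Noise-free linear chirp sweeping f0 -> f1 (frequencies normalized to fs = 1).
N, f0, f1 = 1024, 0.05, 0.30
k = (f1 - f0) / N
chirp = [math.cos(2 * math.pi * (f0 * n + 0.5 * k * n * n)) for n in range(N)]
bins = stft_peak_bins(chirp)
freqs = [b / 64 for b in bins]  # peak frequency per window
```

The sequence of per-window peak frequencies rises linearly for a chirp, which is the signature the detection stage looks for.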

  4. PACS model based on digital watermarking and its core algorithms

    NASA Astrophysics Data System (ADS)

    Que, Dashun; Wen, Xianlin; Chen, Bi

    2009-10-01

    A PACS model based on digital watermarking is proposed by analyzing medical image features and PACS requirements from the point of view of information security, with a digital watermarking server and the corresponding processing modules at its core. Two kinds of digital watermarking algorithms are studied: one is a non-region-of-interest (NROI) digital watermarking algorithm based on the wavelet domain and block means; the other is a reversible watermarking algorithm based on extended difference and a pseudo-random matrix. The former is a robust lossy watermarking scheme: embedding in the NROI by wavelet transform provides a good way to protect the focus area (ROI) of images, and the introduction of the block-mean approach is a good scheme to enhance the anti-attack capability. The latter is a fragile lossless watermarking scheme that is simple to implement and can realize tamper localization effectively, with the pseudo-random matrix enhancing the correlation and security between pixels. Extensive experimental research has been completed in this paper, including the realization of the digital watermarking PACS model, the watermarking processing module and its anti-attack experiments, the digital watermarking server, and network transmission simulation experiments with medical images. Theoretical analysis and experimental results show that the designed PACS model can effectively ensure the confidentiality, authenticity, integrity, and security of medical image information.

  5. Parallelizing a Symbolic Compositional Model-Checking Algorithm

    NASA Astrophysics Data System (ADS)

    Cohen, Ariel; Namjoshi, Kedar S.; Sa'Ar, Yaniv; Zuck, Lenore D.; Kisyova, Katya I.

    We describe a parallel, symbolic, model-checking algorithm, built around a compositional reasoning method. The method constructs a collection of per-process (i.e., local) invariants, which together imply a desired global safety property. The local invariant computation is a simultaneous fixpoint evaluation, which easily lends itself to parallelization. Moreover, locality of reasoning helps limit both the frequency and the amount of cross-thread synchronization, leading to good parallel performance. Experimental results show that the parallelized computation can achieve substantial speed-up, with reasonably small memory overhead.

  6. Beneficence: doing good for others.

    PubMed

    Gillon, R

    1985-07-01

    Gillon's essay on beneficence is one in a series of British Medical Journal articles on philosophical medical ethics. The duty of beneficence, or doing good for others, figures more prominently in medicine than in most other professions. As important as beneficence is in the physician patient relationship, however, it must be tempered by respect for the patient's autonomy; by the duty of nonmaleficence, or of doing no harm; and by a concern for justice, especially in the allocation of scarce medical resources. PMID:3926060

  7. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.

  8. Parallelized Dilate Algorithm for Remote Sensing Image

    PubMed Central

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important algorithm, the dilate algorithm can give a more connected view of a remote sensing image that has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and data volumes have become very large. This slows the algorithm down, or makes it impossible to obtain a result within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392
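    A minimal sketch of the row-strip parallelization idea, using a Python thread pool in place of MPI (the original's process layout is not reproduced; here the source image is shared read-only, so strips need no halo exchange):

```python
from concurrent.futures import ThreadPoolExecutor

def dilate_rows(img, row_range):
    """3x3 binary dilation for rows in row_range, reading neighbours from img."""
    h, w = len(img), len(img[0])
    out = []
    for i in row_range:
        row = []
        for j in range(w):
            hit = any(img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if 0 <= i + di < h and 0 <= j + dj < w)
            row.append(1 if hit else 0)
        out.append(row)
    return out

def dilate_parallel(img, workers=4):
    """Split rows into strips; each worker dilates its strip independently."""
    h = len(img)
    chunk = (h + workers - 1) // workers
    strips = [range(s, min(s + chunk, h)) for s in range(0, h, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(dilate_rows, [img] * len(strips), strips)
    return [row for part in parts for row in part]
```

Because dilation writes to a separate output buffer, the strips are embarrassingly parallel; in an MPI setting each rank would additionally need a one-row halo of the input.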

  9. An improved algorithm of mask image dodging for aerial image

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi

    2011-12-01

    Mask image dodging based on the Fourier transform is a good technique for removing uneven luminance within a single image. At present, the difference method and the ratio method are in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform, but is deficient in local contrast; meanwhile, the ratio method works better on local contrast, but sometimes makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper, on the basis of a study of both methods, proposes a balanced solution. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method, but also avoids the deficiencies of the two algorithms.
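    The two methods being compared can be sketched on a single row of pixels, with a clamped box mean standing in for the Fourier-domain background estimate (an assumption made here for brevity):

```python
def box_mean(row, radius):
    """Clamped-border moving average: a crude low-frequency background estimate."""
    n = len(row)
    return [sum(row[max(0, i - radius):min(n, i + radius + 1)])
            / (min(n, i + radius + 1) - max(0, i - radius))
            for i in range(n)]

def dodge_difference(row, radius=8):
    """Difference method: subtract the background, restore the global mean."""
    bg = box_mean(row, radius)
    mean_bg = sum(bg) / len(bg)
    return [v - b + mean_bg for v, b in zip(row, bg)]

def dodge_ratio(row, radius=8):
    """Ratio method: divide by the background, rescale by the global mean."""
    bg = box_mean(row, radius)
    mean_bg = sum(bg) / len(bg)
    return [v * mean_bg / b for v, b in zip(row, bg)]
```

The difference method shifts brightness additively (uniform global brightness, muted local contrast), while the ratio method rescales multiplicatively (better local contrast, but dark regions with a small background estimate get amplified).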

  10. An improved piecewise linear chaotic map based image encryption algorithm.

    PubMed

    Hu, Yuping; Zhu, Congxu; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Exploiting the sensitivity to initial key values, the system parameters, and the ergodicity of the chaotic system, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order, but alternately from the beginning and the end. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack. PMID:24592159
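    A sketch of a PWLCM keystream cipher with cipher feedback, in the spirit of (but not identical to) the scheme described above; the map parameter, seed, burn-in, and byte quantization are illustrative assumptions:

```python
def pwlcm(x, p):
    """Piecewise linear chaotic map on (0, 1), control parameter p in (0, 0.5)."""
    if x >= 0.5:
        x = 1.0 - x          # the map is symmetric about 0.5
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def keystream(n, x0=0.123456, p=0.3, burn_in=100):
    """Iterate the map and quantize each state to one byte."""
    x = x0
    for _ in range(burn_in):  # discard the transient
        x = pwlcm(x, p)
    out = []
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return out

def encrypt(data, x0=0.123456, p=0.3):
    """XOR keystream with cipher feedback: each c[i] depends on c[i-1]."""
    ks = keystream(len(data), x0, p)
    out, prev = [], 0
    for b, k in zip(data, ks):
        c = b ^ k ^ prev
        out.append(c)
        prev = c
    return out

def decrypt(data, x0=0.123456, p=0.3):
    ks = keystream(len(data), x0, p)
    out, prev = [], 0
    for c, k in zip(data, ks):
        out.append(c ^ k ^ prev)
        prev = c
    return out
```

The feedback term is what propagates a one-pixel change in the plaintext through all subsequent ciphertext bytes, which is the diffusion property the abstract refers to.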

  11. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    PubMed Central

    Hu, Yuping; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Exploiting the sensitivity to initial key values, the system parameters, and the ergodicity of the chaotic system, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order, but alternately from the beginning and the end. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack. PMID:24592159

  12. Calibration of FRESIM for Singapore expressway using genetic algorithm

    SciTech Connect

    Cheu, R.L.; Jin, X.; Srinivasa, D.; Ng, K.C.; Ng, Y.L.

    1998-11-01

    FRESIM is a microscopic time-stepping simulation model for freeway corridor traffic operations. To enable FRESIM to realistically simulate expressway traffic flow in Singapore, parameters that govern the movement of vehicles needed to be recalibrated for local traffic conditions. This paper presents the application of a genetic algorithm as an optimization method for finding a suitable combination of FRESIM parameter values. The calibration is based on field data collected on weekdays over a 5.8 km segment of the Ayer Rajar Expressway. Independent calibrations have been made for evening peak and midday off-peak traffic. The results show that the genetic algorithm is able to search for two sets of parameter values that enable FRESIM to produce 30-s loop-detector volumes and speeds (averaged across all lanes) closely matching the field data under two different traffic conditions. The two sets of parameter values are found to produce a consistently good match for data collected on different days.
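    The calibration loop can be sketched with a stand-in "simulator" of two parameters fitted to synthetic field data. The GA operators below (tournament selection, blend crossover, Gaussian mutation, elitism) are common choices and an assumption here, not necessarily those used with FRESIM:

```python
import random

def simulate(a, b):
    """Stand-in for the traffic simulator: detector readings as a function of
    two behavioural parameters (purely illustrative, not FRESIM)."""
    return [a * x + b for x in range(10)]

def fitness(params, field):
    """Sum of squared errors between simulated and field measurements."""
    sim = simulate(*params)
    return sum((s - f) ** 2 for s, f in zip(sim, field))

def calibrate(field, pop_size=30, gens=40, seed=2):
    random.seed(seed)
    pop = [(random.uniform(0, 5), random.uniform(0, 10)) for _ in range(pop_size)]
    best = min(pop, key=lambda p: fitness(p, field))
    first_err = fitness(best, field)
    for _ in range(gens):
        nxt = [best]                                   # elitism
        while len(nxt) < pop_size:
            p1, p2 = (min(random.sample(pop, 3),       # tournament selection
                          key=lambda p: fitness(p, field)) for _ in range(2))
            w = random.random()                        # blend crossover
            child = tuple(w * u + (1 - w) * v for u, v in zip(p1, p2))
            child = tuple(c + random.gauss(0, 0.2) for c in child)  # mutation
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda p: fitness(p, field))
    return best, first_err, fitness(best, field)
```

In the real calibration, `simulate` is a full FRESIM run and `field` the 30-s loop-detector volumes and speeds, so each fitness evaluation is expensive; that cost is what motivates a derivative-free search like a GA.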

  13. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
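    Batcher's odd-even merge sort, mentioned above, is a fixed compare-exchange network of depth O(log² N), which is what makes it map well onto vector hardware even though it performs O(N log² N) comparisons. A scalar sketch for power-of-two input sizes:

```python
def compare_exchange(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def odd_even_merge(a, lo, n, r):
    """Merge two sorted halves of a[lo:lo+n] using stride-r compare-exchanges."""
    step = r * 2
    if step < n:
        odd_even_merge(a, lo, n, step)        # even subsequence
        odd_even_merge(a, lo + r, n, step)    # odd subsequence
        for i in range(lo + r, lo + n - r, step):
            compare_exchange(a, i, i + r)
    else:
        compare_exchange(a, lo, lo + r)

def batcher_sort(a, lo=0, n=None):
    """Sort a[lo:lo+n] in place; n must be a power of two."""
    if n is None:
        n = len(a)
    if n > 1:
        m = n // 2
        batcher_sort(a, lo, m)
        batcher_sort(a, lo + m, m)
        odd_even_merge(a, lo, n, 1)
</```

Every compare-exchange at a given recursion level is independent of the others, so on a vector machine like STAR each level becomes a handful of vector operations, with no data-dependent branching.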

  14. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  15. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  16. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  17. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

    In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to fall into local minima, poor generalization ability, and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis, and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks, and improves the network generalization ability, but also accelerates the convergence speed of the network, avoids trapping in local minima, and enhances network adaptability and prediction ability. The dynamic self-adaptive learning algorithm is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.

  18. Infrared small target detection based on bilateral filtering algorithm with similarity judgments

    NASA Astrophysics Data System (ADS)

    Li, Yanbei; Li, Yan

    2014-11-01

    Infrared small target detection is one of the key technologies in infrared precision guidance and search-and-track systems. Because the distance between the infrared imaging system and the target is large, the target appears small, faint and obscure, and the interference from background clutter and system noise is intense. To solve the problem of infrared small target detection in a complex background, this paper proposes a bilateral filtering algorithm based on similarity judgments for infrared image background prediction. The algorithm introduces a gradient factor and a similarity judgment factor into traditional bilateral filtering; the two factors enhance the accuracy of the algorithm in smooth regions. At the same time, the spatial proximity coefficients and the gray similarity coefficient in the bilateral filter are both approximated by the first two terms of their Maclaurin expansions, with the aim of reducing the time overhead. Simulation results show that, compared with the improved bilateral filtering algorithm, the proposed algorithm can effectively suppress complex background clutter in the infrared image and enhance the target signal, and it also improves the signal-to-noise ratio (SNR) and contrast while reducing computation time. In summary, the algorithm has good background rejection performance.
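
    As a sketch of the speed-up idea only (not the authors' exact formulation), each Gaussian factor exp(-u) in the bilateral weight can be replaced by its first two Maclaurin terms, 1 - u, clipped at zero:

```python
import numpy as np

def weight_exact(ds2, dg2, sigma_s, sigma_r):
    """Standard bilateral weight: spatial proximity * gray similarity."""
    return np.exp(-ds2 / (2 * sigma_s**2)) * np.exp(-dg2 / (2 * sigma_r**2))

def weight_maclaurin(ds2, dg2, sigma_s, sigma_r):
    """Two-term Maclaurin approximation exp(-u) ~= 1 - u, clipped at zero
    so the weight never goes negative."""
    ws = np.maximum(0.0, 1.0 - ds2 / (2 * sigma_s**2))
    wr = np.maximum(0.0, 1.0 - dg2 / (2 * sigma_r**2))
    return ws * wr

# The approximation is close to the exact weight when the exponents are small
w_e = weight_exact(0.5, 4.0, 2.0, 10.0)
w_a = weight_maclaurin(0.5, 4.0, 2.0, 10.0)
```

    Replacing the exponentials by a multiply-add is where the time saving comes from; the price is accuracy for pixels far from the center in space or intensity, where the clipped weight drops to zero.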

  19. Two novel batch scheduling algorithms with insufficient wavelength converters in optical burst switching networks

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Pang, Hong-Feng; Li, Ling-Xia

    2013-03-01

    In optical burst switching (OBS) networks, wavelength converters (WCs) at core nodes are used to decrease the burst loss rate. WCs are difficult to implement with current technology and their cost is high, so some core nodes may be configured with insufficient WCs to reduce the cost of OBS networks. However, many data channel scheduling algorithms do not account for the number of WCs, and their burst loss performance is poor when WCs are insufficient. To overcome this defect, two novel batch scheduling algorithms for the case of insufficient WCs are proposed in this paper. The former algorithm improves the utilization of the WC resources to reduce the burst loss rate, while the latter saves WC resources for incoming bursts to improve burst loss performance. Compared with the other scheduling algorithms, the latter algorithm achieves a lower burst loss rate with the same number of WCs. The simulation results show that the latter algorithm is more effective in reducing the burst loss rate with insufficient WCs.

  20. Solving the depth of the repeated texture areas based on the clustering algorithm

    NASA Astrophysics Data System (ADS)

    Xiong, Zhang; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Reconstructing a 3D scene in monocular stereo vision requires the depth of the scenic points in the picture. But mismatches inevitably occur during image matching, especially when the images contain many repeated texture areas. At present, the multiple-baseline stereo algorithm is commonly used to eliminate matching errors in repeated texture areas; it can eliminate the ambiguity corresponding to common repeated textures, but it places restrictions on the baseline and is slow. In this paper, we put forward an algorithm for calculating the depth of matching points in repeated texture areas based on a clustering algorithm. First, we preprocess the images with a Gauss filter. Second, we segment the repeated texture regions of the images into image blocks using a spectral clustering segmentation algorithm based on superpixels, and tag the image blocks. Then we match the two images and solve for the depth of the image. Finally, each image block is assigned the median of all depth values calculated at points within the block, giving the depth of the repeated texture areas. Experiments on many images show that the algorithm calculates the depth of repeated texture areas well.
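
    The final step, assigning each tagged block the median of its per-point depth estimates, might be sketched as follows (array and function names are hypothetical):

```python
import numpy as np

def block_median_depth(depth, labels):
    """Replace every pixel's depth with the median depth of its block,
    suppressing outlier estimates from mismatches inside repeated textures."""
    out = np.empty_like(depth, dtype=float)
    for lbl in np.unique(labels):
        mask = labels == lbl
        out[mask] = np.median(depth[mask])
    return out

labels = np.array([[0, 0], [1, 1]])            # two tagged blocks
depth = np.array([[1.0, 3.0], [5.0, 9.0]])     # noisy per-pixel depths
out = block_median_depth(depth, labels)        # block 0 -> 2.0, block 1 -> 7.0
```

    The median is the natural robust choice here: a few gross mismatches inside a block leave the block depth unchanged, whereas a mean would be pulled toward them.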

  1. Bunkhouse basement interior showing storage area and a conveyor belt ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Bunkhouse basement interior showing storage area and a conveyor belt (circa 1936) used to unload dry goods into the basement through an opening on the east side of the bunkhouse. - Sespe Ranch, Bunkhouse, 2896 Telegraph Road, Fillmore, Ventura County, CA

  2. GOES-West Shows U.S. West's Record Rainfall

    NASA Video Gallery

    A new time-lapse animation of data from NOAA's GOES-West satellite provides a good picture of why the U.S. West Coast continues to experience record rainfall. The new animation shows the movement o...

  3. Going public: good scientific conduct.

    PubMed

    Meyer, Gitte; Sandøe, Peter

    2012-06-01

    The paper addresses issues of scientific conduct regarding relations between science and the media, relations between scientists and journalists, and attitudes towards the public at large. In the large and increasing body of literature on scientific conduct and misconduct, these issues seem underexposed as ethical challenges. Consequently, individual scientists here tend to be left alone with problems and dilemmas, with no guidance for good conduct. Ideas are presented about how to make up for this omission. Using a practical, ethical approach, the paper attempts to identify ways scientists might deal with ethical public relations issues, guided by a norm or maxim of openness. Drawing on and rethinking the CUDOS codification of the scientific ethos, as it was worked out by Robert K. Merton in 1942, we propose that this, which is echoed in current codifications of norms for good scientific conduct, contains a tacit maxim of openness which may naturally be extended to cover the public relations of science. Discussing openness as access, accountability, transparency and receptiveness, the argumentation concentrates on the possible prevention of misconduct with respect to, on the one hand, sins of omission-withholding important information from the public-and, on the other hand, abuses of the authority of science in order to gain publicity. Statements from interviews with scientists are used to illustrate how scientists might view the relevance of the issues raised. PMID:21088921

  4. One of the Good Guys

    SciTech Connect

    Wiley, H. S.

    2010-10-01

    I was talking with some younger colleagues at a meeting last month when the subject of career goals came up. These colleagues were successful in that they had recently received tenure at top research universities and had some grants and good students. Thus, the early career pressure to simply survive was gone. So now what motivated them? Solving challenging and significant scientific problems was at the top of their lists. Interestingly, they were also motivated by a desire to become one of the “good guys” in science. The fact that being an important contributor to the scientific community can be fulfilling should not come as a surprise to anyone. However, what I do consider surprising is how rarely this seems to be discussed with students and postdocs. What we do discuss are either those issues that are fundamental aspects of the job (get a grant, get tenure, do research in an important field) or those that are important to our institutions. Knowing how to do our jobs well is indeed essential for any kind of professional success. However, achieving the right balance in our ambitions is also important for our happiness.

  5. Goode Gym Energy Renovation Project

    SciTech Connect

    Coleman, Andrena

    2014-12-11

    The Ida H. Goode Gymnasium was constructed in 1964 to serve as a focal point for academics, student recreation, and health and wellness activities. This 38,000 SF building contains a gymnasium with a stage, a swimming pool, eight classrooms, a weight room, six offices and auxiliary spaces for the athletic programs. The gym is located on a 4-acre greenfield, which is slated for improvement and enhancement of the future athletics program at Bennett College. The available funding for this project was used to weatherize the envelope of the gymnasium, install a new energy-efficient mechanical system, and retrofit the existing lighting systems in the building's interior. The envelope weatherization was completed without disturbing the building's historic preservation eligibility. The existing heating system was replaced with a new high-efficiency condensing system, which includes a new Building Automation System that provides additional monitoring; proper usage of this system will provide additional energy savings. Most of the existing interior lighting fixtures and bulbs were replaced with new LED and high-efficiency T-8 bulbs and fixtures, and occupancy sensors were installed in applicable areas. The Ida Goode Gymnasium should see substantial electricity and natural gas savings as well as gains in operational and maintenance efficiency. The aesthetics of the building were maintained and overall safety was improved.

  6. Natural gradient learning algorithms for RBF networks.

    PubMed

    Zhao, Junsheng; Wei, Haikun; Zhang, Chi; Li, Weiling; Guo, Weili; Zhang, Kanjian

    2015-02-01

    Radial basis function (RBF) networks are among the most widely used models for function approximation and classification. There are many strange behaviors in the learning process of RBF networks, such as slow learning speed and the existence of plateaus. The natural gradient learning method can overcome these disadvantages effectively: it can accelerate the dynamics of learning and avoid plateaus. In this letter, we assume that the probability density function (pdf) of the input and the activation function are Gaussian. First, we introduce natural gradient learning to RBF networks and give the explicit forms of the Fisher information matrix and its inverse. Second, since it is difficult to calculate the Fisher information matrix and its inverse when the number of hidden units and the dimension of the input are large, we introduce the adaptive method to the natural gradient learning algorithms. Finally, we give an explicit form of the adaptive natural gradient learning algorithm and compare it to the conventional gradient descent method. Simulations show that the proposed adaptive natural gradient method, which can avoid plateaus effectively, performs well when RBF networks are used for nonlinear function approximation. PMID:25380332
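
    The update the letter builds on is the natural-gradient step, theta <- theta - eta * F^(-1) * grad, with F the Fisher information matrix. A generic damped sketch (not the paper's RBF-specific closed forms):

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=1.0, damping=1e-8):
    """theta <- theta - lr * F^{-1} grad; damping keeps F invertible."""
    F = fisher + damping * np.eye(len(theta))
    return theta - lr * np.linalg.solve(F, grad)

# On a quadratic loss L = 0.5 * theta^T A theta the gradient is A @ theta
# and the curvature matrix equals A, so one full natural-gradient step
# jumps essentially straight to the minimum at the origin.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
theta = np.array([1.0, -2.0])
theta_new = natural_gradient_step(theta, A @ theta, A)
```

    Preconditioning by F^(-1) is what removes the plateaus: directions in which the loss surface is nearly flat get correspondingly larger steps.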

  7. Speech Enhancement based on Compressive Sensing Algorithm

    NASA Astrophysics Data System (ADS)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various speech enhancement methods have been proposed over the years, with designs focusing mainly on quality and intelligibility. This paper proposes a novel speech enhancement method based on compressive sensing (CS), a new paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, often used for transmission or storage. CS can reduce the number of degrees of freedom of a sparse/compressible signal by permitting only certain configurations of large and zero/small coefficients and structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality using an informal listening test and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well across a wide range of speech tests and gives the enhancement method better noise suppression than conventional approaches without obvious degradation of speech quality.

  8. Intelligent perturbation algorithms for space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.

  9. A parallel algorithm for mesh smoothing

    SciTech Connect

    Freitag, L.; Jones, M.; Plassmann, P.

    1999-07-01

    Maintaining good mesh quality during the generation and refinement of unstructured meshes in finite-element applications is an important aspect in obtaining accurate discretizations and well-conditioned linear systems. In this article, the authors present a mesh-smoothing algorithm based on nonsmooth optimization techniques and a scalable implementation of this algorithm. They prove that the parallel algorithm has a provably fast runtime bound and executes correctly for a parallel random access machine (PRAM) computational model. They extend the PRAM algorithm to distributed memory computers and report results for two- and three-dimensional simplicial meshes that demonstrate the efficiency and scalability of this approach for a number of different test cases. They also examine the effect of different architectures on the parallel algorithm and present results for the IBM SP supercomputer and an ATM-connected network of SPARC Ultras.
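
    The article's method is based on nonsmooth optimization; for orientation only, the simplest classical alternative, Laplacian smoothing, moves each free vertex to the centroid of its neighbors while boundary vertices stay fixed:

```python
import numpy as np

def laplacian_smooth(points, neighbors, fixed, iters=1):
    """Classical Laplacian smoothing (not the paper's nonsmooth method):
    each free vertex moves to the centroid of its neighbors."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iters):
        new = pts.copy()
        for v, nbrs in neighbors.items():
            if v not in fixed:
                new[v] = pts[nbrs].mean(axis=0)
        pts = new
    return pts

# Four fixed corner vertices and one badly placed interior vertex
points = [(0, 0), (1, 0), (1, 1), (0, 1), (0.9, 0.95)]
neighbors = {4: [0, 1, 2, 3]}              # interior vertex's neighbor list
smoothed = laplacian_smooth(points, neighbors, fixed={0, 1, 2, 3})
```

    Laplacian smoothing is cheap but can invert elements near concave boundaries, which is precisely the failure mode that motivates optimization-based smoothers like the one in the article.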

  10. Good-enough linguistic representations and online cognitive equilibrium in language processing.

    PubMed

    Karimi, Hossein; Ferreira, Fernanda

    2016-05-01

    We review previous research showing that representations formed during language processing are sometimes just "good enough" for the task at hand and propose the "online cognitive equilibrium" hypothesis as the driving force behind the formation of good-enough representations in language processing. Based on this view, we assume that the language comprehension system by default prefers to achieve as early as possible and remain as long as possible in a state of cognitive equilibrium where linguistic representations are successfully incorporated with existing knowledge structures (i.e., schemata) so that a meaningful and coherent overall representation is formed, and uncertainty is resolved or at least minimized. We also argue that the online equilibrium hypothesis is consistent with current theories of language processing, which maintain that linguistic representations are formed through a complex interplay between simple heuristics and deep syntactic algorithms and also theories that hold that linguistic representations are often incomplete and lacking in detail. We also propose a model of language processing that makes use of both heuristic and algorithmic processing, is sensitive to online cognitive equilibrium, and, we argue, is capable of explaining the formation of underspecified representations. We review previous findings providing evidence for underspecification in relation to this hypothesis and the associated language processing model and argue that most of these findings are compatible with them. PMID:26103207

  11. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all the three benchmark problems used in this paper the PIL algorithm for MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of the convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm, and as easy to use as the BP algorithm. It, therefore, can be applied, with better performance, to any situations where the standard online BP algorithm is applicable. PMID:17131658
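
    For a single linear neuron the preservation/adaptation trade-off has a well-known closed form: the minimum-norm weight change that fits the new pattern, i.e. the normalized-LMS update. This is shown here only as a hedged illustration of the idea, not the paper's MLP derivation:

```python
import numpy as np

def pil_style_update(w, x, target, step=1.0):
    """Minimum-norm correction (preservation) that moves the output toward
    the new pattern's target (adaptation); step=1 fits the pattern exactly.
    Linear single-neuron illustration, not the paper's MLP algorithm."""
    e = target - w @ x
    return w + step * e * x / (x @ x)

w = np.array([0.2, -0.1, 0.4])
x = np.array([1.0, 2.0, -1.0])
w_new = pil_style_update(w, x, target=1.0)
# With step=1 the updated weights reproduce the new pattern exactly
```

    Among all weight vectors that fit the new pattern, this one is closest to the old weights, which is the "preserve the prior results" half of the PIL performance index.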

  12. Pea Plants Show Risk Sensitivity.

    PubMed

    Dener, Efrat; Kacelnik, Alex; Shemesh, Hagai

    2016-07-11

    Sensitivity to variability in resources has been documented in humans, primates, birds, and social insects, but the fit between empirical results and the predictions of risk sensitivity theory (RST), which aims to explain this sensitivity in adaptive terms, is weak [1]. RST predicts that agents should switch between risk proneness and risk aversion depending on state and circumstances, especially according to the richness of the least variable option [2]. Unrealistic assumptions about agents' information processing mechanisms and poor knowledge of the extent to which variability imposes specific selection in nature are strong candidates to explain the gap between theory and data. RST's rationale also applies to plants, where it has not hitherto been tested. Given the differences between animals' and plants' information processing mechanisms, such tests should help unravel the conflicts between theory and data. Measuring root growth allocation by split-root pea plants, we show that they favor variability when mean nutrient levels are low and the opposite when they are high, supporting the most widespread RST prediction. However, the combination of non-linear effects of nitrogen availability at local and systemic levels may explain some of these effects as a consequence of mechanisms not necessarily evolved to cope with variance [3, 4]. This resembles animal examples in which properties of perception and learning cause risk sensitivity even though they are not risk adaptations [5]. PMID:27374342

  13. Casimir experiments showing saturation effects

    SciTech Connect

    Sernelius, Bo E.

    2009-10-15

    We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a {sup 87}Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

  14. Aerosol Retrieval and Atmospheric Correction Algorithms for EPIC

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Lyapustin, A.; Marshak, A.; Korkin, S.; Herman, J. R.

    2011-12-01

    EPIC is a multi-spectral imager onboard the planned Deep Space Climate ObserVatoRy (DSCOVR) designed for observations of the full illuminated disk of the Earth with high temporal and coarse spatial resolution (10 km) from the Lagrangian L1 point. During the course of the day, EPIC will view the same Earth surface area in the full range of solar and view zenith angles at the equator with a fixed scattering angle near the backscattering direction. This talk will describe a new aerosol retrieval/atmospheric correction algorithm developed for EPIC and tested with EPIC Simulator data. This algorithm uses the time series approach and consists of two stages: the first stage is designed to periodically re-initialize the surface spectral bidirectional reflectance (BRF) on stable low-AOD days. Such days can be selected based on the same measured reflectance between the morning and afternoon reciprocal view geometries of EPIC. In the second stage, the algorithm will monitor the diurnal cycle of aerosol optical depth and fine mode fraction based on the known spectral surface BRF. Testing of the developed algorithm with simulated EPIC data over the continental USA showed good accuracy of AOD retrievals (10-20%) except over very bright surfaces.

  15. Aerosol Retrieval and Atmospheric Correction Algorithms for EPIC

    NASA Technical Reports Server (NTRS)

    Wang, Yujie; Lyapustin, Alexei; Marshak, Alexander; Korkin, Sergey; Herman, Jay

    2011-01-01

    EPIC is a multi-spectral imager onboard the planned Deep Space Climate ObserVatoRy (DSCOVR) designed for observations of the full illuminated disk of the Earth with high temporal and coarse spatial resolution (10 km) from the Lagrangian L1 point. During the course of the day, EPIC will view the same Earth surface area in the full range of solar and view zenith angles at the equator with a fixed scattering angle near the backscattering direction. This talk will describe a new aerosol retrieval/atmospheric correction algorithm developed for EPIC and tested with EPIC Simulator data. This algorithm uses the time series approach and consists of two stages: the first stage is designed to periodically re-initialize the surface spectral bidirectional reflectance (BRF) on stable low-AOD days. Such days can be selected based on the same measured reflectance between the morning and afternoon reciprocal view geometries of EPIC. In the second stage, the algorithm will monitor the diurnal cycle of aerosol optical depth and fine mode fraction based on the known spectral surface BRF. Testing of the developed algorithm with simulated EPIC data over the continental USA showed good accuracy of AOD retrievals (10-20%) except over very bright surfaces.

  16. What are narratives good for?

    PubMed

    Beatty, John

    2016-08-01

    Narratives may be easy to come by, but not everything is worth narrating. What merits a narrative? Here, I follow the lead of narratologists and literary theorists, and focus on one particular proposal concerning the elements of a story that make it narrative-worthy. These elements correspond to features of the natural world addressed by the historical sciences, where narratives figure so prominently. What matters is contingency. Narratives are especially good for representing contingency and accounting for contingent outcomes. This will be squared with a common view that narratives leave no room for chance. On the contrary, I will argue, tracing one path through a maze of alternative possibilities, and alluding to those possibilities along the way, is what a narrative does particularly well. PMID:26806602

  17. Broadband and Broad-Angle Low-Scattering Metasurface Based on Hybrid Optimization Algorithm

    PubMed Central

    Wang, Ke; Zhao, Jie; Cheng, Qiang; Dong, Di Sha; Cui, Tie Jun

    2014-01-01

    A broadband and broad-angle low-scattering metasurface is designed, fabricated, and characterized. Based on the optimization algorithm and far-field scattering pattern analysis, we propose a rapid and efficient method to design metasurfaces, which avoids the large amount of time-consuming electromagnetic simulations. Full-wave simulation and measurement results show that the proposed metasurface is insensitive to the polarization of incident waves, and presents good scattering-reduction properties for oblique incident waves. PMID:25089367

  18. Performance comparison of accelerometer calibration algorithms based on 3D-ellipsoid fitting methods.

    PubMed

    Gietzelt, Matthias; Wolf, Klaus-Hendrik; Marschollek, Michael; Haux, Reinhold

    2013-07-01

    Calibration of accelerometers can be reduced to 3D-ellipsoid fitting problems. Changing extrinsic factors like temperature, pressure or humidity, as well as intrinsic factors like the battery status, demand that the measurements be calibrated permanently. Thus, there is a need for fast calibration algorithms, e.g. for online analyses. The primary aim of this paper is to propose a non-iterative calibration algorithm for accelerometers with the focus on minimal execution time and low memory consumption. The secondary aim is to benchmark existing calibration algorithms based on 3D-ellipsoid fitting methods. We compared the algorithms regarding the calibration quality and the execution time as well as the number of quasi-static measurements needed for a stable calibration. As the evaluation criterion for the calibration, the norm of the calibrated measurements was used, for both real-life data recorded during inactivity and simulated data. The algorithms showed high calibration quality, but their execution times differed significantly. The calibration method proposed in this paper showed the shortest execution time and very good performance regarding the number of measurements needed to produce stable results. Furthermore, this algorithm was successfully implemented on a sensor node and calibrates the measured data on-the-fly while continuously storing the measured data to a microSD-card. PMID:23566707
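
    A minimal non-iterative sketch of the idea (axis-aligned ellipsoid only; the benchmarked algorithms fit general 3D ellipsoids): solve one linear least-squares problem for the ellipsoid, then take its center as the bias and its semi-axes as the scale factors.

```python
import numpy as np

def fit_axis_aligned_ellipsoid(m):
    """Least-squares fit of a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = 1
    to quasi-static measurements m (shape N x 3); non-iterative."""
    x, y, z = m[:, 0], m[:, 1], m[:, 2]
    D = np.column_stack([x * x, y * y, z * z, x, y, z])
    a, b, c, d, e, f = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)[0]
    center = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
    # complete the square to recover the semi-axes
    g = 1 + d * d / (4 * a) + e * e / (4 * b) + f * f / (4 * c)
    radii = np.sqrt(g / np.array([a, b, c]))
    return center, radii

# Synthetic quasi-static samples: unit-gravity directions, scaled and biased
rng = np.random.default_rng(0)
v = rng.normal(size=(200, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
meas = v * [1.1, 0.9, 1.05] + [0.1, -0.2, 0.05]
center, radii = fit_axis_aligned_ellipsoid(meas)
calibrated = (meas - center) / radii        # norms return to ~1 g
```

    Being a single linear solve over six unknowns, a fit of this form is cheap enough to rerun on a sensor node whenever enough new quasi-static samples accumulate.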

  19. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return mission was initially based on insertion by aerocapture, and the CNES orbiter Mars Premier was developed to demonstrate this concept. Mainly due to budget constraints, aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.

  20. Mimas Showing False Colors #1

    NASA Technical Reports Server (NTRS)

    2005-01-01

    False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.

    The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in

  1. Surveys show support for green 'activities'.

    PubMed

    Baillie, Jonathan

    2012-03-01

    Two independently conducted surveys on sustainability - one into the 'views and values' of NHS 'leaders', and the other questioning the public about the importance of the 'green agenda' in the NHS and their opinions on how the service might most effectively reduce its carbon footprint - form the basis of Sustainability in the NHS: Health Check 2012, a new NHS Sustainable Development Unit (NHS SDU) publication. As HEJ editor Jonathan Baillie reports, the new document also presents updated data on the 'size' of the carbon footprint of the NHS in England, showing that, although good work by a number of Trusts in the past two years has seen healthcare-generated carbon emissions start to 'level off', the biggest contributors have been the current health service spending review and the increased national availability of renewable energy. PMID:22515017

  2. A Synthesized Heuristic Task Scheduling Algorithm

    PubMed Central

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm chooses tasks at three levels of priority: first, critical tasks have the highest priority; second, tasks with a longer path to the exit task are selected; and then the algorithm chooses tasks with fewer predecessors to schedule. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance. PMID:25254244
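
    The three-level priority rule described above can be sketched as a single lexicographic key (field names hypothetical; tuples compare element by element, so earlier criteria dominate later ones):

```python
def pick_next(ready_tasks):
    """Lexicographic priority: critical tasks first, then a longer path to
    the exit task, then fewer unscheduled predecessors (hedged sketch;
    field names are hypothetical, not HCPPEFT's data structures)."""
    return min(
        ready_tasks,
        key=lambda t: (not t["critical"], -t["exit_path"], t["n_preds"]),
    )

tasks = [
    {"name": "A", "critical": False, "exit_path": 9, "n_preds": 1},
    {"name": "B", "critical": True,  "exit_path": 5, "n_preds": 2},
    {"name": "C", "critical": False, "exit_path": 9, "n_preds": 0},
]
chosen = pick_next(tasks)  # B wins: criticality outranks the other criteria
```

    Encoding the levels as one tuple keeps the tie-breaking order explicit: path length is consulted only among tasks of equal criticality, and predecessor count only after that.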

  3. A new method for mesoscale eddy detection based on watershed segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Lijuan; Dong, Qing; Xue, Cunjin; Hou, Xueyan; Song, Wanjiao

    2014-11-01

    Mesoscale eddies are widely found in the ocean. They play important roles in heat transport, momentum transport, ocean circulation and so on. The automatic detection of mesoscale eddies from satellite remote sensing images is an important research topic. Some image processing methods, such as the Canny operator and the Hough transform, have been applied to identify mesoscale eddies, but their detection accuracy has not been ideal. This paper describes a new algorithm, based on the watershed segmentation algorithm, for automatic detection of mesoscale eddies from sea level anomaly (SLA) images. Watershed segmentation suffers from over-segmentation, so it is important to select appropriate markers. In this study, markers were selected from the reconstructed SLA image and used to modify the gradient image. Two parameters, eddy radius and amplitude, were then used to filter the segmentation results. The method was tested on the Northwest Pacific using TOPEX/Poseidon altimeter data. The results are encouraging, showing that the algorithm is applicable to mesoscale eddy detection with good accuracy. The algorithm responds well to weak edges, and the extracted eddies have complete, continuous boundaries that generally coincide with closed contours of SSH.
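
    The marker idea can be illustrated with a minimal marker-controlled watershed by flooding; this is a generic sketch, not the paper's implementation, and the height field and markers are invented:

```python
import heapq

def marker_watershed(height, markers):
    """Minimal marker-controlled watershed by flooding: pixels are claimed in
    order of increasing height by whichever marked basin reaches them first.
    Starting the flood from chosen markers, rather than from every regional
    minimum, is what suppresses over-segmentation."""
    rows, cols = len(height), len(height[0])
    label = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in markers.items():
        label[r][c] = lab
        heapq.heappush(heap, (height[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and label[nr][nc] == 0:
                label[nr][nc] = label[r][c]        # flooded from this basin
                heapq.heappush(heap, (height[nr][nc], nr, nc))
    return label

# two depressions separated by a ridge in the middle column
height = [[0, 5, 0],
          [1, 5, 1],
          [0, 5, 0]]
labels = marker_watershed(height, {(1, 0): 1, (1, 2): 2})
print(labels)
```

    With one marker per depression, the output has exactly two basins instead of one label per local minimum.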

  4. Multithreaded Algorithms for Graph Coloring

    SciTech Connect

    Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex

    2012-10-21

    Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access patterns, and a high ratio of data access to computation are among the chief reasons for the challenge. The performance implications of these features are exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. In particular, we present two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses dataflow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
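
    The serial greedy baseline that the parallel variants start from fits in a few lines (the example graph is invented; the speculative parallel version colors vertices concurrently and then re-colors any conflicting neighbours):

```python
def greedy_coloring(adj):
    """First-fit greedy coloring (the serial baseline): each vertex takes the
    smallest color not already used by a colored neighbour."""
    color = {}
    for v in adj:                                  # visit in dictionary order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:                           # smallest free color
            c += 1
        color[v] = c
    return color

# a 4-cycle needs only two colors
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
coloring = greedy_coloring(adj)
print(coloring)  # {0: 0, 1: 1, 2: 0, 3: 1}
```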

  5. Hybrid feature selection algorithm using symmetrical uncertainty and a harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Salameh Shreem, Salam; Abdullah, Salwani; Nazri, Mohd Zakree Ahmad

    2016-04-01

    Microarray technology can be used as an efficient diagnostic system to recognise diseases such as tumours or to discriminate between different types of cancers in normal tissues. This technology has received increasing attention from the bioinformatics community because of its potential in designing powerful decision-making tools for cancer diagnosis. However, the presence of thousands or tens of thousands of genes affects the predictive accuracy of this technology from the perspective of classification. Thus, a key issue in microarray data is identifying or selecting the smallest possible set of genes from the input data that can achieve good predictive accuracy for classification. In this work, we propose a two-stage selection algorithm for gene selection problems in microarray data-sets, called the symmetrical uncertainty filter and harmony search algorithm wrapper (SU-HSA). Experimental results show that the SU-HSA is better than HSA in isolation on all data-sets in terms of accuracy, and selects fewer genes on 6 out of 10 instances. Furthermore, a comparison with state-of-the-art methods shows that our proposed approach obtains 5 (out of 10) new best results in terms of the number of selected genes, and competitive results in terms of classification accuracy.
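
    The filter stage's score, symmetrical uncertainty SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), can be sketched directly from its definition; the toy gene and label vectors below are invented:

```python
from collections import Counter
from math import log2

def entropy(xs):
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)): 1 for a perfectly informative
    feature, 0 for an irrelevant one."""
    hx, hy = entropy(x), entropy(y)
    mi = hx + hy - entropy(list(zip(x, y)))        # mutual information
    return 2 * mi / (hx + hy) if hx + hy else 0.0

# a toy gene identical to the class label scores 1; an unrelated one scores 0
label = [0, 0, 1, 1]
print(symmetrical_uncertainty([0, 0, 1, 1], label))  # 1.0
print(symmetrical_uncertainty([0, 1, 0, 1], label))  # 0.0
```

    Genes ranked by SU form the reduced pool that the harmony search wrapper then explores.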

  6. Innovative hyperchaotic encryption algorithm for compressed video

    NASA Astrophysics Data System (ADS)

    Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang

    2002-12-01

    It is accepted that a stream cryptosystem can achieve good real-time performance and flexibility by encrypting only selected parts of the block data and header information of the compressed video stream. A chaotic random number generator, for example the Logistic Map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the Logistic Map with a linear congruential algorithm over the field Z(2^32 - 1) to strengthen the security of mono-chaotic cryptography while maintaining the real-time performance and flexibility of chaotic sequence cryptography. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication of control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In the proposed hyperchaotic cryptography, the value and updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. Cryptanalysis shows the scheme to be robustly secure, and arithmetic evaluation and testing show good real-time performance and flexible implementation.
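
    A minimal sketch of the mono-chaotic building block, a Logistic Map keystream XORed with selected bytes; this is exactly the kind of generator the paper argues must be strengthened, so it is for illustration only, and the parameters x0 and r are invented:

```python
def logistic_keystream(x0, n, r=3.99):
    """Toy keystream from the logistic map x -> r*x*(1-x). On its own this is
    NOT secure (it is vulnerable to nonlinear dynamic forecasting, as the
    abstract notes)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)            # quantise state to a byte
    return bytes(out)

def xor_cipher(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"compressed video header"
ks = logistic_keystream(0.3141592, len(plaintext))
ciphertext = xor_cipher(plaintext, ks)
print(xor_cipher(ciphertext, ks))  # XOR with the same keystream round-trips
```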

  7. Dynamic hybrid algorithms for MAP inference in discrete MRFs.

    PubMed

    Alahari, Karteek; Kohli, Pushmeet; Torr, Philip H S

    2010-10-01

    In this paper, we present novel techniques that improve the computational and memory efficiency of algorithms for solving multilabel energy functions arising from discrete MRFs or CRFs. These methods are motivated by the observations that the performance of minimization algorithms depends on: 1) the initialization used for the primal and dual variables and 2) the number of primal variables involved in the energy function. Our first method (dynamic alpha-expansion) works by "recycling" results from previous problem instances. The second method simplifies the energy function by "reducing" the number of unknown variables present in the problem. Further, we show that it can also be used to generate a good initialization for the dynamic alpha-expansion algorithm by "reusing" dual variables. We test the performance of our methods on energy functions encountered in the problems of stereo matching and color and object-based segmentation. Experimental results show that our methods achieve a substantial improvement in the performance of alpha-expansion, as well as other popular algorithms such as sequential tree-reweighted message passing and max-product belief propagation. We also demonstrate the applicability of our schemes for certain higher order energy functions, such as the one described in [1], for interactive texture-based image and video segmentation. In most cases, we achieve a 10-15 times speed-up in the computation time. Our modified alpha-expansion algorithm provides similar performance to Fast-PD, but is conceptually much simpler. Both alpha-expansion and Fast-PD can be made orders of magnitude faster when used in conjunction with the "reduce" scheme proposed in this paper. PMID:20724761

  8. Pedagogical Uses of the Public Goods Concept in Economics.

    ERIC Educational Resources Information Center

    Kiesling, Herbert J.

    1990-01-01

    Describes some of the relatively unknown aspects of the concept of public goods and shows how they might be brought into undergraduate textbooks in microeconomic principles, public finance, and welfare economics. Illustrates how these aspects of public goods can be brought into undergraduate instruction. (DB)

  9. Ordered subsets algorithms for transmission tomography.

    PubMed

    Erdogan, H; Fessler, J A

    1999-11-01

    The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterates when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all other methods tested. PMID:10588288

  10. An SMP soft classification algorithm for remote sensing

    NASA Astrophysics Data System (ADS)

    Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.

    2014-07-01

    This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification, at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.

  11. Novel biomedical tetrahedral mesh methods: algorithms and applications

    NASA Astrophysics Data System (ADS)

    Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu

    2007-12-01

    Tetrahedral mesh generation algorithms, as a prerequisite of many soft tissue simulation methods, are very important in virtual surgery programs because of their real-time requirements. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance between quality of tetrahedra, boundary preservation and time complexity, with many improved methods. Another mesh algorithm, named Space-Disassembling, is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm and the revised Delaunay algorithm is carried out on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.

  12. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers; the system was becoming dated in its time-to-market requirements and was in need of performance improvements. To compound the problems with the existing system, assuring the quality of the data matching process was manpower-intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.

  13. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time-consuming and error-prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  14. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the recurrence equations that represent an algorithm. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  15. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. Our current priority is the algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data.

  16. Predictive Caching Using the TDAG Algorithm

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Saul, Ronald

    1992-01-01

    We describe how the TDAG algorithm for learning to predict symbol sequences can be used to design a predictive cache store. A model of a two-level mass storage system is developed and used to calculate the performance of the cache under various conditions. Experimental simulations provide good confirmation of the model.
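
    A toy order-1 predictor conveys the prefetching idea; TDAG itself maintains a tree of longer contexts, so the class below is a deliberately simplified stand-in with invented names:

```python
from collections import defaultdict

class OrderOnePredictor:
    """Simplified stand-in for TDAG-style sequence prediction: learn
    symbol-transition counts from the access stream and prefetch the most
    frequent successor of the last symbol seen."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None

    def observe(self, symbol):
        if self.last is not None:
            self.counts[self.last][symbol] += 1
        self.last = symbol

    def predict(self):
        nxt = self.counts.get(self.last)
        return max(nxt, key=nxt.get) if nxt else None

p = OrderOnePredictor()
for block in "abcabcabc":                 # a repeating block-access pattern
    p.observe(block)
print(p.predict())  # 'a': the stream has always continued 'c' with 'a'
```

    A predictive cache would fetch the predicted block into the fast store before it is requested.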

  17. Vertical Sextants give Good Sights

    NASA Astrophysics Data System (ADS)

    Dixon, Mark

    Many texts stress the need for marine sextants to be held precisely vertical at the instant that the altitude of a heavenly body is measured. Several authors lay particular emphasis on the technique of rocking the instrument in a small arc about the horizontal axis to obtain a good sight. Nobody, to the author's knowledge, however, has attempted to quantify the errors involved, so as to compare them with other errors inherent in determining celestial position lines. This paper sets out to address these issues and to pose the question: what level of accuracy of vertical alignment can reasonably be expected during marine sextant work at sea? When a heavenly body is brought to tangency with the visible horizon it is particularly important to ensure that the sextant is held in a truly vertical position. To this end the instrument is rocked gently about the horizontal so that the image of the body describes a small arc in the observer's field of vision. As Bruce Bauer points out, tangency with the horizon must be achieved during the process of rocking and not a second or so after rocking has been discontinued. The altitude is recorded for the instant that the body kisses the visible horizon at the lowest point of the rocking arc, as in Fig. 2. The only other visual clue as to whether the sextant is vertical is provided by the right angle made by the vertical edge of the horizon glass mirror with the horizon. There may also be some input from the observer's sense of balance and his hand orientation.

  18. Voronoi particle merging algorithm for PIC codes

    NASA Astrophysics Data System (ADS)

    Luu, Phuc T.; Tückmantel, T.; Pukhov, A.

    2016-05-01

    We present a new particle-merging algorithm for the particle-in-cell method. Based on the concept of the Voronoi diagram, the algorithm partitions the phase space into smaller subsets, which consist of only particles that are in close proximity in the phase space to each other. We show the performance of our algorithm in the case of the two-stream instability and the magnetic shower.

  19. Sequential and Parallel Algorithms for Spherical Interpolation

    NASA Astrophysics Data System (ADS)

    De Rossi, Alessandra

    2007-09-01

    Given a large set of scattered points on a sphere and their associated real values, we analyze sequential and parallel algorithms for the construction of a function defined on the sphere satisfying the interpolation conditions. The algorithms we implemented are based on a local interpolation method using spherical radial basis functions and on the Inverse Distance Weighted method. Several numerical results show the accuracy and efficiency of the algorithms.
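
    The Inverse Distance Weighted method on the sphere can be sketched as a weighted mean with great-circle distances (a minimal serial version on the unit sphere; the sample points and values are invented):

```python
from math import radians, sin, cos, acos

def great_circle(p, q):
    """Central angle (radians) between two (lat, lon) points given in degrees."""
    la1, lo1 = radians(p[0]), radians(p[1])
    la2, lo2 = radians(q[0]), radians(q[1])
    return acos(min(1.0, sin(la1) * sin(la2) + cos(la1) * cos(la2) * cos(lo1 - lo2)))

def idw(points, values, target, power=2):
    """Inverse Distance Weighting: a mean of the data values weighted by
    1 / d**power, where d is the great-circle distance to the target."""
    num = den = 0.0
    for p, v in zip(points, values):
        d = great_circle(p, target)
        if d == 0.0:
            return v                      # interpolates exactly at data points
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

pts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
vals = [1.0, 2.0, 3.0]
print(idw(pts, vals, (0.0, 0.0)))   # 1.0 exactly, at a data point
print(idw(pts, vals, (3.0, 3.0)))   # a blend strictly between 1.0 and 3.0
```

    A local (and parallel) variant would restrict the sum to the nearest points of each target.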

  20. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing...

  1. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing...

  2. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing...

  3. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing...

  4. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing...

  5. 29 CFR 779.107 - Goods defined.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Goods defined. 779.107 Section 779.107 Labor Regulations... Engaged in Commerce Or in the Production of Goods for Commerce § 779.107 Goods defined. The term goods is defined in section 3(i) of the Act and has a well established meaning under the Act since it has...

  6. 29 CFR 779.107 - Goods defined.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Goods defined. 779.107 Section 779.107 Labor Regulations... Engaged in Commerce Or in the Production of Goods for Commerce § 779.107 Goods defined. The term goods is defined in section 3(i) of the Act and has a well established meaning under the Act since it has...

  7. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
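
    The notion of order of accuracy can be checked numerically with a standard fourth-order central-difference stencil; this is a generic textbook example, not one of the paper's schemes:

```python
from math import sin, cos

def central_diff(f, x, h):
    """Fourth-order central difference approximation to f'(x)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

# halving h should cut the error by about 2**4 = 16 for a fourth-order method
x = 1.0
errors = [abs(central_diff(sin, x, h) - cos(x)) for h in (0.1, 0.05)]
print(errors[1] / errors[0])  # close to 1/16 = 0.0625
```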

  8. A Family of Algorithms for Computing Consensus about Node State from Network Data

    PubMed Central

    Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.

    2013-01-01

    Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes–from ranking websites to determining critical species in ecosystems–yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus by either taking into account the number of a target node's direct connections, and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus–through breadth or depth–impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes "form opinions" about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which

  9. A family of algorithms for computing consensus about node state from network data.

    PubMed

    Brush, Eleanor R; Krakauer, David C; Flack, Jessica C

    2013-01-01

    Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes-from ranking websites to determining critical species in ecosystems-yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus by either taking into account the number of a target node's direct connections, and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus-through breadth or depth-impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes "form opinions" about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which individuals use the
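
    The depth measure discussed in both records, Eigenvector Centrality, reduces to power iteration on the adjacency structure; a minimal sketch for a small undirected graph (the example graph is invented):

```python
def eigenvector_centrality(adj, iters=100):
    """Power iteration for Eigenvector Centrality: a node scores highly when
    its neighbours score highly, so score flows in from well-connected nodes."""
    score = {v: 1.0 for v in adj}
    for _ in range(iters):
        new = {v: sum(score[u] for u in adj[v]) for v in adj}
        norm = max(new.values())                   # rescale each sweep
        score = {v: s / norm for v, s in new.items()}
    return score

# triangle a-b-c with a pendant node d attached to c
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
scores = eigenvector_centrality(adj)
print(max(scores, key=scores.get))  # 'c': connected to the most well-connected nodes
```

    Note that plain power iteration needs a connected, non-bipartite graph to converge; library implementations add checks for this.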

  10. Beyond Policy and Good Intentions

    ERIC Educational Resources Information Center

    Bevan-Brown, Jill

    2006-01-01

    This paper examines the situation of Maori learners with special needs in Aotearoa/New Zealand. Despite considerable legislation and official documentation supporting the provision of culturally appropriate special education services for Maori, research shows that these learners are often neglected, overlooked and sometimes even excluded. The main…

  11. Detection of cracks in shafts with the Approximated Entropy algorithm

    NASA Astrophysics Data System (ADS)

    Sampaio, Diego Luchesi; Nicoletti, Rodrigo

    2016-05-01

    Approximate Entropy is a statistical measure used primarily in the fields of medicine, biology, and telecommunications for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by fracture mechanics, analysing the vertical displacements of the rotor during run-up transients. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and of the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (the number of data points in a sample vector) and f (the fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by choosing their values appropriately according to the sampling rate of the signal.
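
    Approximate Entropy itself can be sketched directly from its textbook definition, ApEn(m, r) = phi(m) - phi(m+1); here m and r play the roles of the abstract's parameters p and f, and the example signals are invented:

```python
import random
from math import log

def approx_entropy(signal, m=2, r=0.2):
    """ApEn(m, r): near 0 for regular signals, larger for irregular ones.
    m is the template (sample vector) length, r the Chebyshev-distance
    tolerance for counting matching templates."""
    def phi(k):
        n = len(signal) - k + 1
        templates = [signal[i:i + k] for i in range(n)]
        ratios = [
            sum(max(abs(a - b) for a, b in zip(t, u)) <= r for u in templates) / n
            for t in templates
        ]
        return sum(log(c) for c in ratios) / n
    return phi(m) - phi(m + 1)

regular = [0.0, 1.0] * 30                  # perfectly periodic signal
rng = random.Random(42)
irregular = [rng.random() for _ in range(60)]
print(approx_entropy(regular), approx_entropy(irregular))
```

    The periodic signal scores near zero; the noisy one scores much higher, which is the property exploited for crack detection.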

  12. What are single photons good for?

    NASA Astrophysics Data System (ADS)

    Sangouard, Nicolas; Zbinden, Hugo

    2012-10-01

    According to a long-held preconception, photons play a central role in present-day quantum technologies. But what precisely are sources producing photons one by one good for? In opposition to what many suggest, we show that single-photon sources are not helpful for point-to-point quantum key distribution, because faint laser pulses do the job comfortably. However, there is no doubt about the usefulness of sources producing single photons for future quantum technologies. In particular, we show how single-photon sources could become the seed of a revolution in the framework of quantum communication, making the security of quantum key distribution device-independent or extending quantum communication over many hundreds of kilometers. Hopefully, these promising applications will provide a guideline for researchers to develop more and more efficient sources, producing narrowband, pure and indistinguishable photons at appropriate wavelengths.

  13. Avoiding or restricting defectors in public goods games?

    PubMed

    Han, The Anh; Pereira, Luís Moniz; Lenaerts, Tom

    2015-02-01

    When creating a public good, strategies or mechanisms are required to handle defectors. We first show mathematically and numerically that prior agreements with posterior compensations provide a strategic solution that leads to substantial levels of cooperation in the context of public goods games, results that are corroborated by available experimental data. Notwithstanding this success, one cannot, as with other approaches, fully exclude the presence of defectors, raising the question of how they can be dealt with to avoid the demise of the common good. We show that both avoiding creation of the common good, whenever full agreement is not reached, and limiting the benefit that disagreeing defectors can acquire, using costly restriction mechanisms, are relevant choices. Nonetheless, restriction mechanisms are found to be the more favourable, especially in larger group interactions. Given decreasing restriction costs, introducing restraining measures to cope with public goods free-riding issues is the ultimate advantageous solution for all participants, rather than avoiding its creation. PMID:25540240

  14. Avoiding or restricting defectors in public goods games?

    PubMed Central

    Han, The Anh; Pereira, Luís Moniz; Lenaerts, Tom

    2015-01-01

    When creating a public good, strategies or mechanisms are required to handle defectors. We first show mathematically and numerically that prior agreements with posterior compensations provide a strategic solution that leads to substantial levels of cooperation in the context of public goods games, results that are corroborated by available experimental data. Notwithstanding this success, one cannot, as with other approaches, fully exclude the presence of defectors, raising the question of how they can be dealt with to avoid the demise of the common good. We show that both avoiding creation of the common good, whenever full agreement is not reached, and limiting the benefit that disagreeing defectors can acquire, using costly restriction mechanisms, are relevant choices. Nonetheless, restriction mechanisms are found to be the more favourable, especially in larger group interactions. Given decreasing restriction costs, introducing restraining measures to cope with public goods free-riding issues is the ultimate advantageous solution for all participants, rather than avoiding its creation. PMID:25540240

  15. Weight Loss Surgery May Boost Good Cholesterol in Obese Boys

    MedlinePlus

    ... Loss Surgery May Boost Good Cholesterol in Obese Boys Small study showed surgery also improved protective effects ... Weight loss surgery could help severely obese teenage boys reduce their risk for heart disease by increasing ...

  16. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
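    As a rough illustration of the shingle technique mentioned above, one can take contiguous runs of tags as shingles and compare documents by the Jaccard similarity of their shingle sets. This is a minimal sketch under assumed details (tag sequences flattened to lists, shingle length k = 3); the survey's actual formulation may differ.

    ```python
    def tag_shingles(tags, k=3):
        """Set of k-length contiguous tag subsequences (shingles)."""
        return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

    def shingle_similarity(tags_a, tags_b, k=3):
        """Jaccard similarity between the shingle sets of two tag sequences."""
        a, b = tag_shingles(tags_a, k), tag_shingles(tags_b, k)
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    # Two hypothetical pages whose tag streams differ in one trailing tag
    doc1 = ["html", "body", "div", "p", "a", "p", "a"]
    doc2 = ["html", "body", "div", "p", "a", "p", "b"]
    print(round(shingle_similarity(doc1, doc2), 3))  # → 0.667 (4 of 6 shingles shared)
    ```

    Because shingles are set-hashed rather than tree-aligned, this runs in linear time, which is one reason such simple approximations can outperform Tree Edit Distance in practice.
    
    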

  17. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    PubMed

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is becoming more widely used in remote sensing image processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images and does not effectively reduce the data dimension, which costs too much processing time and yields low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the huge computational cost of hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral image data are modeled with the GMRF, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the estimated GMRF parameters. The pixel under test is taken as the centre of a local optimization window, called the GMRF detection window; the degree of abnormality is calculated from the mean vector and inverse covariance matrix, both computed within this window. The image is processed pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, a regional hypothesis detection algorithm based on the GMRF, and the proposed algorithm are evaluated on AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. Operation-time statistics for the three algorithms in the same computing environment show that the proposed algorithm reduces operation time by 45.2%, demonstrating good computational efficiency. PMID:26904830
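    The RX baseline the paper improves on can be sketched with global statistics: each pixel's anomaly score is its squared Mahalanobis distance from the scene mean. This toy two-band pure-Python version with synthetic data omits the paper's local GMRF window and parameter estimation.

    ```python
    import random

    def rx_scores(pixels):
        """Global RX anomaly score: squared Mahalanobis distance of each
        two-band pixel spectrum from the scene mean."""
        n = len(pixels)
        mean = [sum(p[i] for p in pixels) / n for i in range(2)]
        cov = [[0.0, 0.0], [0.0, 0.0]]
        for p in pixels:
            d = [p[0] - mean[0], p[1] - mean[1]]
            for i in range(2):
                for j in range(2):
                    cov[i][j] += d[i] * d[j] / n
        det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
        inv = [[cov[1][1] / det, -cov[0][1] / det],
               [-cov[1][0] / det, cov[0][0] / det]]
        scores = []
        for p in pixels:
            d = [p[0] - mean[0], p[1] - mean[1]]
            scores.append(sum(d[i] * inv[i][j] * d[j]
                              for i in range(2) for j in range(2)))
        return scores

    random.seed(0)
    background = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
    scene = background + [[8.0, 8.0]]   # one planted anomaly at index 200
    scores = rx_scores(scene)
    print(scores.index(max(scores)))    # the planted anomaly scores highest
    ```

    The paper's contribution is precisely to avoid forming and inverting the full covariance on real hyperspectral dimensions by building the inverse from the GMRF parameters instead.
    
    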

  18. Good news for sea turtles.

    PubMed

    Hays, Graeme C

    2004-07-01

    Following the overexploitation of sea turtle populations, conservation measures are now in place in many areas. However, the overall impact of these measures is often unknown because there are few long time-series showing trends in population sizes. In a recent paper, George Balazs and Milani Chaloupka chart the number of green turtles Chelonia mydas nesting in Hawaii over the past 30 years and reveal a remarkably quick increase in the size of this population following the instigation of conservation measures during the 1970s. Importantly, this work shows how even a small population of sea turtles can recover rapidly, suggesting that Allee effects do not impede conservation efforts in operation worldwide. PMID:16701283

  19. 19 CFR 10.2021 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Goods classifiable as goods put up in sets. 10... Trade Promotion Agreement Rules of Origin § 10.2021 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 35, HTSUS, goods classifiable as goods put up in...

  20. 19 CFR 10.921 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Goods classifiable as goods put up in sets. 10.921... Trade Promotion Agreement Rules of Origin § 10.921 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 32(n), HTSUS, goods classifiable as goods put up...

  1. 19 CFR 10.921 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Goods classifiable as goods put up in sets. 10.921... Trade Promotion Agreement Rules of Origin § 10.921 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 32(n), HTSUS, goods classifiable as goods put up...

  2. 19 CFR 10.3021 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Goods classifiable as goods put up in sets. 10...-Colombia Trade Promotion Agreement Rules of Origin § 10.3021 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 34, HTSUS, goods classifiable as goods put up in...

  3. 19 CFR 10.1021 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Goods classifiable as goods put up in sets. 10... Free Trade Agreement Rules of Origin § 10.1021 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 33, HTSUS, goods classifiable as goods put up in...

  4. 19 CFR 10.3021 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Goods classifiable as goods put up in sets. 10...-Colombia Trade Promotion Agreement Rules of Origin § 10.3021 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 34, HTSUS, goods classifiable as goods put up in...

  5. 19 CFR 10.1021 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Goods classifiable as goods put up in sets. 10... Free Trade Agreement Rules of Origin § 10.1021 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 33, HTSUS, goods classifiable as goods put up in...

  6. 19 CFR 10.1021 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Goods classifiable as goods put up in sets. 10... Free Trade Agreement Rules of Origin § 10.1021 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 33, HTSUS, goods classifiable as goods put up in...

  7. 19 CFR 10.921 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Goods classifiable as goods put up in sets. 10.921... Trade Promotion Agreement Rules of Origin § 10.921 Goods classifiable as goods put up in sets. Notwithstanding the specific rules set forth in General Note 32(n), HTSUS, goods classifiable as goods put up...

  8. Development of a stereo analysis algorithm for generating topographic maps using interactive techniques of the MPP

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1987-01-01

    A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.
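    The coarse-to-fine idea can be illustrated in one dimension: match first at reduced resolution, then refine the offset locally at full resolution. This is an illustrative sketch, not the MPP implementation; the SAD (sum of absolute differences) score is an assumed matching criterion.

    ```python
    def sad(ref, tgt, off):
        """Sum of absolute differences of tgt against ref at a given offset."""
        return sum(abs(ref[off + i] - t) for i, t in enumerate(tgt))

    def downsample(sig):
        """Halve resolution by averaging adjacent pairs."""
        return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

    def coarse_to_fine_match(ref, tgt):
        """Match at half resolution first, then refine the offset locally at
        full resolution, mirroring the iterative coarse-to-fine strategy."""
        ref2, tgt2 = downsample(ref), downsample(tgt)
        coarse = min(range(len(ref2) - len(tgt2) + 1),
                     key=lambda o: sad(ref2, tgt2, o))
        lo = max(0, 2 * coarse - 2)
        hi = min(len(ref) - len(tgt), 2 * coarse + 2)
        return min(range(lo, hi + 1), key=lambda o: sad(ref, tgt, o))

    # A flat signal with one bump; the template should be found at offset 23
    ref = [0.0] * 50
    for i, v in enumerate([1, 4, 9, 4, 1]):
        ref[23 + i] = float(v)
    tgt = [1.0, 4.0, 9.0, 4.0, 1.0]
    print(coarse_to_fine_match(ref, tgt))  # → 23
    ```

    On the MPP, each candidate offset is evaluated in parallel across the array, which is what made interactive tuning of the matching parameters practical.
    
    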

  9. An infrared small target detection algorithm based on high-speed local contrast method

    NASA Astrophysics Data System (ADS)

    Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao

    2016-05-01

    Small-target detection in infrared imagery with a complex background is an important task in remote sensing. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed; however, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, a two-layer infrared (IR) small-target detection algorithm inspired by the Human Visual System (HVS) is proposed to balance these capabilities. The first layer uses a high-speed simplified local contrast method to select salient candidates, and the second layer uses a machine-learning classifier to separate targets from background clutter. Experimental results show the proposed algorithm achieves good detection rate, false alarm rate, and speed simultaneously.
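    The first-layer idea can be sketched as a simple local contrast map: a pixel's response divided by the mean of its neighbourhood, which boosts point-like bright targets against smooth backgrounds. The exact contrast measure used in the letter is not given in the abstract; this is an assumed simplified variant.

    ```python
    def local_contrast(img, r=1):
        """Simplified local-contrast map: each pixel's value squared divided
        by the mean of its neighbourhood, enhancing small bright targets."""
        h, w = len(img), len(img[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                neigh = [img[j][i]
                         for j in range(max(0, y - r), min(h, y + r + 1))
                         for i in range(max(0, x - r), min(w, x + r + 1))
                         if (j, i) != (y, x)]
                m = sum(neigh) / len(neigh)
                out[y][x] = img[y][x] ** 2 / m if m > 0 else 0.0
        return out

    # A dim 5x5 background with one bright single-pixel target at the centre
    img = [[1.0] * 5 for _ in range(5)]
    img[2][2] = 5.0
    cmap = local_contrast(img)
    print(cmap[2][2])  # → 25.0, while background pixels stay near 1
    ```

    Thresholding such a map gives cheap candidate pixels; the second (classifier) layer would then prune the false alarms the contrast map lets through.
    
    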

  10. Online empirical evaluation of tracking algorithms.

    PubMed

    Wu, Hao; Sankaranarayanan, Aswin C; Chellappa, Rama

    2010-08-01

    Evaluation of tracking algorithms in the absence of ground truth is a challenging problem. There exist a variety of approaches for this problem, ranging from formal model validation techniques to heuristics that look for mismatches between track properties and the observed data. However, few of these methods scale up to the task of visual tracking, where the models are usually nonlinear and complex and typically lie in a high-dimensional space. Further, scenarios that cause track failures and/or poor tracking performance are also quite diverse for the visual tracking problem. In this paper, we propose an online performance evaluation strategy for tracking systems based on particle filters using a time-reversed Markov chain. The key intuition of our proposed methodology relies on the time-reversible nature of physical motion exhibited by most objects, which in turn should be possessed by a good tracker. In the presence of tracking failures due to occlusion, low SNR, or modeling errors, this reversible nature of the tracker is violated. We use this property for detection of track failures. To evaluate the performance of the tracker at time instant t, we use the posterior of the tracking algorithm to initialize a time-reversed Markov chain. We compute the posterior density of track parameters at the starting time t=0 by filtering back in time to the initial time instant. The distance between the posterior density of the time-reversed chain (at t=0) and the prior density used to initialize the tracking algorithm forms the decision statistic for evaluation. It is observed that when the data are generated by the underlying models, the decision statistic takes a low value. We provide a thorough experimental analysis of the evaluation methodology. Specifically, we demonstrate the effectiveness of our approach for tackling common challenges such as occlusion, pose, and illumination changes and provide the Receiver Operating Characteristic (ROC) curves. 
Finally, we also show

  11. Swarm-based algorithm for phase unwrapping.

    PubMed

    da Silva Maciel, Lucas; Albertazzi, Armando G

    2014-08-20

    A novel algorithm for phase unwrapping based on swarm intelligence is proposed. The algorithm was designed based on three main goals: maximum coverage of reliable information, focused effort for better efficiency, and reliable unwrapping. Experiments were performed, and a new agent was designed to follow a simple set of five rules in order to collectively achieve these goals. These rules consist of random walking for unwrapping and searching, ambiguity evaluation by comparing unwrapped regions, and a replication behavior responsible for the good distribution of agents throughout the image. The results were comparable with the results from established methods. The swarm-based algorithm was able to suppress ambiguities better than the flood-fill algorithm without relying on lengthy processing times. In addition, future developments such as parallel processing and better-quality evaluation present great potential for the proposed method. PMID:25321125
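    For contrast with the swarm approach, the classic sequential (Itoh) unwrapping rule that region-growing methods such as flood-fill generalise can be written in a few lines for a 1-D phase signal:

    ```python
    import math

    def unwrap_1d(phases):
        """Classic sequential (Itoh) unwrapping: accumulate a 2*pi offset
        whenever a neighbouring phase difference exceeds pi in magnitude."""
        out = [phases[0]]
        offset = 0.0
        for prev, cur in zip(phases, phases[1:]):
            d = cur - prev
            if d > math.pi:
                offset -= 2 * math.pi
            elif d < -math.pi:
                offset += 2 * math.pi
            out.append(cur + offset)
        return out

    # Wrap a smooth phase ramp into (-pi, pi], then recover it
    true_phase = [0.1 * i for i in range(100)]
    wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
    recovered = unwrap_1d(wrapped)
    print(max(abs(a - b) for a, b in zip(recovered, true_phase)) < 1e-9)  # → True
    ```

    This rule fails wherever noise or undersampling creates spurious jumps, which is the ambiguity the swarm agents address by comparing independently unwrapped regions.
    
    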

  12. Autophagy: not good OR bad, but good AND bad.

    PubMed

    Altman, Brian J; Rathmell, Jeffrey C

    2009-05-01

    Autophagy is a well-established mechanism to degrade intracellular components and provide a nutrient source to promote survival of cells in metabolic distress. Such stress can be caused by a lack of available nutrients or by insufficient rates of nutrient uptake. Indeed, growth factor deprivation leads to internalization and degradation of nutrient transporters, leaving cells with limited means to access extracellular nutrients even when plentiful. This loss of growth factor signaling and extracellular nutrients ultimately leads to apoptosis, but also activates autophagy, which may degrade intracellular components and provide fuel for mitochondrial bioenergetics. The precise metabolic role of autophagy and how it intersects with the apoptotic pathways in growth factor withdrawal, however, has been uncertain. Our recent findings in growth factor-deprived hematopoietic cells show that autophagy can simultaneously contribute to cell metabolism and initiate a pathway to sensitize cells to apoptotic death. This pathway may promote tissue homeostasis by ensuring that only cells with high resistance to apoptosis may utilize autophagy as a survival mechanism when growth factors are limiting and nutrient uptake decreases. PMID:19398886

  13. New analytical algorithm for overlay accuracy

    NASA Astrophysics Data System (ADS)

    Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    Total overlay error data is decomposed into two parts: the systematic error and the random error. We characterize both error components: the systematic error correlates well with the residual error under a given scanner condition, whereas the random error correlates increasingly with the residual error as process steps accumulate. Furthermore, we demonstrate a practical use case of the proposed method, showing how the high-order method works through the systematic error. Our results show that characterizing overlay data in a way suitable for advanced technology nodes requires much more than just evaluating the conventional metrology metrics of TIS and TMU.

  14. Memory consolidation of landmarks in good navigators.

    PubMed

    Janzen, Gabriele; Jansen, Clemens; van Turennout, Miranda

    2008-01-01

    Landmarks play an important role in successful navigation. To successfully find your way around an environment, navigationally relevant information needs to be stored and become available at later moments in time. Evidence from functional magnetic resonance imaging (fMRI) studies shows that the human parahippocampal gyrus encodes the navigational relevance of landmarks. In the present event-related fMRI experiment, we investigated memory consolidation of navigationally relevant landmarks in the medial temporal lobe after route learning. Sixteen right-handed volunteers viewed two film sequences through a virtual museum with objects placed at locations relevant (decision points) or irrelevant (nondecision points) for navigation. To investigate consolidation effects, one film sequence was seen in the evening before scanning, the other one was seen the following morning, directly before scanning. Event-related fMRI data were acquired during an object recognition task. Participants decided whether they had seen the objects in the previously shown films. After scanning, participants answered standardized questions about their navigational skills, and were divided into groups of good and bad navigators, based on their scores. An effect of memory consolidation was obtained in the hippocampus: Objects that were seen the evening before scanning (remote objects) elicited more activity than objects seen directly before scanning (recent objects). This increase in activity in bilateral hippocampus for remote objects was observed in good navigators only. In addition, a spatial-specific effect of memory consolidation for navigationally relevant objects was observed in the parahippocampal gyrus. Remote decision point objects induced increased activity as compared with recent decision point objects, again in good navigators only. The results provide initial evidence for a connection between memory consolidation and navigational ability that can provide a basis for successful

  15. Thermostat algorithm for generating target ensembles

    NASA Astrophysics Data System (ADS)

    Bravetti, A.; Tapias, D.

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.
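    The abstract does not reproduce the contact density dynamics equations, so as a hedged illustration here is the closely related Nosé-Hoover thermostat, the "famous model" the method is likened to, applied to the same prototypical system, a one-dimensional harmonic oscillator (explicit Euler integration, illustrative parameters):

    ```python
    def nose_hoover_step(q, p, xi, dt, T=1.0, Q=1.0):
        """One explicit-Euler step of Nosé-Hoover dynamics for a 1-D harmonic
        oscillator: the thermostat variable xi drags the kinetic energy
        toward the target temperature T."""
        dq = p
        dp = -q - xi * p          # Hooke force plus thermostat friction
        dxi = (p * p - T) / Q     # xi grows when the oscillator runs "hot"
        return q + dt * dq, p + dt * dp, xi + dt * dxi

    q, p, xi = 1.0, 0.0, 0.0
    for _ in range(10000):
        q, p, xi = nose_hoover_step(q, p, xi, 0.001)
    print(abs(q) < 10, xi != 0.0)  # trajectory bounded, thermostat active
    ```

    Contact density dynamics replaces this specific friction law with equations derived from contact geometry, so that an arbitrary target distribution, not just the canonical one, becomes the invariant density.
    
    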

  16. Practical algorithmic probability: an image inpainting example

    NASA Astrophysics Data System (ADS)

    Potapov, Alexey; Scherbakov, Oleg; Zhdanov, Innokentii

    2013-12-01

    Possibility of practical application of algorithmic probability is analyzed on an example of the image inpainting problem, which precisely corresponds to the prediction problem. Such consideration is fruitful both for the theory of universal prediction and for practical image inpainting methods. Efficient application of algorithmic probability implies that its computation is essentially optimized for some specific data representation. In this paper, we considered one image representation, namely the spectral representation, for which an image inpainting algorithm is proposed based on the spectrum entropy criterion. This algorithm showed promising results in spite of the very simple representation. The same approach can be used for introducing ALP-based criteria for more powerful image representations.
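    The spectrum entropy criterion can be sketched as follows: compute the DFT power spectrum of a candidate fill, normalise it to a probability distribution, and take its Shannon entropy; structured (predictable) content concentrates spectral power and scores lower. This 1-D pure-Python sketch is an assumed reading of the criterion, not the paper's implementation.

    ```python
    import cmath
    import math

    def spectral_entropy(signal):
        """Shannon entropy of the normalised DFT power spectrum.
        Lower entropy = more concentrated spectrum = more structured signal."""
        n = len(signal)
        power = []
        for k in range(n):
            s = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(signal))
            power.append(abs(s) ** 2)
        total = sum(power)
        probs = [p / total for p in power if p > 0]
        return -sum(p * math.log(p) for p in probs)

    smooth = [math.sin(2 * math.pi * t / 8) for t in range(8)]      # one pure tone
    noisy = [((t * 7919) % 13) / 13 - 0.5 for t in range(8)]        # irregular values
    print(spectral_entropy(smooth) < spectral_entropy(noisy))       # → True
    ```

    An inpainting loop would rank candidate patch completions by this entropy and keep the lowest-entropy (most compressible) one, in the spirit of algorithmic probability favouring short descriptions.
    
    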

  17. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320

  18. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    PubMed

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the other. In the proposed algorithm, the finite state method makes up for the GA's weak local search ability: the heuristic returned by the FSM guides the GA towards good solutions. The idea behind this is that promising substructures or partial solutions can be generated with the FSM, which also guarantees that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than either the GA or the FSM operating individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031
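    The seeding idea can be mimicked in a toy GA: initialise the population with heuristic partial solutions (standing in for the FSM-generated substructures) rather than uniform random ones, then apply ordinary selection, crossover, and mutation. Everything below, the objective, operators, and parameters, is illustrative, not the paper's formulation.

    ```python
    import random

    def evolve(fitness, seed_pop, generations=50, mut=0.1):
        """Tiny elitist GA: keep the better half as parents each generation,
        fill the rest with mutated one-point-crossover children."""
        pop = list(seed_pop)
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: len(pop) // 2]
            children = []
            for _ in range(len(pop) - len(parents)):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(a))
                child = [g + random.gauss(0, mut) for g in a[:cut] + b[cut:]]
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    random.seed(1)
    # Toy objective: maximise -sum((x - 3)^2); heuristic seeds start near 2
    target = lambda x: -sum((v - 3.0) ** 2 for v in x)
    seeds = [[2.0 + random.gauss(0, 0.5) for _ in range(4)] for _ in range(20)]
    best = evolve(target, seeds)
    print(target(best) > target([2.0] * 4))  # seeded GA improves on the raw heuristic
    ```

    In the paper's setting the seeds would be FSM-derived partial schedules, so the GA starts from feasible, promising structure instead of noise.
    
    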

  19. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and a neural-network-based estimator, which covers a wide input power range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
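    The straight-line estimator can be sketched as an ordinary least-squares fit of input power against the digital AGC reading and temperature. The model form, coefficients, and data below are synthetic assumptions for illustration; the actual flight characterization data differ.

    ```python
    def solve(A, b):
        """Gauss-Jordan elimination with partial pivoting for small systems."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(n):
                if r != col:
                    f = M[r][col] / M[col][col]
                    M[r] = [x - f * y for x, y in zip(M[r], M[col])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def fit_power_model(agc, temp, power):
        """Least-squares fit of power ≈ a*agc + b*temp + c via normal equations."""
        X = [[v, t, 1.0] for v, t in zip(agc, temp)]
        AtA = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
        Atb = [sum(r[i] * p for r, p in zip(X, power)) for i in range(3)]
        return solve(AtA, Atb)

    # Synthetic characterisation data generated from power = 0.5*agc - 0.02*temp - 30
    agc = [10, 20, 30, 40, 50, 60]
    temp = [20, 25, 30, 20, 25, 30]
    power = [0.5 * v - 0.02 * t - 30 for v, t in zip(agc, temp)]
    a, b, c = fit_power_model(agc, temp, power)
    print(round(a, 3), round(b, 3), round(c, 3))  # → 0.5 -0.02 -30.0
    ```

    Because the measured AGC response is nonlinear over the full range, such a linear fit is only trusted over the narrower power section, which is why the adaptive-filter and neural-network estimators cover the wide range.
    
    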

  20. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
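    Two of the ensemble rules are easy to sketch: majority voting counts each algorithm's greedy action, and Boltzmann multiplication multiplies the algorithms' Boltzmann action distributions before renormalising. The value tables below are illustrative, not learned.

    ```python
    import math

    def majority_vote(greedy_actions, n_actions):
        """Majority voting: each RL algorithm casts one vote for its greedy action."""
        counts = [0] * n_actions
        for a in greedy_actions:
            counts[a] += 1
        return max(range(n_actions), key=lambda a: counts[a])

    def boltzmann_multiplication(value_tables, tau=1.0):
        """Boltzmann multiplication: multiply each algorithm's Boltzmann
        action probabilities elementwise, then renormalise."""
        n_actions = len(value_tables[0])
        prefs = [1.0] * n_actions
        for values in value_tables:
            z = sum(math.exp(v / tau) for v in values)
            for a in range(n_actions):
                prefs[a] *= math.exp(values[a] / tau) / z
        s = sum(prefs)
        return [p / s for p in prefs]

    # Three of five algorithms prefer action 1, so MV selects it
    print(majority_vote([1, 1, 0, 2, 1], 3))  # → 1
    # Both value tables favour action 1, so BM concentrates mass on it
    probs = boltzmann_multiplication([[1.0, 2.0, 0.0], [0.5, 1.5, 1.0]])
    print(max(range(3), key=lambda a: probs[a]))  # → 1
    ```

    The paper's finding is that exactly these two combiners (BM and MV) significantly outperform any single constituent RL algorithm on the maze tasks.
    
    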

  1. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  2. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established performance benchmark.

  3. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
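    The nonlinear gain idea, scaling commands with a third-order polynomial so small cues pass through near unity gain while the largest expected input lands exactly on the motion limit, can be sketched as follows. The coefficient choice (unit slope at zero, endpoint pinned to the limit) is an assumed illustrative one, not the report's tuning.

    ```python
    def nonlinear_gain(x, x_max, y_max):
        """Third-order polynomial gain: y = c1*x + c3*x^3 with c1 = 1
        (small inputs nearly unchanged) and c3 chosen so that the largest
        expected input x_max maps exactly onto the motion limit y_max."""
        c1 = 1.0
        c3 = (y_max - x_max) / x_max ** 3
        return c1 * x + c3 * x ** 3

    # Large commands are compressed to the simulator's travel limit...
    print(nonlinear_gain(10.0, x_max=10.0, y_max=5.0))   # → 5.0
    # ...while small commands retain nearly full fidelity
    print(round(nonlinear_gain(1.0, 10.0, 5.0), 3))      # → 0.995
    ```

    This shows why a cubic is attractive for cueing: it maximises low-amplitude motion cues while guaranteeing the output never exceeds the actuator envelope for in-range inputs.
    
    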

  4. 3DRISM Multigrid Algorithm for Fast Solvation Free Energy Calculations.

    PubMed

    Sergiievskyi, Volodymyr P; Fedorov, Maxim V

    2012-06-12

    In this paper we present a fast and accurate method for modeling solvation properties of organic molecules in water, with a main focus on predicting solvation (hydration) free energies of small organic compounds. The method is based on a combination of (i) a molecular theory, the three-dimensional reference interaction sites model (3DRISM); (ii) a fast multigrid algorithm for solving the high-dimensional 3DRISM integral equations; and (iii) a recently introduced universal correction (UC) for the 3DRISM solvation free energies by properly scaled molecular partial volume (3DRISM-UC, Palmer et al., J. Phys.: Condens. Matter 2010, 22, 492101). A fast multigrid algorithm is the core of the method because it helps to reduce the high computational costs associated with solving the 3DRISM equations. To facilitate future applications of the method, we performed benchmarking of the algorithm on a set of several model solutes in order to find optimal grid parameters and to test the performance and accuracy of the algorithm. We have shown that the proposed new multigrid algorithm is on average 24 times faster than the simple Picard method and at least 3.5 times faster than the MDIIS method which is currently actively used by the 3DRISM community (e.g., the MDIIS method has recently been implemented in a new 3DRISM implicit solvent routine in the recent release of the AmberTools 1.4 molecular modeling package (Luchko et al., J. Chem. Theory Comput. 2010, 6, 607-624)). We then benchmarked the multigrid algorithm with the chosen optimal parameters on a set of 99 organic compounds. We show that the average computational time required for one 3DRISM calculation is 3.5 min per small organic molecule (10-20 atoms) on a standard personal computer. We also benchmarked predicted solvation free energy values for all of the compounds in the set against the corresponding experimental data.
We show that by using the proposed multigrid algorithm and the 3DRISM-UC model, it is possible to obtain good

  5. Exploration of new multivariate spectral calibration algorithms.

    SciTech Connect

    Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.

    2004-03-01

    A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring near-infrared calibrations between spectrometers from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels, or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and ease of use makes the new ACLS methods the preferred algorithms for multivariate spectral calibrations.
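    The classical least squares (CLS) step underlying the hybrid methods can be sketched for a single component obeying Beer's law: estimate the pure-component spectrum from calibration mixtures, then predict an unknown concentration by projection. A minimal sketch with synthetic data (the pure spectrum and concentrations are invented):

    ```python
    def cls_calibrate(concentrations, spectra):
        """Single-component classical least squares: estimate the pure-component
        spectrum K from calibration spectra obeying s = c * K (Beer's law)."""
        n_chan = len(spectra[0])
        denom = sum(c * c for c in concentrations)
        return [sum(c * s[j] for c, s in zip(concentrations, spectra)) / denom
                for j in range(n_chan)]

    def cls_predict(k, spectrum):
        """Project an unknown spectrum onto K to recover its concentration."""
        return sum(ki * si for ki, si in zip(k, spectrum)) / sum(ki * ki for ki in k)

    pure = [0.2, 0.8, 0.5, 0.1]                       # invented pure spectrum
    concs = [1.0, 2.0, 3.0]                           # calibration concentrations
    spectra = [[c * p for p in pure] for c in concs]  # noiseless Beer's-law mixtures
    k = cls_calibrate(concs, spectra)
    print(round(cls_predict(k, [2.5 * p for p in pure]), 6))  # → 2.5
    ```

    Because CLS fits an explicit physical model of the spectra, it is this structure that the ACLS augmentation extends, by adding terms to absorb drift and unmodeled components, while retaining CLS's interpretability.
    
    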

  6. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  7. An improved Camshift algorithm for target recognition

    NASA Astrophysics Data System (ADS)

    Fu, Min; Cai, Chao; Mao, Yusu

    2015-12-01

    The Camshift algorithm and the three-frame-difference algorithm are popular target recognition and tracking methods. The Camshift algorithm requires manual initialization of the search window, which introduces subjective error and inconsistency, and the color histogram is computed only at initialization, so the color probability model cannot be updated continuously. The three-frame-difference method, on the other hand, requires no manually initialized search window and can make full use of the target's motion information, but only to determine the range of motion: it cannot determine the contours of the object and cannot exploit the target's color information. An improved Camshift algorithm is therefore proposed to overcome the disadvantages of the original: the three-frame-difference operation is combined with the object's motion information and color information to identify the target object. The improved Camshift algorithm is implemented and shows better performance in recognition and tracking of the target.
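A minimal sketch of the three-frame-difference step in pure NumPy; the frame size, moving block, and threshold are invented for illustration, and a real tracker would pass the resulting motion mask to CamShift as its initial search window.

```python
import numpy as np

def three_frame_difference(f1, f2, f3, threshold=20):
    """Return a binary motion mask from three consecutive grayscale frames.

    A pixel is flagged only where BOTH successive difference images exceed
    the threshold, which suppresses the ghosting left by a single difference.
    """
    d1 = np.abs(f2.astype(np.int16) - f1.astype(np.int16))
    d2 = np.abs(f3.astype(np.int16) - f2.astype(np.int16))
    return ((d1 > threshold) & (d2 > threshold)).astype(np.uint8)

# Toy frames: a bright 3x3 block moving two pixels right per frame.
frames = []
for x in (2, 4, 6):
    f = np.zeros((10, 10), dtype=np.uint8)
    f[4:7, x:x + 3] = 200
    frames.append(f)

mask = three_frame_difference(*frames)
print(mask.sum())  # 3: the column where both difference images overlap
```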

  8. Spatial search algorithms on Hanoi networks

    NASA Astrophysics Data System (ADS)

    Marquezino, Franklin de Lima; Portugal, Renato; Boettcher, Stefan

    2013-01-01

    We use the abstract search algorithm and its extension due to Tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in Hanoi networks of degree 4 faster than classical algorithms. We also analyze the effect of using non-Groverian coins that take advantage of the small-world structure of the Hanoi networks. We obtain the scaling of the total cost of the algorithm as a function of the number of vertices. We show that Tulsi's technique plays an important role to speed up the searching algorithm. We can improve the algorithm's efficiency by choosing a non-Groverian coin if we do not implement Tulsi's method. Our conclusions are based on numerical implementations.

  9. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
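The greedy selection-and-refit loop at the heart of OMP (which LPMP adapts to Lorentzian atoms) can be sketched as follows. The dictionary here is an orthonormal basis so that the toy recovery is exactly verifiable; real CS dictionaries are overcomplete, and all sizes and indices below are invented.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit ALL selected coefficients
    by least squares before updating the residual."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((40, 40)))
A = Q                                  # orthonormal dictionary (toy case)
x_true = np.zeros(40)
x_true[[7, 22, 33]] = [1.5, -2.0, 0.8]
y = A @ x_true

x_hat = omp(A, y, sparsity=3)
print(np.flatnonzero(x_hat))  # [ 7 22 33]
```

LPMP replaces the fixed columns of A with Lorentzian line shapes fitted in each iteration.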

  10. Geovisualization and analysis of the Good Country Index

    NASA Astrophysics Data System (ADS)

    Tan, C.; Dramowicz, K.

    2016-04-01

    The Good Country Index measures the contribution of a single country to humanity and the health of the planet. Countries that are globally good for the planet are not necessarily good for their own citizens. The Good Country Index is based on the following seven categories: science and technology, culture, international peace and security, world order, planet and climate, prosperity and equality, and health and well-being. The Good Country Index focuses on external effects, in contrast to other global indices (for example, the Human Development Index or the Social Progress Index) that measure the level of development of a single country in benefiting its own citizens. The authors verify whether these global indices may serve as potential predictors and indicators of a country's ‘goodness’. Non-spatial analysis included analyzing relationships between the overall Good Country Index and the seven contributing categories, as well as between the overall Good Country Index and other global indices. Data analytics was used for building various predictive models and selecting the most accurate model to predict the overall Good Country Index. The most important rules for high and low index values were identified. Spatial analysis included spatial autocorrelation to analyze the similarity of a country's index values in relation to its neighbors. Hot spot analysis was used to identify and map significant clusters of countries with high and low index values. Similar countries were grouped into geographically compact clusters and mapped.
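Global spatial autocorrelation of this kind is commonly summarized with Moran's I; below is a minimal sketch on an invented four-cell adjacency, not the study's country data. I near +1 indicates neighbors with similar values, near -1 indicates alternation.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I.
    values: length-n attribute vector; weights: n x n spatial weight
    matrix with a zero diagonal (1 = neighbors, 0 = not)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()
    return (n * (z @ w @ z)) / (w.sum() * (z @ z))

# Rook adjacency on a 2x2 grid: cell pairs (0,1), (0,2), (1,3), (2,3).
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

checker = [1, 0, 0, 1]   # perfectly alternating values
mixed   = [1, 1, 0, 0]   # half the neighbor pairs agree, half disagree

print(morans_i(checker, W))  # -1.0: perfect negative autocorrelation
print(morans_i(mixed, W))    #  0.0: agreement and disagreement balance out
```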

  11. A novel stochastic optimization algorithm.

    PubMed

    Li, B; Jiang, W

    2000-01-01

    This paper presents a new stochastic approach, SAGACIA, based on a proper integration of the simulated annealing algorithm (SAA), genetic algorithm (GA), and chemotaxis algorithm (CA) for solving complex optimization problems. SAGACIA combines the advantages of SAA, GA, and CA. It has the following features: (1) it is not a simple mix of SAA, GA, and CA; (2) it works from a population; (3) it can easily be used to solve optimization problems with either continuous or discrete variables, and it does not need coding and decoding; and (4) it can easily escape from local minima and converge quickly. Good solutions can be obtained in a very short time. The search process of SAGACIA can be explained with Markov chains. In this paper, it is proved that SAGACIA has the property of global asymptotical convergence. SAGACIA has been applied to problems such as scheduling, the training of artificial neural networks, and the optimization of complex functions. In all the test cases, the performance of SAGACIA is better than that of SAA, GA, and CA. PMID:18244742
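Of the three ingredients, the simulated annealing component is the easiest to sketch in isolation. The cooling schedule, step size, and test function below are invented for illustration and are not the SAGACIA settings.

```python
import numpy as np

def simulated_annealing(f, x0, n_iter=5000, t0=1.0, cooling=0.999,
                        step=0.5, seed=0):
    """Minimize f by simulated annealing: always accept downhill moves,
    accept uphill moves with probability exp(-delta/T), and let the
    temperature T decay geometrically."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    t = t0
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.size)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        t *= cooling
    return best_x, best_f

sphere = lambda v: float(np.sum(v ** 2))
x_best, f_best = simulated_annealing(sphere, [3.0, -4.0])
print(f_best)  # close to 0
```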

  12. Evolution of cooperation in rotating indivisible goods game.

    PubMed

    Koike, Shimpei; Nakamaru, Mayuko; Tsujimoto, Masahiro

    2010-05-01

    Collective behavior is theoretically and experimentally studied through a public goods game in which players contribute resources or efforts to produce goods (or a pool), which are then divided equally among all players regardless of the amount of their contribution. However, if goods are indivisible, only one player can receive the goods. In this case, the problem is how to distribute indivisible goods, and here we therefore propose a new game, the "rotating indivisible goods game." In this game, the goods are not divided but distributed by regular rotation. An example is rotating savings and credit associations (ROSCAs), which exist all over the world and serve as efficient and informal institutions for collecting savings for small investments. In a ROSCA, members regularly contribute money to produce goods and distribute them to each member in regular rotation. It has been pointed out that ROSCA members are selected based on their reliability or reputation, and that defectors who stop contributing are excluded. We elucidate mechanisms that sustain cooperation in rotating indivisible goods games by means of evolutionary simulations. First, we investigate the effect of the peer selection rule, by which the group chooses members based on the players' reputation and by which players choose groups based on their reputation. Regardless of the peer selection rule, cooperation is not sustainable in a rotating indivisible goods game. Second, we introduce the forfeiture rule, which forbids a member who has not contributed earlier from receiving goods. These analyses show that employing both rules can sustain cooperation in the rotating indivisible goods game, although employing either rule alone cannot. Finally, we show that evolutionary simulation can be a tool for investigating institutional designs that promote cooperation. PMID:20064533

  13. Goodness-of-fit diagnostics for Bayesian hierarchical models.

    PubMed

    Yuan, Ying; Johnson, Valen E

    2012-03-01

    This article proposes methodology for assessing goodness of fit in Bayesian hierarchical models. The methodology is based on comparing values of pivotal discrepancy measures (PDMs), computed using parameter values drawn from the posterior distribution, to known reference distributions. Because the resulting diagnostics can be calculated from standard output of Markov chain Monte Carlo algorithms, their computational costs are minimal. Several simulation studies are provided, each of which suggests that diagnostics based on PDMs have higher statistical power than comparable posterior-predictive diagnostic checks in detecting model departures. The proposed methodology is illustrated in a clinical application; an application to discrete data is described in supplementary material. PMID:22050079

  14. "That School Had Become All about Show": Image Making and the Ironies of Constructing a Good Urban School

    ERIC Educational Resources Information Center

    Niesz, Tricia

    2010-01-01

    The last two decades have seen dramatic change in U.S. schooling as a response to high-stakes accountability and market-based reform movements. Critics cite a number of unfortunate consequences of these movements, especially for students in urban schools. This article explores the troubling ironies related to one strategy for survival in this…

  15. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  16. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.
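A toy version of the permutation-GA idea described above, with a repair step enforcing job precedence and total flowtime as the fitness; the jobs, durations, precedence graph, and GA parameters are invented for illustration and are far simpler than the PPS/POIC problems.

```python
import random

# Jobs with durations and precedence constraints (job -> set of prerequisites).
durations = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}
prereqs = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": set()}

def repair(order):
    """Reorder a permutation so every job appears after its prerequisites."""
    placed, result, pending = set(), [], list(order)
    while pending:
        for job in pending:                 # first pending job whose
            if prereqs[job] <= placed:      # prerequisites are all placed
                result.append(job)
                placed.add(job)
                pending.remove(job)
                break
    return result

def flowtime(order):
    """Sum of job completion times on a single machine (lower is better)."""
    t = total = 0
    for job in order:
        t += durations[job]
        total += t
    return total

def crossover(p1, p2):
    """Order crossover: keep a prefix of p1, fill the rest in p2's order."""
    head = p1[:random.randrange(1, len(p1))]
    return head + [j for j in p2 if j not in head]

random.seed(0)
jobs = list(durations)
pop = [repair(random.sample(jobs, len(jobs))) for _ in range(20)]
for _ in range(30):                         # generations
    pop.sort(key=flowtime)
    survivors = pop[:10]                    # elitist truncation selection
    children = [repair(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(10)]
    pop = survivors + children

best = min(pop, key=flowtime)
print(best, flowtime(best))
```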

  17. Good Health Before Pregnancy: Preconception Care

    MedlinePlus


  18. 76 FR 12731 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-08

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... meeting of the Good Neighbor Environmental Board (Board). The Board usually meets three times...

  19. 76 FR 76973 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-09

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... meeting of the Good Neighbor Environmental Board (Board). The Board usually meets three times...

  20. 76 FR 52662 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-23

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... meeting of the Good Neighbor Environmental Board (Board). The Board usually meets three times...

  1. 76 FR 31328 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-31

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... meeting of the Good Neighbor Environmental Board (Board). The Board usually meets three times...

  2. 75 FR 8699 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-25

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... meeting of the Good Neighbor Environmental Board (Board). The Board usually meets three times...

  3. 77 FR 32636 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Request for nominations to the Good Neighbor Environmental Board. SUMMARY: The U.S. Environmental...

  4. 77 FR 32636 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... meeting of the Good Neighbor Environmental Board (Board). The Board usually meets three times...

  5. 76 FR 73631 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... meeting of the Good Neighbor Environmental Board (Board). The Board usually meets three times...

  6. Teaching a 'good' ward round.

    PubMed

    Powell, Natalie; Bruce, Christopher G; Redfern, Oliver

    2015-04-01

    Ward rounds are a vital part of hospital medicine and junior doctors play a key role in their delivery. Despite the importance of ward rounds to patient care and experience, we believe that junior doctors may lack the training and skills to carry them out most effectively. We designed a simulation-based training session focusing on ward round skills themed to key patient safety issues and have delivered the training to over 100 learners (medical students and foundation year one doctors). Few learners had any prior training in ward rounds. The session was highly valued by all participants and surveys completed both before and after the session showed statistically significant improvements in confidence in leading and documenting ward rounds. In addition, 94% of final year medical students and 93% of doctors felt such training should be included in the undergraduate curriculum. We believe there is a current gap in training around ward round skills and would strongly encourage simulation-based ward round training to be developed for undergraduates. Further sessions following qualification may then consolidate and develop ward round skills adapted to the level of the doctor. PMID:25824064

  7. Development of a regional rain retrieval algorithm for exclusive mesoscale convective systems over peninsular India

    NASA Astrophysics Data System (ADS)

    Dutta, Devajyoti; Sharma, Sanjay; Das, Jyotirmay; Gairola, R. M.

    2012-06-01

    The present study emphasizes the development of a region-specific rain retrieval algorithm that takes cloud features into account. Brightness temperatures (Tbs) from various TRMM Microwave Imager (TMI) channels are calibrated with near-surface rain intensity as observed by the TRMM Precipitation Radar. It shows that Tb-R relations during exclusive Mesoscale Convective System (MCS) events have a greater dynamical range than combined events of non-MCS and MCS. The increased dynamical range of the Tb-R relations for exclusive-MCS events has led to the development of an Artificial Neural Network (ANN) based regional algorithm for rain intensity estimation. By using the exclusive-MCS algorithm, reasonably good improvement in the accuracy of rain intensity estimation is observed. A case study comparing rain intensity estimates from the exclusive-MCS regional algorithm and the global TRMM 2A12 rain product with a Doppler Weather Radar shows significant improvement in rain intensity estimation by the developed regional algorithm.

  8. The good body: when big is better.

    PubMed

    Cassidy, C M

    1991-09-01

    An important cultural question is, "What is a 'good'--desirable, beautiful, impressive--body?" The answers are legion; here I examine why bigger bodies represent survival skill, and how this power symbolism is embodied by behaviors that guide larger persons toward the top of the social hierarchy. Bigness is a complex concept comprising tallness, boniness, muscularity and fattiness. Data show that most people worldwide want to be big--both tall and fat. Those who achieve the ideal are disproportionately among the society's most socially powerful. In the food-secure West, fascination with power and the body has not waned, but has been redefined such that thinness is desired. This apparent anomaly is resolved by realizing that thinness in the midst of abundance--as long as one is also tall and muscular--still projects the traditional message of power, and brings such social boons as upward mobility. PMID:1961102

  9. Benefits of tolerance in public goods games

    NASA Astrophysics Data System (ADS)

    Szolnoki, Attila; Chen, Xiaojie

    2015-10-01

    Leaving the joint enterprise when defection is unveiled is always a viable option to avoid being exploited. Although loner strategy helps the population not to be trapped into the tragedy of the commons state, it could offer only a modest income for nonparticipants. In this paper we demonstrate that showing some tolerance toward defectors could not only save cooperation in harsh environments but in fact results in a surprisingly high average payoff for group members in public goods games. Phase diagrams and the underlying spatial patterns reveal the high complexity of evolving states where cyclic dominant strategies or two-strategy alliances can characterize the final state of evolution. We identify microscopic mechanisms which are responsible for the superiority of global solutions containing tolerant players. This phenomenon is robust and can be observed both in well-mixed and in structured populations highlighting the importance of tolerance in our everyday life.

  10. Benefits of tolerance in public goods games.

    PubMed

    Szolnoki, Attila; Chen, Xiaojie

    2015-10-01

    Leaving the joint enterprise when defection is unveiled is always a viable option to avoid being exploited. Although loner strategy helps the population not to be trapped into the tragedy of the commons state, it could offer only a modest income for nonparticipants. In this paper we demonstrate that showing some tolerance toward defectors could not only save cooperation in harsh environments but in fact results in a surprisingly high average payoff for group members in public goods games. Phase diagrams and the underlying spatial patterns reveal the high complexity of evolving states where cyclic dominant strategies or two-strategy alliances can characterize the final state of evolution. We identify microscopic mechanisms which are responsible for the superiority of global solutions containing tolerant players. This phenomenon is robust and can be observed both in well-mixed and in structured populations highlighting the importance of tolerance in our everyday life. PMID:26565295

  11. When good news leads to bad choices.

    PubMed

    McDevitt, Margaret A; Dunn, Roger M; Spetch, Marcia L; Ludvig, Elliot A

    2016-01-01

    Pigeons and other animals sometimes deviate from optimal choice behavior when given informative signals for delayed outcomes. For example, when pigeons are given a choice between an alternative that always leads to food after a delay and an alternative that leads to food only half of the time after a delay, preference changes dramatically depending on whether the stimuli during the delays are correlated with (signal) the outcomes or not. With signaled outcomes, pigeons show a much greater preference for the suboptimal alternative than with unsignaled outcomes. Key variables and research findings related to this phenomenon are reviewed, including the effects of durations of the choice and delay periods, probability of reinforcement, and gaps in the signal. We interpret the available evidence as reflecting a preference induced by signals for good news in a context of uncertainty. Other explanations are briefly summarized and compared. PMID:26781050

  12. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method incorporates both function-value and gradient-value information. The two methods possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
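For orientation, here is a bare-bones nonlinear conjugate gradient method with the classical PRP beta (clipped at zero, as in PRP+) and a simple Armijo backtracking line search. This is the textbook scheme, not the paper's modified methods, and the quadratic test problem is invented.

```python
import numpy as np

def prp_cg(f, grad, x0, iters=200, tol=1e-8):
    """Nonlinear CG with the Polak-Ribiere-Polyak beta (clipped at 0)
    and a backtracking Armijo line search."""
    x = np.array(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:          # safeguard for non-descent directions
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, (g_new @ (g_new - g)) / (g @ g))   # PRP+
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic test problem: f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = prp_cg(f, grad, [5.0, -5.0])
print(x_star)  # approximately the solution of A x = b
```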

  13. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience. PMID:27227718
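As a reference point for the standard algorithm being extended, here is EM for a two-component univariate Gaussian mixture; the data, initialization, and iteration count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic data from a two-component univariate Gaussian mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])

pi, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibility of component 0 for each point
    # (the common 1/sqrt(2*pi) factor cancels in the ratio).
    p0 = pi * np.exp(-0.5 * (x - mu[0]) ** 2 / var[0]) / np.sqrt(var[0])
    p1 = (1 - pi) * np.exp(-0.5 * (x - mu[1]) ** 2 / var[1]) / np.sqrt(var[1])
    r = p0 / (p0 + p1)
    # M-step: weighted maximum-likelihood updates in closed form.
    pi = r.mean()
    mu = np.array([np.sum(r * x) / r.sum(),
                   np.sum((1 - r) * x) / (1 - r).sum()])
    var = np.array([np.sum(r * (x - mu[0]) ** 2) / r.sum(),
                    np.sum((1 - r) * (x - mu[1]) ** 2) / (1 - r).sum()])

print(pi, mu)  # mixture weight near 0.6, means near -2 and 3
```

The partitioned extension in the abstract replaces this single joint M-step with a sequence of smaller EM cycles over parameter blocks.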

  14. 76 FR 7845 - Good Neighbor Environmental Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... AGENCY Good Neighbor Environmental Board AGENCY: Environmental Protection Agency (EPA). ACTION: Request for Nominations to the Good Neighbor Environmental Board. SUMMARY: The U.S. Environmental Protection... appointment to its Good Neighbor Environmental Board. Vacancies are anticipated to be filled by May...

  15. Toward a Logic of Good Reasons.

    ERIC Educational Resources Information Center

    Fisher, Walter R.

    1978-01-01

    Explores the assumptions underlying the role of values in rhetorical interactions, the meaning of "logic" in relation to "good reasons," a reconceptualization of "good reasons," implementation of a "logic of 'good reasons'," and the uses of hierarchies of values in assessing rhetorical reasoning. (JMF)

  16. Good Secondary Schools. What Makes Them Tick?

    ERIC Educational Resources Information Center

    McKinney, Kay; And Others

    The characteristics common to 571 public secondary schools that have been recognized as exemplary as part of the "Secondary School Recognition Program" are described in this booklet. Descriptions are given of: (1) good principals; (2) good teachers; (3) teacher rewards and recognition; (4) good student-teacher relationships; (5) high expectations;…

  17. Is the Good Life the Easy Life?

    ERIC Educational Resources Information Center

    Scollon, Christie Napa; King, Laura A.

    2004-01-01

    Three studies examined folk concepts of the good life. Participants rated the desirability and moral goodness of a life as a function of the happiness, meaning, and effort experienced. Happiness and meaning were solid predictors of the good life, replicating King and Napa (1998). Study 1 (N = 381) included wealth as an additional factor. Results…

  18. Efficient generation of statistically good pseudonoise by linearly interconnected shift registers

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.

    1974-01-01

    A number of efficient new algorithms for generating digital pseudonoise are presented. The algorithms are efficient because a large number of new pseudorandom bits are generated at each iteration of a computer implementation or at each clock pulse of a hardware implementation. The maximal-length shift register sequences obtained have good randomness properties. Some of the properties of p-n sequences are reviewed and the results of an extensive statistical evaluation of the pseudonoise are presented.
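A maximal-length sequence generator of the kind discussed can be sketched as a Fibonacci linear-feedback shift register. The degree-4 register below is a toy (practical pseudonoise generators use much longer registers); taps at positions 4 and 1 correspond to a primitive polynomial, so the register cycles through all 15 nonzero states.

```python
def lfsr_sequence(taps, state, n):
    """Fibonacci LFSR over GF(2).
    taps: bit positions XORed to form the feedback bit (1 = LSB);
    state: nonzero integer seed; returns n output bits (the LSB stream)."""
    bits = []
    width = max(taps)
    for _ in range(n):
        bits.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (width - 1))
    return bits

seq = lfsr_sequence((4, 1), 0b0001, 30)
print(seq[:15])
print(seq[:15] == seq[15:30])  # True: the sequence repeats with period 2^4 - 1 = 15
```

A maximal-length period contains 2^(n-1) ones and 2^(n-1) - 1 zeros, one of the balance properties behind the good randomness statistics mentioned above.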

  19. High Rate Pulse Processing Algorithms for Microcalorimeters

    NASA Astrophysics Data System (ADS)

    Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.

    2009-12-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge-sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being: a) simple enough to be implemented in the readout electronics; and, b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter" that is currently the predominantly used pulse processing algorithm in the cryogenic-detector community.
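The baseline against which such real-time algorithms are judged is the least-squares amplitude estimate (optimal under white noise) obtained by projecting each record onto the known pulse template; below is a sketch with an invented pulse shape and noise level.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
t = np.arange(n)

# Assumed pulse shape: fast rise, slow exponential decay (arbitrary units).
template = (1 - np.exp(-t / 5.0)) * np.exp(-t / 200.0)
template /= np.max(template)

true_amplitude = 3.7
pulse = true_amplitude * template + 0.05 * rng.standard_normal(n)

# Least-squares amplitude: project the noisy record onto the template.
amp_hat = (pulse @ template) / (template @ template)
print(amp_hat)  # close to 3.7
```

Overlapping pulses break the single-template assumption, which is why the real-time algorithms in the paper trade some resolution for pile-up tolerance.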

  20. High rate pulse processing algorithms for microcalorimeters

    SciTech Connect

    Rabin, Michael W.; Hoover, Andrew S.; Bacrania, Minesh K.; Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Doriese, W. Bertrand; Ullom, Joel N.

    2009-01-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, the authors present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in the readout electronics that the authors are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter' that is currently the predominant pulse processing algorithm in the cryogenic-detector community.

  1. RATE-ADJUSTMENT ALGORITHM FOR AGGREGATE TCP CONGESTION CONTROL

    SciTech Connect

    P. TINNAKORNSRISUPHAP, ET AL

    2000-09-01

    The TCP congestion-control mechanism is an algorithm designed to probe the available bandwidth of the network path that TCP packets traverse. However, it is well-known that the TCP congestion-control mechanism does not perform well on networks with a large bandwidth-delay product due to the slow dynamics in adapting its congestion window, especially for short-lived flows. One promising solution to the problem is to aggregate and share the path information among TCP connections that traverse the same bottleneck path, i.e., Aggregate TCP. However, this paper shows via a queueing analysis of a generalized processor-sharing (GPS) queue with regularly-varying service time that a simple aggregation of local TCP connections together into a single aggregate TCP connection can result in a severe performance degradation. To prevent such a degradation, we introduce a rate-adjustment algorithm. Our simulation confirms that by utilizing our rate-adjustment algorithm on aggregate TCP, connections which would normally receive poor service achieve significant performance improvements without penalizing connections which already receive good service.

  2. Development of an Inverse Algorithm for Resonance Inspection

    SciTech Connect

    Lai, Canhai; Xu, Wei; Sun, Xin

    2012-10-01

    Resonance inspection (RI), which employs the shift in natural frequency spectra between good and anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages over other contemporary NDE methods, such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry. It has already been widely used in the automobile industry for quality inspection of safety-critical parts. Unlike some conventionally used NDE methods, however, the current RI technology is unable to provide details, i.e., the location, dimensions, or types of the flaws in discrepant parts. This limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone shaped stainless steel samples with and without controlled flaws are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the algorithms developed, and that the prediction accuracy decreases with increasing flaw number and decreasing distance between flaws.
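
    The inverse step rests on a maximum correlation function. A minimal sketch of that idea, matching a measured resonance spectrum against a library of simulated flaw signatures, could look like the following (all names and numbers here are made up for illustration, not the paper's data):

```python
import math

def normalized_correlation(a, b):
    """Pearson correlation between two equal-length spectra."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def best_flaw_match(measured, library):
    """Return the library entry whose simulated spectrum correlates best."""
    return max(library, key=lambda name: normalized_correlation(measured, library[name]))

# Hypothetical frequency-shift signatures for the first four modes
library = {
    "flaw_at_center": [1.0, 4.0, 0.5, 2.0],
    "flaw_at_edge":   [3.0, 0.5, 2.5, 0.2],
}
measured = [1.1, 3.8, 0.6, 1.9]
print(best_flaw_match(measured, library))  # → flaw_at_center
```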

  3. Secure 3D watermarking algorithm based on point set projection

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Zhang, Xiaomei

    2007-11-01

    3D digital models greatly facilitate the distribution and storage of information, and their copyright protection attracts more and more research interest. A novel secure digital watermarking algorithm for 3D models is proposed in this paper. In order to survive most attacks, such as rotation, cropping, smoothing, and noise addition, the projection of the model's point set is chosen as the carrier of the watermark, which contains copyright information such as logos and text. Projections of the model's point set onto the x, y and z planes are calculated respectively. Before the embedding process, the original watermark is scrambled with a key. Each projection is singular value decomposed, and the scrambled watermark is embedded into the SVD (singular value decomposition) domain of the x, y and z projections respectively. The watermarked projections are then used to recover the vertices of the model, yielding the watermarked model. Only a legal user can remove the watermark from watermarked models using the private key. Experiments presented in the paper show that the proposed algorithm performs well under various malicious attacks.
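
    The key-based scrambling step can be illustrated with a seeded pseudorandom permutation: only a holder of the same private key can invert it. This is a sketch of that one step, not the paper's code; the SVD embedding itself is omitted.

```python
import random

def scramble(bits, key):
    """Permute watermark bits with a permutation derived from a secret key."""
    perm = list(range(len(bits)))
    random.Random(key).shuffle(perm)
    return [bits[i] for i in perm]

def unscramble(scrambled, key):
    """Invert the key-derived permutation; requires the same key."""
    perm = list(range(len(scrambled)))
    random.Random(key).shuffle(perm)
    out = [0] * len(scrambled)
    for pos, i in enumerate(perm):
        out[i] = scrambled[pos]
    return out

watermark = [1, 0, 1, 1, 0, 0, 1, 0]
key = "private-key"
assert unscramble(scramble(watermark, key), key) == watermark
```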

  4. An effective algorithm for quick fractal analysis of movement biosignals.

    PubMed

    Ripoli, A; Belardinelli, A; Palagi, G; Franchi, D; Bedini, R

    1999-01-01

    The problem of numerically classifying patterns, of crucial importance in the biomedical field, is here faced by means of their fractal dimension. A new, simple algorithm was developed to characterize biomedical mono-dimensional signals, avoiding the computationally expensive methods generally required by the classical approach of fractal theory. The algorithm produces a number related to the geometric behaviour of the pattern, providing information on the studied phenomenon. The results are independent of the signal amplitude and exhibit a fractal measure ranging from 1 to 2 for monotonically going-forward monodimensional curves, in accordance with theory. Accurate calibration and qualification were accomplished by analysing basic waveforms. Further studies concerned the biomedical field, with special reference to gait analysis: so far, well-controlled movements such as walking, going up and down stairs, and running have been investigated. Controlled conditions of the test environment guaranteed the necessary repeatability and accuracy of the practical experiments in setting up the methodology. The algorithm showed good performance in classifying the considered simple movements in the selected sample of normal subjects. The results obtained encourage us to use this technique for effective on-line movement correlation with other long-term monitored variables such as blood pressure, ECG, etc. PMID:10738684
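
    The abstract does not state the authors' formula. One classical low-cost estimator in the same spirit is Katz's fractal dimension, which yields exactly 1 for a straight line and larger values for more convoluted monodimensional curves:

```python
import math

def katz_fd(signal):
    """Katz fractal dimension of a 1-D signal sampled at unit spacing."""
    n = len(signal) - 1                     # number of steps
    # total curve length and maximum distance from the first point
    L = sum(math.hypot(1.0, signal[i + 1] - signal[i]) for i in range(n))
    d = max(math.hypot(i, signal[i] - signal[0]) for i in range(1, n + 1))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

line = [0.5 * i for i in range(100)]        # straight ramp
print(round(katz_fd(line), 3))              # → 1.0
```

    For a straight line the path length equals the end-to-end distance, so the log-ratio term vanishes and the dimension is 1; any back-and-forth excursion lengthens the path and pushes the estimate above 1.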

  5. Implementation of several mathematical algorithms to breast tissue density classification

    NASA Astrophysics Data System (ADS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, as dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on intrinsic property calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation and index Q) as categorization parameters. The evaluation of the algorithms was performed on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with expert medical diagnoses, showing good performance. The implemented algorithms revealed high potential for classifying breasts into tissue density categories.
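
    Two of the listed categorization parameters, joint entropy and mutual information, can be sketched for quantized images as follows (pure Python, illustrative; real mammograms would be binned grey-level arrays):

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy (bits) of a sequence of quantized values."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())

def mutual_information(img_a, img_b):
    """I(A;B) = H(A) + H(B) - H(A,B) over co-registered pixel pairs."""
    joint = entropy(list(zip(img_a, img_b)))   # joint entropy H(A,B)
    return entropy(img_a) + entropy(img_b) - joint

a = [0, 0, 1, 1, 2, 2, 3, 3]                   # quantized grey levels
print(mutual_information(a, a))                # → 2.0 (equals H(a))
```

    Mutual information of an image with itself reduces to its own entropy, while statistically unrelated images score near zero, which is what makes it usable as a similarity-to-reference score.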

  6. An optimized hybrid encode based compression algorithm for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Miao, Zhuang; Feng, Weiyi; He, Weiji; Chen, Qian; Gu, Guohua

    2013-12-01

    Compression is a key procedure in hyperspectral image processing because of the massive data volume, which creates great difficulty in data storage and transmission. In this paper, a novel hyperspectral compression algorithm based on hybrid encoding, combining band-optimized grouping with the wavelet transform, is proposed. Given the characteristics of the correlation coefficients between adjacent spectral bands, an optimized band grouping and reference frame selection method is first utilized to group bands adaptively. Then, according to the band number of each group, the redundancy in the spatial and spectral domains is removed through spatial-domain entropy coding and minimum-residual-based linear prediction. Embedded code streams are then obtained by encoding the residual images using an improved embedded zerotree wavelet based SPIHT encoding method. In the experiments, hyperspectral images collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were used to validate the performance of the proposed algorithm. The results show that the proposed approach achieves good performance in reconstructed image quality and computational complexity. The average peak signal to noise ratio (PSNR) is increased by 0.21-0.81 dB compared with other off-the-shelf algorithms at the same compression ratio.
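
    The adaptive grouping step keys on correlation coefficients between adjacent spectral bands. A minimal sketch of that idea (the 0.9 threshold is illustrative, not the paper's value):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two bands (flattened pixels)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def group_bands(bands, threshold=0.9):
    """Start a new group whenever adjacent-band correlation drops below threshold."""
    groups = [[0]]
    for i in range(1, len(bands)):
        if pearson(bands[i - 1], bands[i]) >= threshold:
            groups[-1].append(i)
        else:
            groups.append([i])
    return groups

# Four synthetic bands: the first three are near-copies, the last is unrelated
bands = [
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.1, 3.0, 4.2],
    [0.9, 2.0, 3.1, 3.9],
    [4.0, 1.0, 3.0, 2.0],
]
print(group_bands(bands))  # → [[0, 1, 2], [3]]
```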

  7. Ballast: A Ball-based Algorithm for Structural Motifs

    PubMed Central

    He, Lu; Vandin, Fabio; Pandurangan, Gopal

    2013-01-01

    Structural motifs encapsulate local sequence-structure-function relationships characteristic of related proteins, enabling the prediction of functional characteristics of new proteins, providing molecular-level insights into how those functions are performed, and supporting the development of variants specifically maintaining or perturbing function in concert with other properties. Numerous computational methods have been developed to search through databases of structures for instances of specified motifs. However, it remains an open problem how best to leverage the local geometric and chemical constraints underlying structural motifs in order to develop motif-finding algorithms that are both theoretically and practically efficient. We present a simple, general, efficient approach, called Ballast (ball-based algorithm for structural motifs), to match given structural motifs to given structures. Ballast combines the best properties of previously developed methods, exploiting the composition and local geometry of a structural motif and its possible instances in order to effectively filter candidate matches. We show that on a wide range of motif-matching problems, Ballast efficiently and effectively finds good matches, and we provide theoretical insights into why it works well. By supporting generic measures of compositional and geometric similarity, Ballast provides a powerful substrate for the development of motif-matching algorithms. PMID:23383999

  8. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  9. Charting taxonomic knowledge through ontologies and ranking algorithms

    NASA Astrophysics Data System (ADS)

    Huber, Robert; Klump, Jens

    2009-04-01

    Since the inception of geology as a modern science, paleontologists have described a large number of fossil species. This makes fossilized organisms an important tool in the study of stratigraphy and past environments. Since taxonomic classifications of organisms, and thereby their names, change frequently, the correct application of this tool requires taxonomic expertise in finding correct synonyms for a given species name. Much of this taxonomic information has already been published in journals and books where it is compiled in carefully prepared synonymy lists. Because this information is scattered throughout the paleontological literature, it is difficult to find and sometimes not accessible. Also, taxonomic information in the literature is often difficult to interpret for non-taxonomists looking for taxonomic synonymies as part of their research. The highly formalized structure makes Open Nomenclature synonymy lists ideally suited for computer aided identification of taxonomic synonyms. Because a synonymy list is a list of citations related to a taxon name, its bibliographic nature allows the application of bibliometric techniques to calculate the impact of synonymies and taxonomic concepts. TaxonRank is a ranking algorithm based on bibliometric analysis and Internet page ranking algorithms. TaxonRank uses published synonymy list data stored in TaxonConcept, a taxonomic information system. The basic ranking algorithm has been modified to include a measure of confidence on species identification based on the Open Nomenclature notation used in synonymy list, as well as other synonymy specific criteria. The results of our experiments show that the output of the proposed ranking algorithm gives a good estimate of the impact a published taxonomic concept has on the taxonomic opinions in the geological community. 
Also, our results show that treating taxonomic synonymies as part of on an ontology is a way to record and manage taxonomic knowledge, and thus contribute

  10. A novel artificial bee colony algorithm based on modified search equation and orthogonal learning.

    PubMed

    Gao, Wei-feng; Liu, San-yang; Huang, Ling-ling

    2013-06-01

    The artificial bee colony (ABC) algorithm is a relatively new optimization technique which has been shown to be competitive with other population-based algorithms. However, ABC has a shortcoming in its solution search equation, which is good at exploration but poor at exploitation. To address this issue, we first propose an improved ABC method called CABC, in which a modified search equation is applied to generate candidate solutions and improve the search ability of ABC. Furthermore, we use orthogonal experimental design (OED) to form an orthogonal learning (OL) strategy for variant ABCs to discover more useful information from the search experiences. Owing to OED's ability to sample a small number of well-representative combinations for testing, the OL strategy can construct a more promising and efficient candidate solution. In this paper, the OL strategy is applied to three versions of ABC, i.e., the standard ABC, global-best-guided ABC (GABC), and CABC, which yields OABC, OGABC, and OCABC, respectively. The experimental results on a set of 22 benchmark functions demonstrate the effectiveness and efficiency of the modified search equation and the OL strategy. Comparisons with some other ABCs and several state-of-the-art algorithms show that the proposed algorithms significantly improve the performance of ABC. Moreover, OCABC offers the highest solution quality, fastest global convergence, and strongest robustness among all the contenders on almost all the test functions. PMID:23086528
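
    As a rough illustration (not the authors' CABC code, whose exact equation may differ), a modified search equation of the general form v_j = x_{r1,j} + φ·(x_{r1,j} − x_{r2,j}), which perturbs around a randomly chosen solution rather than the current one, can be dropped into a bare-bones ABC-style loop:

```python
import random

def sphere(x):                      # benchmark objective: f(x) = sum of x_i^2
    return sum(v * v for v in x)

def abc_style_search(f, dim=5, pop=20, iters=200, seed=1):
    rng = random.Random(seed)
    colony = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(colony, key=f)
    for _ in range(iters):
        for i in range(pop):
            r1, r2 = rng.sample(range(pop), 2)
            j = rng.randrange(dim)
            cand = colony[i][:]
            # modified search equation: perturb around solution r1, not colony[i]
            cand[j] = colony[r1][j] + rng.uniform(-1, 1) * (colony[r1][j] - colony[r2][j])
            if f(cand) < f(colony[i]):          # greedy selection
                colony[i] = cand
        best = min(best, min(colony, key=f), key=f)
    return best

best = abc_style_search(sphere)
print(sphere(best))   # far below a random starting value
```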

  11. Gender Equality or Primacy of the Mother? Ambivalent Descriptions of Good Parents

    ERIC Educational Resources Information Center

    Perl-Littunen, Satu

    2007-01-01

    The ideology of gender equality is accepted as the norm in the Nordic countries. When asked to describe what they thought was required to be a good mother and a good father, Finnish informants (N = 387) showed uneasiness in describing good parents separately, however, often describing only a good mother. This article aims to explore the ambivalent…

  12. A Parallel Newton-Krylov-Schur Algorithm for the Reynolds-Averaged Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Osusky, Michal

    Aerodynamic shape optimization and multidisciplinary optimization algorithms have the potential not only to improve conventional aircraft, but also to enable the design of novel configurations. By their very nature, these algorithms generate and analyze a large number of unique shapes, resulting in high computational costs. In order to improve their efficiency and enable their use in the early stages of the design process, a fast and robust flow solution algorithm is necessary. This thesis presents an efficient parallel Newton-Krylov-Schur flow solution algorithm for the three-dimensional Navier-Stokes equations coupled with the Spalart-Allmaras one-equation turbulence model. The algorithm employs second-order summation-by-parts (SBP) operators on multi-block structured grids with simultaneous approximation terms (SATs) to enforce block interface coupling and boundary conditions. The discrete equations are solved iteratively with an inexact-Newton method, while the linear system at each Newton iteration is solved using the flexible Krylov subspace iterative method GMRES with an approximate-Schur parallel preconditioner. The algorithm is thoroughly verified and validated, highlighting the correspondence of the current algorithm with several established flow solvers. The solution for a transonic flow over a wing on a mesh of medium density (15 million nodes) shows good agreement with experimental results. Using 128 processors, deep convergence is obtained in under 90 minutes. The solution of transonic flow over the Common Research Model wing-body geometry with grids with up to 150 million nodes exhibits the expected grid convergence behavior. This case was completed as part of the Fifth AIAA Drag Prediction Workshop, with the algorithm producing solutions that compare favourably with several widely used flow solvers. The algorithm is shown to scale well on over 6000 processors. 
The results demonstrate the effectiveness of the SBP-SAT spatial discretization, which can
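
    The core trick behind Jacobian-free Newton-Krylov solvers of this kind is that the Krylov method (here GMRES) only ever needs Jacobian-vector products, which can be approximated by finite differences of the residual. A toy two-equation sketch of that product (not the thesis solver):

```python
def F(x):
    """Toy nonlinear residual F(x) = 0 with two unknowns."""
    return [x[0] ** 2 + x[1] - 3.0,
            x[0] + x[1] ** 2 - 5.0]

def jacobian_vector(F, x, v, eps=1e-7):
    """Matrix-free J(x)·v via a one-sided finite difference."""
    fx = F(x)
    fxe = F([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(fxe, fx)]

x = [1.0, 2.0]
v = [1.0, 0.0]
jv = jacobian_vector(F, x, v)
# Analytic Jacobian at x is [[2*x0, 1], [1, 2*x1]], so J·v = [2.0, 1.0]
print([round(c, 4) for c in jv])  # → [2.0, 1.0]
```

    Because the Jacobian is never formed explicitly, memory and setup costs stay low even for the very large systems that arise from three-dimensional RANS discretizations.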

  13. A new frame-based registration algorithm.

    PubMed

    Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834

  14. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.

  15. Good soldiers and good actors: prosocial and impression management motives as interactive predictors of affiliative citizenship behaviors.

    PubMed

    Grant, Adam M; Mayer, David M

    2009-07-01

    Researchers have discovered inconsistent relationships between prosocial motives and citizenship behaviors. We draw on impression management theory to propose that impression management motives strengthen the association between prosocial motives and affiliative citizenship by encouraging employees to express citizenship in ways that both "do good" and "look good." We report 2 studies that examine the interactions of prosocial and impression management motives as predictors of affiliative citizenship using multisource data from 2 different field samples. Across the 2 studies, we find positive interactions between prosocial and impression management motives as predictors of affiliative citizenship behaviors directed toward other people (helping and courtesy) and the organization (initiative). Study 2 also shows that only prosocial motives predict voice, a challenging citizenship behavior. Our results suggest that employees who are both good soldiers and good actors are most likely to emerge as good citizens in promoting the status quo. PMID:19594233

  16. Classification of urban vegetation patterns from hyperspectral imagery: hybrid algorithm based on genetic algorithm tuned fuzzy support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Mandi; Shu, Jiong; Chen, Zhigang; Ji, Minhe

    2012-11-01

    Hyperspectral imagery has been widely used in terrain classification for its high spectral resolution. Urban vegetation, an essential part of the urban ecosystem, can be difficult to discern due to the high similarity of spectral signatures among some land-cover classes. In this paper, we investigate a hybrid approach, the genetic-algorithm tuned fuzzy support vector machine (GA-FSVM), and apply it to urban vegetation classification from aerial hyperspectral urban imagery. The approach adopts the genetic algorithm to optimize the parameters of the support vector machine, and employs the K-nearest neighbor algorithm to calculate the membership function for each fuzzy parameter, aiming to reduce the effects of isolated and noisy samples. Test data come from a push-broom hyperspectral imager (PHI) remote sensing image partially covering a corner of the Shanghai World Exposition Park; PHI is a hyperspectral sensor developed by the Shanghai Institute of Technical Physics. Experimental results show the GA-FSVM model generates an overall accuracy of 71.2%, outperforming the maximum likelihood classifier with 49.4% accuracy and the artificial neural network method with 60.8% accuracy. This indicates that GA-FSVM is a promising model for vegetation classification from hyperspectral urban data, with a clear advantage in classification problems involving abundant mixed pixels and small samples.
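
    The GA tuning component can be sketched generically. In this sketch the SVM cross-validation accuracy is replaced by a hypothetical stand-in fitness function, since the point is the evolve-and-select loop over the parameter pair (C, gamma), not the classifier itself:

```python
import random

def fitness(C, gamma):
    """Stand-in for SVM cross-validation accuracy; peaks near C=10, gamma=0.1."""
    return -((C - 10.0) ** 2 / 100.0 + (gamma - 0.1) ** 2 * 100.0)

def ga_tune(generations=40, pop_size=12, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 100.0), rng.uniform(0.001, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[:pop_size // 2]                 # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            (c1, g1), (c2, g2) = rng.sample(parents, 2)
            c = 0.5 * (c1 + c2) * rng.uniform(0.9, 1.1)   # crossover + mutation
            g = 0.5 * (g1 + g2) * rng.uniform(0.9, 1.1)
            children.append((c, g))
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

C, gamma = ga_tune()
print(round(C, 1), round(gamma, 2))
```

    In the actual hybrid, each fitness evaluation would train and cross-validate a fuzzy SVM with the candidate (C, gamma), which is why keeping the population small matters.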

  17. FITEST: A computer program for "exact chi-square" goodness-of-fit significance tests

    NASA Astrophysics Data System (ADS)

    Romesburg, H. Charles; Marshall, Kim; Mauk, Timothy P.

    FITEST, a FORTRAN IV computer program, performs what is termed an exact chi-square test (ECST) to assess the goodness-of-fit between an observed and a theoretical distribution. This test is an alternative to the chi-square and Kolmogorov-Smirnov goodness-of-fit tests. Because it is based on less restrictive assumptions, the ECST may be more appropriate. However, the test imposes a computational burden which, if not handled by an efficiently designed computer algorithm, makes it prohibitively expensive on all but trivial problems. FITEST, through an efficiently designed algorithm, makes an ECST possible for any problem at a reasonable cost.
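
    The ECST idea, enumerating every possible outcome of the multinomial experiment and summing the probabilities of outcomes no more likely than the observed one, can be sketched for small problems (FITEST itself is FORTRAN IV and far more efficient):

```python
from math import factorial

def multinomial_pmf(counts, probs):
    """Probability of an exact multinomial outcome."""
    n = sum(counts)
    coef = factorial(n)
    for c in counts:
        coef //= factorial(c)
    p = float(coef)
    for c, q in zip(counts, probs):
        p *= q ** c
    return p

def compositions(n, k):
    """All k-tuples of non-negative integers summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def exact_gof_pvalue(observed, probs):
    """P-value = total probability of outcomes at most as probable as observed."""
    p_obs = multinomial_pmf(observed, probs)
    n, k = sum(observed), len(observed)
    return sum(multinomial_pmf(o, probs) for o in compositions(n, k)
               if multinomial_pmf(o, probs) <= p_obs + 1e-12)

# 6 observations over 3 equiprobable cells, all landing in one cell
pv = exact_gof_pvalue([6, 0, 0], [1/3, 1/3, 1/3])
print(pv)  # → 0.00411... (= 3/729: only the three all-in-one-cell outcomes are as extreme)
```

    The enumeration grows combinatorially with sample size and cell count, which is exactly the computational burden the abstract says an efficient algorithm must manage.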

  18. Model of wealth and goods dynamics in a closed market

    NASA Astrophysics Data System (ADS)

    Ausloos, Marcel; Pękalski, Andrzej

    2007-01-01

    A simple computer simulation model of a closed market on a fixed network with free flow of goods and money is introduced. The model contains only two variables, the amount of goods and of money, besides the size of the system. An initially flat distribution of both variables is presupposed. We show that under completely random rules, i.e., random choice of interacting agent pairs on the network and random exchange rules, the market stabilizes in time and shows diversification of money and goods. We also indicate that the difference between poor and rich agents increases for small markets, as well as for systems in which money is steadily drained from the market through taxation. It is also found that the price of goods decreases when taxes are introduced, likely due to the reduced availability of money.
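
    A minimal version of such a closed-market exchange (complete graph rather than a fixed network; the exchange rule here is illustrative, not the paper's) shows the conservation and diversification described in the abstract:

```python
import random

def simulate(agents=50, steps=20000, seed=42):
    rng = random.Random(seed)
    money = [10.0] * agents          # initially flat distributions
    goods = [10.0] * agents
    for _ in range(steps):
        i, j = rng.sample(range(agents), 2)          # random interacting pair
        amount = rng.uniform(0.0, min(goods[i], 1.0))
        price = rng.uniform(0.0, 1.0)
        if money[j] >= amount * price:               # j can afford the purchase
            goods[i] -= amount; goods[j] += amount   # i sells goods to j
            money[j] -= amount * price; money[i] += amount * price
    return money, goods

money, goods = simulate()
# Both totals are conserved in the closed market, yet individual
# holdings diversify away from the initially flat distribution.
print(round(sum(money), 6), round(sum(goods), 6))
```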

  19. How to Set Focal Categories for Brief Implicit Association Test? "Good" Is Good, "Bad" Is Not So Good.

    PubMed

    Shi, Yuanyuan; Cai, Huajian; Shen, Yiqin Alicia; Yang, Jing

    2016-01-01

    Three studies were conducted to examine the validity of the four versions of BIATs that are supposed to measure the same construct but differ in shared focal category. Study 1 investigated the criterion validity of four BIATs measuring attitudes toward flower versus insect. Study 2 examined the experimental sensitivity of four BIATs by considering attitudes toward induced ingroup versus outgroup. Study 3 examined the predictive power of the four BIATs by investigating attitudes toward the commercial beverages Coke versus Sprite. The findings suggested that for the two attributes "good" and "bad," "good" rather than "bad" proved to be good as a shared focal category; for two targets, so long as they clearly differed in goodness or valence, the "good" rather than "bad" target emerged as good for a shared focal category. Beyond this case, either target worked well. These findings may facilitate the understanding of the BIAT and its future applications. PMID:26869948

  20. Immunoscintigraphy with indium-111 labeled monoclonal antibodies: The importance of a good display method

    SciTech Connect

    Liehn, J.C.; Hannequin, P.; Nasca, S.; Lebrun, D.; Fernandez-Valoni, A.; Valeyre, J. )

    1989-03-01

    A major drawback of In-111-labeled monoclonal antibodies (MoAb) is the presence of intense nonspecific liver, renal, and bone marrow activity. This makes the display of the images far from optimal and their visual interpretation difficult. In this study, the intrinsic color scale (which consists of selecting the limits of the color scale as the highest and lowest pixel values of the image) was compared to a new, simple algorithm for determining the limits of the color scale. This algorithm was based on the count density in the iliac crest areas. OC-125 or anti-CEA In-111 MoAb F(ab')2 fragments were used in 32 patients with suspected recurrence of ovarian (19 patients) or colorectal cancer (13 patients). Final diagnosis was established by surgery (21 patients), biopsy (five patients), or follow-up (six patients). A 10-minute abdomino-pelvic anterior view was recorded two days after injection. These views were displayed using the two methods and interpreted by two observers. Using their responses in each quadrant of the pelvis, the authors calculated two ROC curves. The comparison of the ROC curves showed better performance for the new method. For example, at the same specificity (73%), the sensitivity of the new method was significantly better (78% versus 68%). This result confirms the importance of a good methodology for displaying immunoscintigraphic images.
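
    The display algorithm is described only as being based on the count density in the iliac crest areas. A generic sketch of deriving color-scale limits from a reference region (the scale factors k_low and k_high are hypothetical, not the paper's values) might look like:

```python
def color_scale_limits(reference_counts, k_low=0.5, k_high=3.0):
    """Derive display window limits from a reference region's mean count density."""
    mean_ref = sum(reference_counts) / len(reference_counts)
    return k_low * mean_ref, k_high * mean_ref

def to_display(pixel, lo, hi, levels=256):
    """Clamp a pixel count to the window and map it into [0, levels-1]."""
    clamped = min(max(pixel, lo), hi)
    return int((clamped - lo) / (hi - lo) * (levels - 1))

ref = [40, 50, 60]                  # counts sampled over the reference region
lo, hi = color_scale_limits(ref)    # → (25.0, 150.0)
print(to_display(200, lo, hi), to_display(10, lo, hi))  # → 255 0
```

    Anchoring the window to an anatomical reference rather than to the image's own extremes keeps intense nonspecific liver or marrow uptake from compressing the rest of the color scale.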

  1. A new bio-optical algorithm for the remote sensing of algal blooms in complex ocean waters

    NASA Astrophysics Data System (ADS)

    Shanmugam, Palanisamy

    2011-04-01

    A new bio-optical algorithm has been developed to provide accurate assessments of chlorophyll a (Chl a) concentration for detection and mapping of algal blooms from satellite data in optically complex waters, where the presence of suspended sediments and dissolved substances can interfere with phytoplankton signal and thus confound conventional band ratio algorithms. A global data set of concurrent measurements of pigment concentration and radiometric reflectance was compiled and used to develop this algorithm that uses the normalized water-leaving radiance ratios along with an algal bloom index (ABI) between three visible bands to determine Chl a concentrations. The algorithm is derived using Sea-viewing Wide Field-of-view Sensor bands, and it is subsequently tuned to be applicable to Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua data. When compared with large in situ data sets and satellite matchups in a variety of coastal and ocean waters the present algorithm makes good retrievals of the Chl a concentration and shows statistically significant improvement over current global algorithms (e.g., OC3 and OC4v4). An examination of the performance of these algorithms on several MODIS/Aqua images in complex waters of the Arabian Sea and west Florida shelf shows that the new algorithm provides a better means for detecting and differentiating algal blooms from other turbid features, whereas the OC3 algorithm has significant errors although yielding relatively consistent results in clear waters. These findings imply that, provided that an accurate atmospheric correction scheme is available to deal with complex waters, the current MODIS/Aqua, MERIS and OCM data could be extensively used for quantitative and operational monitoring of algal blooms in various regional and global waters.
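
    Band-ratio Chl a algorithms of the OC3/OC4 family share the general form log10(Chl) = polynomial in the log of a blue-to-green reflectance ratio. A sketch with placeholder polynomial coefficients (illustrative only, not the operational OC3 values or the paper's tuned ones):

```python
import math

# Placeholder coefficients -- illustrative only, not the operational
# OC3 values or this paper's tuned coefficients.
A = [0.283, -2.753, 1.457, 0.659, -1.403]

def band_ratio_chl(rrs443, rrs488, rrs547):
    """Maximum-band-ratio style Chl a estimate (mg m^-3) from three Rrs bands."""
    r = math.log10(max(rrs443, rrs488) / rrs547)
    return 10.0 ** sum(a * r ** i for i, a in enumerate(A))

# Clearer water (stronger blue reflectance) should give lower Chl a
clear = band_ratio_chl(0.010, 0.008, 0.003)
green = band_ratio_chl(0.004, 0.005, 0.006)
print(clear < green)  # → True
```

    The paper's contribution is precisely that this ratio alone is ambiguous in sediment- and CDOM-rich waters, which is why it adds an algal bloom index on top of the normalized water-leaving radiance ratios.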

  2. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear, monotonically increasing sequence of integers. The goal in formulating this mapping is to make the length of the generated sequence as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively for a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
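The shift-and-mask subalgorithms described above can be sketched as a brute-force search over shift amounts and mask widths; the names and bounds here are illustrative, not the original code:

```python
def synthesize_hash(keys, max_shift=32, max_mask_bits=16):
    """Find (shift, mask) such that (key >> shift) & mask is unique for
    every key, giving constant-time membership with no collisions."""
    for shift in range(max_shift):
        for bits in range(1, max_mask_bits + 1):
            mask = (1 << bits) - 1
            hashed = {(k >> shift) & mask for k in keys}
            if len(hashed) == len(keys):  # collision-free mapping found
                return shift, mask
    return None  # fall through to the next subalgorithm in the suite

keys = [0x10, 0x24, 0x38, 0x4C]
print(synthesize_hash(keys))  # prints (0, 15)
```

Once a solution is found, a set-membership test reduces to indexing a table of size `mask + 1`, with no probing or secondary hashing.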

  3. Parameter estimation and optimal scheduling algorithm for a mathematical model of intermittent androgen suppression therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Guo, Qian; Lu, Zhichang; Hirata, Yoshito; Aihara, Kazuyuki

    2013-12-01

We propose an algorithm based on the cross-entropy method to determine the parameters of a piecewise linear model describing intermittent androgen suppression therapy for prostate cancer. When compared against clinical data, the parameter estimates for the switched system show good fitting accuracy and efficiency. We further optimize the switching time points of the piecewise linear model to obtain a feasible therapeutic schedule. The simulated therapeutic outcomes are superior to those of the previous strategy.
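The cross-entropy method behind the parameter estimation can be sketched generically: sample candidate parameter vectors from a Gaussian, score them against data, and refit the sampling distribution to the best candidates. This is a generic sketch with an invented toy loss, not the authors' piecewise-linear prostate-cancer model:

```python
import random

def cross_entropy_fit(loss, dim, iters=50, pop=100, elite=10):
    """Minimize `loss` over R^dim by iteratively refitting a Gaussian
    sampling distribution to the elite (lowest-loss) candidates."""
    mu, sigma = [0.0] * dim, [1.0] * dim
    for _ in range(iters):
        cands = [[random.gauss(m, s) for m, s in zip(mu, sigma)]
                 for _ in range(pop)]
        top = sorted(cands, key=loss)[:elite]
        # refit mean and spread to the elite sample
        mu = [sum(c[i] for c in top) / elite for i in range(dim)]
        sigma = [max(1e-6, (sum((c[i] - mu[i]) ** 2 for c in top) / elite) ** 0.5)
                 for i in range(dim)]
    return mu

# Toy check: recover the parameters (2, -1) from a quadratic misfit
random.seed(0)
target = [2.0, -1.0]
est = cross_entropy_fit(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)),
                        dim=2)
```

In the paper's setting, `loss` would instead measure the discrepancy between the switched model's trajectory and the clinical time series.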

  4. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first involved a technique for sub-pixel edge detection and subsequent post-processing to reduce false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential for the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel-thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high-dimensional data. The main difficulty in using SVMs (or any other example-based learning

  5. Quantifying capital goods for waste incineration

    SciTech Connect

    Brogaard, L.K.; Riber, C.; Christensen, T.H.

    2013-06-15

Highlights: • Materials and energy used for the construction of waste incinerators were quantified. • The data was collected from five incineration plants in Scandinavia. • Included were six main materials, electronic systems, cables and all transportation. • The capital goods contributed 2–3% compared to the direct emissions impact on GW. - Abstract: Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7–14 kg CO₂ per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO₂ per tonne of waste combusted.

  6. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  7. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  8. TrackEye tracking algorithm characterization

    NASA Astrophysics Data System (ADS)

    Valley, Michael T.; Shields, Robert W.; Reed, Jack M.

    2004-10-01

    TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.

  9. TrackEye tracking algorithm characterization.

    SciTech Connect

    Reed, Jack W.; Shields, Rob W; Valley, Michael T.

    2004-08-01

    TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.
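Correlation trackers of the kind at the heart of the TrackEye suite locate a template patch in each frame by normalized cross-correlation. A brute-force sketch follows; production trackers use FFT-based correlation and subpixel refinement, and this toy frame and fiducial are invented for illustration:

```python
def ncc_track(frame, template):
    """Locate `template` (2-D list) in `frame` by normalized
    cross-correlation; returns ((row, col), score) with score in [-1, 1]."""
    th, tw = len(template), len(template[0])
    tmean = sum(map(sum, template)) / (th * tw)
    t = [[v - tmean for v in row] for row in template]
    tnorm = sum(v * v for row in t for v in row) ** 0.5
    best, best_pos = -2.0, (0, 0)
    for y in range(len(frame) - th + 1):
        for x in range(len(frame[0]) - tw + 1):
            win = [row[x:x + tw] for row in frame[y:y + th]]
            wmean = sum(map(sum, win)) / (th * tw)
            wnorm = sum((v - wmean) ** 2 for row in win for v in row) ** 0.5
            if wnorm == 0 or tnorm == 0:
                continue  # flat patch: correlation undefined
            score = sum((win[i][j] - wmean) * t[i][j]
                        for i in range(th) for j in range(tw)) / (wnorm * tnorm)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

frame = [[0.0] * 20 for _ in range(20)]
for i in range(4):                      # paint a synthetic 4x4 fiducial
    for j in range(4):
        frame[5 + i][7 + j] = float(4 * i + j)
template = [row[7:11] for row in frame[5:9]]
pos, score = ncc_track(frame, template)
```

The recovered pixel position is then converted to real-world coordinates and rates using the image scale and frame rate, as described in the abstract.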

  10. Cape of Good Hope: Teacher Description and Project Plan.

    ERIC Educational Resources Information Center

    Moyo, Kimya

    1998-01-01

Presents detailed information about the Cape of Good Hope project, in which pairs of students designed capes and cloaks out of garbage bags for a fashion show. Also describes student objectives, unit goals, group activities, products required, and the final show and presentation. (ASK)

  11. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

The relative effectiveness of simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed in an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust occurring as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm under added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control inputs, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and less variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on the straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. Compared to the NASA adaptive algorithm, the augmented turbulence cues on the offset approach increased workload but were deemed more realistic by the pilots. The takeoff with engine failure showed the least roll activity with the nonlinear algorithm and the least rudder pedal activity with the optimal algorithm.

  12. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. First, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2′. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element of M1 and M2′ was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, key distribution was facilitated; moreover, compared with image hiding methods based on optical interference, the proposed method may achieve higher robustness by exploiting the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.

  13. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average performs well when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it has been shown to have better detrending capability, removing higher-order polynomial trends, than the original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of the method. To solve this issue, in this study we introduce a fast algorithm for higher-order DMA that consists of two techniques: (1) parallel translation of the moving-averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm significantly reduces the computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also demonstrated by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
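The recurrence-formula idea can be illustrated with the zeroth-order case: once prefix sums are available, every window sum comes from two lookups, so the cost is O(n) regardless of window size. A sketch follows; higher orders replace the flat average with a local polynomial fit, updated by analogous recurrences:

```python
def centered_moving_average(x, window):
    """O(n) centered moving average via a prefix-sum recurrence, the
    summation trick behind the fast DMA algorithm (zeroth-order case)."""
    assert window % 2 == 1, "centered window must be odd"
    half = window // 2
    prefix = [0.0]
    for v in x:
        prefix.append(prefix[-1] + v)   # prefix[i] = sum of x[:i]
    out = []
    for i in range(half, len(x) - half):
        # window sum from two prefix lookups instead of `window` additions
        out.append((prefix[i + half + 1] - prefix[i - half]) / window)
    return out

print(centered_moving_average([1, 2, 3, 4, 5, 6, 7], 3))
# prints [2.0, 3.0, 4.0, 5.0, 6.0]
```

A naive implementation re-sums each window, giving cost proportional to n times the window size; the prefix-sum form is what makes the overall scaling analysis approximately linear in the data length.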

  14. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of ABC's local search process and its bee-movement (solution-improvement) equation still has some weaknesses. ABC is good at avoiding entrapment in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm, called HPABC, which adapts the particle-movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that HPABC can outperform ABC in most of the experiments (75% better in accuracy and over 3 times faster).
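The two update rules under comparison can be sketched side by side. The standard ABC move perturbs a single dimension toward or away from a random neighbor, while the PSO-style move that HPABC grafts onto ABC's local search pulls the whole vector toward personal and global bests. The coefficient values below are conventional PSO choices, not the paper's:

```python
import random

def abc_update(x, neighbor, j):
    """Standard ABC move: v_ij = x_ij + phi * (x_ij - x_kj), phi in [-1, 1],
    applied to a single dimension j against a random neighbor solution."""
    v = x[:]
    v[j] = x[j] + random.uniform(-1, 1) * (x[j] - neighbor[j])
    return v

def particle_update(x, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """PSO-style move: inertia plus attraction to personal/global bests."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, velocity, pbest, gbest):
        v = (w * vi + c1 * random.random() * (pi - xi)
             + c2 * random.random() * (gi - xi))
        new_v.append(v)
        new_x.append(xi + v)
    return new_x, new_v
```

The contrast makes the hybrid's motivation visible: the ABC move explores blindly around its current position, whereas the particle move exploits the best solutions found so far.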

  15. Self-adaptive parameters in genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA), and this setting remains unchanged during execution. The problem of interest to us here is the self-adaptive adjustment of a GA's parameters. In this research, we propose an approach in which the control of a genetic algorithm's parameters is encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.
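One common realization of the idea is to encode a control parameter, here the mutation rate, as an extra gene that evolves with the chromosome. This is an illustrative encoding, not necessarily the authors' exact scheme:

```python
import math
import random

def mutate(individual):
    """Self-adaptive mutation: the last gene is the individual's own
    mutation rate, perturbed log-normally and then applied to the rest."""
    *genes, rate = individual
    # the rate gene mutates first, then drives mutation of the others
    rate = min(0.5, max(0.001, rate * math.exp(random.gauss(0.0, 0.2))))
    genes = [g + random.gauss(0.0, 1.0) if random.random() < rate else g
             for g in genes]
    return genes + [rate]

random.seed(2)
child = mutate([0.4, -1.3, 2.5, 0.1])  # three solution genes + rate gene
```

Selection then acts on the rates indirectly: individuals whose encoded rate suits the current phase of the search tend to produce fitter offspring, so good settings propagate without user tuning.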

  16. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  17. Smooth transitions between bump rendering algorithms

    SciTech Connect

Becker, B.G.; Max, N.L.

    1993-01-04

A method is described for switching smoothly between rendering algorithms as required by the amount of visible surface detail. The result will be more realism with less computation for displaying objects whose surface detail can be described by one or more bump maps. The three rendering algorithms considered are a bidirectional reflection distribution function (BRDF), bump mapping, and displacement mapping. The bump mapping has been modified to make it consistent with the other two. For a given viewpoint, one of these algorithms will show a better trade-off between quality, computation time, and aliasing than the other two. Thus, it must be determined for any given viewpoint which regions of the object(s) will be rendered with each algorithm. The decision as to which algorithm is appropriate is a function of distance, viewing angle, and the frequency of bumps in the bump map.

  18. Implementation of the phase gradient algorithm

    SciTech Connect

    Wahl, D.E.; Eichel, P.H.; Jakowatz, C.V. Jr.

    1990-01-01

The recently introduced Phase Gradient Autofocus (PGA) algorithm is a non-parametric autofocus technique which has been shown to be quite effective for phase correction of Synthetic Aperture Radar (SAR) imagery. This paper will show that this powerful algorithm can be executed at near real-time speeds and also be implemented in a relatively small piece of hardware. A brief review of the PGA will be presented along with an overview of some critical implementation considerations. In addition, a demonstration of the PGA algorithm running on a 7 in. × 10 in. printed circuit board containing a TMS320C30 digital signal processing (DSP) chip will be given. With this system, using only the 20 range bins which contain the brightest points in the image, the algorithm can correct a badly degraded 256 × 256 image in as little as 3 seconds. Using all range bins, the algorithm can correct the image in 9 seconds. 4 refs., 2 figs.

  19. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality

    PubMed Central

    Wang, Xueyi

    2011-01-01

The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds the nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high-dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the cluster nearest to the query object and uses the triangle inequality to reduce the number of distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction in distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree-based k-NN algorithm for all datasets and better than a ball-tree-based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high-dimensional spaces. PMID:22247818
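The triangle-inequality pruning at the core of kMkNN can be sketched for the 1-nearest-neighbor case. In the buildup stage, kMkNN would precompute each training point's distance to its cluster center; this simplified sketch recomputes those distances for brevity:

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def kmeans(points, k, iters=10):
    """Plain k-means used as the preprocessing (buildup) stage."""
    centers = random.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [[sum(c) / len(b) for c in zip(*b)] if b else centers[i]
                   for i, b in enumerate(buckets)]
    return centers, buckets

def nn_search(query, centers, buckets):
    """1-NN search: visit clusters nearest-center first; skip any point p
    with |d(q, c) - d(p, c)| >= best, since then d(q, p) >= best."""
    order = sorted(range(len(centers)), key=lambda i: dist(query, centers[i]))
    best, best_p = float("inf"), None
    for i in order:
        dqc = dist(query, centers[i])
        for p in buckets[i]:
            if abs(dqc - dist(p, centers[i])) >= best:
                continue  # pruned without computing d(query, p)
            d = dist(query, p)
            if d < best:
                best, best_p = d, p
    return best_p, best
```

Because the pruning bound follows exactly from the triangle inequality, the search returns the same neighbor as a brute-force scan, only with fewer full distance computations.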

  20. Comparison of update algorithms for pure Gauge SU(3)

    SciTech Connect

    Gupta, R.; Kilcup, G.W.; Patel, A.; Sharpe, S.R.; Deforcrand, P.

    1988-10-01

    The authors show that the overrelaxed algorithm of Creutz and of Brown and Woch is the optimal local update algorithm for simulation of pure gauge SU(3). The authors' comparison criterion includes computer efficiency and decorrelation times. They also investigate the rate of decorrelation for the Hybrid Monte Carlo algorithm.

  1. Good veterinary governance: definition, measurement and challenges.

    PubMed

    Msellati, L; Commault, J; Dehove, A

    2012-08-01

Good veterinary governance assumes the provision of veterinary services that are sustainably financed, universally available, and provided efficiently without waste or duplication, in a manner that is transparent and free of fraud or corruption. Good veterinary governance is a necessary condition for sustainable economic development insomuch as it promotes the effective delivery of services and improves the overall performance of animal health systems. This article defines governance in Veterinary Services and proposes a framework for its measurement. It also discusses the role of Veterinary Services and analyses the governance dimensions of the performance-assessment tools developed by the World Organisation for Animal Health (OIE). These tools (OIE PVS Tool and PVS Gap Analysis) track the performance of Veterinary Services across countries (a harmonised tool) and over time (the PVS Pathway). The article shows the usefulness of the OIE PVS Tool for measuring governance, but also points to two shortcomings, namely (i) the lack of clear outcome indicators, which is an impediment to a comprehensive assessment of the performance of Veterinary Services, and (ii) the lack of specific measures for assessing the extent of corruption within Veterinary Services and the extent to which demand for better governance is being strengthened within the animal health system. A discussion follows on the drivers of corruption and instruments for perception-based assessments of country governance and corruption. Similarly, the article introduces the concept of social accountability, which is an approach to enhancing government transparency and accountability, and shows how supply-side and demand-side mechanisms complement each other in improving the governance of service delivery. It further elaborates on two instruments (citizen report card surveys and grievance redress mechanisms) because of their wider relevance and their possible applications in many settings, including Veterinary

  2. Random Volumetric MRI Trajectories via Genetic Algorithms

    PubMed Central

    Curtis, Andrew Thomas; Anand, Christopher Kumar

    2008-01-01

A pseudorandom, velocity-insensitive, volumetric k-space sampling trajectory is designed for use with balanced steady-state magnetic resonance imaging. Individual arcs are designed independently and do not fit together in the way that multishot spiral, radial, or echo-planar trajectories do. Previously, it was shown that second-order cone optimization problems can be defined for each arc independent of the others, that nulling of zeroth and higher moments can be encoded as constraints, and that individual arcs can be optimized in seconds. For use in steady-state imaging, sampling duty cycles are predicted to exceed 95 percent. Using such pseudorandom trajectories, aliasing caused by under-sampling manifests itself as incoherent noise. In this paper, a genetic algorithm (GA) is formulated and numerically evaluated. A large set of arcs is designed using previous methods, and the GA chooses particularly fit subsets of a given size, corresponding to a desired acquisition time. Numerical simulations of 1-second acquisitions show good detail and acceptable noise for large-volume imaging with 32 coils. PMID:18604305
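The GA's job, choosing a fit fixed-size subset from a large pool of precomputed arcs, can be sketched with generic subset operators. The fitness function below is a stand-in (in the paper it would score properties of the combined sampling pattern), and all names are illustrative:

```python
import random

def ga_select_subset(arcs, size, fitness, gens=100, pop=30, pmut=0.2):
    """Evolve a size-`size` index subset of `arcs` minimizing `fitness`.
    Generic subset GA: union-sample crossover, single-swap mutation,
    and truncation selection with elitism."""
    def rand_ind():
        return random.sample(range(len(arcs)), size)
    def crossover(a, b):
        return random.sample(list(set(a) | set(b)), size)
    def mutate(ind):
        out = ind[:]
        if random.random() < pmut:
            unused = [j for j in range(len(arcs)) if j not in out]
            if unused:
                out[random.randrange(size)] = random.choice(unused)
        return out
    popn = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        keep = popn[:pop // 2]  # elitism: best half survives unchanged
        popn = keep + [mutate(crossover(*random.sample(keep, 2)))
                       for _ in range(pop - len(keep))]
    return min(popn, key=fitness)

# Toy stand-in fitness: prefer 5 arc indices whose sum is near 50
random.seed(0)
best = ga_select_subset(list(range(20)), 5, lambda s: abs(sum(s) - 50))
```

Swapping the toy fitness for a measure of aliasing incoherence over the subset's combined k-space coverage recovers the structure of the paper's method.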

  3. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  4. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is given for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated

  5. Online Planning Algorithms for POMDPs

    PubMed Central

    Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim

    2009-01-01

    Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, due to their computational complexity, solving POMDPs is often intractable except for small problems. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties, and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080
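
    The lookahead idea can be sketched on the classic two-state "tiger" POMDP: expand beliefs to a fixed depth, weight each observation branch by its likelihood, and back up expected rewards. This is a minimal illustration with assumed parameters (0.95 discount, 85% listening accuracy), not any of the surveyed algorithms:

```python
GAMMA = 0.95                      # assumed discount factor
S = ("TL", "TR")                  # tiger behind left / right door
A = ("listen", "open_left", "open_right")
Z = ("hear_left", "hear_right")

def T(s, a):
    """Transition distribution: listening leaves the state unchanged;
    opening a door resets the problem to a uniform state."""
    if a == "listen":
        return {s: 1.0}
    return {"TL": 0.5, "TR": 0.5}

def O(s2, a):
    """Observation distribution: listening is 85% accurate (assumed);
    observations after opening a door are uninformative."""
    if a == "listen":
        p = 0.85 if s2 == "TL" else 0.15
        return {"hear_left": p, "hear_right": 1.0 - p}
    return {"hear_left": 0.5, "hear_right": 0.5}

def R(s, a):
    if a == "listen":
        return -1.0
    tiger = "TL" if a == "open_left" else "TR"
    return -100.0 if s == tiger else 10.0

def belief_update(b, a, o):
    """Bayes filter step; also returns P(o | b, a) for branch weighting."""
    nb = {s2: O(s2, a)[o] * sum(b[s] * T(s, a).get(s2, 0.0) for s in S)
          for s2 in S}
    norm = sum(nb.values())
    return {s: p / norm for s, p in nb.items()}, norm

def lookahead(b, depth):
    """Depth-limited expectimax over beliefs: returns (value, best action)."""
    if depth == 0:
        return 0.0, None
    best_v, best_a = float("-inf"), None
    for a in A:
        v = sum(b[s] * R(s, a) for s in S)        # expected immediate reward
        for o in Z:
            nb, p_o = belief_update(b, a, o)
            if p_o > 0.0:
                v += GAMMA * p_o * lookahead(nb, depth - 1)[0]
        if v > best_v:
            best_v, best_a = v, a
    return best_v, best_a
```

    From the uniform belief, a depth-2 search picks "listen" (value about -1.95), since opening a door without information has expected reward -45.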

  6. Ab initio study on (CO2)n clusters via electrostatics- and molecular tailoring-based algorithm

    NASA Astrophysics Data System (ADS)

    Jovan Jose, K. V.; Gadre, Shridhar R.

    An algorithm based on the molecular electrostatic potential (MESP) and the molecular tailoring approach (MTA) for building energetically favorable molecular clusters is presented. This algorithm is tested on prototype (CO2)n clusters with n = 13, 20, and 25 to explore their structure, energetics, and properties. The most stable clusters in this series are seen to contain a larger number of triangular motifs. Many-body energy decomposition analysis performed on the most stable clusters reveals that the 2-body term is the major contributor (>96%) to the total interaction energy. Vibrational frequencies and molecular electrostatic potentials are also evaluated for these large clusters through MTA. The MTA-based MESPs of these clusters show remarkably good agreement with the corresponding actual ones. The most intense MTA-based normal mode frequencies are in fair agreement with the actual ones for smaller clusters. These calculated asymmetric stretching frequencies are blue-shifted with reference to the CO2 monomer.

  7. A fast high-order finite difference algorithm for pricing American options

    NASA Astrophysics Data System (ADS)

    Tangman, D. Y.; Gopaul, A.; Bhuruth, M.

    2008-12-01

    We describe an improvement of Han and Wu's algorithm [H. Han, X. Wu, A fast numerical method for the Black-Scholes equation of American options, SIAM J. Numer. Anal. 41 (6) (2003) 2081-2095] for American options. A high-order optimal compact scheme is used to discretise the transformed Black-Scholes PDE under a singularity-separating framework. A more accurate free boundary location based on the smooth pasting condition and the use of a non-uniform grid with a modified tridiagonal solver lead to an efficient implementation of the free boundary value problem. Extensive numerical experiments show that the new finite difference algorithm converges rapidly and that numerical solutions with good accuracy are obtained. Comparisons with some recently proposed methods for the American options problem are carried out to show the advantage of our numerical method.
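
    As a basic illustration of the free-boundary treatment (not the high-order compact scheme of the paper), an explicit finite-difference scheme can price an American put by projecting each time step onto the payoff. Grid sizes and market parameters below are illustrative assumptions:

```python
def american_put_fd(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                    m=150, n=2000):
    """Price an American put with an explicit finite-difference scheme,
    enforcing the early-exercise constraint by pointwise projection."""
    s_max = 3.0 * k                       # truncate the asset-price domain
    ds, dt = s_max / m, t / n
    v = [max(k - i * ds, 0.0) for i in range(m + 1)]   # payoff at expiry
    for _ in range(n):                    # march backwards in time
        new = v[:]
        for i in range(1, m):
            s = i * ds
            delta = (v[i + 1] - v[i - 1]) / (2.0 * ds)
            gamma = (v[i + 1] - 2.0 * v[i] + v[i - 1]) / ds ** 2
            new[i] = v[i] + dt * (0.5 * sigma ** 2 * s * s * gamma
                                  + r * s * delta - r * v[i])
            new[i] = max(new[i], k - s)   # project onto the payoff
        new[0], new[m] = k, 0.0           # boundary values for a put
        v = new
    i = int(s0 / ds)                      # linear interpolation at s0
    w = s0 / ds - i
    return (1.0 - w) * v[i] + w * v[i + 1]
```

    For s0 = k = 100, r = 5%, sigma = 20%, and one year to expiry, this lands near the commonly quoted reference value of about 6.09 (versus roughly 5.57 for the corresponding European put); the explicit scheme needs the small time step used here for stability, which is exactly the inefficiency the compact implicit scheme of the paper avoids.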

  8. Detection of Cheating by Decimation Algorithm

    NASA Astrophysics Data System (ADS)

    Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien

    2015-02-01

    We expand item response theory to study the case of "cheating students" in a set of exams, trying to detect them by applying a greedy algorithm of inference. This extended model is closely related to Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model by considering a relatively small number of sets of training data. Nevertheless, the greedy algorithm that we employed in the present study exhibits good performance even with few training data. The key point is the sparseness of the interactions in our problem in the context of Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach to inferring the sparse interactions in Boltzmann machine learning to our greedy algorithm and find the latter to be superior in several aspects.

  9. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second best performing algorithm configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41%, and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments remain problematic for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
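
    A sigma-type jump test of the kind described can be sketched in a few lines; the 2-minute sampling interval and the 10 flashes/min activation threshold below are assumptions of this sketch, not details taken from the record:

```python
from statistics import pstdev

def lightning_jump_2sigma(flash_rates, min_rate=10.0):
    """Flag a lightning jump when the latest change in total flash rate
    exceeds twice the standard deviation of the preceding changes.

    flash_rates: total flash rate (flashes/min), one value per assumed
    2-minute period; min_rate is an assumed activation threshold.
    """
    if len(flash_rates) < 7 or flash_rates[-1] < min_rate:
        return False                         # too little history, or storm too weak
    dfrdt = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
    history, latest = dfrdt[:-1], dfrdt[-1]
    return latest > 2.0 * pstdev(history)    # the "2 sigma" criterion
```

    A steady storm producing 12-14 flashes/min is not flagged, while a sudden increase to 30 flashes/min trips the threshold.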

  10. 31 CFR 575.414 - Imports of Iraqi goods and purchases of goods from Iraq.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Imports of Iraqi goods and purchases of goods from Iraq. 575.414 Section 575.414 Money and Finance: Treasury Regulations Relating to Money... REGULATIONS Interpretations § 575.414 Imports of Iraqi goods and purchases of goods from Iraq....

  11. 19 CFR 10.605 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Goods classifiable as goods put up in sets. 10.605... put up in sets. Notwithstanding the specific rules set forth in General Note 29(n), HTSUS, goods classifiable as goods put up in sets for retail sale as provided for in General Rule of Interpretation 3,...

  12. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive the required quantity of product at each visit, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011) [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems. PMID:24748696
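
    The set-covering view of routing can be illustrated with a toy greedy heuristic over precomputed routes (the "columns"): each route covers a set of customers at a cost, and routes are selected until every customer is served. The paper's actual method solves the LP relaxation by column generation, which this sketch does not attempt; it assumes every customer appears in at least one route:

```python
def greedy_set_cover(customers, routes):
    """Toy set-covering heuristic: repeatedly pick the route with the
    lowest cost per newly covered customer until all are served.

    routes: dict mapping route name -> (cost, set of customers served).
    """
    uncovered = set(customers)
    chosen, total = [], 0.0
    while uncovered:
        name, (cost, served) = min(
            ((n, rc) for n, rc in routes.items() if rc[1] & uncovered),
            key=lambda item: item[1][0] / len(item[1][1] & uncovered))
        chosen.append(name)               # commit the cheapest-per-customer route
        total += cost
        uncovered -= served
    return chosen, total
```

    For example, with four customers and a single route covering all of them at cost 5 competing against partial routes at cost 3 each, the greedy ratio test selects the single full route.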

  13. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and it can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
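
    The constant time-to-collision descent law can be sketched in a few lines: commanding vertical speed proportional to height keeps time-to-contact fixed, so height decays exponentially, while horizontal speed is scaled down linearly with height so both shrink together. All parameters here are illustrative, not taken from the record:

```python
def simulate_tau_landing(h0=10.0, vx0=5.0, tau=2.0, dt=0.01, h_land=0.05):
    """Simulate a constant time-to-contact descent.

    Commanding v = -h / tau keeps time-to-collision h / |v| fixed at tau,
    so the height decays exponentially; the horizontal speed is reduced
    linearly with height, giving a smooth touchdown.
    """
    h, t = h0, 0.0
    vx = vx0
    while h > h_land:
        vx = vx0 * (h / h0)      # horizontal speed proportional to height
        h += (-h / tau) * dt     # vertical speed command v = -h / tau
        t += dt
    return h, vx, t
```

    Starting from 10 m altitude with tau = 2 s, the vehicle reaches a 5 cm landing threshold in roughly tau * ln(h0 / h_land), about 10.6 s, with the horizontal speed reduced by the same factor as the height.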

  14. Terascale spectral element algorithms and implementations.

    SciTech Connect

    Fischer, P. F.; Tufo, H. M.

    1999-08-17

    We describe the development and implementation of an efficient spectral element code for multimillion gridpoint simulations of incompressible flows in general two- and three-dimensional domains. We review basic and recently developed algorithmic underpinnings that have resulted in good parallel and vector performance on a broad range of architectures, including the terascale computing systems now coming online at the DOE labs. Sustained performance of 219 GFLOPS has been recently achieved on 2048 nodes of the Intel ASCI-Red machine at Sandia.

  15. Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories

    NASA Technical Reports Server (NTRS)

    Burchett, Bradley T.

    2003-01-01

    The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three degree of freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.

  16. Radiation Hormesis: The Good, the Bad, and the Ugly

    PubMed Central

    Luckey, T.D.

    2006-01-01

    Three aspects of hormesis with low doses of ionizing radiation are presented: the good, the bad, and the ugly. The good is acceptance by France, Japan, and China of the thousands of studies showing stimulation and/or benefit, with no harm, from low dose irradiation. This includes thousands of people who live in good health with high background radiation. The bad is the nonacceptance of radiation hormesis by the U. S. and most other governments; their linear no threshold (LNT) concept promulgates fear of all radiation and produces laws which have no basis in mammalian physiology. The LNT concept leads to poor health, unreasonable medicine and oppressed industries. The ugly is decades of deception by medical and radiation committees which refuse to consider valid evidence of radiation hormesis in cancer, other diseases, and health. Specific examples are provided for the good, the bad, and the ugly in radiation hormesis. PMID:18648595

  17. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qbits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qbit may repeatedly be cooled without adding additional qbits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  18. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  19. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge.

    PubMed

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip Eddie; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-02-01

    -atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598

  20. 29 CFR 779.107 - Goods defined.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Goods defined. 779.107 Section 779.107 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR INTERPRETATION NOT DIRECTLY RELATED TO REGULATIONS THE FAIR LABOR STANDARDS ACT AS APPLIED TO RETAILERS OF GOODS OR SERVICES Employment to...

  1. How To Achieve Good Library Acoustics.

    ERIC Educational Resources Information Center

    Wiens, Janet

    2003-01-01

    Discusses how to create a good acoustical environment for college libraries, focusing on requirements related to the HVAC system and lighting, and noting the importance of good maintenance. A sidebar looks at how to design and achieve the most appropriate HVAC and lighting systems for optimum library acoustics. (SM)

  2. Static and evolutionary quantum public goods games

    NASA Astrophysics Data System (ADS)

    Liao, Zeyang; Qin, Gan; Hu, Lingzhi; Li, Songjian; Xu, Nanyang; Du, Jiangfeng

    2008-05-01

    We apply the continuous-variable quantization scheme to quantize public goods game and find that new pure strategy Nash equilibria emerge in the static case. Furthermore, in the evolutionary public goods game, entanglement can also contribute to the persistence of cooperation under various population structures without altruism, voluntary participation, and punishment.

  3. Student View: What Do Good Teachers Do?

    ERIC Educational Resources Information Center

    Educational Horizons, 2012

    2012-01-01

    Students know what good teaching looks like--but educators rarely ask them. See what these high school students, who are members of the Future Educators Association[R] and want to be teachers themselves, said. FEA is a part of the PDK family of education associations, which includes Pi Lambda Theta. Get insider advice on good teaching from some…

  4. 19 CFR 10.2014 - Originating goods.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Originating goods. 10.2014 Section 10.2014 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... Rules of Origin § 10.2014 Originating goods. Except as otherwise provided in this subpart and...

  5. What a Good Accompanist Needs to Know

    ERIC Educational Resources Information Center

    Lamb, Christina G.

    2006-01-01

    A good accompanist is a choir director's dream come true! At one time, student pianists were quite plentiful, but they have become rarer today. While many student pianists may be capable of reading music, not all can be good accompanists. Accompanying is an art of its own that requires some unique capabilities. In this article, the author presents…

  6. Rationality and the Logic of Good Reasons.

    ERIC Educational Resources Information Center

    Fisher, Walter R.

    This paper contends that the rationality of the logic of good reasons is constituted in its use. To support this claim, the paper presents an analysis of the relationship between being reasonable and being rational. It then considers how following the logic of good reasons leads to rationality in the behavior of individuals and groups; the latter…

  7. Elderly Consumers and the Used Goods Economy.

    ERIC Educational Resources Information Center

    Dobbs, Ralph C.

    A study examined the used goods market as it affects older adults. A set of open-ended questions was administered to 100 respondents over sixty years of age who were either retired or near retirement, married or widowed, and suburban or rural. Interviews were conducted to determine the effects of the used goods market on the elderly consumer, to…

  8. Integrating Education: Parekhian Multiculturalism and Good Practice

    ERIC Educational Resources Information Center

    McGlynn, Claire

    2009-01-01

    This paper explores the concept of good practice in integrating education in divided societies. Using Northern Ireland as a case study, the paper draws on data from eight schools (both integrated Catholic and Protestant, and separate) that are identified as exemplifying good practice in response to cultural diversity. Analysis is provided through…

  9. Toward an Aristotelian Conception of Good Listening

    ERIC Educational Resources Information Center

    Rice, Suzanne

    2011-01-01

    In this essay Suzanne Rice examines Aristotle's ideas about virtue, character, and education as elements in an Aristotelian conception of good listening. Rice begins by surveying of several different contexts in which listening typically occurs, using this information to introduce the argument that what should count as "good listening" must be…

  10. 19 CFR 10.303 - Originating goods.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Originating goods. 10.303 Section 10.303 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY ARTICLES CONDITIONALLY FREE, SUBJECT TO A REDUCED RATE, ETC. United States-Canada Free Trade Agreement § 10.303 Originating goods. (a) General....

  11. 19 CFR 10.594 - Originating goods.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... States Free Trade Agreement Rules of Origin § 10.594 Originating goods. Except as otherwise provided in this subpart and General Note 29(m), HTSUS, a good imported into the customs territory of the...

  12. 19 CFR 10.594 - Originating goods.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... States Free Trade Agreement Rules of Origin § 10.594 Originating goods. Except as otherwise provided in this subpart and General Note 29(m), HTSUS, a good imported into the customs territory of the...

  13. 19 CFR 10.594 - Originating goods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... States Free Trade Agreement Rules of Origin § 10.594 Originating goods. Except as otherwise provided in this subpart and General Note 29(m), HTSUS, a good imported into the customs territory of the...

  14. 19 CFR 10.594 - Originating goods.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... States Free Trade Agreement Rules of Origin § 10.594 Originating goods. Except as otherwise provided in this subpart and General Note 29(m), HTSUS, a good imported into the customs territory of the...

  15. Paleolithic Counseling - The Good Old Days.

    ERIC Educational Resources Information Center

    King, Paul T.

    This paper outlines what clients were like in the "Good Ol' Days", as compared with what they are like now. Formerly clients appeared to come in with a plethora of ego energy, while now it seems more like a depletion. Explicit in our culture now is the idea that it is almost healthy and good to publicize one's private experience. Some of…

  16. Timeless Rules for Good Instruction. Training Classics.

    ERIC Educational Resources Information Center

    Zemke, Ron; Armstrong, Judy

    1997-01-01

    Summarizes a 1964 book on programmed instruction, "Good Frames and Bad" by Susan Meyer Markle, which presents still-valid principles for instructional programmers: subject matter sophistication; communication, behavior analysis, and diagnostic skills; and commonsense rules for writing good instructional frames (today's computer screens). (SK)

  17. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J.; Fink, L.H.; Balu, N.J.; Crim, H.G.

    1989-01-01

    This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.

  18. Quantifying capital goods for waste incineration.

    PubMed

    Brogaard, L K; Riber, C; Christensen, T H

    2013-06-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000-240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000-26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000-5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7-14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2-3% with respect to kg CO2 per tonne of waste combusted. PMID:23561797

  19. Developing a game plan for good sportsmanship.

    PubMed

    Lodl, Kathleen

    2005-01-01

    It is widely believed in the United States that competition is beneficial for youngsters. However, the media are full of examples of players, fans, and coaches whose behavior veers out of control. There have been well-documented examples of youth in livestock competitions illegally medicating show animals to make them appear calmer, officials biasing their rulings toward a team that will take the most fans to a playoff game, and team rivalries that have become so caustic as to be dangerous for competitors and fans. A university extension and its partners created a program called "Great Fans. Great Sports." in order to teach the kinds of behaviors we wish to instill among all who are involved in competitions. It requires entire communities to develop and implement plans for enhancing sportsmanship in music, debate, drama, 4-H, and other arenas, as well as sports. The goal is to make good sportsmanship not the exception but the norm. The authors provide anecdotal evidence that "Great Fans. Great Sports." is having a positive impact on the attitudes and behaviors of competitors, fans, and communities. PMID:16570883

  20. How to pick a good fight.

    PubMed

    Joni, Saj-nicole A; Beyer, Damon

    2009-12-01

    Peace and harmony are overrated. Though conflict-free teamwork is often held up as the be-all and end-all of organizational life, it actually can be the worst thing to ever happen to a company. Look at Lehman Brothers. When Dick Fuld took over, he transformed a notoriously contentious workplace into one of Wall Street's most harmonious firms. But his efforts backfired: directors and managers became too agreeable, afraid to rock the boat by pointing out that the firm was heading into a crisis. Research shows that the single greatest predictor of poor company performance is complacency, which is why every organization needs a healthy dose of dissent. Not all kinds of conflict are productive, of course; companies need to find the right balance of alignment and competition and make sure that people's energies are pointed in a positive direction. In this article, two seasoned business advisers lay down ground rules for the right kinds of fights. First, the stakes must be worthwhile: the issue should involve a noble purpose or create noticeable, preferably game-changing, value. Next, good fights focus on the future; they're never about placing blame for the past. And it's critical for leaders to keep fights sportsmanlike, allow informal give-and-take in the trenches, and help soften the blow for the losing parties. PMID:19968056

  1. Learning to Show You're Listening

    ERIC Educational Resources Information Center

    Ward, Nigel G.; Escalante, Rafael; Al Bayyari, Yaffa; Solorio, Thamar

    2007-01-01

    Good listeners generally produce back-channel feedback, that is, short utterances such as "uh-huh" which signal active listening. As the rules governing back-channeling vary from language to language, second-language learners may need help acquiring this skill. This paper is an initial exploration of how to provide this. It presents a training…

  2. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on Pareto front. Infeasible individuals nearby feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA. PMID:25006591

  3. Constrained multiobjective biogeography optimization algorithm.

    PubMed

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on Pareto front. Infeasible individuals nearby feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA. PMID:25006591

  4. Optimisation algorithms for microarray biclustering.

    PubMed

    Perrin, Dimitri; Duhamel, Christophe

    2013-01-01

    In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been largely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight on biological processes. A common approach is to consider genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted, e.g., based on the expression levels. In this paper, using a previously-evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic, "Propagate", which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified. PMID:24109756

  5. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm of the banana-peel type is presented. In each iteration, pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  6. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  7. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  8. Data quality system using reference dictionaries and edit distance algorithms

    NASA Astrophysics Data System (ADS)

    Karbarz, Radosław; Mulawka, Jan

    2015-09-01

    In the art of management it is important to make smart decisions, which in most cases is not a trivial task. Such decisions may determine production levels, the allocation of funds for investments, and so on. Most of the parameters in the decision-making process, such as interest rates, the value of goods or exchange rates, may change. It is well known that these parameters used in decision-making are based on the data contained in data marts or a data warehouse. However, if the information derived from the processed data sets is the basis for the most important management decisions, the data are required to be accurate, complete and current. In order to achieve high-quality data, and to gain measurable business benefits from them, a data quality system should be used. The article describes the approach to the problem, presents the algorithms in detail and shows their usage. Finally, the test results are provided; they identify the best algorithms (in terms of quality and quantity) for different parameters and data distributions.
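As a sketch of the edit-distance matching such a system relies on, the classic Levenshtein dynamic program below scores a dirty value against a reference dictionary; the `best_match` helper and the sample usage are illustrative assumptions, not the authors' implementation.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    # prev[j] holds the distance between a[:i-1] and b[:j] (previous row)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def best_match(value: str, dictionary: list) -> tuple:
    """Return the reference-dictionary entry closest to a dirty value."""
    return min(((w, levenshtein(value, w)) for w in dictionary),
               key=lambda p: p[1])
```

For example, `best_match("Warszwa", ["Warszawa", "Krakow"])` recovers `("Warszawa", 1)`: the misspelled city name is one edit away from its dictionary form.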

  9. Algorithme intelligent d'optimisation d'un design structurel de grande envergure

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor disc. These results are compared to results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalized pattern search method and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor disc problem. One drawback of GATE is its lower efficiency on highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate for automating and accelerating the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.

  10. Quantum hyperparallel algorithm for matrix multiplication

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan

    2016-04-01

    Hyperentangled states, entangled states with more than one degree of freedom, are considered a promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N2), which is better than the best known classical algorithm. In our scheme, an N-dimensional vector is mapped to the state of a single source, which is separated into N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of the dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and “big data” analysis.

  11. Quantum hyperparallel algorithm for matrix multiplication.

    PubMed

    Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan

    2016-01-01

    Hyperentangled states, entangled states with more than one degree of freedom, are considered a promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N(2)), which is better than the best known classical algorithm. In our scheme, an N-dimensional vector is mapped to the state of a single source, which is separated into N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of the dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and "big data" analysis. PMID:27125586

  12. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  13. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Koch, Nicholas C.; Newhauser, Wayne D.

    2010-02-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.

  14. A discrete artificial bee colony algorithm incorporating differential evolution for the flow-shop scheduling problem with blocking

    NASA Astrophysics Data System (ADS)

    Han, Yu-Yan; Gong, Dunwei; Sun, Xiaoyan

    2015-07-01

    The flow-shop scheduling problem with blocking has important applications in a variety of industrial systems but is underrepresented in the research literature. In this study, a novel discrete artificial bee colony (ABC) algorithm is presented to solve the above scheduling problem with a makespan criterion by combining the ABC with differential evolution (DE). The proposed algorithm (DE-ABC) contains three key operators. The first is related to the employed-bee operator (i.e. adopting the mutation and crossover operators of discrete DE to generate solutions of good quality); the second concerns the onlooker-bee operator, which modifies the selected solutions using insert or swap operators based on a self-adaptive strategy; and the last is the local search, that is, an insert-neighbourhood-based local search applied with a small probability to improve the algorithm's capability in exploitation. The performance of the proposed DE-ABC algorithm is empirically evaluated by applying it to well-known benchmark problems. The experimental results show that the proposed algorithm is superior to the compared algorithms in minimizing the makespan criterion.
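For context on the objective such metaheuristics minimize, below is a minimal sketch of makespan evaluation for a job permutation in a blocking flow shop: with no intermediate buffers, a finished job holds its machine until the next machine is free. This is our own illustration of the standard departure-time recurrence, not part of DE-ABC.

```python
def blocking_makespan(perm, p):
    """Makespan of permutation `perm` in a blocking flow shop.

    p[j][k] = processing time of job j on machine k.  d[k] holds the
    departure time of the previously scheduled job from machine k; a job
    cannot leave machine k until machine k+1 is free (the blocking rule).
    """
    m = len(p[0])
    d = [0.0] * m
    for j in perm:
        nd = [0.0] * m
        t = d[0]                      # job j starts when machine 0 is free
        for k in range(m):
            # departure from machine k: finish processing, then wait
            # (blocked) until the next machine has released its job
            t = max(t + p[j][k], d[k + 1] if k + 1 < m else 0.0)
            nd[k] = t
        d = nd
    return d[-1]
```

For instance, with `p = [[2, 3], [1, 2]]`, job 1 finishes on machine 0 at time 3 but is blocked there until machine 1 frees up at time 5, giving makespan 7 for the order `[0, 1]`.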

  15. PACCE: Perl Algorithm to Compute Continuum and Equivalent Widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-05-01

    We present the Perl Algorithm to Compute Continuum and Equivalent Widths (pacce). We describe the methods used in the computations and the requirements for its usage. We compare the measurements made with pacce and "manual" ones made using the iraf splot task. These tests show that for SSP models the equivalent width strengths are very similar (differences <0.2 Å) for both measurements. In real stellar spectra, the correlation between both values is still very good, but with differences of up to 0.5 Å. pacce is also able to determine mean continuum and continuum-at-line-center values, which are helpful in stellar population studies. In addition, it is also able to compute the uncertainties in the equivalent widths using photon statistics.
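A generic equivalent-width measurement of the kind pacce automates can be sketched as below, using a linear pseudo-continuum through the mean flux of two side bands; the band-definition interface here is an assumption, and pacce's exact conventions may differ.

```python
import numpy as np

def equivalent_width(wave, flux, blue_band, red_band, line_band):
    """Equivalent width with a linear pseudo-continuum.

    The continuum is a straight line through the mean (wave, flux) of a
    blue and a red side band; the EW integrates the normalized deficit
    1 - F/Fc over the line band.  (Textbook definition, not pacce's code.)
    """
    def band_mean(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return wave[m].mean(), flux[m].mean()
    (x1, y1), (x2, y2) = band_mean(*blue_band), band_mean(*red_band)
    slope = (y2 - y1) / (x2 - x1)
    m = (wave >= line_band[0]) & (wave <= line_band[1])
    cont = y1 + slope * (wave[m] - x1)   # continuum evaluated across the line
    return np.trapz(1.0 - flux[m] / cont, wave[m])
```

A box-shaped absorption line of depth 0.5 and width 10 on a flat unit continuum, for example, yields an equivalent width of about 5 wavelength units.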

  16. State-Space Algorithms for Estimating Spike Rate Functions

    PubMed Central

    Smith, Anne C.; Scalon, Joao D.; Wirth, Sylvia; Yanike, Marianna; Suzuki, Wendy A.; Brown, Emery N.

    2010-01-01

    The accurate characterization of spike firing rates including the determination of when changes in activity occur is a fundamental issue in the analysis of neurophysiological data. Here we describe a state-space model for estimating the spike rate function that provides a maximum likelihood estimate of the spike rate, model goodness-of-fit assessments, as well as confidence intervals for the spike rate function and any other associated quantities of interest. Using simulated spike data, we first compare the performance of the state-space approach with that of Bayesian adaptive regression splines (BARS) and a simple cubic spline smoothing algorithm. We show that the state-space model is computationally efficient and comparable with other spline approaches. Our results suggest both a theoretically sound and practical approach for estimating spike rate functions that is applicable to a wide range of neurophysiological data. PMID:19911062

  17. A novel regularized edge-preserving super-resolution algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Chen, Fu-sheng; Zhang, Zhi-jie; Wang, Chen-sheng

    2013-09-01

    Super-resolution (SR) is a good approach to obtaining high-resolution infrared images. However, image super-resolution reconstruction is essentially an ill-posed problem, so it is important to design an effective regularization term (image prior). A Gaussian prior is widely used in the regularization term, but it leaves the reconstructed SR image over-smooth. Here, a novel regularization term called the non-local means (NLM) term is derived, based on the assumption that natural image content is likely to repeat itself within some neighborhood. In the proposed framework, the estimated high-resolution image is obtained by minimizing a cost function; an iterative method is applied to solve the optimization problem, and as the iteration progresses the regularization term is adaptively updated. The proposed algorithm has been tested in several experiments. The experimental results show that the proposed approach is robust and can reconstruct higher-quality images both in quantitative terms and in perceptual effect.

  18. A novel kernel extreme learning machine algorithm based on self-adaptive artificial bee colony optimisation strategy

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Ji, Jin-Chao

    2016-04-01

    In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on the kernel extreme learning machine (KELM) method for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the activation function of the KELM instead of simple fixed-kernel learning, where the related parameters of the kernels and the weights of the kernels can be optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM outperforms six other state-of-the-art approaches in general, as it can effectively determine solution-updating strategies and suitable parameters to produce the flexible kernel function involved in SABC. Simulations have demonstrated that the proposed algorithm not only self-adaptively determines suitable parameters and solution-updating strategies by learning from previous experience, but also achieves better generalisation performance than several related methods, and the results show the good stability of the proposed algorithm.

  19. A Comparative Analysis of Community Detection Algorithms on Artificial Networks.

    PubMed

    Yang, Zhao; Algesheimer, René; Tessone, Claudio J

    2016-01-01

    Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is, in terms of accuracy and computing time, remains an open question. Testing algorithms on real-world networks has certain restrictions that make the resulting insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify the accuracy using complementary measures, along with the algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which is the most suitable algorithm in most circumstances, based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator to find the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithms' predictive power and the effective computing time. PMID:27476470

  20. Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM

    SciTech Connect

    Lin, Jian; Hamidouche, Khaled; Zheng, Jie; Lu, Xiaoyi; Vishnu, Abhinav; Panda, Dhabaleswar

    2015-08-05

    Machine learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and in practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in a large-scale environment with an InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varied numbers of cores show that our design can maintain good scalability.
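The serial core that such designs parallelize can be sketched as a brute-force k-NN classifier; this illustration is ours and says nothing about the MPI+OpenSHMEM decomposition itself.

```python
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify one query point by majority vote among its k nearest
    training points (squared Euclidean distance, brute-force scan)."""
    dist = [(sum((a - b) ** 2 for a, b in zip(x, query)), y)
            for x, y in zip(train_X, train_y)]
    dist.sort(key=lambda t: t[0])               # nearest first
    votes = Counter(y for _, y in dist[:k])     # labels of the k nearest
    return votes.most_common(1)[0][0]
```

Parallel designs typically partition `train_X` across ranks, compute local candidate neighbours, and merge the k best globally, which is where one-sided communication can overlap with the distance computation.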

  1. A Comparison of Three Algorithms for Approximating the Distance Distribution in Real-World Graphs

    NASA Astrophysics Data System (ADS)

    Crescenzi, Pierluigi; Grossi, Roberto; Lanzi, Leonardo; Marino, Andrea

    The distance for a pair of vertices in a graph G is the length of the shortest path between them. The distance distribution for G specifies how many vertex pairs are at distance h, for all feasible values h. We study three fast randomized algorithms to approximate the distance distribution in large graphs. The Eppstein-Wang (ew) algorithm exploits sampling through a limited (logarithmic) number of Breadth-First Searches (bfses). The Size-Estimation Framework (sef) by Cohen employs random ranking and least-element lists to provide several estimators. Finally, the Approximate Neighborhood Function (anf) algorithm by Palmer, Gibbons, and Faloutsos makes use of the probabilistic counting technique introduced by Flajolet and Martin, in order to estimate the number of distinct elements in a large multiset. We investigate how good the approximation of the distance distribution is when the three algorithms are run in similar settings. The analysis of anf derives from the results on the probabilistic counting method, while that of sef is given by Cohen. As for ew (originally designed for another problem), we extend its simple analysis in order to bound its error with high probability and to show its convergence. We then perform an experimental study on 30 real-world graphs, showing that our implementation of ew combines the accuracy of sef with the performance of anf.
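The sampling idea behind ew-style estimation can be sketched as follows: run BFS from a few random source vertices and scale the resulting histogram of (ordered-pair) distances up to the whole vertex set. This is a simplified illustration, not the authors' implementation.

```python
import random
from collections import deque, Counter

def approx_distance_distribution(adj, samples):
    """Estimate how many ordered vertex pairs lie at each distance.

    `adj` is an adjacency list; BFS from `samples` random sources gives
    exact per-source distance counts, which are scaled up by n/samples.
    """
    n = len(adj)
    hist = Counter()
    for s in random.sample(range(n), samples):
        dist = {s: 0}
        q = deque([s])
        while q:                       # standard BFS from source s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        hist.update(d for v, d in dist.items() if v != s)
    # scale counts from `samples` sources up to all n possible sources
    return {h: c * n / samples for h, c in hist.items()}
```

With `samples = n` the estimate is exact; with a logarithmic number of sampled sources it trades accuracy for running time, which is the regime the paper analyzes.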

  2. An algorithm for the kinetics of tire pyrolysis under different heating rates.

    PubMed

    Quek, Augustine; Balasubramanian, Rajashekhar

    2009-07-15

    Tires exhibit different kinetic behaviors when pyrolyzed under different heating rates. A new algorithm has been developed to investigate the pyrolysis behavior of scrap tires. The algorithm includes heat and mass transfer equations to account for the different extents of thermal lag as the tire is heated at different heating rates. The algorithm uses an iterative approach to fit model equations to experimental data to obtain quantitative values of kinetic parameters. These parameters describe the pyrolysis process well, with good agreement (r(2)>0.96) between the model and experimental data when the model is applied to three different brands of automobile tires heated under five different heating rates in a pure nitrogen atmosphere. The model agrees with other researchers' results that frequency factors increase and time constants decrease with increasing heating rates. The model also shows the change in the behavior of individual tire components when the heating rates are increased above 30 K min(-1). This result indicates that heating rates, rather than temperature, can significantly affect pyrolysis reactions. The algorithm is simple in structure and yet accurate in describing tire pyrolysis under a wide range of heating rates (10-50 K min(-1)). It improves our understanding of the tire pyrolysis process by showing the relationship between the heating rate and the many components in a tire that depolymerize as parallel reactions. PMID:19111984
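As a much-reduced illustration of the kinetics being fitted, the sketch below integrates a single first-order Arrhenius reaction under a linear heating ramp T = T0 + βt; the paper's model adds heat and mass transfer and several parallel reactions, and the parameter values used here are invented for illustration.

```python
import math

def conversion_curve(A, E, beta, T0=300.0, T1=900.0, steps=10000):
    """Conversion α(T) for one first-order reaction on a linear ramp.

    dα/dT = (A/β) exp(-E/RT) (1 - α);  A [1/min], E [J/mol], β [K/min].
    Each step uses the exact solution of the locally linear ODE, so the
    update is stable even where the rate constant is large.
    """
    R = 8.314
    dT = (T1 - T0) / steps
    alpha, T, out = 0.0, T0, []
    for _ in range(steps):
        k = (A / beta) * math.exp(-E / (R * T))       # dα/dT per unit (1-α)
        alpha = 1.0 - (1.0 - alpha) * math.exp(-k * dT)
        T += dT
        out.append((T, alpha))
    return out
```

Comparing ramps shows the qualitative effect the paper models: a faster heating rate shifts the conversion curve to higher temperatures, mimicking thermal lag.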

  3. Full tensor gravity gradiometry data inversion: Performance analysis of parallel computing algorithms

    NASA Astrophysics Data System (ADS)

    Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu

    2015-09-01

    We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms, and we summarize the rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.

  4. An Iterative Soft-Decision Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Koumoto, Takuya; Takata, Toyoo; Kasami, Tadao

    1996-01-01

    This paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. Simulation results for the RM(64,22), EBCH(64,24), RM(64,42) and EBCH(64,45) codes show that the proposed decoding algorithm achieves practically optimal (or near-optimal) error performance with a significant reduction in decoding computational complexity. The average number of search iterations is also small, even at low signal-to-noise ratios.

  5. Reconsidering the “Good Divorce”

    PubMed Central

    Amato, Paul R.; Kane, Jennifer B.; James, Spencer

    2011-01-01

    This study attempted to assess the notion that a “good divorce” protects children from the potential negative consequences of marital dissolution. A cluster analysis of data on postdivorce parenting from 944 families resulted in three groups: cooperative coparenting, parallel parenting, and single parenting. Children in the cooperative coparenting (good divorce) cluster had the smallest number of behavior problems and the closest ties to their fathers. Nevertheless, children in this cluster did not score significantly better than other children on 10 additional outcomes. These findings provide only modest support for the good divorce hypothesis. PMID:22125355

  6. Node Deployment Algorithm Based on Connected Tree for Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Wang, Xingmin; Jiang, Lurong

    2015-01-01

    Designing an efficient deployment method to guarantee optimal monitoring quality is one of the key topics in underwater sensor networks. At present, a realistic approach to deployment involves adjusting the depths of nodes in water. One of the typical algorithms used in this process is the self-deployment depth adjustment algorithm (SDDA). This algorithm mainly focuses on maximizing network coverage by constantly adjusting node depths to reduce coverage overlaps between two neighboring nodes, and thus achieves good performance. However, the connectivity performance of SDDA is not guaranteed. In this paper, we propose a depth adjustment algorithm based on a connected tree (CTDA). In CTDA, the sink node is used as the first root node to start building a connected tree; the network can then be organized as a forest to maintain network connectivity. Coverage overlaps between the parent node and the child node are then reduced within each sub-tree to optimize coverage. A hierarchical strategy is used to adjust the distance between the parent node and the child node to reduce node movement, and a silent mode is adopted to reduce communication cost. Simulations show that, compared with SDDA, CTDA can achieve high connectivity with various communication ranges and different numbers of nodes. Moreover, it can realize coverage as high as that of SDDA with various sensing ranges and numbers of nodes, but with less energy consumption. Simulations under sparse environments show that the connectivity and energy consumption of CTDA are considerably better than those of SDDA. Meanwhile, the connectivity and coverage of CTDA are close to those of the depth adjustment algorithm based on a connected dominating set (CDA), which is similar to CTDA. However, the energy consumption of CTDA is less than that of CDA, particularly in sparse underwater environments. PMID:26184209

  7. Node Deployment Algorithm Based on Connected Tree for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Wang, Xingmin; Jiang, Lurong

    2015-01-01

    Designing an efficient deployment method to guarantee optimal monitoring quality is one of the key topics in underwater sensor networks. At present, a realistic approach to deployment involves adjusting the depths of nodes in water. One of the typical algorithms used in this process is the self-deployment depth adjustment algorithm (SDDA). This algorithm mainly focuses on maximizing network coverage by constantly adjusting node depths to reduce coverage overlaps between two neighboring nodes, and thus achieves good performance. However, the connectivity performance of SDDA is not guaranteed. In this paper, we propose a depth adjustment algorithm based on a connected tree (CTDA). In CTDA, the sink node is used as the first root node to start building a connected tree; the network can then be organized as a forest to maintain network connectivity. Coverage overlaps between the parent node and the child node are then reduced within each sub-tree to optimize coverage. A hierarchical strategy is used to adjust the distance between the parent node and the child node to reduce node movement, and a silent mode is adopted to reduce communication cost. Simulations show that, compared with SDDA, CTDA can achieve high connectivity with various communication ranges and different numbers of nodes. Moreover, it can realize coverage as high as that of SDDA with various sensing ranges and numbers of nodes, but with less energy consumption. Simulations under sparse environments show that the connectivity and energy consumption of CTDA are considerably better than those of SDDA. Meanwhile, the connectivity and coverage of CTDA are close to those of the depth adjustment algorithm based on a connected dominating set (CDA), which is similar to CTDA. However, the energy consumption of CTDA is less than that of CDA, particularly in sparse underwater environments. PMID:26184209

  8. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  9. Improved bat algorithm applied to multilevel image thresholding.

    PubMed

    Alihodzic, Adis; Tuba, Milan

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
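To make the exhaustive-search cost concrete, the sketch below evaluates Otsu's between-class variance over all C(L-1, k) threshold tuples of an L-level histogram; this brute force over an exponentially growing space is exactly what metaheuristics such as the bat algorithm replace. It is a generic illustration, not the paper's code.

```python
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu criterion: sum over classes of w_k * (mu_k - mu)^2,
    computed from a grey-level histogram and a sorted threshold tuple."""
    total = sum(hist)
    mu = sum(g * h for g, h in enumerate(hist)) / total
    bounds = [0, *[t + 1 for t in thresholds], len(hist)]
    var = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        w = sum(hist[lo:hi]) / total            # class probability
        if w:
            mk = sum(g * hist[g] for g in range(lo, hi)) / (w * total)
            var += w * (mk - mu) ** 2           # class contribution
    return var

def exhaustive_thresholds(hist, k):
    """Brute-force maximization over every k-threshold combination."""
    return max(combinations(range(len(hist) - 1), k),
               key=lambda t: between_class_variance(hist, t))
```

For a 256-level image and k = 5 thresholds this scan already evaluates billions of tuples, which is why swarm search pays off.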

  10. A multiagent evolutionary algorithm for constraint satisfaction problems.

    PubMed

    Liu, Jing; Zhong, Weicai; Jiao, Licheng

    2006-02-01

    With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, yielding the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs). To overcome the disadvantages of general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queen problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs in n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance when n increases from 10(4) to 10(7), and has a linear time complexity. Even for 10(7)-queen problems, MAEA-CSPs finds solutions in only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained. PMID:16468566
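For the flavor of conflict-driven search on n-queens (the classic min-conflicts heuristic, not MAEA-CSPs itself), a local search can be sketched as:

```python
import random

def min_conflicts_queens(n, max_steps=100000, seed=1):
    """Solve n-queens by min-conflicts local search: one queen per
    column; repeatedly move a conflicted queen to the row in its column
    with the fewest attacks, breaking ties at random."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # queens attack along rows and diagonals
        return sum(1 for c in range(n)
                   if c != col and (rows[c] == row
                                    or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(c, rows[c])]
        if not bad:
            return rows                 # no attacks left: solved
        col = rng.choice(bad)
        scores = [conflicts(col, r) for r in range(n)]
        best = min(scores)
        rows[col] = rng.choice([r for r, s in enumerate(scores) if s == best])
    return None
```

Min-conflicts is the textbook baseline that makes very large n tractable for local search, which is the regime the scalability study above explores.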

  11. Protein folding simulations of the hydrophobic-hydrophilic model by combining tabu search with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Jiang, Tianzi; Cui, Qinghua; Shi, Guihua; Ma, Songde

    2003-08-01

    In this paper, a novel hybrid algorithm combining genetic algorithms and tabu search is presented. In the proposed hybrid algorithm, the idea of tabu search is applied to the crossover operator. We demonstrate that the hybrid algorithm can be applied successfully to the protein folding problem based on a hydrophobic-hydrophilic lattice model. The results show that in all cases the hybrid algorithm works better than a genetic algorithm alone. A comparison with other methods is also made.
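The fitness evaluated in such HP-lattice searches can be sketched as the standard 2D HP energy: -1 for every pair of non-consecutive H monomers on adjacent lattice sites. The U/D/L/R move encoding of the fold is an illustrative assumption, not the paper's exact representation.

```python
def hp_energy(sequence, directions):
    """Energy of a 2D HP-lattice fold.

    `sequence` is a string over {H, P}; `directions` encodes the walk as
    moves U/D/L/R (one per bond).  Returns None for a self-intersecting
    (invalid) fold, otherwise -1 per non-consecutive adjacent H-H pair.
    """
    moves = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    pos = [(0, 0)]
    for d in directions:
        dx, dy = moves[d]
        x, y = pos[-1]
        pos.append((x + dx, y + dy))
    if len(set(pos)) != len(pos):
        return None                     # walk revisits a site: invalid
    coord = {p: i for i, p in enumerate(pos)}
    e = 0
    for i, p in enumerate(pos):
        if sequence[i] != 'H':
            continue
        for dx, dy in moves.values():
            j = coord.get((p[0] + dx, p[1] + dy))
            # j > i + 1 counts each topological contact exactly once
            if j is not None and j > i + 1 and sequence[j] == 'H':
                e -= 1
    return e
```

A search algorithm (genetic, tabu, or hybrid) then explores direction strings, rejecting self-intersecting walks and minimizing this energy.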

  12. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a kind of random linear method; and finally, the tabu search algorithm is improved by adding a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is verified on the standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, which proves it to be an effective way to predict the structure of proteins. PMID:25069136
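    The PSO component with a stochastic disturbance term can be sketched compactly. This is not the paper's PGATS implementation; it is a minimal PSO on a toy objective, with a small Gaussian noise term in the velocity update loosely echoing the "stochastic disturbance" idea (all parameter values are illustrative):

```python
import random

def pso_minimize(f, dim, iters=200, swarm=30, seed=0,
                 w=0.7, c1=1.5, c2=1.5, noise=0.01):
    """Minimal particle swarm optimizer with a stochastic disturbance term."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in xs]            # personal bests
    pfit = [f(x) for x in xs]
    g = min(range(swarm), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]    # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d])
                            + noise * rng.gauss(0, 1))  # disturbance
                xs[i][d] += vs[i][d]
            fit = f(xs[i])
            if fit < pfit[i]:
                pbest[i], pfit[i] = xs[i][:], fit
                if fit < gfit:
                    gbest, gfit = xs[i][:], fit
    return gbest, gfit

# Sphere function: global minimum 0 at the origin.
best, val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

    In PGATS this global-search stage is interleaved with GA crossover/mutation and a tabu-search refinement, which are omitted here.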

  13. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they work with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed: the larger the shift of the pattern relative to the string in case of a mismatch between pattern and string characters, the higher the algorithm's running speed. This article offers a combined algorithm developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching, and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm obtains a larger shift in case of a mismatch between pattern and string characters. The article provides an example which illustrates the work of the Boyer-Moore, Knuth-Morris-Pratt and combined algorithms, and shows the advantage of the latter in solving the string searching problem.
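    The two base algorithms named in the abstract can be sketched briefly. The combined algorithm itself is not reproduced here; the sketch below shows KMP (forward matching via a prefix-failure table) and the Boyer-Moore-Horspool simplification of Boyer-Moore (backward matching via bad-character shifts), the two shift principles the paper unites:

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt: left-to-right matching with a failure table."""
    if not pattern:
        return []
    # fail[i]: length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    out, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            out.append(i - k + 1)
            k = fail[k - 1]
    return out

def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: right-aligned windows, bad-character shifts."""
    m, n = len(pattern), len(text)
    if not m:
        return []
    # Shift distance keyed by the character under the window's last position.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    out, pos = [], 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            out.append(pos)
        pos += shift.get(text[pos + m - 1], m)
    return out
```

    The paper's combined algorithm takes the larger of the two available shifts at a mismatch; either routine above finds the same match positions on its own.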

  14. Optimizing scheduling problem using an estimation of distribution algorithm and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Qun, Jiang; Yang, Ou; Dong, Shi-Du

    2007-12-01

    This paper presents a methodology for using heuristic search methods to optimize a scheduling problem. Specifically, an Estimation of Distribution Algorithm (EDA), Population Based Incremental Learning (PBIL), and a Genetic Algorithm (GA) have been applied to finding an effective arrangement of university curriculum schedules. To our knowledge, EDAs have been applied to fewer real-world problems than GAs, and the goal of the present paper is to expand the application domain of this technique. The experimental results indicate good applicability of PBIL to the scheduling optimization problem.
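    PBIL's core loop is short enough to sketch. This is not the paper's scheduler; it is a minimal PBIL on the standard OneMax toy problem (maximize the number of 1-bits), showing the defining step of an EDA: sample from a probability model, then shift the model toward the best sample (all parameter values are illustrative):

```python
import random

def pbil_onemax(n_bits=20, pop=50, lr=0.1, gens=200, seed=1):
    """Population-Based Incremental Learning maximizing OneMax."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                 # probability vector, one entry per bit
    best, best_fit = None, -1
    for _ in range(gens):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        samples.sort(key=sum, reverse=True)
        top = samples[0]
        if sum(top) > best_fit:
            best, best_fit = top, sum(top)
        # Nudge the probability vector toward the best individual.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, top)]
    return best, best_fit

best, fit = pbil_onemax()
```

    For scheduling, the bit string would encode assignment decisions and the fitness would count constraint violations, but the model-sample-update cycle is the same.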

  15. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    NASA Technical Reports Server (NTRS)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced using generic control Hamiltonians H(r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally symmetric state of an n-qubit system, the algorithm dynamics are completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2, and we provide a complete characterization of such paths.

  16. Validation and robustness of an atmospheric correction algorithm for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Boucher, Yannick; Poutier, Laurent; Achard, Veronique; Lenot, Xavier; Miesch, Christophe

    2002-08-01

    The Optics Department of ONERA has developed and implemented an inverse algorithm, COCHISE, to correct hyperspectral images for atmospheric effects in the visible-NIR-SWIR domain (0.4-2.5 micrometers). This algorithm automatically determines the integrated water-vapor content for each pixel from the radiance at sensor level by using a LIRR-type (Linear Regression Ratio) technique. It then retrieves the spectral reflectance at ground level using atmospheric parameters computed with Modtran4, including the water-vapor spatial dependence obtained in the first step. The adjacency effects are taken into account using spectral kernels obtained by two Monte-Carlo codes. Results obtained with the COCHISE code on real hyperspectral data are first compared to ground-based reflectance measurements. AVIRIS images of Railroad Valley Playa, CA, and HyMap images of Hartheim, France, are used. The inverted reflectance agrees perfectly with the measurements at ground level for the AVIRIS data set, which validates the COCHISE algorithm; for the HyMap data set, the results are still good but cannot be considered as validating the code. The robustness of the COCHISE code is then evaluated. For this, spectral radiance images are modeled at the sensor level with the direct algorithm COMANCHE, which is the reciprocal code of COCHISE. The COCHISE algorithm is then used to compute the reflectance at ground level from the simulated at-sensor radiance. A sensitivity analysis has been performed, as a function of errors in several atmospheric parameters and of instrument defects, by comparing the retrieved reflectance with the original one. The COCHISE code shows quite good robustness to errors in its input parameters, except for the aerosol type.

  17. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.
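    The WFTA and GW error analyses are intricate, but the data-quantization component the abstract distinguishes can be illustrated in isolation. The sketch below (not from the paper) quantizes a signal to a fixed-point grid, runs a naive DFT on both versions, and measures the resulting spectral error; more fractional bits give a proportionally smaller error, which is the trade-off the paper quantifies per algorithm:

```python
import cmath
import math

def dft(x):
    """Naive DFT of a real sequence; returns complex frequency bins."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def quantize(x, bits):
    """Round samples to a fixed-point grid with `bits` fractional bits."""
    scale = 2 ** bits
    return [round(v * scale) / scale for v in x]

# Spectral error caused purely by input (data) quantization.
x = [math.sin(0.9 * t) for t in range(16)]

def spectral_error(bits):
    exact, approx = dft(x), dft(quantize(x, bits))
    return max(abs(a - b) for a, b in zip(exact, approx))

err8, err12 = spectral_error(8), spectral_error(12)
```

    Coefficient quantization error, the paper's other component, would instead round the `cmath.exp` twiddle factors; the paper's conclusion is that for comparable error the WFTA needs one or two more data bits than the FFT.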

  18. Family Activities for Fun and Good Health

    MedlinePlus

    ... a Partner Family Activities for Fun and Good Health Being physically active with your family is a ... or grandchild, you’ll be rewarded with improved health and time spent together. Family gatherings are the ...

  19. What Constitutes a Good Accounting Teacher

    ERIC Educational Resources Information Center

    Flesher, Dale L.

    1978-01-01

    The good accounting and bookkeeping instructor is one who keeps the students involved at all times, answers all questions that are asked, explains the theory behind the procedures, and uses the voice as a teaching aid. (BM)

  20. Good Health Is a Global Issue

    MedlinePlus

    ... Past Issues Special Section Good Health Is a Global Issue Past Issues / Spring 2008 Table of Contents ... reasons, many of the research efforts related to global health must now deal with these non-communicable ...