Science.gov

Sample records for algorithm takes advantage

  1. Too Few Americans Take Advantage of Local Parks

    MedlinePlus

    ... Modest changes would ... a center of physical activity for adults, older Americans and females," Cohen said in a news release ...

  2. TAKING SCIENTIFIC ADVANTAGE OF A DISASTROUS OIL SPILL

    EPA Science Inventory

    On 19 January 1996, the North Cape barge ran aground on Moonstone Beach in southern Rhode Island, releasing 828,000 gallons of refined oil. This opportunistic study was designed to take scientific advantage of the most severely affected seabird, the common loon (Gavia immer). As...

  3. Invited Commentary: Taking Advantage of Time-Varying Neighborhood Environments

    PubMed Central

    Lovasi, Gina S.; Goldsmith, Jeff

    2014-01-01

    Neighborhood built environment characteristics may encourage physical activity, but previous literature on the topic has been critiqued for its reliance on cross-sectional data. In this issue of the Journal, Knuiman et al. (Am J Epidemiol. 2014;180(5):453–461) present longitudinal analyses of built environment characteristics as predictors of neighborhood transportation walking. We take this opportunity to comment on self-selection, exposure measurement, outcome form, analyses, and future directions. The Residential Environments (RESIDE) Study follows individuals as they relocate into new housing. The outcome, which is neighborhood transportation walking, has several important limitations with regard to public health relevance, dichotomization, and potential bias. Three estimation strategies were pursued: marginal modeling, random-effects modeling, and fixed-effects modeling. Knuiman et al. defend fixed-effects modeling as the approach that most effectively controls for unmeasured time-invariant confounders, and it will do so as long as confounders have a constant effect over time. Fixed-effects modeling requires no distributional assumptions regarding the heterogeneity of subject-specific effects. Associations of time-varying neighborhood characteristics with walking are interpreted at the subject level for both fixed- and random-effects models. Cross-sectional data have set the stage for the next generation of neighborhood research, which should leverage longitudinal changes in both place and health behaviors. Careful interpretation is warranted as longitudinal data become available for analysis. PMID:25117659

  4. Neural Correlates of Traditional Chinese Medicine Induced Advantageous Risk-Taking Decision Making

    ERIC Educational Resources Information Center

    Lee, Tiffany M. Y.; Guo, Li-guo; Shi, Hong-zhi; Li, Yong-zhi; Luo, Yue-jia; Sung, Connie Y. Y.; Chan, Chetwyn C. H.; Lee, Tatia M. C.

    2009-01-01

    This fMRI study examined the neural correlates of the observed improvement in advantageous risk-taking behavior, as measured by the number of adjusted pumps in the Balloon Analogue Risk Task (BART), following a 60-day course of a Traditional Chinese Medicine (TCM) recipe, specifically designed to regulate impulsiveness in order to modulate…

  5. Student Perceptions of the Advantages and Disadvantages of Geologic Note-taking with iPads

    NASA Astrophysics Data System (ADS)

    Dohaney, J. A.; Kennedy, B.; Gravley, D. M.

    2015-12-01

    During fieldwork, students and professionals record information and hypotheses in their geologic notebooks. In a pilot study, students on an upper-level volcanology field trip were given iPads with an open-source geology note-taking application (GeoFieldBook) and volunteered to record notes at two sites (i.e., Tongariro Volcanic Complex and Orakei Korako) in New Zealand. A group of students (n=9) were interviewed several weeks after fieldwork to reflect on using this technology. We aimed to characterise their experiences and strategies and to examine the perceived benefits and challenges of hardcopy and digital note-taking. Students reported a diverse range of note-taking strategies, the most common being: a) looking for/describing differences, b) supporting note-taking with sketches, c) writing everything down, and d) focusing first on structure, texture, and then composition of an outcrop. Additionally, students said that the strategies they used were context-dependent (i.e., bedrock mapping versus detailed outcrop descriptions). When using the iPad, students reported that they used different strategies: varying the length of text (from more to less), increasing the number of sites described (i.e., preferring to describe sites in more spatial detail rather than summarising several features in close proximity), and taking advantage of the 'editability' of iPad notes (abandoning rigid, systematic approaches). Overall, the reported advantages of iPad note-taking included allowing the user to be more efficient and organised and using the GPS mapping function to help make observations and interpretations in real time. Students also reported a range of disadvantages, focused predominantly on the inability to annotate/draw sketches on the iPad in the same manner as with pen and paper. These differences likely encourage different overall approaches to note-taking and cognition in the field environment, and

  6. Are we taking full advantage of the growing number of pharmacological treatment options for osteoporosis?

    PubMed Central

    Jepsen, Karl J.; Schlecht, Stephen H.; Kozloff, Kenneth M.

    2014-01-01

    We are becoming increasingly aware that the manner in which our skeleton ages is not uniform within and between populations. Pharmacological treatment options with the potential to combat age-related reductions in skeletal strength continue to become available on the market, notwithstanding our current inability to fully utilize these treatments by accounting for an individual’s unique biomechanical needs. Revealing new molecular mechanisms that improve the targeted delivery of pharmaceuticals is important; however, this only addresses one part of the solution for differential age-related bone loss. To improve current treatment regimes, we must also consider specific biomechanical mechanisms that define how these molecular pathways ultimately impact whole bone fracture resistance. By improving our understanding of the relationship between molecular and biomechanical mechanisms, clinicians will be better equipped to take full advantage of the mounting pharmacological treatments available. Ultimately this will enable us to reduce fracture risk among the elderly more strategically, more effectively, and more economically. In this interest, the following review summarizes the biomechanical basis of current treatment strategies while defining how different biomechanical mechanisms lead to reduced fracture resistance. It is hoped that this may serve as a template for the identification of new targets for pharmacological treatments that will enable clinicians to personalize care so that fracture incidence may be globally reduced. PMID:24747363

  7. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not fully achieved, only approached to varying degrees. With the emphasis on building low-cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  8. Taking advantage of photometric galaxy catalogues to determine the halo occupation distribution

    NASA Astrophysics Data System (ADS)

    Rodriguez, F.; Merchán, M.; Sgró, M. A.

    2015-08-01

    Context. The halo occupation distribution (HOD) is a powerful statistic that allows the study of several aspects of the matter distribution in the Universe, such as evaluating semi-analytic models of galaxy formation or imposing constraints on cosmological models. Consequently, it is important to have a reliable method for estimating this statistic, taking full advantage of the information available in current and future galaxy surveys. Aims: The main goal of this project is to combine photometric and spectroscopic information, using a background-galaxy subtraction method, in order to extend the range of absolute magnitudes and to increase the upper limit of masses for which the HOD is estimated. We also evaluate the proposed method and apply it to estimating the HOD in the Sloan Digital Sky Survey Data Release 7 (SDSS DR7) galaxy survey. Methods: We propose the background subtraction technique to combine information provided by spectroscopic galaxy groups and a photometric survey of galaxies. To evaluate the feasibility of the method, we implement the proposed technique on a mock catalogue built from a semi-analytic model of galaxy formation. Furthermore, we apply the method to the SDSS DR7 using a galaxy group catalogue taken from the spectroscopic version and the corresponding photometric galaxy survey. Results: We demonstrated the validity of the method using the mock catalogue. We applied this technique to obtain the SDSS DR7 HOD in absolute magnitudes ranging from M = -21.5 to M = -16.0 and masses up to ≃10^15 M⊙ throughout this range. At the bright extreme, we found that our results are in excellent agreement with those obtained in previous works.

  9. Taking Advantage of STEM (Science, Technology, Engineering, and Math) Popularity to Enhance Student/Public Engagement

    NASA Astrophysics Data System (ADS)

    Dittrich, T. M.

    2011-12-01

    Our goal is to expand the use of these modules to a broader public audience, including at a future campus/public event known as "All Things Water". We have also organized a walking tour/demo with 3rd-5th graders in a small mining town west of Boulder, where we hiked to an old historical mine site, measured water quality (pH, dissolved lead, conductivity), and coated the inside of small bottles with silver. Organizing and hosting a conference can also be a great way to facilitate a discussion of ideas within the community. "All Things STEM" organized a broad student research conference related to water quality and water treatment which included research from 22 students from 11 different countries. We worked with 12 local engineering consultants, municipalities, and local businesses to provide $2,000 for student awards. Our presentation will focus on lessons we have learned about how to take advantage of student energy, excitement, and time on campus to secure funding for planning events that engage the public. We will also talk about our experiences in using student energy to develop partnerships with K-12 schools, community groups, and industry professionals.

  10. Tax Breaks for Parents of Private School Students: Who Favors Them and Who Would Take Advantage of Them?

    ERIC Educational Resources Information Center

    Bezdek, Robert; Cross, Ray

    1983-01-01

    A sample of Corpus Christi, Texas, citizens were interviewed for their views on granting income tax breaks to parents of private school students. Results indicate that, while low-income Hispanics are most in favor of the tax breaks, upper income Anglos are most likely to take advantage of them. (KH)

  11. Perspective-Taking Ability in Bilingual Children: Extending Advantages in Executive Control to Spatial Reasoning

    ERIC Educational Resources Information Center

    Greenberg, Anastasia; Bellana, Buddhika; Bialystok, Ellen

    2013-01-01

    Monolingual and bilingual 8-year-olds performed a computerized spatial perspective-taking task. Children were asked to decide how an observer saw a four-block array from one of three different positions (90 degrees, 180 degrees, and 270 degrees counter-clockwise from the child's position) by selecting one of four responses--the correct response,…

  12. Cell-Mediated Delivery of Nanoparticles: Taking Advantage of Circulatory Cells to Target Nanoparticles

    PubMed Central

    Anselmo, Aaron C.; Mitragotri, Samir

    2014-01-01

    Cellular hitchhiking leverages the use of circulatory cells to enhance the biological outcome of nanoparticle drug delivery systems, which often suffer from poor circulation time and limited targeting. Cellular hitchhiking utilizes the natural abilities of circulatory cells to: (i) navigate the vasculature while avoiding immune system clearance, (ii) remain relatively inert until needed and (iii) perform specific functions, including nutrient delivery to tissues, clearance of pathogens, and immune system surveillance. A variety of synthetic nanoparticles attempt to mimic these functional attributes of circulatory cells for drug delivery purposes. By combining the advantages of circulatory cells and synthetic nanoparticles, many advanced drug delivery systems have been developed that adopt the concept of cellular hitchhiking. Here, we review the development and specific applications of cellular hitchhiking-based drug delivery systems. PMID:24747161

  13. A Spatiotemporal-Chaos-Based Cryptosystem Taking Advantage of Both Synchronous and Self-Synchronizing Schemes

    NASA Astrophysics Data System (ADS)

    Lü, Hua-Ping; Wang, Shi-Hong; Li, Xiao-Wen; Tang, Guo-Ning; Kuang, Jin-Yu; Ye, Wei-Ping; Hu, Gang

    2004-06-01

    Two-dimensional one-way coupled map lattices are used for cryptography, where multiple space units produce chaotic outputs in parallel. One of the outputs plays the role of driving signal for synchronization of the decryption system, while the others perform the function of information encoding. With this separation of functions the receiver can establish a self-checking and self-correction mechanism, and enjoys the advantages of both synchronous and self-synchronizing schemes. A comparison between the present system and the Advanced Encryption Standard (AES) is presented with respect to the influence of channel noise. Numerical investigations show that our system is much stronger than AES against channel noise perturbations, and thus can be better used for secure communications in the presence of large channel noise.

  14. Perspective-Taking Ability in Bilingual Children: Extending Advantages in Executive Control to Spatial Reasoning

    PubMed Central

    Greenberg, Anastasia; Bellana, Buddhika; Bialystok, Ellen

    2012-01-01

    Monolingual and bilingual 8-year-olds performed a computerized spatial perspective-taking task. Children were asked to decide how an observer saw a four-block array from one of three different positions (90°, 180°, and 270° counter-clockwise from the child's position) by selecting one of four responses -- the correct response, the egocentric error, an incorrect choice in which the array was correct but in the wrong orientation for the viewer, and an incorrect choice in which the array included an internal spatial error. All children performed similarly on background measures, including fluid intelligence, but bilingual children were more accurate than monolingual children in calculating the observer's view across all three positions, with no differences in the pattern of errors committed by the two language groups. The results are discussed in terms of the effect of bilingualism on modifying performance in a complex spatial task that has implications for academic achievement. PMID:23486486

  15. Taking advantage of the MEMO orbiter to improve the determination of Mars' gravity field.

    NASA Astrophysics Data System (ADS)

    Rosenblatt, P.; Le Maitre, S.; Marty, J. C.; Duron, J.; Dehant, V.

    2007-08-01

    In the context of a future ESA mission to Mars, an orbiter named MEMO (Mars Escape and Magnetic Orbiter) is proposed, primarily to improve measurements of the planet's atmospheric escape and magnetic field. Its orbit is planned to have an inclination of 77 degrees and periapsis and apoapsis altitudes of 130 km and 1000 km, respectively. In addition, this orbit is scheduled to be maintained for one Martian year. This differs from the usual near-polar, near-circular orbit with a periapsis altitude of at least 200 km, such as that of the Mars Reconnaissance Orbiter (MRO). Even though the MEMO orbiter is not dedicated to investigating Mars' gravity field, we propose to take this opportunity to improve our knowledge of it. Indeed, the sensitivity of an orbiter to the gravity field strongly depends on the semi-major axis, inclination and eccentricity of its orbit. In this study, we quantitatively estimate the improvement in the determination of local gravity anomalies, of seasonal variations of the first zonal harmonics, and of the k2 Love number of Mars. We base our work on both analytical and numerical approaches in order to simulate the determination of Mars' gravity field from spacecraft tracking data acquired from the Earth. We also add to our simulations the possibility of having an accelerometer onboard the MEMO spacecraft. If placed at the center of mass of the spacecraft, it could provide measurements of the non-gravitational forces acting on it, especially atmospheric drag. A good determination of the contribution of this force to the spacecraft motion would bring information about the atmospheric density at altitudes between 100 and 200 km and would improve the gravity field determination from tracking data of the spacecraft.

  16. Astrosociology and Space Exploration: Taking Advantage of the Other Branch of Science

    NASA Astrophysics Data System (ADS)

    Pass, Jim

    2008-01-01

    The space age marches on. Following President Bush's Vision for Space Exploration (VSE) and our recent celebration of the fiftieth anniversary of spaceflight on October 4, 2007, we should now take time to contemplate where we have been as it relates to where we are going. Space exploration has depended most strongly on engineers and space scientists in the past. This made sense when crews remained small, manned missions tended to operate in low Earth orbit and on a temporary basis, and the bulk of missions were carried out by robotic spacecraft. The question one must now ask is this: What will change in the next fifty years? One fundamental answer to this question involves the strong probability that human beings will increasingly go into space to live and work on long-duration missions and begin to live in space permanently. This article addresses the need to utilize the other neglected branch of science, comprised of the social and behavioral sciences along with the humanities, as it relates to the shift to a more substantial human presence in space. It focuses on the social science perspective needed to make this possible rather than the practical aspects of doing so, such as the engineering of functional habitats. A most important consideration involves the permanent establishment of a formal collaborative mechanism between astrosociologists and the engineers and space scientists who traditionally comprise the space community. The theoretical and applied aspects of astrosociology each have much to contribute toward the human dimension of space exploration, both on the Earth and beyond its atmosphere. The bottom line is that a social species such as ours cannot determine how to live in space without the input from a social science perspective, namely astrosociology.

  17. The development of a charge protocol to take advantage of off- and on-peak demand economics at facilities

    SciTech Connect

    Jeffrey Wishart

    2012-02-01

    This document reports the work performed under Task 1.2.1.1: 'The development of a charge protocol to take advantage of off- and on-peak demand economics at facilities'. The work involved in this task included understanding the experimental results of the other tasks of SOW-5799 in order to take advantage of the economics of electricity pricing differences between on- and off-peak hours and the demonstrated charging and facility energy demand profiles. To undertake this task and to demonstrate the feasibility of plug-in hybrid electric vehicle (PHEV) and electric vehicle (EV) bi-directional electricity exchange potential, BEA has subcontracted Electric Transportation Applications (now known as ECOtality North America and hereafter ECOtality NA) to use the data from the demand and energy study to focus on reducing the electrical power demand of the charging facility. The use of delayed charging as well as vehicle-to-grid (V2G) and vehicle-to-building (V2B) operations were to be considered.

  18. 5 CFR 792.207 - When does the child care subsidy program law become effective and how may agencies take advantage...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel ... When does the child care subsidy program law become effective and how may agencies take advantage of this law? 792.207 Section 792.207... When does the child care subsidy program law become effective and how may agencies take advantage...

  19. Building Models in the Classroom: Taking Advantage of Sophisticated Geomorphic Numerical Tools Using a Simple Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.

    2014-12-01

    Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphical user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university level. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.

  20. Benefits of a programme taking advantage of patient‐instructors to teach and assess musculoskeletal skills in medical students

    PubMed Central

    Bideau, M; Guerne, P‐A; Bianchi, M‐P; Huber, P

    2006-01-01

    Aim To evaluate a rheumatoid arthritis patient‐instructor‐based teaching and assessment programme for its ability to improve and assess musculoskeletal knowledge and skills in third‐year medical students. Methods (1) The quality of our musculoskeletal teaching was assessed before patient‐instructor intervention through an open‐questions test (pre‐test) and performance record forms (PRFs) filled in by the patient‐instructors. (2) The improvement afforded by patient‐instructors was evaluated through a second (identical) open‐questions test (post‐test). (3) The resulting skills of the students were further assessed by an individual patient‐instructor physical status record form (PSRF), filled in by the students. Results Pre‐tests and post‐tests showed an improvement in correct answers from a mean score of 39% to 47%. The history‐taking questions that obtained <50% scores in the pre‐test mostly dealt with the consequences of a chronic illness. Intervention of patient‐instructors especially improved knowledge of the psychosocial aspects and side effects of drugs. With regard to physical examination, patient‐instructors markedly improved the identification and assessment of signs of active and chronic inflammation. PRF analysis showed that 10 of 28 questions answered by <50% of the students were related to disease characteristics of rheumatoid arthritis, extra‐articular signs, side effects of drugs and psychosocial aspects. Analysis of the PSRF indicated that the weakness of our students' physical examination abilities relates in particular to recognising types of swelling and differentiating tenderness from pain on motion. Conclusion This study demonstrates the considerable benefits of involving patient‐instructors in the teaching and assessment of clinical skills in students. PMID:16707537

  1. Take Advantage of Constitution Day

    ERIC Educational Resources Information Center

    McCune, Bonnie F.

    2008-01-01

    The announcement of the mandate for Constitution and Citizenship Day shortly before September, 2005, probably led to groans of dismay. Not another "must-do" for teachers and schools already stressed by federal and state requirements for standardized tests, increasingly rigid curricula, and scrutiny from the public and officials. But the idea and…

  2. Taking Advantage of the Disadvantaged.

    ERIC Educational Resources Information Center

    Fantini, Mario D.; Weinstein, Gerald

    1967-01-01

    The problem of the effective development of educational programs for the educationally disadvantaged is discussed. Salient points for the revitalization of American education are presented, including the major thesis that all American children are educationally disadvantaged. To improve the education of disadvantaged children, the educational…

  3. Comparative advantages of novel algorithms using MSR threshold and MSR difference threshold for biclustering gene expression data.

    PubMed

    Das, Shyama; Idicula, Sumam Mary

    2011-01-01

    The goal of biclustering in a gene expression data matrix is to find a submatrix such that the genes in the submatrix show highly correlated activities across all conditions in the submatrix. A measure called the mean squared residue (MSR) is used to simultaneously evaluate the coherence of rows and columns within the submatrix. The MSR difference is the incremental increase in MSR when a gene or condition is added to the bicluster. In this chapter, three biclustering algorithms using an MSR threshold (MSRT) and an MSR difference threshold (MSRDT) are tested and compared. All these methods use seeds generated by the K-means clustering algorithm. These seeds are then enlarged by adding more genes and conditions. The first algorithm makes use of the MSRT alone. Both the second and third algorithms make use of the MSRT and the newly introduced MSRDT concept. Highly coherent biclusters are obtained using this concept. In the third algorithm, a different method is used to calculate the MSRDT. The results obtained on benchmark datasets show that these algorithms perform better than many metaheuristic algorithms. PMID:21431553
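
    The mean squared residue of a candidate bicluster with gene set I and condition set J is the average of (a_ij - a_iJ - a_Ij + a_IJ)^2, where a_iJ and a_Ij are the row and column means and a_IJ is the submatrix mean; the MSR difference is simply the change in this value when a row or column is added. A minimal NumPy sketch of these two quantities (the threshold values MSRT and MSRDT below are illustrative placeholders, not those used by the authors):

      import numpy as np

      def mean_squared_residue(sub):
          # sub: 2-D array (genes x conditions) forming a candidate bicluster
          row_means = sub.mean(axis=1, keepdims=True)
          col_means = sub.mean(axis=0, keepdims=True)
          residues = sub - row_means - col_means + sub.mean()
          return float((residues ** 2).mean())

      def msr_difference(sub, new_row):
          # incremental increase in MSR when one more gene (row) joins the bicluster
          return mean_squared_residue(np.vstack([sub, new_row])) - mean_squared_residue(sub)

      # toy acceptance test for a candidate gene
      rng = np.random.default_rng(0)
      seed_bicluster = rng.normal(size=(5, 4))
      candidate_gene = rng.normal(size=(1, 4))
      MSRT, MSRDT = 1.5, 0.1          # illustrative thresholds
      enlarged_msr = mean_squared_residue(np.vstack([seed_bicluster, candidate_gene]))
      accept = enlarged_msr < MSRT and msr_difference(seed_bicluster, candidate_gene) < MSRDT
      print(enlarged_msr, accept)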

  4. Optimized MPPT algorithm for boost converters taking into account the environmental variables

    NASA Astrophysics Data System (ADS)

    Petit, Pierre; Sawicki, Jean-Paul; Saint-Eve, Frédéric; Maufay, Fabrice; Aillerie, Michel

    2016-07-01

    This paper presents a study of the specific behavior of boost DC-DC converters generally used for power conversion of PV panels connected to an HVDC (High Voltage Direct Current) bus. It follows earlier work pointing out that the converter MPPT (Maximum Power Point Tracker) is severely perturbed by output voltage variations, owing to the physical dependency between parameters such as the input voltage, the output voltage, and the duty cycle of the PWM switching control of the MPPT. As a direct consequence, many converters connected to the same load perturb each other through the output voltage variations induced by fluctuations on the HVDC bus, essentially due to a non-negligible bus impedance. In this paper we show that it is possible to include an internally computed variable responsible for compensating local and external variations, thereby taking the environmental variables into account.
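
    As a rough illustration of where such a compensation term could enter, the sketch below shows a generic perturb-and-observe MPPT update in which the duty cycle receives an extra correction proportional to the deviation of the HVDC bus voltage from its nominal value. The structure and the gain k_comp are assumptions for illustration only, not the optimized algorithm developed in the paper.

      # Generic perturb-and-observe MPPT step with a hypothetical bus-voltage
      # compensation term (illustrative only; not the paper's algorithm).
      def mppt_step(v_pv, i_pv, state, v_bus, v_bus_nominal,
                    delta_d=0.005, k_comp=0.01):
          p = v_pv * i_pv
          direction = state["direction"]
          if p < state["p_prev"]:
              direction = -direction            # power dropped: reverse the perturbation
          compensation = k_comp * (v_bus - v_bus_nominal) / v_bus_nominal
          duty = state["duty"] + direction * delta_d + compensation
          duty = min(max(duty, 0.05), 0.95)     # keep the duty cycle in a safe range
          return {"p_prev": p, "duty": duty, "direction": direction}

      state = {"p_prev": 0.0, "duty": 0.5, "direction": +1}
      state = mppt_step(v_pv=30.2, i_pv=5.1, state=state,
                        v_bus=402.0, v_bus_nominal=400.0)
      print(state["duty"])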

  5. Taking advantage of data on N leaching to improve estimates of N2O emission reductions from agriculture in response to management changes

    NASA Astrophysics Data System (ADS)

    Gurwick, N. P.; Tonitto, C.

    2012-12-01

    Estimates of reductions in N2O emissions from agricultural soils associated with different crop management practices often focus on in-field emissions. This is particularly true in the context of policy development for carbon offsets which are highly relevant in California, given the state's global warming protection law (AB 32). However, data sets often do not cover an entire year, missing key times such as spring thaw, and only rarely do they span multiple years even though inter-annual variation can be large. In the most productive grain systems on tile-drained Mollisols in the U.S. there are no long-term data sets of N2O flux, although these agroecosystems have the highest application rates of N fertilizer in grain systems and are prime candidates for large reductions in N2O emissions. In contrast, estimates of the influence of management practices like cover crops are much stronger because more data are available, and downstream N2O emissions should shift proportionally. Nevertheless, these changes in downstream emissions are frequently not included in estimates of N2O flux change. As an example, cereal cover crops reduce N leakage by 70%, and leguminous cover crops reduce N leakage by 40%. These data should inform estimates of downstream N2O emissions from agricultural fields, particularly in the context of protocol development, where project developers or aggregators will have information about basic management of individual crop fields. Even the IPCC default guidelines for simple (Tier 1) emission factors could take this information into account. Despite the complexity of estimating downstream N2O emissions in the absence of site-specific hydrology data, the IPCC estimates that 30% of applied N is lost and that between 0.75% and 1.0 % of lost N is converted to N2O. That single estimate should be refined based on data showing that leaching varies with management practices.
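
    The quoted factors lend themselves to a simple back-of-envelope calculation: downstream N2O-N is roughly applied N times the fraction lost times the fraction of lost N converted to N2O, and a cover crop scales the loss term down by the cited leaching reduction. The sketch below only arranges the numbers already given in the abstract; the application rate of 200 kg N/ha is a hypothetical example.

      # Back-of-envelope downstream N2O estimate from the factors quoted above.
      def downstream_n2o(applied_n_kg, frac_lost=0.30, frac_to_n2o=0.01,
                         leaching_reduction=0.0):
          """kg N2O-N emitted downstream, after a leaching reduction."""
          n_lost = applied_n_kg * frac_lost * (1.0 - leaching_reduction)
          return n_lost * frac_to_n2o

      applied = 200.0                                            # kg N/ha (hypothetical)
      baseline = downstream_n2o(applied)                         # no cover crop  -> 0.60
      cereal = downstream_n2o(applied, leaching_reduction=0.70)  # cereal cover   -> 0.18
      legume = downstream_n2o(applied, leaching_reduction=0.40)  # legume cover   -> 0.36
      print(baseline, cereal, legume)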

  6. Taking Full Advantage of Children's Literature

    ERIC Educational Resources Information Center

    Serafini, Frank

    2012-01-01

    Teachers need a deeper understanding of the texts being discussed, in particular the various textual and visual aspects of picturebooks themselves, including the images, written text and design elements, to support how readers made sense of these texts. As teachers become familiar with aspects of literary criticism, art history, visual grammar,…

  7. A Tax Break for Teachers. Take Advantage!

    ERIC Educational Resources Information Center

    Milam, Edward E.

    1974-01-01

    Teachers are permitted to deduct certain education expenses for income tax purposes--one way of letting Uncle Sam bear part of the financial burden. Keeping accurate records and properly claiming the deductions on the tax return are necessary. (Author/MW)

  8. Educational advantage.

    PubMed

    2006-03-01

    What special advantage does JERHRE offer to research ethics education? Empirical research employs concepts and methods for understanding and addressing problems; the methods employed can be generalized to related problems in new contexts. Research published in JERHRE uses concepts and methods designed to understand and solve ethical problems in human research. These tools can be reused by JERHRE's readership as part of their learning and problem solving. Instead of telling scientists, students, ethics committee members and others what they ought to do, educators can use curriculum based on the empirical articles contained in JERHRE to enable learners to solve the particular research-related problems they confront. Each issue of JERHRE publishes curriculum based on articles published therein. The lesson plans are deliberately general so that they can be adapted to the particular learners. PMID:19385863

  9. Educational advantage.

    PubMed

    2006-06-01

    What special advantage does JERHRE offer to research ethics education? Empirical research employs concepts and methods for understanding and addressing problems; the methods employed can be generalized to related problems in new contexts. Research published in JERHRE uses concepts and methods designed to understand and solve ethical problems in human research. These tools can be reused by JERHRE's readership as part of their learning and problem solving. Instead of telling scientists, students, ethics committee members and others what they ought to do, educators can use curriculum based on the empirical articles contained in JERHRE to enable learners to solve the particular research-related problems they confront. Each issue of JERHRE publishes curriculum based on articles published therein. The lesson plans are deliberately general so that they can be adapted to the particular learners. PMID:19385873

  10. [Validation of the modified algorithm for predicting host susceptibility to viruses taking into account susceptibility parameters of primary target cell cultures and natural immunity factors].

    PubMed

    Zhukov, V A; Shishkina, L N; Safatov, A S; Sergeev, A A; P'iankov, O V; Petrishchenko, V A; Zaĭtsev, B N; Toporkov, V S; Sergeev, A N; Nesvizhskiĭ, Iu V; Vorob'ev, A A

    2010-01-01

    The paper presents results of testing a modified algorithm for predicting virus ID50 values in a host of interest by extrapolation from a model host taking into account immune neutralizing factors and thermal inactivation of the virus. The method was tested for A/Aichi/2/68 influenza virus in SPF Wistar rats, SPF CD-1 mice and conventional ICR mice. Each species was used as a host of interest while the other two served as model hosts. Primary lung and trachea cells and secretory factors of the rats' airway epithelium were used to measure parameters needed for the purpose of prediction. Predicted ID50 values were not significantly different (p = 0.05) from those experimentally measured in vivo. The study was supported by ISTC/DARPA Agreement 450p. PMID:20608042

  11. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J.; Fink, L.H.; Balu, N.J.; Crim, H.G.

    1989-01-01

    This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.

  12. Creationists Take Advantage of Michigan's Charter School Law.

    ERIC Educational Resources Information Center

    Matsumura, Molleen

    1994-01-01

    Reports on the establishment of a charter school in Michigan that serves home-schooling families and stresses a creationist curriculum. Describes the legal action taken against public funding of the charter school. (DDR)

  13. Taking advantage of Google's Web-based applications and services.

    PubMed

    Brigham, Tara J

    2014-01-01

    Google is a company that is constantly expanding and growing its services and products. While most librarians possess a "love/hate" relationship with Google, there are a number of reasons you should consider exploring some of the tools Google has created and made freely available. Applications and services such as Google Docs, Slides, and Google+ are functional and dynamic without the cost of comparable products. This column will address some of the issues users should be aware of before signing up to use Google's tools, and a description of some of Google's Web applications and services, plus how they can be useful to librarians in health care. PMID:24735269

  14. Take Advantage of Technology to Boost Principal Power.

    ERIC Educational Resources Information Center

    Sharman, Charles C.; Cothern, Harold L.

    1986-01-01

    The role of computers in elementary schools is outlined for principals. Computers are part of the future and have the potential for increasing administrator's productivity and for freeing principals to provide the leadership expected of them. (MD)

  15. Taking Advantage of Murder and Mayhem for Social Studies.

    ERIC Educational Resources Information Center

    Harden, G. Daniel

    1991-01-01

    Suggests the use of key historical antisocial acts to teach social studies concepts as a means of arousing the interest of adolescents. Recommends overcoming initial sensationalism by shifting emphasis to more appropriate interests. Includes discussion of the Abraham Lincoln and John F. Kennedy assassinations and the Rosenberg spy case. Suggests…

  16. Superior piezoelectric composite films: taking advantage of carbon nanomaterials.

    PubMed

    Saber, Nasser; Araby, Sherif; Meng, Qingshi; Hsu, Hung-Yao; Yan, Cheng; Azari, Sara; Lee, Sang-Heon; Xu, Yanan; Ma, Jun; Yu, Sirong

    2014-01-31

    Piezoelectric composites comprising an active phase of ferroelectric ceramic and a polymer matrix have recently found numerous sensory applications. However, it remains a major challenge to further improve their electromechanical response for advanced applications such as precision control and monitoring systems. We here investigated the incorporation of graphene platelets (GnPs) and multi-walled carbon nanotubes (MWNTs), each with various weight fractions, into PZT (lead zirconate titanate)/epoxy composites to produce three-phase nanocomposites. The nanocomposite films show markedly improved piezoelectric coefficients and electromechanical responses (50%) besides an enhancement of ~200% in stiffness. The carbon nanomaterials strengthened the impact of electric field on the PZT particles by appropriately raising the electrical conductivity of the epoxy. GnPs have been proved to be far more promising in improving the poling behavior and dynamic response than MWNTs. The superior dynamic sensitivity of GnP-reinforced composite may be caused by the GnPs' high load transfer efficiency arising from their two-dimensional geometry and good compatibility with the matrix. The reduced acoustic impedance mismatch resulting from the improved thermal conductance may also contribute to the higher sensitivity of GnP-reinforced composite. This research pointed out the potential of employing GnPs to develop highly sensitive piezoelectric composites for sensing applications. PMID:24398819

  17. Taking Advantage of Alice to Teach Programming Concepts

    ERIC Educational Resources Information Center

    Biju, Soly Mathew

    2013-01-01

    Learning the fundamentals of programming languages has always been a difficult task for students. It is equally challenging for lecturers to teach these concepts. A number of methods have been deployed by teachers to teach these concepts. This article analyses the result of a class test to identify fundamental programming concepts that students…

  18. Too Few Americans Take Advantage of Local Parks

    MedlinePlus

    ... many people use their services, but parks and recreation departments have not had any metrics to adequately ... managed by more than 9,000 park and recreation departments. Local parks range in size from 2 ...

  19. Taking advantage of acoustic inhomogeneities in photoacoustic measurements

    NASA Astrophysics Data System (ADS)

    Da Silva, Anabela; Handschin, Charles; Riedinger, Christophe; Piasecki, Julien; Mensah, Serge; Litman, Amélie; Akhouayri, Hassan

    2016-03-01

    Photoacoustics offers promising perspectives for probing and imaging subsurface optically absorbing structures in biological tissues. The optical fluence absorbed is partly dissipated into heat, accompanied by microdilatations that generate acoustic pressure waves, the intensity of which is related to the amount of fluence absorbed. Hence the measured photoacoustic signal offers access, at least potentially, to a local monitoring of the absorption coefficient, in 3D if tomographic measurements are considered. However, due to both the diffusing and absorbing nature of the surrounding tissues, the major part of the fluence is deposited locally at the periphery of the tissue, generating an intense acoustic pressure wave that may hide relevant photoacoustic signals. Experimental strategies have been developed in order to measure exclusively the photoacoustic waves generated by the structure of interest (orthogonal illumination and detection). Temporal or more sophisticated filters (wavelets) can also be applied. However, the measurement of this primary acoustic wave carries a lot of information about the acoustically inhomogeneous nature of the medium. We propose a protocol that includes the processing of this primary intense acoustic wave, leading to the quantification of the surrounding medium's sound speed and, where appropriate, to an acoustical parametric image of the heterogeneities. This information is then included as prior knowledge in the photoacoustic reconstruction scheme to improve localization and quantification.

  20. Sink or Swim: Taking Advantage of Developments in Video Streaming

    ERIC Educational Resources Information Center

    Fill, Karen; Ottewill, Roger

    2006-01-01

    Amongst the many recent developments in learning technology, video streaming appears to offer a considerable range of benefits for tutors and learners alike. For these to be fully realised, however, various conditions have to be met. Merely making streams available and directing students to them, does not necessarily result in quality, or indeed…

  1. Tax-advantaged investing: a wise choice.

    PubMed

    Smith, J

    2001-01-01

    Your investment strategy should be just that--a plan to make the most of your assets. Considering the tax advantages and disadvantages helps you stretch your investments and take full advantage of stocks, mutual funds, and other investments. PMID:11862646

  2. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)² as compared with N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
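
    The key idea behind the vector version of Quicksort is that partitioning can be expressed as whole-vector comparisons and compress operations rather than an element-by-element scan. A rough modern analogue using NumPy mask selection (purely illustrative; this is not the STAR implementation, and the recursion remains serial):

      import numpy as np

      def vector_quicksort(v):
          # partitioning via whole-vector compare + compress, in the spirit of vector hardware
          if v.size <= 1:
              return v
          pivot = v[v.size // 2]
          less, equal, greater = v[v < pivot], v[v == pivot], v[v > pivot]
          return np.concatenate([vector_quicksort(less), equal, vector_quicksort(greater)])

      print(vector_quicksort(np.array([5, 3, 8, 1, 9, 2, 7])))   # [1 2 3 5 7 8 9]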

  3. Parallelism of the SANDstorm hash algorithm.

    SciTech Connect

    Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree

    2009-09-01

    Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed, and they are unable to take advantage of the current trend toward multi-core platforms. This speed limitation in turn limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. An implementation using OpenMP demonstrates a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
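
    The general idea of a parallelizable hashing mode can be illustrated with a simple chunk-then-combine (tree-style) construction: independent chunks are hashed concurrently and the concatenated digests are hashed once more. The sketch below uses SHA-256 as the leaf hash purely for illustration; it is a generic construction, not the SANDstorm algorithm itself.

      import hashlib
      from concurrent.futures import ProcessPoolExecutor

      def sha256(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def parallel_chunked_hash(message: bytes, chunk_size: int = 1 << 20) -> bytes:
          chunks = [message[i:i + chunk_size] for i in range(0, len(message), chunk_size)]
          with ProcessPoolExecutor() as pool:
              digests = list(pool.map(sha256, chunks))   # leaves hashed in parallel
          return sha256(b"".join(digests))               # combining stage

      if __name__ == "__main__":
          print(parallel_chunked_hash(b"x" * (8 << 20)).hex())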

  4. Spatial search algorithms on Hanoi networks

    NASA Astrophysics Data System (ADS)

    Marquezino, Franklin de Lima; Portugal, Renato; Boettcher, Stefan

    2013-01-01

    We use the abstract search algorithm and its extension due to Tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in Hanoi networks of degree 4 faster than classical algorithms. We also analyze the effect of using non-Groverian coins that take advantage of the small-world structure of the Hanoi networks. We obtain the scaling of the total cost of the algorithm as a function of the number of vertices. We show that Tulsi's technique plays an important role to speed up the searching algorithm. We can improve the algorithm's efficiency by choosing a non-Groverian coin if we do not implement Tulsi's method. Our conclusions are based on numerical implementations.

  5. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  6. The Certification Advantage

    ERIC Educational Resources Information Center

    Foster, John C.; Pritz, Sandra G.

    2006-01-01

    Certificates have become an important career credential and can give students an advantage when they enter the workplace. However, many types of certificates exist, and the number of people seeking them and organizations offering them are both growing rapidly. In a time of such growth, the authors review some of the basics about certification--the…

  7. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD and are implemented on the MasPar MP-2 architecture. The two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
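
    The quantity all of these algorithms compute is the LRU stack distance of each reference: with one pass over the trace, the hit ratio of every cache size is obtained at once, since a reference hits in a cache of size k exactly when its stack distance is at most k. A minimal serial version of this classic (Mattson-style) computation, shown as a point of reference for what the parallel algorithms reproduce:

      from collections import Counter

      def lru_stack_distances(trace):
          stack, distances = [], []            # most recently used element at the front
          for ref in trace:
              if ref in stack:
                  d = stack.index(ref) + 1     # stack distance (1 = most recently used)
                  stack.remove(ref)
              else:
                  d = float("inf")             # cold miss
              distances.append(d)
              stack.insert(0, ref)
          return distances

      trace = ["a", "b", "c", "a", "b", "d", "a"]
      hist = Counter(lru_stack_distances(trace))
      for k in (1, 2, 3, 4):                   # hit ratio for each cache size k
          hits = sum(count for d, count in hist.items() if d <= k)
          print(k, hits / len(trace))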

  8. Taking antacids

    MedlinePlus

    ... magnesium may cause diarrhea. Brands with calcium or aluminum may cause constipation. Rarely, brands with calcium may ... you take large amounts of antacids that contain aluminum, you may be at risk for calcium loss, ...

  9. Taking Time

    ERIC Educational Resources Information Center

    Perry, Tonya

    2004-01-01

    The opportunity for students to successfully complete the material increases when teachers take time and care about what they are reading. Students can read the contents of a text successfully if they keep their thoughts moving and ideas developing.

  10. A limited-memory algorithm for bound-constrained optimization

    SciTech Connect

    Byrd, R.H.; Peihuang, L.; Nocedal, J.

    1996-03-01

    An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
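
    The algorithm described here underlies the widely distributed L-BFGS-B code, which SciPy exposes through its optimizer interface; a minimal bound-constrained example (assuming NumPy and SciPy are available) looks like this:

      import numpy as np
      from scipy.optimize import minimize

      def rosenbrock(x):
          return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

      x0 = np.array([0.5, 0.5, 0.5])
      bounds = [(0.0, 2.0)] * 3                          # simple bounds on every variable
      result = minimize(rosenbrock, x0, method="L-BFGS-B", bounds=bounds)
      print(result.x, result.fun)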

  11. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
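
    For context, the baseline this speeds up is solving one non-negative least squares problem per observation vector; broadly speaking, the combinatorial algorithm gains its speed by grouping observation vectors whose solutions share the same set of unconstrained variables and handling each group together. The slow baseline loop is sketched below using SciPy's standard NNLS solver; it is a reference point only, not the patented algorithm.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      A = rng.random((50, 5))                 # mixing / design matrix
      B = rng.random((50, 1000))              # many observation vectors, one per column

      X = np.zeros((A.shape[1], B.shape[1]))
      for j in range(B.shape[1]):             # one independent NNLS solve per column
          X[:, j], _ = nnls(A, B[:, j])
      print(X.shape, bool((X >= 0).all()))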

  12. Creating corporate advantage.

    PubMed

    Collis, D J; Montgomery, C A

    1998-01-01

    What differentiates truly great corporate strategies from the merely adequate? How can executives at the corporate level create tangible advantage for their businesses that makes the whole more than the sum of the parts? This article presents a comprehensive framework for value creation in the multibusiness company. It addresses the most fundamental questions of corporate strategy: What businesses should a company be in? How should it coordinate activities across businesses? What role should the corporate office play? How should the corporation measure and control performance? Through detailed case studies of Tyco International, Sharp, the Newell Company, and Saatchi and Saatchi, the authors demonstrate that the answers to all those questions are driven largely by the nature of a company's special resources--its assets, skills, and capabilities. These range along a continuum from the highly specialized at one end to the very general at the other. A corporation's location on the continuum constrains the set of businesses it should compete in and limits its choices about the design of its organization. Applying the framework, the authors point out the common mistakes that result from misaligned corporate strategies. Companies mistakenly enter businesses based on similarities in products rather than the resources that contribute to competitive advantage in each business. Instead of tailoring organizational structures and systems to the needs of a particular strategy, they create plain-vanilla corporate offices and infrastructures. The company examples demonstrate that one size does not fit all. One can find great corporate strategies all along the continuum. PMID:10179655

  13. Binocular advantages in reading.

    PubMed

    Jainta, Stephanie; Blythe, Hazel I; Liversedge, Simon P

    2014-03-01

    Reading, an essential skill for successful function in today's society, is a complex psychological process involving vision, memory, and language comprehension. Variability in fixation durations during reading reflects the ease of text comprehension, and increased word frequency results in reduced fixation times. Critically, readers not only process the fixated foveal word but also preprocess the parafoveal word to its right, thereby facilitating subsequent foveal processing. Typically, text is presented binocularly, and the oculomotor control system precisely coordinates the two frontally positioned eyes online. Binocular, compared to monocular, visual processing typically leads to superior performance, termed the "binocular advantage"; few studies have investigated the binocular advantage in reading. We used saccade-contingent display change methodology to demonstrate the benefit of binocular relative to monocular text presentation for both parafoveal and foveal lexical processing during reading. Our results demonstrate that denial of a unified visual signal derived from binocular inputs provides a cost to the efficiency of reading, particularly in relation to high-frequency words. Our findings fit neatly with current computational models of eye movement control during reading, wherein successful word identification is a primary determinant of saccade initiation. PMID:24530062

  14. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  15. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
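
    A minimal sketch of the kind of sampling-based stopping rule described above, under simplifying assumptions: the candidate solution is fixed, the scenario costs and sampled lower bounds are taken to be i.i.d., and the two estimators are treated as independent. The helper names are illustrative, not taken from the report.

```python
"""Sketch of a sampling-based stopping rule for a decomposition algorithm.

Hypothetical setup: `upper_samples` holds costs of a fixed candidate solution
evaluated on independently sampled scenarios, and `lower_samples` holds optimal
values of independently sampled approximations; both stand in for the
subproblem machinery of a real Benders-type code.
"""
import numpy as np
from scipy.stats import norm

def gap_confidence_bound(upper_samples, lower_samples, alpha=0.05):
    """One-sided (1 - alpha) upper confidence bound on the optimality gap."""
    upper_samples = np.asarray(upper_samples, dtype=float)
    lower_samples = np.asarray(lower_samples, dtype=float)
    gap_est = upper_samples.mean() - lower_samples.mean()
    # Conservative standard error: treat the two estimators as independent.
    se = np.sqrt(upper_samples.var(ddof=1) / len(upper_samples)
                 + lower_samples.var(ddof=1) / len(lower_samples))
    return gap_est + norm.ppf(1 - alpha) * se

# Terminate when the confidence bound on the gap falls below a tolerance:
# if gap_confidence_bound(u, l) <= tol: stop and report the candidate solution.
```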

  16. Double Take

    ERIC Educational Resources Information Center

    Educational Leadership, 2011

    2011-01-01

    This paper begins by discussing the results of two studies recently conducted in Australia. According to the two studies, taking a gap year between high school and college may help students complete a degree once they return to school. The gap year can involve such activities as travel, service learning, or work. Then, the paper presents links to…

  17. Taking Turns

    ERIC Educational Resources Information Center

    Hopkins, Brian

    2010-01-01

    Two people take turns selecting from an even number of items. Their relative preferences over the items can be described as a permutation, then tools from algebraic combinatorics can be used to answer various questions. We describe each person's optimal selection strategies including how each could make use of knowing the other's preferences. We…

  18. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    PubMed

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and the genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of either one used alone. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that promising substructures or partial solutions can be generated by the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than either the GA or the FSM operating individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031
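
    The following is a minimal, generic sketch of the seeding idea described above: part of the GA population is initialized from a heuristic solution so that the search starts near promising substructures. The fitness function and the heuristic are placeholders, not the paper's crude-oil scheduling model or its FSM.

```python
"""Sketch: seeding a genetic algorithm with heuristic solutions.

The fitness function and heuristic below are placeholders; in the paper the
heuristic partial solutions come from a finite state method applied to the
crude-oil scheduling model.
"""
import numpy as np

rng = np.random.default_rng(0)
N_GENES, POP, GENS = 30, 40, 200

def fitness(x):                      # placeholder objective (to maximize)
    return x.sum() - 2.0 * np.abs(np.diff(x)).sum()

def heuristic_solution():            # stand-in for an FSM-derived solution
    return np.ones(N_GENES, dtype=int)

# Seed half the population from the heuristic (with small perturbations),
# the rest at random, so the GA starts near promising substructures.
pop = rng.integers(0, 2, size=(POP, N_GENES))
for i in range(POP // 2):
    x = heuristic_solution()
    flips = rng.random(N_GENES) < 0.05
    pop[i] = np.where(flips, 1 - x, x)

for _ in range(GENS):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]          # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_GENES)
        child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
        mut = rng.random(N_GENES) < 0.02                    # bit-flip mutation
        children.append(np.where(mut, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(x) for x in pop])]
```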

  19. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, which is based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and finally the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard benchmark sequences, both Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the calculated protein sequence energy value, which proves it to be an effective way to predict the structure of proteins. PMID:25069136

  20. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  1. Evaluation of TCP congestion control algorithms.

    SciTech Connect

    Long, Robert Michael

    2003-12-01

    Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high-speed, Wide Area Network links to permit remote access to their supercomputer systems. The current TCP congestion algorithm does not take full advantage of high-delay, large-bandwidth environments. This report involves evaluating alternative TCP congestion algorithms and comparing them with the currently used congestion algorithm. The goal was to determine whether an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30 ms delay, and two-to-one with a 30 ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.

  2. A new algorithm for agile satellite-based acquisition operations

    NASA Astrophysics Data System (ADS)

    Bunkheila, Federico; Ortore, Emiliano; Circi, Christian

    2016-06-01

    Taking advantage of the high manoeuvrability and accurate pointing of the so-called agile satellites, an algorithm which allows efficient management of the operations concerning optical acquisitions is described. Fundamentally, this algorithm can be subdivided into two parts: in the first, the algorithm performs a geometric classification of the areas of interest and partitions these areas into stripes which develop along the optimal scan directions; in the second, it computes the succession of time windows in which the acquisition operations over the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and performance of the proposed algorithm have been determined and discussed for the case of Periodic Sun-Synchronous Orbits.

  3. A real-time FORTRAN implementation of a sensor failure detection, isolation and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.

    1984-01-01

    An advanced sensor failure detection, isolation, and accommodation algorithm has been developed by NASA for the F100 turbofan engine. The algorithm takes advantage of the analytical redundancy of the sensors to improve the reliability of the sensor set. The method requires the controls computer to determine when a sensor failure has occurred without the help of redundant hardware sensors in the control system. The controls computer provides an estimate of the correct value of the output of the failed sensor. The algorithm has been programmed in FORTRAN using a real-time microprocessor-based controls computer. A detailed description of the algorithm and its implementation on a microprocessor is given.
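
    A minimal sketch of the analytical-redundancy idea: compare each sensor reading against a model-based estimate, declare a failure when the normalized residual exceeds a threshold, and substitute the estimate for the failed reading. The model outputs, noise levels, and threshold below are placeholders, not the F100 algorithm.

```python
"""Sketch of analytical-redundancy sensor failure detection and accommodation.

The `estimated` values stand in for an on-board model that predicts what each
sensor should read; they are placeholders, not the NASA F100 engine model.
"""
import numpy as np

THRESHOLD = 3.0   # residual threshold, in units of expected sensor noise sigma

def fdi_step(measured, estimated, noise_sigma, failed):
    """One detection/isolation/accommodation step for a vector of sensors."""
    residual = (measured - estimated) / noise_sigma
    failed = failed | (np.abs(residual) > THRESHOLD)    # detect and isolate
    # Accommodate: use the analytical estimate in place of any failed sensor.
    accommodated = np.where(failed, estimated, measured)
    return accommodated, failed

# Example with a hypothetical 4-sensor set:
measured  = np.array([101.0, 55.2, 0.0, 7.8])   # third sensor has failed to zero
estimated = np.array([100.5, 55.0, 12.1, 7.9])  # model-based expectations
sigma     = np.array([1.0, 0.5, 0.8, 0.2])
values, failed = fdi_step(measured, estimated, sigma, np.zeros(4, dtype=bool))
```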

  4. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
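
    The report's two-stage semidefinite-programming formulation is not reproduced here; the sketch below instead solves a simplified bounded least-squares version of the same allocation problem with SciPy, to illustrate what minimizing the error between commanded and achieved force/torque subject to actuator limits looks like. The actuator geometry and limits are invented.

```python
"""Sketch: actuator allocation as bounded least squares.

B maps individual actuator commands u to total force/torque; c is the
commanded total. Values are illustrative. The report solves a more general
two-stage semidefinite program; this shows only the basic allocation idea.
"""
import numpy as np
from scipy.optimize import lsq_linear

# 6 outputs (3 force + 3 torque components), 8 actuators (made-up geometry).
rng = np.random.default_rng(1)
B = rng.normal(size=(6, 8))
c = np.array([1.0, -0.5, 0.2, 0.05, 0.0, -0.1])   # commanded force/torque

# Lower/upper limits on each actuator (e.g., thrusters cannot push negatively).
lower = np.zeros(8)
upper = np.full(8, 0.4)

res = lsq_linear(B, c, bounds=(lower, upper))
u = res.x                        # allocated actuator commands
achieved = B @ u                 # total force/torque actually produced
error = np.linalg.norm(achieved - c)
```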

  5. Genetic algorithm for bundle adjustment in aerial panoramic stitching

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxiao; Wen, Gaojin; Wu, Chunnan; Wang, Hongmin; Shang, Zhiming; Zhang, Qian

    2015-03-01

    This paper presents a genetic algorithm for bundle adjustment in aerial panoramic stitching. Compared with the conventional LM (Levenberg-Marquardt) algorithm for bundle adjustment, the proposed bundle adjustment combined with genetic algorithm optimization eliminates the possibility of getting stuck in a local minimum and does not require an initial estimate of the desired parameters, naturally avoiding the associated steps, which include the normalization of matches, the computation of the homography transformation, and the calculation of the rotation transformation and the focal length. Since the proposed bundle adjustment is composed of the directional vectors of matches and takes advantage of the genetic algorithm (GA), the Jacobian matrix and the normalization of residual error are not involved in the search process. The experiment verifies that the proposed bundle adjustment based on the genetic algorithm can yield the global solution even under unstable aerial imaging conditions.

  6. Modeling words with subword units in an articulatorily constrained speech recognition algorithm

    SciTech Connect

    Hogden, J.

    1997-11-20

    The goal of speech recognition is to find the most probable word given the acoustic evidence, i.e. a string of VQ codes or acoustic features. Speech recognition algorithms typically take advantage of the fact that the probability of a word, given a sequence of VQ codes, can be calculated.
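
    In its simplest form, the calculation referred to above is an application of Bayes' rule; the toy sketch below picks the most probable word from made-up per-word code-emission probabilities and is only illustrative of the idea.

```python
"""Toy sketch: choosing the most probable word given VQ codes via Bayes' rule.

The per-word code-emission probabilities are invented for illustration; a real
recognizer would estimate them from training data (e.g., with an HMM).
"""
import numpy as np

words = ["yes", "no"]
prior = {"yes": 0.5, "no": 0.5}
# P(code | word) for a small VQ codebook of 3 codes, per word (made up).
emission = {"yes": np.array([0.7, 0.2, 0.1]),
            "no":  np.array([0.1, 0.3, 0.6])}

codes = [0, 0, 2]   # observed VQ code sequence

def log_posterior(word):
    # log P(word) + sum_t log P(code_t | word), assuming independent codes
    return np.log(prior[word]) + sum(np.log(emission[word][c]) for c in codes)

best = max(words, key=log_posterior)   # argmax_w P(w | codes)
```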

  7. Flow mediated endothelium function: advantages of an automatic measuring technique

    NASA Astrophysics Data System (ADS)

    Maio, Yamila; Casciaro, Mariano E.; José Urcola y, Maria; Craiem, Damian

    2007-11-01

    The objective of this work is to show the advantages of a non-invasive automated method for measuring flow-mediated dilation (FMD) in the forearm. This dilation takes place in response to shear stress generated by the increase in blood flow, sensed by the endothelium, after the release of an occlusion sustained over time. The method consists of three stages: the continuous acquisition of images of the brachial artery using ultrasound techniques, the pulse-to-pulse measurement of the vessel's diameter by means of a border detection algorithm, and the later analysis of the results. By means of this technique one can not only obtain the maximum dilation percentage (FMD%), but also a continuous diameter curve that allows other relevant aspects to be evaluated, such as dilation speed, how long the dilation is sustained, and overall maneuver performance. The simplicity of this method, the robustness of the technique, and the accessibility of the required elements make it a viable alternative of great clinical value for diagnosis in the early detection of numerous cardiovascular pathologies.

  8. Enhanced decomposition algorithm for multistage stochastic hydroelectric scheduling. Technical report

    SciTech Connect

    Morton, D.P.

    1994-01-01

    Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Stochastic programming, Hydroelectric scheduling, Large-scale Systems.

  9. Implicit, nonswitching, vector-oriented algorithm for steady transonic flow

    NASA Technical Reports Server (NTRS)

    Lottati, I.

    1983-01-01

    A rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The employment of low-cost vector array processors makes such calculations economically feasible. However, for full utilization of the new hardware, the developed algorithms must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.

  10. Home advantage in Greek football.

    PubMed

    Armatas, Vasilis; Pollard, Richard

    2014-01-01

    Home advantage as it relates to team performance at football was examined in Superleague Greece using nine seasons of game-by-game performance data, a total of 2160 matches. After adjusting for team ability and annual fluctuations in home advantage, there were significant differences between teams. Previous findings regarding the role of territorial protection were strengthened by the fact that home advantage was above average for the team from Xanthi (P =0.015), while lower for teams from the capital city Athens (P =0.008). There were differences between home and away teams in the incidence of most of the 13 within-game match variables, but associated effect sizes were only moderate. In contrast, outcome ratios derived from these variables, and measuring shot success, had negligible effect sizes. This supported a previous finding that home and away teams differed in the incidence of on-the-ball behaviours, but not in their outcomes. By far the most important predictor of home advantage, as measured by goal difference, was the difference between home and away teams in terms of kicked shots from inside the penalty area. Other types of shots had little effect on the final score. The absence of a running track between spectators and the playing field was also a significant predictor of goal difference, worth an average of 0.102 goals per game to the home team. Travel distance did not affect home advantage. PMID:24533517

  11. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the number of sub-apertures in the wavefront sensor and of deformable mirror actuators increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iterative arithmetic, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity of the direct gradient wavefront control algorithm is about O(n^2) to O(n^3), while that of the iterative wavefront control algorithm is about O(n) to O(n^(3/2)), where n is the number of actuators of the AO system. The more sub-apertures and deformable mirror actuators there are, the more significant the advantage exhibited by the iterative wavefront control algorithm. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
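
    A small sketch contrasting the two approaches on a toy interaction matrix: the direct method applies a precomputed least-squares reconstructor (one dense matrix-vector product per frame after an expensive offline inversion), while the iterative method solves the normal equations with conjugate gradients each frame using only matrix-vector products. The matrix is random, not a real Hartmann-sensor/deformable-mirror model, and this is not the authors' specific iteration scheme.

```python
"""Sketch: direct (precomputed reconstructor) vs iterative wavefront control.

D is a toy interaction matrix between actuator voltages and wavefront slopes;
it is random here, not a measured Hartmann-sensor/deformable-mirror model.
"""
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n_act, n_slopes = 200, 400
D = rng.normal(size=(n_slopes, n_act))
slopes = rng.normal(size=n_slopes)          # one frame of measured slopes

# Direct gradient method: precompute the least-squares reconstructor once;
# each frame then costs a single dense matrix-vector product.
R = np.linalg.pinv(D)                       # expensive, done offline
v_direct = R @ slopes

# Iterative method: solve the normal equations D^T D v = D^T s each frame;
# only matrix-vector products are needed, which scales better for large n.
A = D.T @ D
b = D.T @ slopes
v_iter, info = cg(A, b, atol=1e-10)

max_diff = np.max(np.abs(v_direct - v_iter))   # small: both give the LSQ solution
```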

  12. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is given in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and intersecting, and prunes candidate frequent item sets as Apriori does. Finally, the new algorithm is applied in the simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
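
    A compact sketch of the vertical-layout idea described above: each item keeps the set of transaction IDs containing it, the support of a candidate itemset is obtained by intersecting TID sets rather than rescanning the database, and candidates are generated and pruned level by level as in Apriori. The transactions are toy data, not C-NCAP records.

```python
"""Sketch: level-wise frequent itemset mining with a vertical data layout.

Support of a candidate itemset is obtained by intersecting per-item
transaction-ID (TID) sets; candidates are generated and pruned breadth-first
as in Apriori. Toy transactions only.
"""
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}, {"a", "b", "c"}]
min_support = 2

# Vertical layout: item -> set of transaction IDs containing it.
tids = {}
for tid, t in enumerate(transactions):
    for item in t:
        tids.setdefault(item, set()).add(tid)

# Level 1: frequent single items.
frequent = {frozenset([i]): s for i, s in tids.items() if len(s) >= min_support}
all_frequent = dict(frequent)

k = 2
while frequent:
    items = sorted({i for itemset in frequent for i in itemset})
    candidates = {}
    for combo in combinations(items, k):
        combo = frozenset(combo)
        # Apriori pruning: every (k-1)-subset must already be frequent.
        if any(combo - {i} not in frequent for i in combo):
            continue
        # Support by intersecting the TID sets of two frequent (k-1)-subsets.
        it = iter(combo)
        first = combo - {next(it)}
        second = combo - {next(it)}
        support = frequent[first] & frequent[second]
        if len(support) >= min_support:
            candidates[combo] = support
    all_frequent.update(candidates)
    frequent = candidates
    k += 1

# all_frequent maps each frequent itemset to the TIDs that contain it.
```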

  13. Advantages of proteins being disordered

    PubMed Central

    Liu, Zhirong; Huang, Yongqi

    2014-01-01

    The past decade has witnessed great advances in our understanding of protein structure-function relationships in terms of the ubiquitous existence of intrinsically disordered proteins (IDPs) and intrinsically disordered regions (IDRs). The structural disorder of IDPs/IDRs enables them to play essential functions that are complementary to those of ordered proteins. In addition, IDPs/IDRs are persistent in evolution. Therefore, they are expected to possess some advantages over ordered proteins. In this review, we summarize and survey nine possible advantages of IDPs/IDRs: economizing genome/protein resources, overcoming steric restrictions in binding, achieving high specificity with low affinity, increasing binding rate, facilitating posttranslational modifications, enabling flexible linkers, preventing aggregation, providing resistance to non-native conditions, and allowing compatibility with more available sequences. Some potential advantages of IDPs/IDRs are not well understood and require both experimental and theoretical approaches to decipher. The connection with protein design is also briefly discussed. PMID:24532081

  14. Computational evolution: taking liberties.

    PubMed

    Correia, Luís

    2010-09-01

    Evolution has, for a long time, inspired computer scientists to produce computer models mimicking its behavior. Evolutionary algorithm (EA) is one of the areas where this approach has flourished. EAs have been used to model and study evolution, but they have been especially developed for their aptitude as optimization tools for engineering. Developed models are quite simple in comparison with their natural sources of inspiration. However, since EAs run on computers, we have the freedom, especially in optimization models, to test approaches both realistic and outright speculative, from the biological point of view. In this article, we discuss different common evolutionary algorithm models, and then present some alternatives of interest. These include biologically inspired models, such as co-evolution and, in particular, symbiogenetics and outright artificial operators and representations. In each case, the advantages of the modifications to the standard model are identified. The other area of computational evolution, which has allowed us to study basic principles of evolution and ecology dynamics, is the development of artificial life platforms for open-ended evolution of artificial organisms. With these platforms, biologists can test theories by directly manipulating individuals and operators, observing the resulting effects in a realistic way. An overview of the most prominent of such environments is also presented. If instead of artificial platforms we use the real world for evolving artificial life, then we are dealing with evolutionary robotics (ERs). A brief description of this area is presented, analyzing its relations to biology. Finally, we present the conclusions and identify future research avenues in the frontier of computation and biology. Hopefully, this will help to draw the attention of more biologists and computer scientists to the benefits of such interdisciplinary research. PMID:20532997

  15. Genetic-algorithm-based tri-state neural networks

    NASA Astrophysics Data System (ADS)

    Uang, Chii-Maw; Chen, Wen-Gong; Horng, Ji-Bin

    2002-09-01

    A new method, using genetic algorithms, for constructing a tri-state neural network is presented. The global searching features of genetic algorithms are adopted to help find the interconnection weight matrix of a bipolar neural network easily. The construction method is based on biological nervous systems, which evolve the parameters encoded in genes. Taking advantage of conventional (binary) genetic algorithms, a two-level chromosome structure is proposed for training the tri-state neural network. A Matlab program is developed for simulating the network performance. The results show that the proposed genetic algorithm method not only constructs the interconnection weight matrix accurately, but also yields better network performance.

  16. Optimization of circuits using a constructive learning algorithm

    SciTech Connect

    Beiu, V.

    1997-05-01

    The paper presents an application of a constructive learning algorithm to the optimization of circuits. For a given Boolean function f, a fresh constructive learning algorithm builds circuits belonging to the smallest F_(n,m) class of functions (n inputs and m groups of ones in the truth table). The constructive proofs, which show how arbitrary Boolean functions can be implemented by this algorithm, are briefly enumerated. An interesting aspect is that the algorithm can be used for generating both classical Boolean circuits and threshold gate circuits (i.e., analogue inputs and digital outputs), or a mixture of them, thus taking advantage of mixed analogue/digital technologies. One illustrative example is detailed. The size and the area of the different circuits are compared (special cost functions can be used to more closely estimate the area and the delay of VLSI implementations). Conclusions and further directions of research end the paper.

  17. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    NASA Astrophysics Data System (ADS)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem which causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm, combining the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a Parallel Genetic Algorithm and Simulated Annealing Algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are made to show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.

  18. Selective advantage for sexual reproduction

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel

    2006-06-01

    This paper develops a simplified model for sexual reproduction within the quasispecies formalism. The model assumes a diploid genome consisting of two chromosomes, where the fitness is determined by the number of chromosomes that are identical to a given master sequence. We also assume that there is a cost to sexual reproduction, given by a characteristic time τseek during which haploid cells seek out a mate with which to recombine. If the mating strategy is such that only viable haploids can mate, then when τseek = 0, it is possible to show that sexual reproduction will always outcompete asexual reproduction. However, as τseek increases, sexual reproduction only becomes advantageous at progressively higher mutation rates. Once the time cost for sex reaches a critical threshold, the selective advantage for sexual reproduction disappears entirely. The results of this paper suggest that sexual reproduction is not advantageous in small populations per se, but rather in populations with low replication rates. In this regime, the cost for sex is sufficiently low that the selective advantage obtained through recombination leads to the dominance of the strategy. In fact, at a given replication rate and for a fixed environment volume, sexual reproduction is selected for in large populations because of the reduced time spent finding a reproductive partner.

  19. Selective advantage for sexual reproduction

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel

    2006-03-01

    We develop a simplified model for sexual replication within the quasispecies formalism. We assume that the genomes of the replicating organisms are two-chromosomed and diploid, and that the fitness is determined by the number of chromosomes that are identical to a given master sequence. We also assume that there is a cost to sexual replication, given by a characteristic time τseek during which haploid cells seek out a mate with which to recombine. If the mating strategy is such that only viable haploids can mate, then when τseek= 0 , it is possible to show that sexual replication will always outcompete asexual replication. However, as τseek increases, sexual replication only becomes advantageous at progressively higher mutation rates. Once the time cost for sex reaches a critical threshold, the selective advantage for sexual replication disappears entirely. The results of this talk suggest that sexual replication is not advantageous in small populations per se, but rather in populations with low replication rates. In this regime, the cost for sex is sufficiently low that the selective advantage obtained through recombination leads to the dominance of the strategy. In fact, at a given replication rate and for a fixed environment volume, sexual replication is selected for in high populations because of the reduced time spent finding a reproductive partner.

  20. Achieving a sustainable service advantage.

    PubMed

    Coyne, K P

    1993-01-01

    Many managers believe that superior service should play little or no role in competitive strategy; they maintain that service innovations are inherently copiable. However, the author states that this view is too narrow. For a company to achieve a lasting service advantage, it must base a new service on a capability gap that competitors cannot or will not copy. PMID:10123422

  1. An Experiment in Comparative Advantage.

    ERIC Educational Resources Information Center

    Haupert, Michael J.

    1996-01-01

    Describes an undergraduate economics course experiment designed to teach the concepts of comparative advantage and opportunity costs. Students have a limited number of labor hours and can choose to produce either wheat or steel. As the project progresses, the students trade commodities in an attempt to maximize use of their labor hours. (MJP)

  2. Competitive Intelligence and Social Advantage.

    ERIC Educational Resources Information Center

    Davenport, Elisabeth; Cronin, Blaise

    1994-01-01

    Presents an overview of issues concerning civilian competitive intelligence (CI). Topics discussed include competitive advantage in academic and research environments; public domain information and libraries; covert and overt competitive intelligence; data diversity; use of the Internet; cooperative intelligence; and implications for library and…

  3. Information Technology: Tomorrow's Advantage Today.

    ERIC Educational Resources Information Center

    Haag, Stephen; Keen, Peter

    This textbook is designed for a one-semester introductory course in which the goal is to give students a foundation in the basics of information technology (IT). It focuses on how the technology works, issues relating to its use and development, how it can lend personal and business advantages, and how it is creating a globally networked society.…

  4. Energy Advantages for Green Schools

    ERIC Educational Resources Information Center

    Griffin, J. Tim

    2012-01-01

    Because of many advantages associated with central utility systems, school campuses, from large universities to elementary schools, have used district energy for decades. District energy facilities enable thermal and electric utilities to be generated with greater efficiency and higher system reliability, while requiring fewer maintenance and…

  5. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix

    SciTech Connect

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-06-01

    We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
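
    The authors' algorithm itself is not reproduced here; as a point of reference, the sketch below runs SciPy's LOBPCG block eigensolver (one of the existing methods the paper compares against) on a toy sparse Hermitian matrix, illustrating the kind of block iteration, with BLAS3-friendly operations and a Rayleigh-Ritz step per iteration, that the new method improves upon. The matrix and preconditioner are illustrative.

```python
"""Sketch: block iterative computation of the algebraically smallest eigenpairs
of a large sparse Hermitian matrix, using SciPy's LOBPCG (an existing block
algorithm the paper compares against, not the authors' method).
"""
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n, k = 2000, 20                       # matrix size, number of wanted eigenpairs
# Toy sparse Hermitian matrix: a 1D Laplacian plus a random diagonal shift.
diag = 2.0 + np.random.default_rng(0).random(n)
A = sp.diags([diag, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")

X = np.random.default_rng(1).standard_normal((n, k))   # initial block
M = sp.diags(1.0 / A.diagonal())                       # Jacobi preconditioner

vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)
# vals: the k smallest eigenvalues; vecs: a basis of the invariant subspace.
```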

  6. Minimalist ensemble algorithms for genome-wide protein localization prediction

    PubMed Central

    2012-01-01

    proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391
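
    A minimal sketch of the meta-predictor idea: treat each individual predictor's output as a feature and fit a logistic regression on top, with an L1 penalty so that only a subset of predictors is retained. The data and the "individual predictors" below are synthetic stand-ins, not the protein-localization tools used in the paper.

```python
"""Sketch: a logistic-regression meta-predictor over individual predictors.

The 'individual predictors' here are synthetic score columns; in the paper
they are existing protein-localization tools, and feature selection decides
which of them the minimalist ensemble keeps.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_predictors = 500, 9
y = rng.integers(0, 2, n_samples)                      # true localization label
# Each column: one predictor's score, correlated with y to varying degrees.
quality = np.linspace(0.5, 2.0, n_predictors)
X = y[:, None] * quality + rng.normal(size=(n_samples, n_predictors))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The L1 penalty drives the weights of uninformative predictors to zero,
# yielding a "minimalist" ensemble that uses only a few of them.
meta = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
meta.fit(X_tr, y_tr)
kept = np.flatnonzero(meta.coef_[0] != 0)              # predictors retained
accuracy = meta.score(X_te, y_te)
```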

  7. Algorithms to Automate LCLS Undulator Tuning

    SciTech Connect

    Wolf, Zachary

    2010-12-03

    Automation of the LCLS undulator tuning offers many advantages to the project. Automation can make a substantial reduction in the amount of time the tuning takes. Undulator tuning is fairly complex and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written for LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 x 10^-6 T·m. The second field integral must be below 50 x 10^-6 T·m^2. The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value of 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 x 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements. The field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing tuning algorithms is described. The algorithms to determine the required number and placement of the shims are discussed in detail. This note describes the operation of a computer program which was written to automate LCLS undulator tuning. The algorithms used to compute the shim sizes and locations are discussed.

  8. A model selection algorithm for a posteriori probability estimation with neural networks.

    PubMed

    Arribas, Juan Ignacio; Cid-Sueiro, Jesús

    2005-07-01

    This paper proposes a novel algorithm to jointly determine the structure and the parameters of a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, so called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP) whose outputs can be understood as probabilities although results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes. PMID:16121722

  9. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and simulate planetary landscapes. Hence, they can be used as tools to assist science education. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
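
    A short sketch of the diamond-square algorithm mentioned above, which generates a random heightmap by alternately displacing square centers and edge midpoints with noise whose amplitude shrinks each pass; the roughness value and grid size are arbitrary choices for illustration.

```python
"""Sketch: terrain heightmap generation with the diamond-square algorithm."""
import numpy as np

def diamond_square(n, roughness=0.6, seed=0):
    """Generate a (2**n + 1)-square heightmap with the diamond-square algorithm."""
    rng = np.random.default_rng(seed)
    size = 2 ** n + 1
    h = np.zeros((size, size))
    h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seed corners

    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: centers of squares get the average of 4 corners + noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half, x - half] + h[y - half, x + half] +
                       h[y + half, x - half] + h[y + half, x + half]) / 4.0
                h[y, x] = avg + rng.uniform(-scale, scale)
        # Square step: edge midpoints get the average of their existing neighbours.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                neighbours = []
                if y - half >= 0:
                    neighbours.append(h[y - half, x])
                if y + half < size:
                    neighbours.append(h[y + half, x])
                if x - half >= 0:
                    neighbours.append(h[y, x - half])
                if x + half < size:
                    neighbours.append(h[y, x + half])
                h[y, x] = sum(neighbours) / len(neighbours) + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return h

terrain = diamond_square(7)   # 129 x 129 heightmap
```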

  10. Acoustic multiple scattering using recursive algorithms

    NASA Astrophysics Data System (ADS)

    Amirkulova, Feruza A.; Norris, Andrew N.

    2015-10-01

    Acoustic multiple scattering by a cluster of cylinders in an acoustic medium is considered. A fast recursive technique is described which takes advantage of the multilevel Block Toeplitz structure of the linear system. A parallelization technique is described that enables efficient application of the proposed recursive algorithm for solving multilevel Block Toeplitz systems on high performance computer clusters. Numerical comparisons of CPU time and total elapsed time taken to solve the linear system using the direct LAPACK and TOEPLITZ libraries on Intel FORTRAN, show the advantage of the TOEPLITZ solver. Computations are optimized by multi-threading which displays improved efficiency of the TOEPLITZ solver with the increase of the number of scatterers and frequency.
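
    A small sketch of the basic structure exploitation involved, using SciPy's Levinson-recursion solver on a single-level Toeplitz system and comparing it with a general dense solve; the multilevel block-Toeplitz recursion and the parallelization described in the paper are not reproduced, and the system here is synthetic.

```python
"""Sketch: solving a Toeplitz linear system with a structure-exploiting solver
versus a general dense solver. Single-level only; the paper's algorithm works
on multilevel block-Toeplitz systems arising from clusters of cylinders.
"""
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

rng = np.random.default_rng(0)
n = 2000
c = rng.normal(size=n); c[0] += n          # first column (diagonally dominant)
r = rng.normal(size=n); r[0] = c[0]        # first row
b = rng.normal(size=n)

x_fast = solve_toeplitz((c, r), b)         # Levinson recursion: O(n^2) work,
                                           # stores only the first column/row
x_dense = np.linalg.solve(toeplitz(c, r), b)   # general LU: O(n^3), full matrix

print(np.max(np.abs(x_fast - x_dense)))    # the two solutions agree
```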

  11. An Assessment of Current Satellite Precipitation Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2007-01-01

    The H-SAF Program requires an experimental operational European-centric Satellite Precipitation Algorithm System (E-SPAS) that produces medium spatial resolution and high temporal resolution surface rainfall and snowfall estimates over the Greater European Region including the Greater Mediterranean Basin. Currently, there are various types of experimental operational algorithm methods of differing spatiotemporal resolutions that generate global precipitation estimates. This address will first assess the current status of these methods and then recommend a methodology for the H-SAF Program that deviates somewhat from the current approach under development but one that takes advantage of existing techniques and existing software developed for the TRMM Project and available through the public domain.

  12. CAST: Contraction Algorithm for Symmetric Tensors

    SciTech Connect

    Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-09-22

    Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.

  13. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to address the issue that the fusion rules of available fusion methods cannot be self-adaptively adjusted according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merit of the genetic algorithm with the advantage of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then constructs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution fused RS image. The main points of the text are summarized as follows. • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • The article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules. • The text puts forward the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA. PMID:27408827

  14. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks, such as memory latencies, occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  15. Spectrally-accurate algorithm for the analysis of flows in two-dimensional vibrating channels

    NASA Astrophysics Data System (ADS)

    Zandi, S.; Mohammadi, A.; Floryan, J. M.

    2015-11-01

    A spectral algorithm based on the immersed boundary conditions (IBC) concept has been developed for the analysis of flows in channels bounded by vibrating walls. The vibrations take the form of travelling waves of arbitrary profile. The algorithm uses a fixed computational domain with the flow domain immersed in its interior. Boundary conditions enter the algorithm in the form of constraints. The spatial discretization uses a Fourier expansion in the stream-wise direction and a Chebyshev expansion in the wall-normal direction. Use of the Galileo transformation converts the unsteady problem into a steady one. An efficient solver which takes advantage of the structure of the coefficient matrix has been used. It is demonstrated that the method can be extended to more extreme geometries using the overdetermined formulation. Various tests confirm the spectral accuracy of the algorithm.

  16. Evaluating the power of GPU acceleration for IDW interpolation algorithm.

    PubMed

    Mei, Gang

    2014-01-01

    We first present two GPU implementations of the standard Inverse Distance Weighting (IDW) interpolation algorithm, the tiled version that takes advantage of shared memory and the CDP version that is implemented using CUDA Dynamic Parallelism (CDP). Then we evaluate the power of GPU acceleration for the IDW interpolation algorithm by comparing the performance of the CPU implementation with three GPU implementations, that is, the naive version, the tiled version, and the CDP version. Experimental results show that the tiled version achieves speedups of 120x and 670x over the CPU version when the power parameter p is set to 2 and 3.0, respectively. In addition, compared to the naive GPU implementation, the tiled version is about two times faster. However, the CDP version is 4.8x ∼ 6.0x slower than the naive GPU version, and therefore does not have any potential advantages in practical applications. PMID:24707195
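
    For reference, a plain NumPy version of the standard IDW interpolator being accelerated is sketched below; the tiled GPU kernel evaluates the same weighted sums while staging blocks of data points in shared memory. The data are synthetic.

```python
"""Sketch: standard Inverse Distance Weighting (IDW) interpolation in NumPy.

This is the reference computation; the tiled GPU kernel in the paper evaluates
the same weighted sums, staging tiles of the data points in shared memory.
"""
import numpy as np

def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
    """Interpolate z at query points as distance-weighted means of known points."""
    # Pairwise distances: shape (n_query, n_known)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / (d ** p + eps)          # eps avoids division by zero at data points
    return (w * z_known[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
xy_known = rng.uniform(0, 100, size=(500, 2))
z_known = np.sin(xy_known[:, 0] / 10) + 0.1 * rng.normal(size=500)
xy_query = rng.uniform(0, 100, size=(1000, 2))
z_query = idw(xy_known, z_known, xy_query, p=2.0)
```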

  17. Evaluating the Power of GPU Acceleration for IDW Interpolation Algorithm

    PubMed Central

    2014-01-01

    We first present two GPU implementations of the standard Inverse Distance Weighting (IDW) interpolation algorithm, the tiled version that takes advantage of shared memory and the CDP version that is implemented using CUDA Dynamic Parallelism (CDP). Then we evaluate the power of GPU acceleration for the IDW interpolation algorithm by comparing the performance of the CPU implementation with three GPU implementations, that is, the naive version, the tiled version, and the CDP version. Experimental results show that the tiled version achieves speedups of 120x and 670x over the CPU version when the power parameter p is set to 2 and 3.0, respectively. In addition, compared to the naive GPU implementation, the tiled version is about two times faster. However, the CDP version is 4.8x∼6.0x slower than the naive GPU version, and therefore does not have any potential advantages in practical applications. PMID:24707195

  18. Advantages of GPU technology in DFT calculations of intercalated graphene

    NASA Astrophysics Data System (ADS)

    Pešić, J.; Gajić, R.

    2014-09-01

    Over the past few years, the expansion of general-purpose graphic-processing unit (GPGPU) technology has had a great impact on computational science. GPGPU is the utilization of a graphics-processing unit (GPU) to perform calculations in applications usually handled by the central processing unit (CPU). Use of GPGPUs as a way to increase computational power in the material sciences has significantly decreased computational costs in already highly demanding calculations. The level of acceleration and parallelization depends on the problem itself. Some problems can benefit from GPU acceleration and parallelization, such as the finite-difference time-domain algorithm (FDTD) and density-functional theory (DFT), while others cannot take advantage of these modern technologies. A number of GPU-supported applications have emerged in the past several years (www.nvidia.com/object/gpu-applications.html). Quantum Espresso (QE) is reported as an integrated suite of open source computer codes for electronic-structure calculations and materials modeling at the nano-scale. It is based on DFT, the use of a plane-wave basis and a pseudopotential approach. Since the QE 5.0 version, GPU support has been implemented as a plug-in component for the standard QE packages, allowing exploitation of the capabilities of Nvidia GPU graphics cards (www.qe-forge.org/gf/proj). In this study, we have examined the impact of the usage of GPU acceleration and parallelization on the numerical performance of DFT calculations. Graphene has been attracting attention worldwide and has already shown some remarkable properties. We have studied an intercalated graphene, using the QE package PHonon, which employs the GPU. The term 'intercalation' refers to a process whereby foreign adatoms are inserted onto a graphene lattice. In addition, by intercalating different atoms between graphene layers, it is possible to tune their physical properties. Our experiments have shown there are benefits from using GPUs, and we reached an

  19. The Take Action Project

    ERIC Educational Resources Information Center

    Boudreau, Sue

    2010-01-01

    The Take Action Project (TAP) was created to help middle school students take informed and effective action on science-related issues. The seven steps of TAP ask students to (1) choose a science-related problem of interest to them, (2) research their problem, (3) select an action to take on the problem, (4) plan that action, (5) take action, (6)…

  20. March 2013: Medicare Advantage update.

    PubMed

    Sayavong, Sarah; Kemper, Leah; Barker, Abigail; McBride, Timothy

    2013-09-01

    Key Data Findings. (1) From March 2012 to March 2013, rural enrollment in Medicare Advantage (MA) and other prepaid plans increased by over 200,000 enrollees, to more than 1.9 million. (2) Preferred provider organization (PPO) plan enrollment increased to nearly one million enrollees, accounting for more than 51% of the rural MA market (up from 48% in March 2012). (3) Health maintenance organization (HMO) enrollment continued to grow in 2013, with over 31% of the rural MA market, while private fee-for-service (PFFS) plan enrollment decreased to less than 10% of market share. (4) Despite recent changes to MA payment, rural MA enrollment continues to increase. PMID:25399464

  1. Upward Wealth Mobility: Exploring the Roman Catholic Advantage

    ERIC Educational Resources Information Center

    Keister, Lisa A.

    2007-01-01

    Wealth inequality is among the most extreme forms of stratification in the United States, and upward wealth mobility is not common. Yet mobility is possible, and this paper takes advantage of trends among a unique group to explore the processes that generate mobility. I show that non-Hispanic whites raised in Roman Catholic families have been…

  2. Sensitive algorithm for multiple-excitation-wavelength resonance Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Yellampalle, Balakishore; Wu, Hai-Shan; McCormick, William; Sluch, Mikhail; Martin, Robert; Ice, Robert; Lemoff, Brian E.

    2014-05-01

    Raman spectroscopy is a widely used spectroscopic technique with a number of applications. During the past few years, we explored the use of simultaneous multiple excitation wavelengths (MEW) in resonance Raman spectroscopy. This approach takes advantage of Raman band intensity variations across the resonance Raman spectra obtained from two or more excitation wavelengths. Amplitude variations occur between corresponding Raman bands in resonance Raman spectra due to the complex interplay of resonant enhancement, self-absorption, and laser penetration depth. We have developed a very sensitive algorithm to estimate the concentration of an analyte from spectra obtained using the MEW technique. The algorithm uses a correlation and least-squares minimization approach to calculate an estimate of the concentration. For two or more excitation wavelengths, measured spectra are stacked in a two-dimensional matrix. In a simple realization of the algorithm, we approximated peaks in the ideal library spectra as triangles. In this work, we present the performance of the algorithm with measurements obtained from a dual-excitation-wavelength resonance Raman sensor. The novel sensor, developed at WVHTCF, detects explosives from a standoff distance. The algorithm was able to detect explosives with very high sensitivity even at signal-to-noise ratios as low as ~1.6. Receiver operating characteristics calculated using the algorithm showed a clear benefit in using the dual-excitation-wavelength technique over single-excitation-wavelength techniques. Variants of the algorithm that add more weight to the amplitude variation information showed improved specificity against closely resembling spectra.
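
    A toy sketch of the stacking-and-least-squares step described above: spectra measured at two excitation wavelengths are concatenated into one vector and fitted against correspondingly stacked library spectra with non-negative least squares. The library shapes and band-ratio changes are invented, and the actual sensor algorithm additionally uses correlation scores and triangle-shaped library peaks.

```python
"""Toy sketch: concentration estimation from dual-excitation Raman spectra.

Spectra at two excitation wavelengths are stacked into one vector and fitted
against the correspondingly stacked library spectrum with non-negative least
squares. The library bands and their amplitude ratios are invented.
"""
import numpy as np
from scipy.optimize import nnls

wavenumbers = np.linspace(200, 2000, 400)

def band(center, width, amp):
    return amp * np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

# Library spectrum of one analyte at two excitation wavelengths: same bands,
# different relative amplitudes (resonance enhancement / self-absorption).
lib_ex1 = band(880, 15, 1.0) + band(1360, 20, 0.4)
lib_ex2 = band(880, 15, 0.5) + band(1360, 20, 1.0)
library = np.concatenate([lib_ex1, lib_ex2])[:, None]   # one-analyte library

true_conc = 0.3
noise = np.random.default_rng(0).normal(scale=0.05, size=2 * wavenumbers.size)
measured = true_conc * library[:, 0] + noise            # stacked measurement

conc, residual = nnls(library, measured)                # least-squares estimate
print(conc[0])   # estimated concentration, close to 0.3
```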

  3. Enhanced Deep Blue Aerosol Retrieval Algorithm: The Second Generation

    NASA Technical Reports Server (NTRS)

    Hsu, N. C.; Jeong, M.-J.; Bettenhausen, C.; Sayer, A. M.; Hansell, R.; Seftor, C. S.; Huang, J.; Tsay, S.-C.

    2013-01-01

    The aerosol products retrieved using the MODIS collection 5.1 Deep Blue algorithm have provided useful information about aerosol properties over bright-reflecting land surfaces, such as desert, semi-arid, and urban regions. However, many components of the C5.1 retrieval algorithm needed to be improved; for example, the use of a static surface database to estimate surface reflectances. This is particularly important over regions of mixed vegetated and non- vegetated surfaces, which may undergo strong seasonal changes in land cover. In order to address this issue, we develop a hybrid approach, which takes advantage of the combination of pre-calculated surface reflectance database and normalized difference vegetation index in determining the surface reflectance for aerosol retrievals. As a result, the spatial coverage of aerosol data generated by the enhanced Deep Blue algorithm has been extended from the arid and semi-arid regions to the entire land areas.

  4. Advanced signal separation and recovery algorithms for digital x-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Mahmoud, Imbaby I.; El Tokhy, Mohamed S.

    2015-02-01

    X-ray spectroscopy is widely used for in-situ sample analysis. Therefore, spectrum drawing and assessment of x-ray spectroscopy with high accuracy is the main scope of this paper. A lithium-drifted silicon Si(Li) detector cooled with nitrogen is used for signal extraction. The resolution of the ADC is 12 bits, and its sampling rate is 5 MHz. Hence, different algorithms are implemented. These algorithms were run on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz. The algorithms comprise signal preprocessing, signal separation and recovery, and spectrum drawing. Moreover, statistical measurements are used for the evaluation of these algorithms. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction was done using the minimum value of the radiation signal, while signal de-noising was implemented using a fourth-order finite impulse response (FIR) filter, a linear-phase least-squares FIR filter, complex wavelet transforms (CWT), and Kalman filter methods. We noticed that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas the CWT takes a much longer execution time. Moreover, three different algorithms that allow correction of x-ray signal overlapping are presented. These algorithms are a 1D non-derivative peak search algorithm, a second-derivative peak search algorithm, and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured, and a comparison between these algorithms is introduced. The obtained results confirm that the second-derivative peak search algorithm as well as the extrema algorithm have very small errors in comparison with the 1D non-derivative peak search algorithm. However, the second-derivative peak search algorithm takes a much longer execution time. Therefore, the extrema algorithm gives better results than the other algorithms. It has the advantage of recovering and
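
    A compact sketch of the second-derivative peak-search idea used to separate overlapping peaks: smooth the spectrum and differentiate it with a Savitzky-Golay filter, then look for pronounced negative minima of the second derivative. The synthetic spectrum with two overlapping Gaussian peaks and the filter settings are only illustrative.

```python
"""Sketch: resolving overlapping peaks with a second-derivative peak search.

A synthetic spectrum with two overlapping Gaussian peaks is smoothed and
differentiated with a Savitzky-Golay filter; the peaks appear as strong
negative minima of the second derivative even where they merge in the raw
spectrum.
"""
import numpy as np
from scipy.signal import savgol_filter, find_peaks

channels = np.arange(1024)
spectrum = (1000 * np.exp(-0.5 * ((channels - 500) / 12) ** 2) +
            800 * np.exp(-0.5 * ((channels - 530) / 12) ** 2))
spectrum = np.random.default_rng(0).poisson(spectrum + 20).astype(float)

# Smoothed second derivative (window length and order chosen for illustration).
d2 = savgol_filter(spectrum, window_length=21, polyorder=3, deriv=2)

# Peaks in the spectrum correspond to pronounced minima of the 2nd derivative.
minima, _ = find_peaks(-d2, prominence=2.0)
print(minima)   # expected near channels 500 and 530 despite the overlap
```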

  5. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND memory and 256 MB of SDRAM. The basic image correlation algorithm is chosen for benchmarking, as it finds widespread application in template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained from the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks while the DSP addresses performance-hungry algorithms.
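
    For illustration, the kind of correlation the DSP side is well suited to accelerate is DFT-based template matching. This generic NumPy version only shows the correlation theorem at work and is not the OMAP/DSP code from the paper:

        import numpy as np

        def correlate_fft(image, template):
            """Return the (row, col) offset where the template best matches the image."""
            h, w = image.shape
            padded = np.zeros((h, w), dtype=float)
            padded[:template.shape[0], :template.shape[1]] = template
            # Correlation theorem: multiply one spectrum by the conjugate of the other.
            spec = np.fft.rfft2(image) * np.conj(np.fft.rfft2(padded))
            corr = np.fft.irfft2(spec, s=(h, w))
            # The peak of the correlation surface marks the best match position.
            return np.unravel_index(int(np.argmax(corr)), corr.shape)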

  6. Evolutionary advantages of adaptive rewarding

    NASA Astrophysics Data System (ADS)

    Szolnoki, Attila; Perc, Matjaž

    2012-09-01

    Our well-being depends on both our personal success and the success of our society. The realization of this fact makes cooperation an essential trait. Experiments have shown that rewards can elevate our readiness to cooperate, but since giving a reward inevitably entails paying a cost for it, the emergence and stability of such behavior remains elusive. Here we show that allowing for the act of rewarding to self-organize in dependence on the success of cooperation creates several evolutionary advantages that instill new ways through which collaborative efforts are promoted. Ranging from indirect territorial battle to the spontaneous emergence and destruction of coexistence, phase diagrams and the underlying spatial patterns reveal fascinatingly rich social dynamics that explain why this costly behavior has evolved and persevered. Comparisons with adaptive punishment, however, uncover an Achilles heel of adaptive rewarding, coming from over-aggression, which in turn hinders optimal utilization of network reciprocity. This may explain why, despite its success, rewarding is not as firmly embedded into our societal organization as punishment.

  7. Smart Sensors: Advantages and Pitfalls

    NASA Astrophysics Data System (ADS)

    French, Paddy James

    For almost 50 years, silicon sensors have been on the market. There have been many success stories for simple silicon sensors, such as the Hall plate and the photo-diode, which have found mass-market applications. The development of micromachining techniques brought pressure sensors and accelerometers into the market and, later, the gyroscope; these have also achieved mass-market adoption. The remaining issue is how far to integrate. Many of the devices on the market use a simple sensor with external electronics or read-out electronics in the same package (system-in-a-package). However, there are also many examples of fully integrated sensors (smart sensors) where the whole system is integrated into a single chip. If the application and the device technology permit this, there can be many advantages. A broader look at sensors shows a wealth of integrated devices. The critical issues are reliability and packaging if these devices are to find their applications. A number of silicon sensors and actuators have shown great commercial success, but many more have still to find their way out of the laboratory. This paper examines the development of the technologies, some of the success stories, and the opportunities for integrated microsystems, as well as the pitfalls.

  8. An algorithm for the solution of dynamic linear programs

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense-matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation

  9. Tailored logistics: the next advantage.

    PubMed

    Fuller, J B; O'Conor, J; Rawlinson, R

    1993-01-01

    How many top executives have ever visited with managers who move materials from the factory to the store? How many still reduce the costs of logistics to the rent of warehouses and the fees charged by common carriers? To judge by hours of senior management attention, logistics problems do not rank high. But logistics have the potential to become the next governing element of strategy. Whether they know it or not, senior managers of every retail store and diversified manufacturing company compete in logistically distinct businesses. Customer needs vary, and companies can tailor their logistics systems to serve their customers better and more profitably. Companies do not create value for customers and sustainable advantage for themselves merely by offering varieties of goods. Rather, they offer goods in distinct ways. A particular can of Coca-Cola, for example, might be a can of Coca-Cola going to a vending machine, or a can of Coca-Cola that comes with billing services. There is a fortune buried in this distinction. The goal of logistics strategy is building distinct approaches to distinct groups of customers. The first step is organizing a cross-functional team to proceed through the following steps: segmenting customers according to purchase criteria, establishing different standards of service for different customer segments, tailoring logistics pipelines to support each segment, and creating economies of scale to determine which assets can be shared among various pipelines. The goal of establishing logistically distinct businesses is familiar: improved knowledge of customers and improved means of satisfying them. PMID:10126157

  10. How Successful Is Medicare Advantage?

    PubMed Central

    Newhouse, Joseph P; McGuire, Thomas G

    2014-01-01

    Context Medicare Part C, or Medicare Advantage (MA), now almost 30 years old, has generally been viewed as a policy disappointment. Enrollment has vacillated but has never come close to the penetration of managed care plans in the commercial insurance market or in Medicaid, and because of payment policy decisions and selection, the MA program is viewed as having added to cost rather than saving funds for the Medicare program. Recent changes in Medicare policy, including improved risk adjustment, however, may have changed this picture. Methods This article summarizes findings from our group's work evaluating MA's recent performance and investigating payment options for improving its performance even more. We studied the behavior of both beneficiaries and plans, as well as the effects of Medicare policy. Findings Beneficiaries make “mistakes” in their choice of MA plan options that can be explained by behavioral economics. Few beneficiaries make an active choice after they enroll in Medicare. The high prevalence of “zero-premium” plans signals inefficiency in plan design and in the market's functioning. That is, Medicare premium policies interfere with economically efficient choices. The adverse selection problem, in which healthier, lower-cost beneficiaries tend to join MA, appears much diminished. The available measures, while limited, suggest that, on average, MA plans offer care of equal or higher quality and for less cost than traditional Medicare (TM). In counties, greater MA penetration appears to improve TM's performance. Conclusions Medicare policies regarding lock-in provisions and risk adjustment that were adopted in the mid-2000s have mitigated the adverse selection problem previously plaguing MA. On average, MA plans appear to offer higher value than TM, and positive spillovers from MA into TM imply that reimbursement should not necessarily be neutral. Policy changes in Medicare that reform the way that beneficiaries are charged for MA plan

  11. Taking multiple medicines safely

    MedlinePlus

    ... medlineplus.gov/ency/patientinstructions/000883.htm Taking multiple medicines safely To use the sharing features on this ... directed. Why you may Need More Than one Medicine You may take more than one medicine to ...

  12. A global plan policy for coherent co-operation in distributed dynamic load balancing algorithms

    NASA Astrophysics Data System (ADS)

    Kara, M.

    1995-12-01

    Distributed-controlled dynamic load balancing algorithms are known to have several advantages over centralized algorithms, such as scalability and fault tolerance. Distributed implies that the control is decentralized and that a copy of the algorithm (called a scheduler) is replicated on each host of the network. However, distributed control also contributes to a lack of global goals and a lack of coherence. This paper presents a new algorithm called DGP (decentralized global plans) that addresses the problem of coherence and co-ordination in distributed dynamic load balancing algorithms. The DGP algorithm is based on a strategy called global plans (GP) and aims at maintaining all computational loads of a distributed system within a band called delta. The rationale for the design of DGP is to allow each scheduler to consider the actions of its peer schedulers. With this level of co-ordination, the schedulers can act more as a coherent team. This new approach first explicitly specifies a global goal and then designs a strategy around this global goal such that each scheduler (i) takes into account local decisions made by other schedulers; (ii) takes into account the effect of its local decisions on the overall system; and (iii) ensures load balancing. An experimental evaluation of DGP against two other well-known dynamic load balancing algorithms published in the literature shows that DGP performs consistently better. More significantly, the results indicate that the global plan approach provides a better framework for the design of distributed dynamic load balancing algorithms.

  13. Nurses’ Creativity: Advantage or Disadvantage

    PubMed Central

    Shahsavari Isfahani, Sara; Hosseini, Mohammad Ali; Fallahi Khoshknab, Masood; Peyrovi, Hamid; Khanke, Hamid Reza

    2015-01-01

    Background Recently, global nursing experts have been aggressively encouraging nurses to pursue creativity and innovation in nursing to improve nursing outcomes. Nurses’ creativity plays a significant role in health and well-being. In most health systems across the world, nurses provide up to 80% of the primary health care; therefore, they are critically positioned to provide creative solutions for current and future global health challenges. Objectives The purpose of this study was to explore Iranian nurses’ perceptions and experiences toward the expression of creativity in clinical settings and the outcomes of their creativity for health care organizations. Patients and Methods A qualitative approach using content analysis was adopted. Data were collected through in-depth semistructured interviews with 14 nurses who were involved in the creative process in educational hospitals affiliated to Jahrom and Tehran Universities of Medical Sciences in Iran. Results Four themes emerged from the data analysis, including a) Improvement in quality of patient care, b) Improvement in nurses’ quality of work, personal and social life, c) Promotion of organization, and d) Unpleasant outcomes. Conclusions The findings indicated that nurses’ creativity in health care organizations can lead to major changes of nursing practice, improvement of care and organizational performance. Therefore, policymakers, nurse educators, nursing and hospital managers should provide a nurturing environment that is conducive to creative thinking, giving the nurses opportunity for flexibility, creativity, support for change, and risk taking. PMID:25793116

  14. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
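
    A minimal genetic algorithm sketch, included only to make the basic concepts concrete; the toy fitness function, operators, and parameter values are illustrative and are not taken from the report:

        import random

        def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100, p_mut=0.01):
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(pop, key=fitness, reverse=True)
                parents = ranked[: pop_size // 2]              # truncation selection
                children = []
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randint(1, n_bits - 1)        # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
                    children.append(child)
                pop = children
            return max(pop, key=fitness)

        # Toy usage: maximize the number of ones in the bit string.
        best = genetic_algorithm(fitness=sum)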

  15. Quantum computation with classical light: Implementation of the Deutsch-Jozsa algorithm

    NASA Astrophysics Data System (ADS)

    Perez-Garcia, Benjamin; McLaren, Melanie; Goyal, Sandeep K.; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas

    2016-05-01

    We propose an optical implementation of the Deutsch-Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.

  16. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
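
    For orientation, a stripped-down sketch of the canonical bat algorithm loop that CBA builds on; the cloud-model echolocation, Lévy flights, and information-sharing refinements are omitted, and all parameter values are illustrative:

        import numpy as np

        def bat_algorithm(objective, dim=2, n_bats=20, iters=200, fmin=0.0, fmax=2.0):
            rng = np.random.default_rng(0)
            x = rng.uniform(-5, 5, size=(n_bats, dim))       # bat positions
            v = np.zeros((n_bats, dim))                      # bat velocities
            best = x[np.argmin([objective(b) for b in x])].copy()
            for _ in range(iters):
                for i in range(n_bats):
                    f = fmin + (fmax - fmin) * rng.random()  # pulse frequency
                    v[i] += (x[i] - best) * f
                    cand = x[i] + v[i]
                    if rng.random() < 0.5:                   # local random walk around the best bat
                        cand = best + 0.01 * rng.normal(size=dim)
                    if objective(cand) < objective(x[i]):
                        x[i] = cand
                    if objective(x[i]) < objective(best):
                        best = x[i].copy()
            return best

        # Example: minimize the sphere function.
        print(bat_algorithm(lambda z: float(np.sum(z ** 2))))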

  17. Predicting the performance of a spatial gamut mapping algorithm

    NASA Astrophysics Data System (ADS)

    Bakke, Arne M.; Farup, Ivar; Hardeberg, Jon Y.

    2009-01-01

    Gamut mapping algorithms are currently being developed to take advantage of the spatial information in an image to improve the utilization of the destination gamut. These algorithms try to preserve the spatial information between neighboring pixels in the image, such as edges and gradients, without sacrificing global contrast. Experiments have shown that such algorithms can result in significantly improved reproduction of some images compared with non-spatial methods. However, due to the spatial processing of images, they introduce unwanted artifacts when used on certain types of images. In this paper we perform basic image analysis to predict whether a spatial algorithm is likely to perform better or worse than a good, non-spatial algorithm. Our approach starts by detecting the relative amount of areas in the image that are made up of uniformly colored pixels, as well as the amount of areas that contain details in out-of-gamut areas. A weighted difference is computed from these numbers, and we show that the result has a high correlation with the observed performance of the spatial algorithm in a previously conducted psychophysical experiment.
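
    A hedged sketch of such a predictor: estimate the fraction of uniformly colored pixels and the fraction of detailed out-of-gamut pixels, then take a weighted difference. The activity measure, thresholds, and weights below are placeholders, not the values from the psychophysical study:

        import numpy as np

        def spatial_gamut_predictor(image, in_gamut, w_uniform=1.0, w_detail=1.0, thresh=0.01):
            """image: float array (H, W, 3); in_gamut: boolean array (H, W)."""
            # Local activity from gradients of a luminance-like mean channel.
            lum = image.mean(axis=2)
            gy, gx = np.gradient(lum)
            activity = np.hypot(gx, gy)
            uniform_fraction = np.mean(activity < thresh)
            detail_out_of_gamut = np.mean((activity >= thresh) & ~in_gamut)
            # Positive score: spatial algorithm expected to help; negative: expected to hurt.
            return w_detail * detail_out_of_gamut - w_uniform * uniform_fraction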

  18. A quantum search algorithm for future spacecraft attitude determination

    NASA Astrophysics Data System (ADS)

    Tsai, Jack; Hsiao, Fu-Yuen; Li, Yi-Ju; Shen, Jen-Fu

    2011-04-01

    In this paper we study the potential application of a quantum search algorithm to spacecraft navigation, with a focus on attitude determination. Traditionally, attitude determination is achieved by recognizing the relative position/attitude with respect to the background stars using sun sensors, earth limb sensors, or star trackers. However, due to the massive celestial database, star pattern recognition is a complicated and power-consuming job. We propose a new method of attitude determination that applies the quantum search algorithm to the search for a specific star or star pattern. The quantum search algorithm, proposed by Grover in 1996, can find a specific entry in an unstructured database of N items in only O(√N) steps, compared to an average of N/2 steps on conventional computers. As a result, by taking advantage of matching a particular star in a vast celestial database in very few steps, we derive a new algorithm for attitude determination that combines Grover's search algorithm with star catalogues of apparent magnitude and absorption spectra. Numerical simulations and examples are also provided to demonstrate the feasibility and robustness of our new algorithm.
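
    To illustrate the quoted O(√N) query count, the following small statevector simulation of Grover's search finds a marked catalogue index; the star-catalogue encoding and spectral matching of the paper are not modeled:

        import numpy as np

        def grover_search(n_items, marked_index):
            amps = np.full(n_items, 1.0 / np.sqrt(n_items))       # uniform superposition
            n_iter = int(np.floor(np.pi / 4 * np.sqrt(n_items)))  # near-optimal iteration count
            for _ in range(n_iter):
                amps[marked_index] *= -1.0                        # oracle: phase flip on the marked item
                amps = 2.0 * amps.mean() - amps                   # diffusion: inversion about the mean
            return int(np.argmax(np.abs(amps))), n_iter

        item, queries = grover_search(1024, marked_index=137)
        print(item, queries)    # finds index 137 after about 25 oracle queries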

  19. Physical parameter determinations of young Ms. Taking advantage of the Virtual Observatory to compare methodologies

    NASA Astrophysics Data System (ADS)

    Bayo, A.; Rodrigo, C.; Barrado, D.; Allard, F.

    One of the very first steps astronomers working in stellar physics perform to advance in their studies is to determine the most common/relevant physical parameters of the objects of study (effective temperature, bolometric luminosity, surface gravity, etc.). Different methodologies exist depending on the nature of the data, the intrinsic properties of the objects, etc. One common approach is to compare the observational data with theoretical models passed through a simulator that leaves in the synthetic data the same imprint that the observational data carry, and to see which set of parameters reproduces the observations best. Even in this case, the methodology changes slightly depending on the kind of data the astronomer has. After parameters are published, the community tends to quote, praise, and criticize them, sometimes paying little attention to whether possible discrepancies come from the theoretical models, the data themselves, or just the methodology used in the analysis. In this work we perform the simple, yet interesting, exercise of comparing the effective temperatures obtained via SED fitting and via more detailed spectral fitting (to the same grid of models) for a sample of well known and characterized young M-type objects, members of different star-forming regions, and show that differences in temperature of up to 350 K can be expected just from the difference in methodology/data used. On the other hand, we show that these differences are smaller for colder objects, even when the complexity of the fit increases, for example by introducing differential extinction. To perform this exercise we benefit greatly from the framework offered by the Virtual Observatory.

  20. Take advantage of mycorrhizal fungi for improved soil fertility and plant health

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Arbuscular mycorrhizal [AM] fungi are naturally-occurring soil fungi that form a beneficial symbiosis with the roots of most crops. The plants benefit because the symbiosis increases mineral nutrient uptake, drought resistance, and disease resistance. These characteristics make utilization of AM f...

  1. Taking Advantage of Automated Assessment of Student-Constructed Graphs in Science

    ERIC Educational Resources Information Center

    Vitale, Jonathan M.; Lai, Kevin; Linn, Marcia C.

    2015-01-01

    We present a new system for automated scoring of graph construction items that address complex science concepts, feature qualitative prompts, and support a range of possible solutions. This system utilizes analysis of spatial features (e.g., slope of a line) to evaluate potential student ideas represented within graphs. Student ideas are then…

  2. Taking Advantage of a Corrosion Problem to Solve a Pollution Problem

    ERIC Educational Resources Information Center

    Palomar-Ramirez, Carlos F.; Bazan-Martinez, Jose A.; Palomar-Pardave, Manuel E.; Romero-Romo, Mario A.; Ramirez-Silva, Maria Teresa

    2011-01-01

    Some simple chemistry is used to demonstrate how Fe(II) ions, formed during iron corrosion in acid aqueous solution, can reduce toxic Cr(VI) species, forming soluble Cr(III) and Fe(III) ions. These ions, in turn, can be precipitated by neutralizing the solution. The procedure provides a treatment for industrial wastewaters commonly found in…

  3. Preparing to Take Advantage of New Programs in the Economic Stimulus Package

    ERIC Educational Resources Information Center

    Stelow, Shawn

    2009-01-01

    The proposed federal stimulus package could help to strengthen existing programs with track records of proven performance and strong accountability and administrative systems. This is good news for afterschool programs leaders who can leverage existing relationships and sustainability plans to access new funding opportunities. In these times, a…

  4. Taking Advantage of Selective Change Driven Processing for 3D Scanning

    PubMed Central

    Vegara, Francisco; Zuccarello, Pedro; Boluda, Jose A.; Pardo, Fernando

    2013-01-01

    This article deals with the application of the principles of SCD (Selective Change Driven) vision to 3D laser scanning. Two experimental sets have been implemented: one with a classical CMOS (Complementary Metal-Oxide Semiconductor) sensor, and the other one with a recently developed CMOS SCD sensor for comparative purposes, both using the technique known as Active Triangulation. An SCD sensor only delivers the pixels that have changed most, ordered by the magnitude of their change since their last readout. The 3D scanning method is based on the systematic search through the entire image to detect pixels that exceed a certain threshold, showing the SCD approach to be ideal for this application. Several experiments for both capturing strategies have been performed to try to find the limitations in high speed acquisition/processing. The classical approach is limited by the sequential array acquisition, as predicted by the Nyquist–Shannon sampling theorem, and this has been experimentally demonstrated in the case of a rotating helix. These limitations are overcome by the SCD 3D scanning prototype achieving a significantly higher performance. The aim of this article is to compare both capturing strategies in terms of performance in the time and frequency domains, so they share all the static characteristics including resolution, 3D scanning method, etc., thus yielding the same 3D reconstruction in static scenes. PMID:24084110

  5. Taking Advantage of Model-Driven Engineering Foundations for Mixed Interaction Design

    NASA Astrophysics Data System (ADS)

    Gauffre, Guillaume; Dubois, Emmanuel

    New forms of interactive systems, hereafter referred to as Mixed Interactive Systems (MIS), are based on the use of physical artefacts present in the environment. Mixing the digital and physical worlds affects the development of interactive systems, especially from the point of view of the design resources, which need to express new dimensions. Consequently, there is a crucial need to clearly describe the content and utility of the recent models associated with these new interaction forms. Based on existing initiatives in the field of HCI, this chapter first highlights the interest of using a Model-Driven Engineering (MDE) approach for the design of MIS. Then, this chapter retraces the application of an MDE approach to a specific Mixed Interaction design resource. The resulting contribution is a motivated, explicit, complete and standardized definition of the ASUR model, a model for mixed interaction design. This definition constitutes a basis to promote the use of this model, to support its diffusion and to derive design tools from it. The model-driven development of a flexible ASUR editor is finally introduced, thus facilitating the insertion of model extensions and articulations.

  6. New frontiers for diagnostic testing: taking advantage of forces changing health care.

    PubMed

    Allawi, S J; Hill, B T; Shah, N R

    1998-01-01

    The transformation of the health-care industry holds great economic potential for laboratory diagnostic testing providers who understand the five market forces driving change and who are shaping their own roles in the emerging market. Because of these trends, provider-based laboratories (PBLs) are competing with independent laboratories (ILs) for the latter's traditional client base--outpatients and nonpatients. PBLs will continue to service acute care patients while becoming more IL-like in logistics, sales, customer service, and marketing. Forced to compete on price, ILs have engaged in mega-mergers and will try to break into acute care via joint ventures. The ILs will need to choose their markets carefully, solidly integrate with parent organizations, and find ways to be profit centers. Consumers' demands also are forcing change. Consumers want accurate, legible bills and simplified eligibility determination and registration. They want an emphasis on prevention and wellness, which means that diagnostic testing must address early identification and monitoring of high-risk groups. To realize cost-efficiencies under whole-life capitation, laboratory networks must be part of a completely integrated health-care system. The laboratory of the future will be multicentered, without walls, and with quick access to information through technology. PMID:10178702

  7. Taking advantage of surface proximity effects with aero-marine vehicles

    NASA Astrophysics Data System (ADS)

    Trillo, Robert L.

    A comprehensive account is given of the operational characteristics and limiting operational conditions of 'wing-in-ground' surface effect marine aircraft. It is emphasized that operation must be restricted to the calmest of river and lake waters, not merely for the sake of safety margin enhancement but for the maximization of air cushion effect efficiency, and its consequent economic benefits. Quietness of operation in these environments is also essential, strongly recommending reliance on shrouded propellers as the primary means of propulsion.

  8. Science Learning and Instruction: Taking Advantage of Technology to Promote Knowledge Integration

    ERIC Educational Resources Information Center

    Linn, Marcia C.; Eylon, Bat-Sheva

    2011-01-01

    "Science Learning and Instruction" describes advances in understanding the nature of science learning and their implications for the design of science instruction. The authors show how design patterns, design principles, and professional development opportunities coalesce to create and sustain effective instruction in each primary scientific…

  9. Taking advantage of reduced droplet-surface interaction to optimize transport of bioanalytes in digital microfluidics.

    PubMed

    Freire, Sergio L S; Thorne, Nathaniel; Wutkowski, Michael; Dao, Selina

    2014-01-01

    Digital microfluidics (DMF), a technique for manipulation of droplets, is a promising alternative for the development of "lab-on-a-chip" platforms. Often, droplet motion relies on the wetting of a surface, directly associated with the application of an electric field; surface interactions, however, make motion dependent on droplet contents, limiting the breadth of applications of the technique. Some alternatives have been presented to minimize this dependence. However, they rely on the addition of extra chemical species to the droplet or its surroundings, which could potentially interact with droplet moieties. Addressing this challenge, our group recently developed Field-DW devices to allow the transport of cells and proteins in DMF, without extra additives. Here, the protocol for device fabrication and operation is provided, including the electronic interface for motion control. We also continue the studies with the devices, showing that multicellular, relatively large, model organisms can also be transported, arguably unaffected by the electric fields required for device operation. PMID:25407533

  10. Taking Advantage: The Rural Competitive Preference in the Investing in Innovation Program

    ERIC Educational Resources Information Center

    Strange, Marty

    2011-01-01

    The Investing in Innovation (i3) program is a U.S. Department of Education competitive grant program supporting innovation in public schools. To encourage projects focusing on rural education in its first round of grants in 2010 the Department offered two bonus points in the scoring system for "projects that would implement innovative practices,…

  11. Utilities and state regulators are failing to take advantage of emission allowance trading

    SciTech Connect

    Bohi, D.R. )

    1994-03-01

    Regulators are not providing active encouragement to utilities to engage in emissions trading, and utilities are behaving as if trading were restricted to state or system borders. If this pattern of behavior continues, emission allowance trading among U.S. electric utilities will prove to be considerably less successful--and less cost-effective--than originally expected.

  12. How HIV-1 Takes Advantage of the Cytoskeleton during Replication and Cell-to-Cell Transmission

    PubMed Central

    Lehmann, Martin; Nikolic, Damjan S.; Piguet, Vincent

    2011-01-01

    Human immunodeficiency virus 1 (HIV-1) infects T cells, macrophages and dendritic cells and can manipulate their cytoskeleton structures at multiple steps during its replication cycle. Based on pharmacological and genetic targeting of cytoskeleton modulators, new imaging approaches and primary cell culture models, important roles for actin and microtubules during entry and cell-to-cell transfer have been established. Virological synapses and actin-containing membrane extensions can mediate HIV-1 transfer from dendritic cells or macrophage cells to T cells and between T cells. We will review the role of the cytoskeleton in HIV-1 entry, cellular trafficking and cell-to-cell transfer between primary cells. PMID:21994805

  13. SeDiCi: An Authentication Service Taking Advantage of Zero-Knowledge Proofs

    NASA Astrophysics Data System (ADS)

    Grzonkowski, Sławomir

    Transmission of users' profiles over insecure communication means is a crucial task of today's e-commerce applications. In addition, users have to create many profiles and remember many credentials, so they retype the same information over and over again. Each time the users type their credentials, they expose them to phishing or eavesdropping attempts. These problems could be solved by using Single Sign-On (SSO). The idea of SSO is that users keep using the same set of credentials when visiting different websites. For web applications, OpenID is the most prominent solution that partially implements SSO. However, OpenID is prone to phishing attempts and it does not preserve users' privacy [1].

  14. How Can Monolingual Teachers Take Advantage of Learners' Native Language in Class?

    ERIC Educational Resources Information Center

    Pappamihiel, Eleni; Lynn, C. Allen

    2014-01-01

    With the increasing linguistic diversity of students in many classrooms around the world, teachers need to be well-equipped with strategies to address the learning needs of students with limited proficiency in the dominant language of the classroom. This article outlines various strategies that might help teachers reach that goal by taking…

  15. Taking advantage of local structure descriptors to analyze interresidue contacts in protein structures and protein complexes.

    PubMed

    Martin, Juliette; Regad, Leslie; Etchebest, Catherine; Camproux, Anne-Claude

    2008-11-15

    Interresidue contacts in protein structures and at protein-protein interfaces are classically described by the amino acid types of the interacting residues, and the local structural context of the contact, if any, is described using secondary structures. In this study, we present an alternative analysis of interresidue contacts using local structures defined by the structural alphabet introduced by Camproux et al. This structural alphabet allows a 3D structure to be described as a sequence of prototype fragments called structural letters, of 27 different types. Each residue can then be assigned to a particular local structure, even in loop regions. The analysis of interresidue contacts within protein structures, defined using Voronoï tessellations, reveals that pairwise contact specificity is greater in terms of structural letters than amino acids. Using a simple heuristic based on specificity score comparison, we find that 74% of the long-range contacts within protein structures are better described using structural letters than amino acid types. The investigation is extended to a set of protein-protein complexes, showing that similar global rules apply as for intraprotein contacts, with 64% of the interprotein contacts best described by local structures. We then present an evaluation of pairing functions integrating structural letters for decoy scoring and show that some complexes could benefit from the use of structural letter-based pairing functions. PMID:18491388

  16. Unidirectional movement of an actin filament taking advantage of temperature gradients.

    PubMed

    Kawaguchi, Tomoaki; Honda, Hajime

    2007-01-01

    An actin filament with heat acceptors attached to the Cys374 residue of each actin monomer could move unidirectionally under heat pulsation alone, in the total absence of both ATP and myosin. The prime driver for the movement was the temperature gradients operating between locally heated portions of an actin filament and its cooler surroundings. In this report, we investigated how the mitigation of the temperature gradients induces a unidirectional movement of an actin filament. We observed the transversal fluctuations of the filament in response to heat pulsation and their transition into longitudinally unidirectional movement. The transition was significantly accelerated when Cys374 and Lys336 were simultaneously excited within an actin monomer. These results suggest that the mitigation of the temperature gradients within each actin monomer first transformed the energy into transversal fluctuations of the filament, which were then transformed further into longitudinal movements of the filament. The faster mitigation of temperature gradients within the actin monomer helps build up the transition from transversal to longitudinal movements of the filament by coordinating the interaction between neighboring monomers. PMID:17030086

  17. How the new optoelectronic design automation industry is taking advantage of preexisting EDA standards

    NASA Astrophysics Data System (ADS)

    Nesmith, Kevin A.; Carver, Susan

    2014-05-01

    With the advancements in design processes down to the sub 7nm levels, the Electronic Design Automation industry appears to be coming to an end of advancements, as the size of the silicon atom becomes the limiting factor. Or is it? The commercial viability of mass-producing silicon photonics is bringing about the Optoelectronic Design Automation (OEDA) industry. With the science of photonics in its infancy, adding these circuits to ever-increasing complex electronic designs, will allow for new generations of advancements. Learning from the past 50 years of the EDA industry's mistakes and missed opportunities, the photonics industry is starting with electronic standards and extending them to become photonically aware. Adapting the use of pre-existing standards into this relatively new industry will allow for easier integration into the present infrastructure and faster time to market.

  18. Taking advantage of the ESA G-POD service to study deformation processes in mountain areas

    NASA Astrophysics Data System (ADS)

    Manconi, Andrea; Cignetti, Martina; Ardizzone, Francesca; Giordan, Daniele; Allasia, Paolo; De Luca, Claudio; Manunta, Michele; Casu, Francesco

    2015-04-01

    In mountain environments, the analysis of surface displacements is extremely important for a better understanding of the effects of mass wasting phenomena, such as landslides, rock glaciers, and glacier activity. In this scenario, the use of straightforward tools and approaches to monitor surface displacements at high spatial and temporal resolutions is a real need. Here we use the Parallel-SBAS service recently released within ESA's Grid Processing On Demand environment (G-POD, http://gpod.eo.esa.int/) to generate Earth's surface deformation time series and interferometric products. This service performs the full SBAS-DInSAR chain starting from Level 0 data and generates displacement time series. We use the data available in the Virtual Archive 4 (http://eo-virtual-archive4.esa.int/), in the framework of the Supersite initiative. In the framework of the HAMMER project (part of the NextData initiative, http://www.nextdataproject.it/), we produced mean deformation velocity maps, as well as deformation time series, at a regional scale (Aosta Valley Region, northern Italy) and at a local landslide scale (Puy landslide, Piedmont, northern Italy). The possibility of gathering the final results in less than 24 h (processing an average of about 30 SAR images for each frame considered) allowed us to perform a large number of attempts in a relatively short time. By "tuning" the processing, we maximized the final coverage of coherent points for both datasets, analysing the effect of SAR images acquired in the winter season as well as the impact of perpendicular and temporal baseline constraints. The results obtained with the P-SBAS G-POD service for the Aosta Valley region have been compared to the Deep Seated Gravitational Slope Deformation inventory (DGSD, reference IFFI project), finding a good correlation between the anomalous areas of surface deformation and the catalogued DGSDs. In addition, the results obtained for the Aosta Valley and Piedmont regions show good agreement with the mean velocity maps retrieved from the "Portale Cartografico Nazionale" (http://www.pcn.minambiente.it/GN/), which were instead processed with the PSInSAR technique on the same Envisat ASAR dataset. Finally, we discuss possible future developments of the P-SBAS G-POD service in the Sentinel-1 scenario, when a large amount of SAR images will be available to a wider audience, and how this may affect the analysis of surface deformation at different spatial and temporal scales.

  19. Extend Instruction outside the Classroom: Take Advantage of Your Learning Management System

    ERIC Educational Resources Information Center

    Jensen, Lauren A.

    2010-01-01

    Numerous institutions of higher education have implemented a learning management system (LMS) or are considering doing so. This web-based software package provides self-service and quick (often personalized) access to content in a dynamic environment. Learning management systems support administrative, reporting, and documentation activities. LMSs…

  20. Taking advantage of the polymorphism of the MSA-2 family for Babesia bovis strain characterization

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Merozoite Surface Antigen-2 (MSA-2) family of Babesia bovis is a group of variable genes which share conserved 5' and 3' ends. These genes encode membrane-anchored glycoproteins, named MSA-2a1, a2, b and c, which are immunodominant antigens located on the surface of sporozoites and mer...

  1. Transport implementation of the Bernstein–Vazirani algorithm with ion qubits

    NASA Astrophysics Data System (ADS)

    Fallek, S. D.; Herold, C. D.; McMahon, B. J.; Maller, K. M.; Brown, K. R.; Amini, J. M.

    2016-08-01

    Using trapped ion quantum bits in a scalable microfabricated surface trap, we perform the Bernstein–Vazirani algorithm. Our architecture takes advantage of the ion transport capabilities of such a trap. The algorithm is demonstrated using two- and three-ion chains. For three ions, an improvement is achieved compared to a classical system using the same number of oracle queries. For two ions and one query, we correctly determine an unknown bit string with probability 97.6(8)%. For three ions, we succeed with probability 80.9(3)%.
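
    For reference, a small NumPy statevector sketch of the Bernstein–Vazirani circuit in its phase-oracle form, which recovers a hidden bit string with a single oracle query; the trapped-ion transport architecture itself is not modeled:

        import numpy as np
        from itertools import product

        def bernstein_vazirani(s_bits):
            """Recover the hidden bit string s_bits with one phase-oracle query."""
            n = len(s_bits)
            basis = list(product((0, 1), repeat=n))
            # Uniform superposition; the oracle then applies a phase (-1)^(s . x).
            amps = np.array([(-1) ** (sum(s * x for s, x in zip(s_bits, xs)) % 2)
                             for xs in basis], dtype=float) / np.sqrt(2 ** n)
            # Final Hadamard layer: H[y, x] = (-1)^(x . y) / sqrt(2^n).
            H = np.array([[(-1) ** (sum(a * b for a, b in zip(xs, ys)) % 2) for xs in basis]
                          for ys in basis]) / np.sqrt(2 ** n)
            out = H @ amps
            return basis[int(np.argmax(np.abs(out)))]  # measurement yields s with certainty

        print(bernstein_vazirani((1, 0, 1)))           # prints (1, 0, 1)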

  2. Medicare Advantage Plans Pay Hospitals Less Than Traditional Medicare Pays.

    PubMed

    Baker, Laurence C; Bundorf, M Kate; Devlin, Aileen M; Kessler, Daniel P

    2016-08-01

    There is ongoing debate about how prices paid to providers by Medicare Advantage plans compare to prices paid by fee-for-service Medicare. We used data from Medicare and the Health Care Cost Institute to identify the prices paid for hospital services by fee-for-service (FFS) Medicare, Medicare Advantage plans, and commercial insurers in 2009 and 2012. We calculated the average price per admission, and its trend over time, in each of the three types of insurance for fixed baskets of hospital admissions across metropolitan areas. After accounting for differences in hospital networks, geographic areas, and case-mix between Medicare Advantage and FFS Medicare, we found that Medicare Advantage plans paid 5.6 percent less for hospital services than FFS Medicare did. Without taking into account the narrower networks of Medicare Advantage, the program paid 8.0 percent less than FFS Medicare. We also found that the rates paid by commercial plans were much higher than those of either Medicare Advantage or FFS Medicare, and growing. At least some of this difference comes from the much higher prices that commercial plans pay for profitable service lines. PMID:27503970

  3. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations, along with the algorithm of a high performance memory compiler, are presented.

  4. Teachable Moment: Google Earth Takes Us There

    ERIC Educational Resources Information Center

    Williams, Ann; Davinroy, Thomas C.

    2015-01-01

    In the current educational climate, where clearly articulated learning objectives are required, it is clear that the spontaneous teachable moment still has its place. Authors Ann Williams and Thomas Davinroy think that instructors from almost any discipline can employ Google Earth as a tool to take advantage of teachable moments through the…

  5. Fast algorithms for transport models. Final report

    SciTech Connect

    Manteuffel, T.A.

    1994-10-01

    This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed, along with a parallel version. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).

  6. Using integration technology as a strategic advantage.

    PubMed

    Fry, P A

    1993-08-01

    The underlying premise of the Managed Competition Act previously cited is that through managed competition providers will be forced to lower care costs while increasing the level of positive care outcomes. Because it may also be that tomorrow's hospitals will find a severe rationing of technology, what can they do to prepare? Most of the systems in place today already have built within them all the necessary potential to address this premise and technology requirement with no change, no conversion, no expense for new equipment and software, and no disruption in day-to-day operations, just a little re-engineering. Today, however, these systems are similar to a 20-mule team pulling in different directions: all the power is there, but the wagon remains motionless and totally unable to reach its objective. It takes a skilled wagonmaster to bring them together, to make the mules work as a cohesive unit, to make the power of 20 mules greater than the sum of 20 mules. So it is and will be for the hospital of tomorrow. System integration is no longer a question of whether but of when. Those hospitals that use it today as a strategic advantage will be in a better position tomorrow to use it as a competitive strategic advantage in an environment that will reward low cost and high positive care outcomes and will penalize those that cannot compete. The technology is already here and economically within reach of nearly every hospital, just waiting to be used. The question that must nag all of us who want to make the health care system of America better is, Why not make the when now? Rich Helppie, president of Superior Consultant Company, summarized the solution well: The old ways will not give way to the new overnight. The re-engineering process in healthcare must evolve. Compared to the last 20 years, however, such evolution may appear to be a massive, forthright, complete, comprehensive, drastic and rapid revolution. Survival is the name of the game, and for healthcare

  7. Taking the Long View

    ERIC Educational Resources Information Center

    Bennett, Robert B., Jr.

    2010-01-01

    Legal studies faculty need to take the long view in their academic and professional lives. Taking the long view would seem to be a cliched piece of advice, but too frequently legal studies faculty, like their students, get focused on meeting the next short-term hurdle--getting through the next class, grading the next stack of papers, making it…

  8. A novel sliding window algorithm for 2D discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Dong, Zhifang; Wu, Jiasong; Gui, Jiyong

    2015-12-01

    The discrete Fourier transform (DFT) is one of the most widely used tools for signal processing. In this paper, a novel sliding window algorithm is presented for fast computation of the 2D DFT when the sliding window shifts by more than one point. The proposed algorithm computes the DFT of the current window from that of the previous window. For fast computation, we take advantage of the recursive process of the 2D SDFT and a butterfly-based algorithm, so it can be directly applied to 2D signal processing. The theoretical analysis shows that the computational complexity is equal to that of the 2D SDFT when one sample enters the current window. Moreover, the numbers of additions and multiplications of the proposed algorithm are lower than those of the 2D vector-radix FFT when the sliding window shifts by multiple points.
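
    As background, the classical one-dimensional sliding DFT update for a one-point shift is sketched below; the multi-point, two-dimensional recursion proposed in the paper is not reproduced here:

        import numpy as np

        def sliding_dft_update(X, x_old, x_new, N):
            """Update the length-N DFT X after dropping x_old and appending x_new."""
            k = np.arange(N)
            # Remove the oldest sample, add the newest, then rotate the spectrum.
            return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

        # Brute-force check against a direct FFT of each window.
        rng = np.random.default_rng(1)
        sig, N = rng.normal(size=64), 16
        X = np.fft.fft(sig[:N])
        for n in range(N, 32):
            X = sliding_dft_update(X, sig[n - N], sig[n], N)
            assert np.allclose(X, np.fft.fft(sig[n - N + 1:n + 1]))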

  9. How to obtain efficient GPU kernels: An illustration using FMM & FGT algorithms

    NASA Astrophysics Data System (ADS)

    Cruz, Felipe A.; Layton, Simon K.; Barba, L. A.

    2011-10-01

    Computing on graphics processors is maybe one of the most important developments in computational science to happen in decades. Not since the arrival of the Beowulf cluster, which combined open source software with commodity hardware to truly democratize high-performance computing, has the community been so electrified. Like then, the opportunity comes with challenges. The formulation of scientific algorithms to take advantage of the performance offered by the new architecture requires rethinking core methods. Here, we have tackled fast summation algorithms (the fast multipole method and the fast Gauss transform) and applied algorithmic redesign to attain performance on GPUs. The progression of performance improvements attained illustrates the exercise of formulating algorithms for the massively parallel architecture of the GPU. The end result has been GPU kernels that run at over 500 Gop/s on one NVIDIA Tesla C1060 card, thereby reaching close to practical peak.

  10. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of Infrared Focal Plane Array (IRFPA) detectors, high quality real-time image processing becomes more important in infrared imaging systems. Facing the demand for better visual effect and good performance, we find the FPGA an ideal hardware choice for realizing image processing algorithms, taking full advantage of its high speed, high reliability, and ability to process a great amount of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It works at higher speed than serial processing devices such as CPUs and DSPs. Experiments show that this hardware implementation of the dynamic linear extension algorithm effectively enhances the visual quality of infrared images.
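
    A hedged software sketch of a dynamic linear extension (contrast stretch) that finds its own extension range from the frame statistics, in the spirit of the algorithm described above; the FPGA pipeline and its exact range-selection rule are not reproduced, and the percentile clipping is an illustrative choice:

        import numpy as np

        def dynamic_linear_extension(frame, low_pct=1.0, high_pct=99.0, out_max=255):
            """frame: 2D array of raw infrared intensities; returns an 8-bit image."""
            # Find the extension range automatically from the frame histogram.
            lo, hi = np.percentile(frame, [low_pct, high_pct])
            stretched = (frame.astype(float) - lo) / max(hi - lo, 1e-6)
            return np.clip(stretched * out_max, 0, out_max).astype(np.uint8)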

  11. A New Algorithm for Complex Non-Orthogonal Joint Diagonalization Based on Shear and Givens Rotations

    NASA Astrophysics Data System (ADS)

    Mesloub, Ammar; Abed-Meraim, Karim; Belouchrani, Adel

    2014-04-01

    This paper introduces a new algorithm to approximate the non-orthogonal joint diagonalization (NOJD) of a set of complex matrices. The algorithm is based on the Frobenius norm formulation of the JD problem and takes advantage of combining Givens and Shear rotations to attempt the approximate joint diagonalization (JD). It represents a non-trivial generalization of the JDi (Joint Diagonalization) algorithm (Souloumiac 2009) to the complex case. The JDi is first slightly modified and then generalized to the CJDi (i.e., Complex JDi) using a complex-to-real matrix transformation. Since several methods already exist in the literature, we also give a brief overview of existing NOJD algorithms and then provide an extensive comparative study to illustrate the effectiveness and stability of the CJDi with respect to various system parameters and application contexts.

  12. Image enhancement based on edge boosting algorithm

    NASA Astrophysics Data System (ADS)

    Ngernplubpla, Jaturon; Chitsobhuk, Orachat

    2015-12-01

    In this paper, a technique for image enhancement based on a proposed edge boosting algorithm to reconstruct a high quality image from a single low resolution image is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select the appropriate enhancement weights. Larger weights are applied to the higher frequency details while the lower frequency details are smoothed. The experimental results illustrate significant quantitative and perceptual performance improvements. The proposed edge boosting algorithm produces high quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.

  13. Competitive advantage on a warming planet.

    PubMed

    Lash, Jonathan; Wellington, Fred

    2007-03-01

    Whether you're in a traditional smokestack industry or a "clean" business like investment banking, your company will increasingly feel the effects of climate change. Even people skeptical about global warming's dangers are recognizing that, simply because so many others are concerned, the phenomenon has wide-ranging implications. Investors already are discounting share prices of companies poorly positioned to compete in a warming world. Many businesses face higher raw material and energy costs as more and more governments enact policies placing a cost on emissions. Consumers are taking into account a company's environmental record when making purchasing decisions. There's also a burgeoning market in greenhouse gas emission allowances (the carbon market), with annual trading in these assets valued at tens of billions of dollars. Companies that manage and mitigate their exposure to the risks associated with climate change while seeking new opportunities for profit will generate a competitive advantage over rivals in a carbon-constrained future. This article offers a systematic approach to mapping and responding to climate change risks. According to Jonathan Lash and Fred Wellington of the World Resources Institute, an environmental think tank, the risks can be divided into six categories: regulatory (policies such as new emissions standards), products and technology (the development and marketing of climate-friendly products and services), litigation (lawsuits alleging environmental harm), reputational (how a company's environmental policies affect its brand), supply chain (potentially higher raw material and energy costs), and physical (such as an increase in the incidence of hurricanes). The authors propose a four-step process for responding to climate change risk: Quantify your company's carbon footprint; identify the risks and opportunities you face; adapt your business in response; and do it better than your competitors. PMID:17348173

  14. Vector processor algorithms for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr.; Keller, J. D.; Hafez, M. M.

    1979-01-01

    This paper discusses a number of algorithms for solving the transonic full-potential equation in conservative form on a vector computer, such as the CDC STAR-100 or the CRAY-1. Recent research with the 'artificial density' method for transonics has led to the development of some new iteration schemes which take advantage of vector-computer architecture without suffering significant loss of convergence rate. Several of these more promising schemes are described, and 2-D and 3-D results are shown comparing the computational rates on the STAR and CRAY vector computers and the CYBER-175 serial computer. The schemes included are: (1) Checkerboard SOR, (2) Checkerboard Leapfrog, (3) odd-even vertical line SOR, and (4) odd-even horizontal line SOR.
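
    To illustrate the checkerboard ordering that makes such sweeps amenable to vector hardware, here is a minimal Python sketch of red-black (checkerboard) SOR applied to a model discrete Poisson problem; it is a toy stand-in for the idea, not the full-potential transonic solver, and the relaxation factor and sweep count are arbitrary assumptions.

      import numpy as np

      def red_black_sor(u, f, h, omega=1.8, sweeps=200):
          # Solves the 5-point discretization of  laplacian(u) = f  on the interior;
          # boundary values of u are held fixed.  Points of one "color" have no data
          # dependence on each other, so each half-sweep can be vectorized.
          n, m = u.shape
          for _ in range(sweeps):
              for color in (0, 1):                            # red points, then black points
                  for i in range(1, n - 1):
                      for j in range(1, m - 1):
                          if (i + j) % 2 != color:
                              continue
                          gauss_seidel = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                                 + u[i, j - 1] + u[i, j + 1]
                                                 - h * h * f[i, j])
                          u[i, j] += omega * (gauss_seidel - u[i, j])
          return u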

  15. Give/Take

    2007-09-12

    Give and Take are a set of companion utilities that allow a secure transfer of files from one user to another without exposing the files to third parties. The named files are copied to a spool area. The receiver can retrieve the files by running the "take" program. Ownership of the files remains with the giver until they are taken. Certain users may be limited to taking files only from specific givers; for these users, files may only be taken from givers who are members of the gt-uid group, where uid is the UNIX id of the limited user.

  16. Faster algorithms for RNA-folding using the Four-Russians method

    PubMed Central

    2014-01-01

    Background The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n³) time using Nussinov’s dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n³/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. Theoretical results We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n²) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n²/log n). Practical results We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source-code for the
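
    For context, the baseline O(n³) Nussinov recurrence that the paper accelerates can be written in a few lines of Python; this is the textbook dynamic program for counting non-crossing base pairs, not the two-vector or Four-Russians variant described above.

      def nussinov(seq):
          # dp[i][j] = maximum number of non-crossing pairs in seq[i..j].
          pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
          n = len(seq)
          dp = [[0] * n for _ in range(n)]
          for span in range(1, n):
              for i in range(n - span):
                  j = i + span
                  best = dp[i][j - 1]                       # base j left unpaired
                  for k in range(i, j):                     # base j paired with some k
                      if (seq[k], seq[j]) in pairs:
                          left = dp[i][k - 1] if k > i else 0
                          best = max(best, left + 1 + dp[k + 1][j - 1])
                  dp[i][j] = best
          return dp[0][n - 1]

      # e.g. nussinov("GGGAAAUCC") returns 3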

  17. Take Your Medicines Safely

    MedlinePlus Videos and Cool Tools

    ... better, the antibiotic is working in killing the bacteria, but it might not completely give what they call a "bactericidal effect." That means taking the bacteria completely out of the system. It might be ...

  18. Algorithms for physical segregation of coal

    NASA Astrophysics Data System (ADS)

    Ganguli, Rajive

    The capability for on-line measurement of the quality characteristics of conveyed coal now enables mine operators to take advantage of the inherent heterogeneity of those streams and split them into wash and no-wash stocks. Relative to processing the entire stream, this reduces the amount of coal that must be washed at the mine and thereby reduces processing costs, recovery losses, and refuse generation levels. In this dissertation, two classes of segregation algorithms, using time series models and moving windows, are developed and demonstrated using field and simulated data. In all of the developed segregation algorithms, a "cut-off" ash value was computed for the coal scanned on the running conveyor belt by the ash analyzer; this value determined whether the coal was sent to the wash pile or to the no-wash pile. In one class of the developed algorithms, forecasts from time series models, at various lead times ahead, were used to determine the cut-off ash levels. The time series models were updated from time to time to reflect changes in the process. Statistical Process Control (SPC) techniques were used to determine whether an update was necessary at a given time; when an update was deemed necessary, optimization techniques were used to determine the next best set of model parameters. In the other class of segregation algorithms, a "few" of the immediate past observations, called the window width, were used to determine the cut-off ash value. The window width was kept constant in some variants of this class; the other variants improved on the fixed-window-width algorithms by varying the window width rather than keeping it constant, with SPC used to determine the window width at any instant. Statistics of the empirical distribution and the normal distribution were used in the computation of the cut-off ash value in all the variants of this class. The good performance of the developed algorithms
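
    A minimal Python sketch of the fixed-window idea is given below; the window size, the quantile used as the cut-off, and the routing rule (high-ash increments to the wash pile) are illustrative assumptions, not the dissertation's calibrated choices.

      from collections import deque

      def segregate(ash_readings, window=20, quantile=0.5):
          # Route each scanned increment using a cut-off ash value computed from
          # the empirical distribution of the last `window` analyzer readings.
          history = deque(maxlen=window)
          decisions = []
          for ash in ash_readings:
              if len(history) < window:
                  decisions.append("no-wash")               # not enough history yet
              else:
                  cutoff = sorted(history)[int(quantile * (window - 1))]
                  decisions.append("wash" if ash > cutoff else "no-wash")
              history.append(ash)
          return decisions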

  19. Taking the Gloves off.

    ERIC Educational Resources Information Center

    Srratr, James; Shelton, Stella

    2001-01-01

    One eager reporter plus a media-savvy parent can undermine a school system's image and educational mission. Districts are hampered by the Family Educational Rights and Privacy Act, which prohibits open discussion of an issue concerning a student and his/her parents. Newsworthy incidents illustrate advantages of a waiver policy. (MLH)

  20. The Democratic Take

    ERIC Educational Resources Information Center

    Lehane, Christopher S.

    2008-01-01

    The 2008 presidential election stands as a "change" election. The public's anxiety over the challenges globalization poses to the future of the American Dream is driving a desire for the country to change direction. The American people understand that what will give the nation a competitive advantage in a global marketplace are the skills,…

  1. Analytical advantages of multivariate data processing. One, two, three, infinity?

    PubMed

    Olivieri, Alejandro C

    2008-08-01

    Multidimensional data are being abundantly produced by modern analytical instrumentation, calling for new and powerful data-processing techniques. Research in the last two decades has resulted in the development of a multitude of different processing algorithms, each equipped with its own sophisticated artillery. Analysts have slowly discovered that this body of knowledge can be appropriately classified, and that common aspects pervade all these seemingly different ways of analyzing data. As a result, going from univariate data (a single datum per sample, employed in the well-known classical univariate calibration) to multivariate data (data arrays per sample of increasingly complex structure and number of dimensions) is known to provide a gain in sensitivity and selectivity, combined with analytical advantages which cannot be overestimated. The first-order advantage, achieved using vector sample data, allows analysts to flag new samples which cannot be adequately modeled with the current calibration set. The second-order advantage, achieved with second- (or higher-) order sample data, allows one not only to mark new samples containing components which do not occur in the calibration phase but also to model their contribution to the overall signal, and most importantly, to accurately quantitate the calibrated analyte(s). No additional analytical advantages appear to be known for third-order data processing. Future research may make it possible, among other interesting issues, to assess whether this "1, 2, 3, infinity" situation of multivariate calibration really holds. PMID:18613646

  2. Research on THz CT system and image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Li, Ming-liang; Wang, Cong; Cheng, Hong

    2009-07-01

    Terahertz computed tomography (THz CT) offers not only high resolution in space and density without image overlap but also the capability of being used directly in digital processing and spectral analysis, which makes it a good choice for parameter detection in process control. However, diffraction and scattering of the THz wave can obfuscate or distort the reconstructed image. In order to find the most effective reconstruction method on which to build a THz CT model, and because of the expensive cost of such hardware, a fan-shaped THz CT industrial detection scanning model consisting of 8 emitters and 32 receivers is established based on a study of infrared CT technology. The model contains control and interface, data collection, and image reconstruction sub-systems. All the sub-function modules are analyzed, and images are then reconstructed with an algebraic reconstruction algorithm. The experimental results show it to be an effective, efficient algorithm with high resolution, performing even better than the back-projection method.
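
    The algebraic reconstruction technique mentioned above updates the image one projection (ray) at a time; a minimal Python sketch follows, assuming the fan-beam geometry has already been encoded in a system matrix A whose rows hold the ray weights. The relaxation factor and iteration count are illustrative assumptions.

      import numpy as np

      def art_reconstruct(A, p, iterations=20, relax=0.5):
          # Kaczmarz-style ART: project the current estimate onto the hyperplane
          # of each measured ray sum p_i = a_i . x in turn.
          x = np.zeros(A.shape[1])
          for _ in range(iterations):
              for a_i, p_i in zip(A, p):
                  norm2 = a_i @ a_i
                  if norm2 > 0.0:
                      x += relax * (p_i - a_i @ x) / norm2 * a_i
          return x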

  3. A Flexible Reservation Algorithm for Advance Network Provisioning

    SciTech Connect

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-04-12

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits with guaranteed bandwidth for a certain length of time. However, users currently cannot inquire about bandwidth availability, nor do they receive alternative suggestions when reservation requests fail. In general, the number of reservation options is exponential in the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided parameters of total volume and time constraints and produces options for earliest completion and shortest duration. The theoretical complexity is only O(n²r²) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.

  4. Why is Boris Algorithm So Good?

    SciTech Connect

    et al, Hong Qin

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  5. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 ; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
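
    For reference, one time step of the (non-relativistic) Boris scheme can be sketched in Python as follows; the half electric kicks and the rotation by the magnetic field are what give the method the volume-preserving character discussed above. The interface is an illustrative assumption.

      import numpy as np

      def boris_push(x, v, E, B, q, m, dt):
          # Advance position x and velocity v of a charged particle in fields E, B.
          h = q * dt / (2.0 * m)
          v_minus = v + h * E                       # first half electric kick
          t = h * B                                 # rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)   # rotation about B
          v_new = v_plus + h * E                    # second half electric kick
          return x + dt * v_new, v_new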

  6. IR and visual image registration based on mutual information and PSO-Powell algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Youwen; Gao, Kun; Miu, Xianghu

    2014-11-01

    Infrared and visual image registration has wide application in the fields of remote sensing and the military. Mutual information (MI) has proved effective and successful in the infrared and visual image registration process. To find the most appropriate registration parameters, optimization algorithms such as the Particle Swarm Optimization (PSO) algorithm or the Powell search method are often used. The PSO algorithm has strong global search ability and searches quickly at the beginning, but its performance degrades in the late search stage; in image registration it often spends considerable time on unproductive search and yields low-precision solutions. The Powell search method has strong local search ability, but its performance and running time are sensitive to the initial values; in image registration it is often trapped by local maxima and gives wrong results. In this paper, a novel hybrid algorithm that combines the PSO algorithm and the Powell search method is proposed. It combines the advantages of both, avoiding the obstruction caused by local maxima while achieving higher precision. First, the PSO algorithm is used to obtain registration parameters close to the global optimum; based on this result, the Powell search method is then used to find more precise registration parameters. The experimental results show that the algorithm can effectively correct scale, rotation, and translation differences between two images, that it is a good solution for registering infrared and visible images, and that it offers better time and precision performance than the traditional optimization algorithms.
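
    The two-stage idea can be sketched generically in Python: a small particle swarm first gets near the global optimum of the similarity measure, and Powell's method then refines the best particle. The objective is left abstract here (for registration it would be, e.g., the negative mutual information over scale, rotation and translation); the swarm size and coefficients are illustrative assumptions, not the authors' settings.

      import numpy as np
      from scipy.optimize import minimize

      def pso_then_powell(objective, bounds, n_particles=30, iters=50,
                          w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # particle positions
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(iters):                                 # stage 1: global PSO search
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return minimize(objective, gbest, method="Powell")     # stage 2: local refinement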

  7. Fully relativistic lattice Boltzmann algorithm

    SciTech Connect

    Romatschke, P.; Mendoza, M.; Succi, S.

    2011-09-15

    Starting from the Maxwell-Juettner equilibrium distribution, we develop a relativistic lattice Boltzmann (LB) algorithm capable of handling ultrarelativistic systems with flat, but expanding, spacetimes. The algorithm is validated through simulations of a quark-gluon plasma, yielding excellent agreement with hydrodynamic simulations. The present scheme opens the possibility of transferring the recognized computational advantages of lattice kinetic theory to the context of both weakly and ultrarelativistic systems.

  8. Localization algorithm for acoustic emission

    NASA Astrophysics Data System (ADS)

    Salinas, V.; Vargas, Y.; Ruzzante, J.; Gaete, L.

    2010-01-01

    In this paper, an iterative algorithm for localization of an acoustic emission (AE) source is presented. The main advantage of the system is that it does not depend on the researcher's skill in setting the signal level used to trigger the acquisition. The system was tested on cylindrical samples with an AE source at a known position; the precision of the source determination was about 2 mm, better than the precision obtained with classic localization algorithms (˜1 cm).

  9. A possible heterozygous advantage in muscular dystrophy.

    PubMed

    Emery, A E H

    2016-01-01

    In certain autosomal recessive disorders there is suggestive evidence that heterozygous carriers may have some selective advantage over normal homozygotes. These include, for example, cystic fibrosis, Tay-Sachs disease and phenylketonuria. The best example so far, however, is that of significant heterozygous advantage in sickle-cell anaemia with increased resistance to falciparum malaria. PMID:27245530

  10. The Down Syndrome Advantage: Fact or Fiction?

    ERIC Educational Resources Information Center

    Corrice, April M.; Glidden, Laraine Masters

    2009-01-01

    The "Down syndrome advantage" is the popular conception that children with Down syndrome are easier to rear than children with other developmental disabilities. We assessed whether mothers of children with developmental disabilities would demonstrate a consistent Down syndrome advantage as their children aged from 12 to 18 years. Results did not…

  11. Learning to take actions

    SciTech Connect

    Khardon, R.

    1996-12-31

    We formalize a model for supervised learning of action strategies in dynamic stochastic domains, and show that pac-learning results on Occam algorithms hold in this model as well. We then identify a particularly useful bias for action strategies based on production rule systems. We show that a subset of production rule systems, including rules in predicate calculus style, small hidden state, and unobserved support predicates, is properly learnable. The bias we introduce enables the learning algorithm to invent the recursive support predicates which are used in the action strategy, and to reconstruct the internal state of the strategy. It is also shown that hierarchical strategies are learnable if a helpful teacher is available, but that otherwise the problem is computationally hard.

  12. Take Pride in America.

    ERIC Educational Resources Information Center

    Indiana State Dept. of Education, Indianapolis. Center for School Improvement and Performance.

    During the 1987-88 school year the Indiana Department of Education assisted the United States Department of the Interior and the Indiana Department of Natural Resources with a program which asked students to become involved in activities to maintain and manage public lands. The 1987 Take Pride in America (TPIA) school program encouraged volunteer…

  13. Teachers Taking Professional Abuse

    ERIC Educational Resources Information Center

    Normore, Anthony H.; Floyd, Andrea

    2005-01-01

    Preservice teachers get their first teaching position hoping to take the first step toward becoming professional educators and expecting support from experienced colleagues and administrators, who often serve as their mentors. In this article, the authors present the story of Kristine (a pseudonym), who works at a middle school in a large U.S.…

  14. Take a Bow

    ERIC Educational Resources Information Center

    Spitzer, Greg; Ogurek, Douglas J.

    2009-01-01

    Performing-arts centers can provide benefits at the high school and collegiate levels, and administrators can take steps now to get the show started. When a new performing-arts center comes to town, local businesses profit. Events and performances draw visitors to the community. Ideally, a performing-arts center will play many roles: entertainment…

  15. Take time for laughter.

    PubMed

    Huntley, Mary I

    2009-01-01

    Taking time for positive laughter in the workplace every day is energizing, health-promoting, and rewarding. Humor happenings and mirthful moments are all around us; we need to be receptive to them. Research provides evidence that laughter is a powerful tool when used appropriately in our personal and professional life journey. PMID:19343850

  16. Simulating Price-Taking

    ERIC Educational Resources Information Center

    Engelhardt, Lucas M.

    2015-01-01

    In this article, the author presents a price-takers' market simulation geared toward principles-level students. This simulation demonstrates that price-taking behavior is a natural result of the conditions that create perfect competition. In trials, there is a significant degree of price convergence in just three or four rounds. Students find this…

  17. Take action: influence diversity.

    PubMed

    Gomez, Norma J

    2013-01-01

    Increased diversity brings strength to nursing and ANNA. Being a more diverse association will require all of us working together. There is an old proverb that says: "one hand cannot cover the sky; it takes many hands." ANNA needs every one of its members to be a part of the diversity initiative. PMID:24579394

  18. Taking the thrombin "fork".

    PubMed

    Mann, Kenneth G

    2010-07-01

    The proverb that probably best exemplifies my career in research is attributable to Yogi Berra (http://www.yogiberra.com/), ie, "when you come to a fork in the road ... take it." My career is a consequence of chance interactions with great mentors and talented students and the opportunities provided by a succession of ground-breaking improvements in technology. PMID:20554951

  19. Taking Library Leadership Personally

    ERIC Educational Resources Information Center

    Davis, Heather; Macauley, Peter

    2011-01-01

    This paper outlines the emerging trends for leadership in the knowledge era. It discusses these within the context of leading, creating and sustaining the performance development cultures that libraries require. The first step is to recognise that we all need to take leadership personally no matter whether we see ourselves as leaders or followers.…

  20. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  1. The advantage of first mention in Spanish

    PubMed Central

    CARREIRAS, MANUEL; GERNSBACHER, MORTON ANN; VILLA, VICTOR

    2015-01-01

    An advantage of first mention—that is, faster access to participants mentioned first in a sentence—has previously been demonstrated only in English. We report three experiments demonstrating that the advantage of first mention occurs also in Spanish sentences, regardless of whether the first-mentioned participants are syntactic subjects, and regardless, too, of whether they are proper names or inanimate objects. Because greater word-order flexibility is allowed in Spanish than in English (e.g., nonpassive object-verb-subject constructions exist in Spanish), these findings provide additional evidence that the advantage of first mention is a general cognitive phenomenon. PMID:24203596

  2. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
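
    A toy Python version of the first type of subalgorithm described above is shown below: it searches shift/mask combinations until every key in the set maps to a distinct code, after which membership can be tested in constant time with no collision handling. The search ranges and the use of a plain (non-rotating) mask are simplifying assumptions.

      def find_shift_and_mask(keys, max_shift=32, max_bits=16):
          # Return (shift, mask) such that (k >> shift) & mask is unique per key.
          for bits in range(1, max_bits + 1):          # prefer the smallest code range
              mask = (1 << bits) - 1
              for shift in range(max_shift):
                  codes = {(k >> shift) & mask for k in keys}
                  if len(codes) == len(keys):
                      return shift, mask
          return None                                  # no solution in the searched space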

  3. Paedomorphic facial expressions give dogs a selective advantage.

    PubMed

    Waller, Bridget M; Peirce, Kate; Caeiro, Cátia C; Scheider, Linda; Burrows, Anne M; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process. PMID:24386109

  4. Paedomorphic Facial Expressions Give Dogs a Selective Advantage

    PubMed Central

    Waller, Bridget M.; Peirce, Kate; Caeiro, Cátia C.; Scheider, Linda; Burrows, Anne M.; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process. PMID:24386109

  5. 76 FR 56262 - Community Advantage Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... Community Advantage Pilot Program (``CA Pilot Program'') (76 FR 9626). Pursuant to the authority provided to... small businesses and entrepreneurs in underserved markets, SBA is issuing this notice to revise...

  6. THE HOME ADVANTAGE IN MAJOR LEAGUE BASEBALL.

    PubMed

    Jones, Marshall B

    2015-12-01

    Home advantage is smaller in baseball than in other major professional sports for men, specifically football, basketball, or soccer. This paper advances an explanation. It begins by reviewing the main observations to support the view that there is little or no home advantage in individual sports. It then presents the case that home advantage originates in impaired teamwork among the away players. The need for teamwork and the extent of it vary from sport to sport. To the extent that a sport requires little teamwork it is more like an individual sport, and the home team would be expected to enjoy only a small advantage. Interactions among players on the same side (teamwork) are much less common in baseball than in the other sports considered. PMID:26654988

  7. Electronic Recruiting: An Alternative with Many Advantages.

    ERIC Educational Resources Information Center

    Hicks, Mary J.; Woo, Tony Chi-Hung

    1989-01-01

    Discusses electronic recruiting, a process which enables employers to directly access computerized college student resume information via modem. Addresses such advantages as providing up-to-date student information and alleviating paperwork. Provides an example of this system. (BHK)

  8. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225

  9. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  10. Quasi-independence, fitness, and advantageousness.

    PubMed

    Brosnan, Kevin

    2009-09-01

    I argue that the idea of 'quasi-independence' [Lewontin, R. C. (1978). Adaptation. Scientific American, 239(3), 212-230] cannot be understood without attending to the distinction between fitness and advantageousness [Sober, E. (1993). Philosophy of biology. Boulder: Westview Press]. Natural selection increases the frequency of fitter traits, not necessarily of advantageous ones. A positive correlation between an advantageous trait and a disadvantageous one may prevent the advantageous trait from evolving. The quasi-independence criterion is aimed at specifying the conditions under which advantageous traits will evolve by natural selection in this type of situation. Contrary to what others have argued [Sterelny, K. (1992). Evolutionary explanations of human behavior. Australian Journal of Philosophy, 70(2), 156-172, and Sterelny, K., & Griffiths, P. (1999). Sex and death. Chicago: University of Chicago Press], these conditions must involve a precise quantitative measure of (a) the extent to which advantageous traits are beneficial, and (b) the degree to which they are correlated with other traits. Driscoll (2004) [Driscoll, C. (2004). Can behaviors be adaptations? Philosophy of Science, 71, 16-35] recognizes the need for such a measure, but I argue that she does not provide the correct formulation. The account of quasi-independence that I offer clarifies this point. PMID:19720331

  11. The Oilheat Manufacturers Associations Oilheat Advantages Project

    SciTech Connect

    Hedden, R.; Bately, J.E.

    1995-04-01

    The Oilheat Advantages Project is the Oilheat Manufacturers Association's first project. It involves the creation and dissemination of a unified, well-documented, compellingly packaged oilheat story. The project involves three steps: the first step is to pull together all the existing data on the advantages of oilheat into a single, well-documented engineering report. The second step will be to rewrite and package the technical document into a consumer piece and a scripted presentation supported with overheads, and to disseminate the information throughout the industry. The third step will be to fund new research to update existing information and discover new advantages of oilheat; this step will begin next year. The information will be packaged in the following formats: the Engineering Document, which will include all the technical information, including the creditable third-party sources for all the findings on the many advantages of oilheat; the Consumer Booklet, which summarizes all the findings in the Engineering Document in simple language with easy-to-understand illustrations and graphs; a series of single-topic Statement Stuffers on each of the advantages; an Overhead Transparency-Supported Scripted Show that can be used by industry representatives for presentations to the general public, schools, civic groups, and service clubs; and the periodic publication of updates to the Oilheat Advantages Study.

  12. Auditory perspective taking.

    PubMed

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners. PMID:23096077

  13. Take the "C" Train

    ERIC Educational Resources Information Center

    Lawton, Rebecca

    2008-01-01

    In this essay, the author recalls several of her experiences in which she successfully pulled her boats out of river holes by throwing herself to the water as a sea-anchor. She learned this trick from her senior guides at a spring training. Her guides told her, "When you're stuck in a hole, take the 'C' train." "Meaning?" the author asked her…

  14. "Greenbook Algorithms and Hardware Needs Analysis"

    SciTech Connect

    De Jong, Wibe A.; Oehmen, Chris S.; Baxter, Douglas J.

    2007-01-09

    "This document describes the algorithms, and hardware balance requirements needed to enable the solution of real scientific problems in the DOE core mission areas of environmental and subsurface chemistry, computational and systems biology, and climate science. The MSCF scientific drivers have been outlined in the Greenbook, which is available online at http://mscf.emsl.pnl.gov/docs/greenbook_for_web.pdf . Historically, the primary science driver has been the chemical and the molecular dynamics of the biological science area, whereas the remaining applications in the biological and environmental systems science areas have been occupying a smaller segment of the available hardware resources. To go from science drivers to hardware balance requirements, the major applications were identified. Major applications on the MSCF resources are low- to high-accuracy electronic structure methods, molecular dynamics, regional climate modeling, subsurface transport, and computational biology. The algorithms of these applications were analyzed to identify the computational kernels in both sequential and parallel execution. This analysis shows that a balanced architecture is needed with respect to processor speed, peak flop rate, peak integer operation rate, and memory hierarchy, interprocessor communication, and disk access and storage. A single architecture can satisfy the needs of all of the science areas, although some areas may take greater advantage of certain aspects of the architecture. "

  15. Libpsht: Algorithms for Efficient Spherical Harmonic Transforms

    NASA Astrophysics Data System (ADS)

    Reinecke, Martin

    2010-10-01

    Libpsht (or "library for Performing Spherical Harmonic Transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports transforms of scalars as well as spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP and ECP). It will take advantage of hardware features like multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time as well as memory consumption) currently being used in the astronomical community. The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc. Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2. Development on this project has ended; its successor is libsharp (ascl:1402.033).

  16. Libpsht - algorithms for efficient spherical harmonic transforms

    NASA Astrophysics Data System (ADS)

    Reinecke, M.

    2011-02-01

    Libpsht (or "library for performant spherical harmonic transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports both transforms of scalars and spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP, and ECP). It will take advantage of hardware features such as multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time, as well as memory consumption) currently being used in the astronomical community. The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc. Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2 and can be downloaded from .

  17. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    PubMed Central

    Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, as opposed to an "artificial language" such as a computer programming language, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to include concept similarity comparison instead of co-occurring terms/words, may not always determine the perfect match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure. PMID:24982952

  18. An 'adding' algorithm for the Markov chain formalism for radiation transfer

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.

    1979-01-01

    An adding algorithm is presented, that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.

  19. Robust interacting multiple model algorithms based on multi-sensor fusion criteria

    NASA Astrophysics Data System (ADS)

    Zhou, Weidong; Liu, Mengmeng

    2016-01-01

    This paper is concerned with the state estimation problem for a class of Markov jump linear discrete-time stochastic systems. Three novel interacting multiple model (IMM) algorithms are proposed based on the H∞ technique, the correlation among estimation errors of mode-conditioned filters and the multi-sensor optimal information fusion criteria. Mode probabilities in the novel algorithms are derived based on the error cross-covariances instead of likelihood functions. The H∞ technique taking the place of Kalman filtering is applied to enhance the robustness of the new approaches. Theoretical analysis and Monte Carlo simulation results indicate that the proposed algorithms are effective and have an obvious advantage in velocity estimation when tracking a maneuvering target.

  20. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  1. Physics Take-Outs

    NASA Astrophysics Data System (ADS)

    Riendeau, Diane; Hawkins, Stephanie; Beutlich, Scott

    2016-03-01

    Most teachers want students to think about their course content not only during class but also throughout their day. So, how do you get your students to see how what they learn in class applies to their lives outside of class? As physics teachers, we are fortunate that our students are continually surrounded by our content. How can we get them to notice the physics around them? How can we get them to make connections between the classroom content and their everyday lives? We would like to offer a few suggestions, Physics Take-Outs, to solve this problem.

  2. Measuring home advantage in Spanish handball.

    PubMed

    Gutiérrez Aguilar, Oscar; Saavedra García, Miguel; Fernández Romero, Juan José

    2012-02-01

    Since Pollard established the system for analysing home advantage in 1986, it has been demonstrated and quantified in various sports, including many team sports. This study aims to assess whether home advantage exists in handball, using a sample of more than 19,000 Spanish handball league games. Results of the games played at home and away, the sex of the players, and the levels of the competition were included as variables. In Spanish handball, there was a home advantage of 61%, which means, on average, the team playing at home wins 61% of points available. This value varies according to sex and according to competition level, increasing as competition level decreases and season rank improves. PMID:22582700

  3. Is There an Islamist Political Advantage?

    PubMed Central

    Cammett, Melani; Luong, Pauline Jones

    2014-01-01

    There is a widespread presumption that Islamists have an advantage over their opponents when it comes to generating mass appeal and winning elections. The question remains, however, as to whether these advantages—or, what we refer to collectively as an Islamist political advantage—actually exist. We argue that—to the extent that Islamists have a political advantage—the primary source of this advantage is reputation rather than the provision of social services, organizational capacity, or ideological hegemony. Our purpose is not to dismiss the main sources of the Islamist governance advantage identified in scholarly literature and media accounts, but to suggest a different causal path whereby each of these factors individually and sometimes jointly promotes a reputation for Islamists as competent, trustworthy, and pure. It is this reputation for good governance that enables Islamists to distinguish themselves in the streets and at the ballot box. PMID:25767370

  4. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
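
    The relative-neighborhood graph itself is easy to state: an edge (u, v) is kept only when no third node w is closer to both u and v than they are to each other. A brute-force Python sketch of the construction (O(n³), purely illustrative and unrelated to the routing protocol details) is given below.

      import numpy as np

      def relative_neighborhood_graph(points):
          pts = np.asarray(points, dtype=float)
          n = len(pts)
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
          edges = []
          for u in range(n):
              for v in range(u + 1, n):
                  # Keep (u, v) unless some w lies in the "lune" between u and v.
                  if all(max(d[u, w], d[v, w]) >= d[u, v]
                         for w in range(n) if w not in (u, v)):
                      edges.append((u, v))
          return edges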

  5. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  6. Categorizing Variations of Student-Implemented Sorting Algorithms

    ERIC Educational Resources Information Center

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

    In this study, we examined freshmen students' sorting algorithm implementations in data structures and algorithms' course in two phases: at the beginning of the course before the students received any instruction on sorting algorithms, and after taking a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  7. Solving matrix-effects exploiting the second order advantage in the resolution and determination of eight tetracycline antibiotics in effluent wastewater by modelling liquid chromatography data with multivariate curve resolution-alternating least squares and unfolded-partial least squares followed by residual bilinearization algorithms I. Effect of signal pre-treatment.

    PubMed

    De Zan, M M; Gil García, M D; Culzoni, M J; Siano, R G; Goicoechea, H C; Martínez Galera, M

    2008-02-01

    The effect of piecewise direct standardization (PDS) and baseline correction approaches was evaluated in the performance of the multivariate curve resolution (MCR-ALS) algorithm for the resolution of three-way data sets from liquid chromatography with diode-array detection (LC-DAD). First, eight tetracyclines (tetracycline, oxytetracycline, chlorotetracycline, demeclocycline, methacycline, doxycycline, meclocycline and minocycline) were isolated from 250 mL effluent wastewater samples by solid-phase extraction (SPE) with Oasis MAX 500 mg/6 mL cartridges and then separated on an Aquasil C18 150 mm x 4.6 mm (5 microm particle size) column by LC and detected by DAD. Previous experiments, carried out with Milli-Q water samples, showed a considerable loss of the most polar analytes (minocycline, oxytetracycline and tetracycline) due to breakthrough. PDS was applied to overcome this important drawback. Conversion of chromatograms obtained from standards prepared in solvent was performed, obtaining a high correlation with those corresponding to the real situation (r² = 0.98). Although the enrichment and clean-up steps were carefully optimized, the sample matrix caused a large baseline drift, and additive interferences were also present at the retention times of the analytes. These problems were solved with the baseline correction method proposed by Eilers. MCR-ALS was applied to the corrected and uncorrected three-way data sets to obtain spectral and chromatographic profiles of each tetracycline, as well as those corresponding to the co-eluting interferences. The complexity of the calibration model built from uncorrected data sets was higher, as expected, and the quality of the spectral and chromatographic profiles was worse. PMID:18093603

  8. 76 FR 9626 - Community Advantage Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-18

    ... and members of the military community. The Community Advantage Pilot Program will allow mission... military community. This new pilot program will replace the current Community Express Pilot Loan Program, which has been extended through April 30, 2011. (75 FR 80561, December 22, 2010) No new...

  9. Creating Competitive Advantage through Effective Management Education.

    ERIC Educational Resources Information Center

    Longenecker, Clinton O.; Ariss, Sonny S.

    2002-01-01

    Managers trained in executive education programs (n=203) identified ways in which management education can increase an organization's competitive advantage: exposure to new ideas and practices, skill development, and motivation. Characteristics of effective management education included experience-based learning orientation, credible instructors,…

  10. Advantages of Studying Processes in Educational Research

    ERIC Educational Resources Information Center

    Schmitz, Bernhard

    2006-01-01

    It is argued that learning and instruction could be conceptualized from a process-analytic perspective. Important questions from the field of learning and instruction are presented which can be answered using our approach of process analyses. A classification system of process concepts and methods is given. One main advantage of this kind of…

  11. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous amount of weight computations, the original algorithm has a high computational cost. An improvement in image quality over the original algorithm is obtained by ignoring the contributions from dissimilar windows. Even though their weights are very small at first sight, the new estimated pixel value can be severely biased due to the many small contributions. This bad influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using the preclassification based on the first three statistical moments, only contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual performance in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR values for images containing a lot of repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.

  12. The half-truth of first-mover advantage.

    PubMed

    Suarez, Fernando; Lanzolla, Gianvito

    2005-04-01

    Many executives take for granted that the first company in a new product category gets an unbeatable head start and reaps long-lasting benefits. But that doesn't always happen. The authors of this article discovered that much depends on the pace at which the category's technology is changing and the speed at which the market is evolving. By analyzing these two factors, companies can improve their odds of succeeding as first movers with the resources they possess. Gradual evolution in both the technology and the market provides a first mover with the best conditions for creating a dominant position that is long lasting (Hoover in the vacuum cleaner industry is a good example). In such calm waters, a company can defend its advantages even without exceptional skills or extensive financial resources. When the market is changing rapidly and the product isn't, a first entrant with extensive resources can obtain a long-lasting advantage (as Sony did with its Walkman personal stereo); a company with only limited resources probably must settle for a short-term benefit. When the market is static but the product is changing constantly, first-mover advantages of either kind--durable or short-lived--are unlikely. Only companies with very deep pockets can survive (think of Sony and the digital cameras it pioneered). Rapid churn in both the technology and the market creates the worst conditions. But if companies have an acute sense of when to exit-as Netscape demonstrated when it agreed to be acquired by AOL-a worthwhile short-term gain is possible. Before venturing into a newly forming market, you need to analyze the environment, assess your resources, then determine which type offirst-mover advantage is most achievable. Once you've gone into the water, you have no choice but to swim. PMID:15807045

  13. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  14. Advantages of Parallel Processing and the Effects of Communications Time

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Allman, Mark

    2000-01-01

    Many computing tasks involve heavy mathematical calculations or analysis of large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or for use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
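
    A toy timing model (with hypothetical numbers, not the experiment's measurements) illustrates why a long-delay satellite link can erase the benefit of adding workers: the compute term shrinks with the number of hosts while the communication term does not:

      # Total time for N workers: per-worker compute time plus a per-round
      # communication cost dominated by the network round-trip latency.
      def total_time(work_s, n_workers, bytes_per_round, rounds,
                     latency_s, bandwidth_Bps):
          compute = work_s / n_workers
          comm = rounds * (latency_s + bytes_per_round / bandwidth_Bps)
          return compute + comm

      for n in (1, 2, 4, 8, 16):
          lan = total_time(3600, n, 1e6, 100, latency_s=0.001, bandwidth_Bps=1e7)
          sat = total_time(3600, n, 1e6, 100, latency_s=0.550, bandwidth_Bps=1e6)
          print(f"{n:2d} workers: LAN {lan:7.1f} s   GEO satellite {sat:7.1f} s")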

  15. Sustainable competitive advantage for accountable care organizations.

    PubMed

    Macfarlane, Michael Alex

    2014-01-01

    In the current period of health industry reform, accountable care organizations (ACOs) have emerged as a new model for the delivery of high-quality and cost-effective healthcare. However, few ACOs operate in direct competition with one another, and the accountable care business model has yet to present a means of continually developing new marginal value for patients and network partners. With value-based purchasing and patient consumerism strengthening as market forces, ACOs must build organizational sustainability and competitive advantage to meet the value demands set by customers and competitors. This essay proposes a strategy, adapted from the disciplines of agile software development and Lean product development, through which ACOs can engage internal and external customers in the development of new products that will provide sustainability and competitive advantage to the organization by decreasing waste in development, promoting specialized knowledge, and closely targeting customer value. PMID:25154124

  16. What factors contribute to an ownership advantage?

    PubMed

    Fayed, S A; Jennions, M D; Backwell, P R Y

    2008-04-23

    In most taxa, owners win fights when defending a territory against intruders. We calculated effect sizes for four factors that potentially contribute to an 'owner advantage'. We studied male fiddler crabs Uca mjoebergi, where owners won 92% of natural fights. Owners were not more successful because they were inherently better fighters (r=0.02). There was a small effect (r=0.18) of the owner's knowledge of territory quality (food availability) and a medium effect (r=0.29) of his having established relations with neighbours (duration of active tenure), but neither was statistically significant. There was, however, a significant effect due to the mechanical advantage the owner gained through access to the burrow during fights (r=0.48, p<0.005). PMID:18089521

  17. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
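
    The two headline metrics quoted above -- scene cloud fraction error and pixel-level agreement with the manual mask -- can be computed from a pair of binary masks as in the following sketch (the synthetic masks are illustrative, not Landsat data):

      import numpy as np

      def evaluate_cloud_mask(algo_mask, manual_mask):
          """Compare a binary cloud mask against a manual (visual) mask.

          Returns the scene cloud-fraction error (algorithm minus manual)
          and the fraction of pixels on which the two masks agree."""
          algo = np.asarray(algo_mask, dtype=bool)
          manual = np.asarray(manual_mask, dtype=bool)
          cf_error = algo.mean() - manual.mean()
          agreement = np.mean(algo == manual)
          return cf_error, agreement

      rng = np.random.default_rng(0)
      manual = rng.random((100, 100)) < 0.4        # hypothetical manual mask
      algo = manual.copy()
      flip = rng.random((100, 100)) < 0.05         # 5% disagreeing pixels
      algo[flip] = ~algo[flip]
      print(evaluate_cloud_mask(algo, manual))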

  18. Coating-removal techniques: Advantages and disadvantages

    SciTech Connect

    Reitz, W.E.

    1994-07-01

    The removal of radioactive and nonradioactive coatings from various surfaces is a subject of increasing interest for a variety of reasons, including remaining life assessment, nondestructive evaluation of structural integrity, and life extension through the adoption of new surface-modification methods. This review summarizes the state of the art in coating-removal technologies, presenting their advantages and limitations. The methods covered include laser ablation, microwaves, flashlamps, ice, CO2, and plastic blast media.

  19. The oblique mastectomy incision: advantages and outcomes.

    PubMed

    Gronet, Edward M; Halvorson, Eric G

    2014-01-01

    Mastectomy has traditionally been performed using a transverse elliptical incision. The disadvantages of this approach are a potentially visible scar medially and poor subincisional soft-tissue coverage of implants laterally. A more natural and aesthetic result is obtained with an oblique incision running parallel to the pectoralis major muscle fibers. This approach offers women more freedom of choice in clothing as well as the potential for complete subincisional muscle coverage in alloplastic breast reconstruction, in addition to other functional advantages. PMID:24835870

  20. Sinus pericranii: advantages of MR imaging.

    PubMed

    Bigot, J L; Iacona, C; Lepreux, A; Dhellemmes, P; Motte, J; Gomes, H

    2000-10-01

    Sinus pericranii is a rare vascular anomaly involving an abnormal communication between the extracranial and intracranial circulations. A 3-year-old girl presented with a 2 x 2-cm, midline soft-tissue mass at the vertex. Plain skull films and CT using bone windows showed erosion of the parietal bones. MRI confirmed the clinical diagnosis by identifying communication of the vascular mass with the intracranial dural venous sinus. The advantages of MRI are discussed. PMID:11075608

  1. Advantages of polarization experiments at RHIC

    SciTech Connect

    Underwood, D.G.

    1990-01-01

    We point out various spin experiments that could be done if the polarized beam option is pursued at RHIC. The advantages of RHIC for investigating several current and future physics problems are discussed. In particular, the gluon spin dependent structure function of the nucleon could be measured cleanly and systematically. Relevant experience developed in conjunction with the Fermilab Polarized Beam program is also presented. 8 refs., 2 tabs.

  2. Explaining Asian Americans’ academic advantage over whites

    PubMed Central

    Hsin, Amy; Xie, Yu

    2014-01-01

    The superior academic achievement of Asian Americans is a well-documented phenomenon that lacks a widely accepted explanation. Asian Americans’ advantage in this respect has been attributed to three groups of factors: (i) socio-demographic characteristics, (ii) cognitive ability, and (iii) academic effort as measured by characteristics such as attentiveness and work ethic. We combine data from two nationally representative cohort longitudinal surveys to compare Asian-American and white students in their educational trajectories from kindergarten through high school. We find that the Asian-American educational advantage is attributable mainly to Asian students exerting greater academic effort and not to advantages in tested cognitive abilities or socio-demographics. We test explanations for the Asian–white gap in academic effort and find that the gap can be further attributed to (i) cultural differences in beliefs regarding the connection between effort and achievement and (ii) immigration status. Finally, we highlight the potential psychological and social costs associated with Asian-American achievement success. PMID:24799702

  3. Advantageous effect of theanine intake on cognition.

    PubMed

    Tamano, Haruna; Fukura, Kotaro; Suzuki, Miki; Sakamoto, Kazuhiro; Yokogoshi, Hidehiko; Takeda, Atsushi

    2014-11-01

    Theanine, γ-glutamylethylamide, is one of the major amino acid components in green tea. On the basis of the preventive effect of theanine intake after weaning on stress-induced impairment of recognition memory, the advantageous effect of theanine intake on recognition memory was examined in young rats, which were fed water containing 0.3% theanine for 3 weeks after weaning. The rats were subjected to an object recognition test. Object recognition memory was maintained in theanine-administered rats 48 hours after the training, but not in the control rats. In vivo dentate gyrus long-term potentiation (LTP) was induced to a greater extent in theanine-administered rats than in the control rats. The levels of brain-derived neurotrophic factor and nerve growth factor in the hippocampus were significantly higher in theanine-administered rats than in the control rats. The present study indicates the advantageous effect of theanine intake after weaning on recognition memory. It is likely that theanine intake is advantageous to the development of hippocampal function after weaning. PMID:24621060

  4. Reconstruction algorithm for limited-angle diffraction tomography for microwave NDE

    SciTech Connect

    Paladhi, P. Roy; Klaser, J.; Tayebi, A.; Udpa, L.; Udpa, S.

    2014-02-18

    Microwave tomography is becoming a popular imaging modality in nondestructive evaluation and medicine. A commonly encountered challenge in tomography in general is that in many practical situations a full 360° angular access is not possible and with limited access, the quality of reconstructed image is compromised. This paper presents an approach for reconstruction with limited angular access in diffraction tomography. The algorithm takes advantage of redundancies in image Fourier space data obtained from diffracted field measurements and couples it to an error minimization technique using a constrained total variation (CTV) minimization. Initial results from simulated data have been presented here to validate the approach.

  5. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    SciTech Connect

    Huang, Peng; Andersson, Sean B.

    2014-06-15

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides.

  6. Flap reconstruction of the knee: A review of current concepts and a proposed algorithm

    PubMed Central

    Gravvanis, Andreas; Kyriakopoulos, Antonios; Kateros, Konstantinos; Tsoutsos, Dimosthenis

    2014-01-01

    A literature search focusing on flap knee reconstruction revealed much controversy regarding the optimal management of around-the-knee defects. Muscle flaps are the preferred option, mainly in infected wounds. Perforator flaps have recently been introduced in knee coverage with significant advantages due to low donor morbidity and long pedicles with a wide arc of rotation. In the case of a free flap, the choice of recipient vessels is the key point of the reconstruction. Taking the published experience into account, a reconstructive algorithm is proposed according to the size and location of the wound, the presence of infection and/or a 3-dimensional defect. PMID:25405089

  7. Spectral Classification of Similar Materials using the Tetracorder Algorithm: The Calcite-Epidote-Chlorite Problem

    NASA Technical Reports Server (NTRS)

    Dalton, J. Brad; Bove, Dana; Mladinich, Carol; Clark, Roger; Rockwell, Barnaby; Swayze, Gregg; King, Trude; Church, Stanley

    2001-01-01

    Recent work on automated spectral classification algorithms has sought to distinguish ever-more similar materials. From modest beginnings separating shade, soil, rock and vegetation to ambitious attempts to discriminate mineral types and specific plant species, the trend seems to be toward using increasingly subtle spectral differences to perform the classification. Rule-based expert systems exploiting the underlying physics of spectroscopy, such as the US Geological Survey Tetracorder system, are now taking advantage of the high spectral resolution and dimensionality of current imaging spectrometer designs to discriminate spectrally similar materials. The current paper details recent efforts to discriminate three minerals having absorptions centered at the same wavelength, with encouraging results.

  8. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    PubMed Central

    Huang, Peng; Andersson, Sean B.

    2014-01-01

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides. PMID:24985865

  9. Efficient unstructured mesh generation by means of Delaunay triangulation and Bowyer-Watson algorithm

    SciTech Connect

    Rebay, S. )

    1993-05-01

    This work is devoted to the description of an efficient unstructured mesh generation method entirely based on the Delaunay triangulation. The distinctive characteristic of the proposed method is that point positions and connections are computed simultaneously. This result is achieved by taking advantage of the sequential way in which the Bowyer-Watson algorithm computes the Delaunay triangulation. Two methods are proposed which have great geometrical flexibility, in that they allow us to treat domains of arbitrary shape and topology and to generate arbitrarily nonuniform meshes. The methods are computationally efficient and are applicable both in two and three dimensions. 11 refs., 20 figs., 1 tab.
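
    The paper builds its own generator around the Bowyer-Watson insertion step; purely as an illustration of the sequential insertion idea it exploits, the sketch below uses SciPy's Qhull-based incremental Delaunay mode (an assumed stand-in, not the authors' code) to grow a triangulation by adding points in batches:

      import numpy as np
      from scipy.spatial import Delaunay

      # Sketch of incremental point insertion, the mechanism the Bowyer-Watson
      # algorithm exploits: start from a coarse boundary point set and add
      # interior points in batches, re-using the existing triangulation.
      rng = np.random.default_rng(0)
      boundary = np.array([[0, 0], [1, 0], [1, 1], [0, 1],
                           [0.5, 0], [1, 0.5], [0.5, 1], [0, 0.5]], dtype=float)
      tri = Delaunay(boundary, incremental=True)

      for _ in range(5):                      # add interior points in batches
          new_pts = rng.random((20, 2))
          tri.add_points(new_pts)             # Qhull updates the triangulation
          print(len(tri.points), "points,", len(tri.simplices), "triangles")

      tri.close()                             # finalize the incremental mode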

  10. A review of ocean chlorophyll algorithms and primary production models

    NASA Astrophysics Data System (ADS)

    Li, Jingwen; Zhou, Song; Lv, Nan

    2015-12-01

    This paper mainly introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production based on ocean chlorophyll concentration. Through a comparison of the five inversion algorithms, it sums up their advantages and disadvantages, and briefly analyzes trends in ocean primary production modelling.

  11. Modified Golomb Algorithm for Computing Unit Fraction Expansions

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2004-01-01

    In this note, a modified Golomb algorithm for computing unit fraction expansions is presented. This algorithm has the advantage that the maximal denominators involved in the expansions will not exceed those computed by the original algorithm. In fact, the differences between the maximal denominators or the number of terms obtained by these two…
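
    The modified variant described in the note is not reproduced here; for orientation, the sketch below implements the original Golomb algorithm it builds on, which repeatedly splits p/q using the modular inverse of p modulo q so that every denominator stays at or below q(q-1):

      from math import gcd

      def golomb_unit_fractions(p, q):
          """Original Golomb algorithm: expand p/q (0 < p < q, gcd(p, q) = 1)
          as a sum of distinct unit fractions 1/d with every d <= q*(q-1)."""
          assert 0 < p < q and gcd(p, q) == 1
          denoms = []
          while p != 1:
              p_inv = pow(p, -1, q)         # p * p_inv = 1 (mod q); Python 3.8+
              denoms.append(p_inv * q)      # contributes the term 1/(p_inv*q)
              p, q = (p * p_inv - 1) // q, p_inv   # remaining fraction m/p_inv
          denoms.append(q)                  # p == 1: final term 1/q
          return sorted(denoms)

      # 5/121 = 1/25 + 1/1225 + 1/3577 + 1/7081 + 1/11737
      print(golomb_unit_fractions(5, 121))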

  12. A Note on Evolutionary Algorithms and Its Applications

    ERIC Educational Resources Information Center

    Bhargava, Shifali

    2013-01-01

    This paper introduces evolutionary algorithms with its applications in multi-objective optimization. Here elitist and non-elitist multiobjective evolutionary algorithms are discussed with their advantages and disadvantages. We also discuss constrained multiobjective evolutionary algorithms and their applications in various areas.

  13. An Evolutionary Algorithm with Double-Level Archives for Multiobjective Optimization.

    PubMed

    Chen, Ni; Chen, Wei-Neng; Gong, Yue-Jiao; Zhan, Zhi-Hui; Zhang, Jun; Li, Yun; Tan, Yu-Song

    2015-09-01

    Existing multiobjective evolutionary algorithms (MOEAs) tackle a multiobjective problem either as a whole or as several decomposed single-objective sub-problems. Though the problem decomposition approach generally converges faster through optimizing all the sub-problems simultaneously, two issues are not fully addressed: the distribution of solutions often depends on the a priori problem decomposition, and population diversity among sub-problems is lacking. In this paper, a MOEA with double-level archives is developed. The algorithm takes advantage of both the multiobjective-problem-level and the sub-problem-level approaches by introducing two types of archives, i.e., the global archive and the sub-archive. In each generation, self-reproduction with the global archive and cross-reproduction between the global archive and sub-archives both breed new individuals. The global archive and sub-archives communicate through cross-reproduction, and are updated using the reproduced individuals. Such a framework thus retains fast convergence, and at the same time handles solution distribution along the Pareto front (PF) with scalability. To test the performance of the proposed algorithm, experiments are conducted on both the widely used benchmarks and a set of truly disconnected problems. The results verify that, compared with state-of-the-art MOEAs, the proposed algorithm offers competitive advantages in distance to the PF, solution coverage, and search speed. PMID:25343775

  14. Taking centre stage...

    NASA Astrophysics Data System (ADS)

    1998-11-01

    HAMLET (Highly Automated Multimedia Light Enhanced Theatre) was the star performance at the recent finals of the `Young Engineer for Britain' competition, held at the Commonwealth Institute in London. This state-of-the-art computer-controlled theatre lighting system won the title `Young Engineers for Britain 1998' for David Kelnar, Jonathan Scott, Ramsay Waller and John Wyllie (all aged 16) from Merchiston Castle School, Edinburgh. HAMLET replaces conventional manually-operated controls with a special computer program, and should find use in the thousands of small theatres, schools and amateur drama productions that operate with limited resources and without specialist expertise. The four students received a £2500 prize between them, along with £2500 for their school, and in addition they were invited to spend a special day with the Royal Engineers. A project designed to improve car locking systems enabled Ian Robinson of Durham University to take the `Working in industry award' worth £1000. He was also given the opportunity of a day at sea with the Royal Navy. Other prizewinners with their projects included: Jun Baba of Bloxham School, Banbury (a cardboard armchair which converts into a desk and chair); Kobika Sritharan and Gemma Hancock, Bancroft's School, Essex (a rain warning system for a washing line); and Alistair Clarke, Sam James and Ruth Jenkins, Bishop of Llandaff High School, Cardiff (a mechanism to open and close the retractable roof of the Millennium Stadium in Cardiff). The two principal national sponsors of the competition, which is organized by the Engineering Council, are Lloyd's Register and GEC. Industrial companies, professional engineering institutions and educational bodies also provided national and regional prizes and support. During this year's finals, various additional activities took place, allowing the students to surf the Internet and navigate individual engineering websites on a network of computers. They also visited the

  15. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, a parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which the Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
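
    The thesis's parallel, Schur-complement-based solver is not reproduced here; as a minimal serial illustration of the preconditioned conjugate gradient kernel it relies on, the following sketch solves a small symmetric positive-definite system with a Jacobi (diagonal) preconditioner:

      import numpy as np

      def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
          """Minimal preconditioned conjugate gradient for SPD systems A x = b.
          M_inv(r) applies the preconditioner (e.g. a diagonal inverse)."""
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv(r)
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # Tiny SPD test problem (1-D Laplacian) with a Jacobi preconditioner.
      n = 50
      A = (np.diag(2.0 * np.ones(n)) + np.diag(-1.0 * np.ones(n - 1), 1)
           + np.diag(-1.0 * np.ones(n - 1), -1))
      b = np.ones(n)
      x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
      print("residual:", np.linalg.norm(A @ x - b))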

  16. Enforced Clonality Confers a Fitness Advantage

    PubMed Central

    Martínková, Jana; Klimešová, Jitka

    2016-01-01

    In largely clonal plants, splitting of a maternal plant into potentially independent plants (ramets) is usually spontaneous; however, such fragmentation also occurs in otherwise non-clonal species due to application of external force. This process might play an important yet largely overlooked role for otherwise non-clonal plants by providing a mechanism to regenerate after disturbance. Here, in a 5-year garden experiment on two short-lived, otherwise non-clonal species, Barbarea vulgaris and Barbarea stricta, we compared the fitness of plants fragmented by simulated disturbance (“enforced ramets”) both with plants that contemporaneously originate in seed and with individuals unscathed by the disturbance event. Because the ability to regrow from fragments is related to plant age and stored reserves, we compared the effects of disturbance applied during three different ontogenetic stages of the plants. In B. vulgaris, enforced ramet fitness was higher than the measured fitness values of both uninjured plants and plants established from seed after the disturbance. This advantage decreased with increasing plant age at the time of fragmentation. In B. stricta, enforced ramet fitness was lower than or similar to fitness of uninjured plants and plants grown from seed. Our results likely reflect the habitat preferences of the study species, as B. vulgaris occurs in anthropogenic, disturbed habitats where body fragmentation is more probable and enforced clonality thus more advantageous than in the more natural habitats preferred by B. stricta. Generalizing from our results, we see that increased fitness yielded by enforced clonality would confer an evolutionary advantage in the face of disturbance, especially in habitats where a seed bank has not been formed, e.g., during invasion or colonization. Our results thus imply that enforced clonality should be taken into account when studying population dynamics and life strategies of otherwise non-clonal species in disturbed

  17. Advantages of semiconductor CZT for medical imaging

    NASA Astrophysics Data System (ADS)

    Wagenaar, Douglas J.; Parnham, Kevin; Sundal, Bjorn; Maehlum, Gunnar; Chowdhury, Samir; Meier, Dirk; Vandehei, Thor; Szawlowski, Marek; Patt, Bradley E.

    2007-09-01

    Cadmium zinc telluride (CdZnTe, or CZT) is a room-temperature semiconductor radiation detector that has been developed in recent years for a variety of applications. CZT has been investigated for many potential uses in medical imaging, especially in the field of single photon emission computed tomography (SPECT). CZT can also be used in positron emission tomography (PET) as well as photon-counting and integration-mode x-ray radiography and computed tomography (CT). The principal advantages of CZT are 1) direct conversion of x-ray or gamma-ray energy into electron-hole pairs; 2) energy resolution; 3) high spatial resolution and hence high space-bandwidth product; 4) room temperature operation, stable performance, high density, and small volume; 5) depth-of-interaction (DOI) available through signal processing. These advantages will be described in detail with examples from our own CZT systems. The ability to operate at room temperature, combined with DOI and very small pixels, make the use of multiple, stationary CZT "mini-gamma cameras" a realistic alternative to today's large Anger-type cameras that require motion to obtain tomographic sampling. The compatibility of CZT with Magnetic Resonance Imaging (MRI)-fields is demonstrated for a new type of multi-modality medical imaging, namely SPECT/MRI. For pre-clinical (i.e., laboratory animal) imaging, the advantages of CZT lie in spatial and energy resolution, small volume, automated quality control, and the potential for DOI for parallax removal in pinhole imaging. For clinical imaging, the imaging of radiographically dense breasts with CZT enables scatter rejection and hence improved contrast. Examples of clinical breast images with a dual-head CZT system are shown.

  18. A Support Vector Machine Blind Equalization Algorithm Based on Immune Clone Algorithm

    NASA Astrophysics Data System (ADS)

    Yecai, Guo; Rui, Ding

    To address the effect of parameter selection on the application of the support vector machine (SVM) in blind equalization, an SVM constant modulus blind equalization algorithm based on the immune clone selection algorithm (CSA-SVM-CMA) is proposed. In the proposed algorithm, the immune clone algorithm is used to optimize the parameters of the SVM, owing to its advantages of preventing premature convergence, avoiding local optima, and converging quickly. The proposed algorithm improves the parameter selection efficiency of the SVM constant modulus blind equalization algorithm (SVM-CMA) and overcomes the drawback of manually set parameters. Accordingly, the CSA-SVM-CMA has a faster convergence rate and smaller mean square error than the SVM-CMA. Computer simulations in underwater acoustic channels have proved the validity of the algorithm.

  19. The TOMS V9 Algorithm for OMPS Nadir Mapper Total Ozone: An Enhanced Design That Ensures Data Continuity

    NASA Astrophysics Data System (ADS)

    Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.

    2015-12-01

    The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and re-processing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 mainly involve additional information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced both by improvements in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and by limitations in the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is to also apply it to these discrete-band spectrometers, which date back to 1978. To achieve continuity for all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high-accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite the intended design to use limited wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content; for example, SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and will mention potential improvements which aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.

  20. Taking Care of Your Vision

    MedlinePlus

    ... are important parts of keeping your peepers perfect. Vision Basics: One of the best things you can ...

  1. Why Take a Prenatal Supplement?

    MedlinePlus

    Why take a prenatal supplement? During pregnancy, your needs increase ...

  2. Ghost ileostomy: real and potential advantages.

    PubMed

    Miccini, Michelangelo; Amore Bonapasta, Stefano; Gregori, Matteo; Barillari, Paolo; Tocchi, Adriano

    2010-10-01

    Loop ileostomy is created to minimize the clinical impact of colorectal anastomotic leak. However, many complications may be associated with the presence of an ileostomy and with its reversal. Moreover, patients hardly accept the quality of life resulting from an ileostomy. We describe a simple technique (ghost ileostomy) that combines all the advantages of a disposable ileostomy without entailing its complications in patients submitted to low rectal resection. In case of an uneventful postoperative course, the ghost ileostomy prevents all complications related to a defunctioning ileostomy. At the same time, in case of anastomotic leakage, the ghost ileostomy is easily and safely converted into a defunctioning ileostomy. PMID:20887836

  3. Establishing a competitive advantage through quality management.

    PubMed

    George, R J

    1996-06-01

    The successful dentist of the future will establish a sustainable competitive advantage in the marketplace by recognising that patients undergoing dental treatment cannot see the result before purchase, and that they therefore look for signs of service quality to reduce uncertainty. Thus the successful dentist will implement a quality programme that recognises not only that quality is defined by meeting patients' needs and expectations, but also that quality service is fundamental to successful business strategy. Finally, the successful dentist of the future will realise that the pursuit of quality is a never-ending process which requires leadership by example. PMID:8710317

  4. Using information networks for competitive advantage.

    PubMed

    Rothenberg, R L

    1995-01-01

    Although the healthcare "information superhighway" has received considerable attention, the use of information technology to create a sustainable competitive advantage is not new to other industries. Economic survival in the new world of managed care may depend on a healthcare delivery system's ability to use network-based communications technologies to differentiate itself in the market, especially through cost savings and demonstration of desirable outcomes. The adaptability of these technologies can help position healthcare organizations to break the paradigms of the past and thrive in a market environment that stresses coordination, efficiency, and quality in various settings. PMID:10146130

  5. Parallelization of the Red-Black Algorithm on Solving the Second-Order PN Transport Equation with the Hybrid Finite Element Method

    SciTech Connect

    Yaqi Wang; Cristian Rabiti; Giuseppe Palmiotti

    2011-06-01

    The Red-Black algorithm has been successfully applied to solving the second-order parity transport equation with the PN approximation in angle and the Hybrid Finite Element Method (HFEM) in space, i.e., the Variational Nodal Method (VNM) [1,2,3,4,5]. Any transport solution technique, including the Red-Black algorithm, needs to be parallelized in order to take advantage of supercomputers with multiple processors for advanced modeling and simulation. To our knowledge, one attempt [6] was made to parallelize it, but it was devoted only to the z-axis planes in three-dimensional calculations. General parallelization of the Red-Black algorithm with spatial domain decomposition has not been reported in the literature. In this summary, we present our implementation of the parallelization of the Red-Black algorithm and its efficiency results.
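
    The VNM transport solver itself is not sketched here; as a generic illustration of why a red-black ordering parallelizes, the following code performs red-black Gauss-Seidel sweeps for Laplace's equation, in which all points of one colour can be updated concurrently because they depend only on points of the other colour:

      import numpy as np

      def red_black_sweep(u):
          """One red-black Gauss-Seidel sweep for Laplace's equation on a grid
          with fixed boundary values.  All 'red' points (i+j even) depend only
          on 'black' points and vice versa, so each half-sweep can be done in
          parallel -- the property a spatial domain decomposition exploits."""
          for parity in (0, 1):                     # red half-sweep, then black
              for i in range(1, u.shape[0] - 1):
                  for j in range(1, u.shape[1] - 1):
                      if (i + j) % 2 == parity:
                          u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                            + u[i, j - 1] + u[i, j + 1])
          return u

      u = np.zeros((33, 33))
      u[0, :] = 1.0                                 # heated top edge
      for _ in range(500):
          red_black_sweep(u)
      print("centre value:", u[16, 16])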

  6. Learning tensegrity locomotion using open-loop control signals and coevolutionary algorithms.

    PubMed

    Iscen, Atil; Caluwaerts, Ken; Bruce, Jonathan; Agogino, Adrian; SunSpiral, Vytas; Tumer, Kagan

    2015-01-01

    Soft robots offer many advantages over traditional rigid robots. However, soft robots can be difficult to control with standard control methods. Fortunately, evolutionary algorithms can offer an elegant solution to this problem. Instead of creating controls to handle the intricate dynamics of these robots, we can simply evolve the controls using a simulation to provide an evaluation function. In this article, we show how such a control paradigm can be applied to an emerging field within soft robotics: robots based on tensegrity structures. We take the model of the Spherical Underactuated Planetary Exploration Robot ball (SUPERball), an icosahedron tensegrity robot under production at NASA Ames Research Center, develop a rolling locomotion algorithm, and study the learned behavior using an accurate model of the SUPERball simulated in the NASA Tensegrity Robotics Toolkit. We first present the historical-average fitness-shaping algorithm for coevolutionary algorithms to speed up learning while favoring robustness over optimality. Second, we use a distributed control approach by coevolving open-loop control signals for each controller. Being simple and distributed, open-loop controllers can be readily implemented on SUPERball hardware without the need for sensor information or precise coordination. We analyze signals of different complexities and frequencies. Among the learned policies, we take one of the best and use it to analyze different aspects of the rolling gait, such as lengths, tensions, and energy consumption. We also discuss the correlation between the signals controlling different parts of the tensegrity robot. PMID:25951199

  7. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization formation scheme is proposed. It restrains bad performers effectively based on consideration of the global experience of the evaluator, and it evaluates the direct trust relation between two grid nodes accurately by rationally consulting the previous trust value. It also consults and improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake transaction attacks and badmouth attacks effectively. It provides a clear advantage in the design of a VO infrastructure.

  8. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile ad hoc networks play an increasingly important part in disaster relief, military battlefields and scientific exploration. However, routing difficulties are increasingly prominent due to their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It carefully designs the optimal-route calculation method used by the protocol and the transmission mechanism for communication packets. By adding QoS constraints to the cuckoo search route calculation, the optimal routes found conform to specified bandwidth and time-delay requirements, and a balance is obtained among computational cost, bandwidth and time delay. NS2 simulations of the protocol under three scenarios validate the feasibility and effectiveness of the CSAODV protocol. The results show that the CSAODV routing protocol adapts to changes in network topology better than the AODV protocol, effectively improving the packet delivery fraction, reducing the transmission delay of the network, reducing the extra burden placed on the network by control information, and improving the routing efficiency of the network.

  9. Geostationary Fire Detection with the Wildfire Automated Biomass Burning Algorithm

    NASA Astrophysics Data System (ADS)

    Hoffman, J.; Schmidt, C. C.; Brunner, J. C.; Prins, E. M.

    2010-12-01

    The Wild Fire Automated Biomass Burning Algorithm (WF_ABBA), developed at the Cooperative Institute for Meteorological Satellite Studies (CIMSS), has a long legacy of operational wildfire detection and characterization. In recent years, applications of geostationary fire detection and characterization data have been expanding. Fires are detected with a contextual algorithm and when the fires meet certain conditions the instantaneous fire size, temperature, and radiative power are calculated and provided in user products. The WF_ABBA has been applied to data from Geostationary Operational Environmental Satellite (GOES)-8 through 15, Meteosat-8/-9, and Multifunction Transport Satellite (MTSAT)-1R/-2. WF_ABBA is also being developed for the upcoming platforms like GOES-R Advanced Baseline Imager (ABI) and other geostationary satellites. Development of the WF_ABBA for GOES-R ABI has focused on adapting the legacy algorithm to the new satellite system, enhancing its capabilities to take advantage of the improvements available from ABI, and addressing user needs. By its nature as a subpixel feature, observation of fire is extraordinarily sensitive to the characteristics of the sensor and this has been a fundamental part of the GOES-R WF_ABBA development work.

  10. Efficient Implementation of the Backpropagation Algorithm in FPGAs and Microcontrollers.

    PubMed

    Ortega-Zamorano, Francisco; Jerez, Jose M; Urda Munoz, Daniel; Luque-Baena, Rafael M; Franco, Leonardo

    2016-09-01

    The well-known backpropagation learning algorithm is implemented on a field-programmable gate array (FPGA) board and a microcontroller, focusing on obtaining efficient implementations in terms of resource usage and computational speed. The algorithm was implemented in both cases using a training/validation/testing scheme in order to avoid overfitting problems. For the FPGA implementation, a new neuron representation that drastically reduces resource usage was introduced by combining the input and first hidden layer units in a single module. Further, a time-division multiplexing scheme was implemented for carrying out product computations, taking advantage of the built-in digital signal processor cores. In both implementations, the floating-point data type representation normally used in a personal computer (PC) has been changed to a more efficient one based on a fixed-point scheme, reducing system memory variable usage and leading to an increase in computation speed. The results show that the proposed modifications produced a clear increase in computation speed in comparison with the standard PC-based implementation, demonstrating the usefulness of the intrinsic parallelism of FPGAs in neurocomputational tasks and the suitability of both implementations of the algorithm for application to real-world problems. PMID:26277004
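
    A minimal sketch of the fixed-point idea (the Q-format below is hypothetical, not the word lengths used in the paper): weights and activations are scaled to integers, and each product is rescaled by a right shift, which is what makes the arithmetic cheap on an FPGA or microcontroller:

      import numpy as np

      FRAC_BITS = 12                     # hypothetical Q-format: 12 fractional bits
      SCALE = 1 << FRAC_BITS

      def to_fixed(x):
          """Quantize floats to signed fixed-point integers (round to nearest)."""
          return np.round(np.asarray(x) * SCALE).astype(np.int32)

      def fixed_mul(a, b):
          """Multiply fixed-point numbers, rescaling the double-width product."""
          return (a.astype(np.int64) * b) >> FRAC_BITS

      w = to_fixed([0.75, -0.125, 0.5])
      x = to_fixed([1.0, 2.0, -0.25])
      acc = fixed_mul(w, x).sum()        # fixed-point dot product (a neuron's sum)
      print(acc / SCALE, "vs float", np.dot([0.75, -0.125, 0.5], [1.0, 2.0, -0.25]))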

  11. A Resampling Based Clustering Algorithm for Replicated Gene Expression Data.

    PubMed

    Li, Han; Li, Chun; Hu, Jie; Fan, Xiaodan

    2015-01-01

    In gene expression data analysis, clustering is a fruitful exploratory technique to reveal the underlying molecular mechanism by identifying groups of co-expressed genes. To reduce the noise, usually multiple experimental replicates are performed. An integrative analysis of the full replicate data, instead of reducing the data to the mean profile, carries the promise of yielding more precise and robust clusters. In this paper, we propose a novel resampling based clustering algorithm for genes with replicated expression measurements. Assuming those replicates are exchangeable, we formulate the problem in the bootstrap framework, and aim to infer the consensus clustering based on the bootstrap samples of replicates. In our approach, we adopt the mixed effect model to accommodate the heterogeneous variances and implement a quasi-MCMC algorithm to conduct statistical inference. Experiments demonstrate that by taking advantage of the full replicate data, our algorithm produces more reliable clusters and has robust performance in diverse scenarios, especially when the data is subject to multiple sources of variance. PMID:26671802
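
    The paper's mixed-effect model and quasi-MCMC inference are not reproduced here; the following much-simplified sketch only illustrates the resampling idea -- bootstrap the replicates, cluster each resampled profile, and accumulate a consensus co-clustering matrix -- with scikit-learn's KMeans standing in as an arbitrary base clusterer (an assumption, not the paper's method):

      import numpy as np
      from sklearn.cluster import KMeans

      def consensus_clusters(X, n_clusters=3, n_boot=100, seed=0):
          """X has shape (genes, replicates).  Each bootstrap round resamples
          the replicates (columns) with replacement, clusters the genes on the
          resampled profile means, and accumulates a co-clustering matrix."""
          rng = np.random.default_rng(seed)
          genes, reps = X.shape
          consensus = np.zeros((genes, genes))
          for _ in range(n_boot):
              cols = rng.integers(0, reps, size=reps)   # bootstrap the replicates
              profile = X[:, cols].mean(axis=1, keepdims=True)
              labels = KMeans(n_clusters, n_init=10,
                              random_state=0).fit_predict(profile)
              consensus += labels[:, None] == labels[None, :]
          return consensus / n_boot                     # co-clustering frequency

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(m, 1.0, size=(20, 4)) for m in (0, 5, 10)])
      print(consensus_clusters(X)[:5, :5].round(2))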

  12. An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies

    PubMed Central

    Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia

    2015-01-01

    Differential evolution algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, there is a shortcoming of premature convergence in standard DE, especially in DE/best/1/bin. In order to take advantage of direction guidance information of the best individual of DE/best/1/bin and avoid getting into local trap, based on multiple mutation strategies, an enhanced differential evolution algorithm, named EDE, is proposed in this paper. In the EDE algorithm, an initialization technique, opposition-based learning initialization for improving the initial solution quality, and a new combined mutation strategy composed of DE/current/1/bin together with DE/pbest/bin/1 for the sake of accelerating standard DE and preventing DE from clustering around the global best individual, as well as a perturbation scheme for further avoiding premature convergence, are integrated. In addition, we also introduce two linear time-varying functions, which are used to decide which solution search equation is chosen at the phases of mutation and perturbation, respectively. Experimental results tested on twenty-five benchmark functions show that EDE is far better than the standard DE. In further comparisons, EDE is compared with other five state-of-the-art approaches and related results show that EDE is still superior to or at least equal to these methods on most of benchmark functions. PMID:26609304
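
    For reference, a minimal sketch of the standard DE/best/1/bin operator that EDE starts from (the opposition-based initialization, combined mutation and perturbation schemes of EDE are not shown):

      import numpy as np

      def de_best_1_bin(pop, fitness, F=0.5, CR=0.9, rng=None):
          """One generation of standard DE/best/1/bin, the baseline the EDE
          algorithm modifies.  Minimizes `fitness` over a real-valued population."""
          if rng is None:
              rng = np.random.default_rng(0)
          n, dim = pop.shape
          best = pop[np.argmin([fitness(x) for x in pop])]
          new_pop = pop.copy()
          for i in range(n):
              r1, r2 = rng.choice([k for k in range(n) if k != i], 2, replace=False)
              mutant = best + F * (pop[r1] - pop[r2])        # DE/best/1 mutation
              cross = rng.random(dim) < CR
              cross[rng.integers(dim)] = True                # keep at least one gene
              trial = np.where(cross, mutant, pop[i])        # binomial crossover
              if fitness(trial) <= fitness(pop[i]):          # greedy selection
                  new_pop[i] = trial
          return new_pop

      sphere = lambda x: float(np.sum(x ** 2))
      pop = np.random.default_rng(1).uniform(-5, 5, size=(30, 10))
      for _ in range(200):
          pop = de_best_1_bin(pop, sphere)
      print("best value:", min(sphere(x) for x in pop))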

  13. Computational Discovery of Materials Using the Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Avendaño-Franco, Guillermo; Romero, Aldo

    Our current ability to model physical phenomena accurately, the increase in computational power, and better algorithms are the driving forces behind the computational discovery and design of novel materials, allowing for virtual characterization before their realization in the laboratory. We present the implementation of a novel firefly algorithm, a population-based algorithm for global optimization used to search the structure/composition space. This novel, computation-intensive approach naturally takes advantage of concurrency and targeted exploration while still keeping enough diversity. We apply the new method to both periodic and non-periodic structures, and we present the implementation challenges and the solutions adopted to improve efficiency. The implementation makes use of computational materials databases and network analysis to optimize the search and to gain insight into the geometric structure of local minima on the energy landscape. The method has been implemented in our software PyChemia, an open-source package for materials discovery. We acknowledge the support of DMREF-NSF 1434897 and the Donors of the American Chemical Society Petroleum Research Fund for partial support of this research under Contract 54075-ND10.
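
    A plain textbook firefly update (not the PyChemia implementation) looks like the following: each candidate moves toward every better candidate with an attractiveness that decays with distance, plus a small random walk:

      import numpy as np

      def firefly_minimize(f, dim=2, n=25, iters=200, alpha=0.2, beta0=1.0,
                           gamma=1.0, seed=0):
          """Plain firefly algorithm: each firefly moves toward every brighter
          (lower-objective) firefly with attractiveness beta0*exp(-gamma*r^2),
          plus a small random walk scaled by alpha."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, size=(n, dim))
          for _ in range(iters):
              val = np.array([f(xi) for xi in x])
              for i in range(n):
                  for j in range(n):
                      if val[j] < val[i]:                    # j is brighter
                          r2 = np.sum((x[i] - x[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
              alpha *= 0.98                                  # cool the random walk
          best = min(range(n), key=lambda i: f(x[i]))
          return x[best], f(x[best])

      print(firefly_minimize(lambda v: float(np.sum(v ** 2))))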

  14. Complexity, Competitive Intelligence and the "First Mover" Advantage

    NASA Astrophysics Data System (ADS)

    Fellman, Philip Vos; Post, Jonathan Vos

    In the following paper we explore some of the ways in which competitive intelligence and game theory can be employed to assist firms in deciding whether or not to undertake international market diversification and whether or not there is an advantage to being a market leader or a market follower overseas. In attempting to answer these questions, we take a somewhat unconventional approach. We first examine how some of the most recent advances in the physical and biological sciences can contribute to the ways in which we understand how firms behave. Subsequently, we propose a formal methodology for competitive intelligence. While space considerations here do not allow for a complete game-theoretic treatment of competitive intelligence and its use with respect to understanding first and second mover advantage in firm internationalization, that treatment can be found in its entirety in the on-line proceedings of the 6th International Conference on Complex Systems at http://knowledgetoday.org/wiki/indec.php/ICCS06/89

  15. Complexity, Competitive Intelligence and the "First Mover" Advantage

    NASA Astrophysics Data System (ADS)

    Fellman, Philip Vos; Post, Jonathan Vos

    In the following paper we explore some of the ways in which competitive intelligence and game theory can be employed to assist firms in deciding whether or not to undertake international market diversification and whether or not there is an advantage to being a market leader or a market follower overseas. In attempting to answer these questions, we take a somewhat unconventional approach. We first examine how some of the most recent advances in the physical and biological sciences can contribute to the ways in which we understand how firms behave. Subsequently, we propose a formal methodology for competitive intelligence. While space considerations here do not allow for a complete game-theoretic treatment of competitive intelligence and its use with respect to understanding first and second mover advantage in firm internationalization, that treatment can be found in its entirety in the on-line proceedings of the 6th International Conference on Complex Systems at http://knowledgetoday.org/wiki/indec.php/ICCS06/89.

  16. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new optimization strategy takes advantage of the unique feature that critical paths are reserved in the process of evolving adaptive networks in the Physarum-inspired mathematical model (PMM). The optimized algorithms, denoted as PMACO algorithms, can enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than the traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP. PMID:24613939

  17. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis, only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely code-word and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  18. Enhanced Deep Blue aerosol retrieval algorithm: The second generation

    NASA Astrophysics Data System (ADS)

    Hsu, N. C.; Jeong, M.-J.; Bettenhausen, C.; Sayer, A. M.; Hansell, R.; Seftor, C. S.; Huang, J.; Tsay, S.-C.

    2013-08-01

    The aerosol products retrieved using the Moderate Resolution Imaging Spectroradiometer (MODIS) collection 5.1 Deep Blue algorithm have provided useful information about aerosol properties over bright-reflecting land surfaces, such as desert, semiarid, and urban regions. However, many components of the C5.1 retrieval algorithm needed to be improved; for example, the use of a static surface database to estimate surface reflectances. This is particularly important over regions of mixed vegetated and nonvegetated surfaces, which may undergo strong seasonal changes in land cover. In order to address this issue, we develop a hybrid approach, which takes advantage of the combination of precalculated surface reflectance database and normalized difference vegetation index in determining the surface reflectance for aerosol retrievals. As a result, the spatial coverage of aerosol data generated by the enhanced Deep Blue algorithm has been extended from the arid and semiarid regions to the entire land areas. In this paper, the changes made in the enhanced Deep Blue algorithm regarding the surface reflectance estimation, aerosol model selection, and cloud screening schemes for producing the MODIS collection 6 aerosol products are discussed. A similar approach has also been applied to the algorithm that generates the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue products. Based upon our preliminary results of comparing the enhanced Deep Blue aerosol products with the Aerosol Robotic Network (AERONET) measurements, the expected error of the Deep Blue aerosol optical thickness (AOT) is estimated to be better than 0.05 + 20%. Using 10 AERONET sites with long-term time series, 79% of the best quality Deep Blue AOT values are found to fall within this expected error.

  19. 2015: Rural Medicare Advantage Enrollment Update.

    PubMed

    Finegan, Chance; Ullrich, Fred; Mueller, Keith

    2015-07-01

    Key Findings. (1) Rural enrollment in Medicare Advantage (MA) and other prepaid plans increased by 6.8 percent between March 2014 and March 2015 to 2.1 million members, or 21.2 percent of all rural residents eligible for Medicare. This compares to a national enrollment in MA and other prepaid plans of 31.1 percent (16.7 million) of enrollees. (2) Rural enrollment in Health Maintenance Organization (HMO) plans (including point-of-service, or POS, plans), Preferred Provider Organization (PPO) plans, and other pre-paid plans (including Medicare Cost and Program of All-Inclusive Care for the Elderly Plans) all increased by 5-13 percent. (3) Enrollment in private fee-for-service (PFFS) plans continued to decline (decreasing nationally by 15.8 percent and 12.1 percent in rural counties over the period March 2014-2015). Only eight states showed an increase in PFFS plan enrollment. Five states experienced decreases of 50 percent or more. (4) The five states with the highest percentages of rural beneficiaries enrolled in a Medicare Advantage plan are Minnesota (51.8 percent), Hawaii (39.4 percent), Pennsylvania (36.2 percent), Wisconsin (35.5 percent), and New York (31.5 percent). PMID:26793818

  20. SR-71 Taking Off

    NASA Technical Reports Server (NTRS)

    1990-01-01

    One of three U.S. Air Force SR-71 reconnaissance aircraft originally retired from operational service and loaned to NASA for a high-speed research program retracts its landing gear after taking off from NASA's Ames-Dryden Flight Research Facility (later Dryden Flight Research Center), Edwards, California, on a 1990 research flight. One of the SR-71As was later returned to the Air Force for active duty in 1995. Data from the SR-71 high-speed research program will be used to aid designers of future supersonic/hypersonic aircraft and propulsion systems. Two SR-71 aircraft have been used by NASA as testbeds for high-speed and high-altitude aeronautical research. The aircraft, an SR-71A and an SR-71B pilot trainer aircraft, have been based here at NASA's Dryden Flight Research Center, Edwards, California. They were transferred to NASA after the U.S. Air Force program was cancelled. As research platforms, the aircraft can cruise at Mach 3 for more than one hour. For thermal experiments, this can produce heat soak temperatures of over 600 degrees Fahrenheit (F). This operating environment makes these aircraft excellent platforms to carry out research and experiments in a variety of areas -- aerodynamics, propulsion, structures, thermal protection materials, high-speed and high-temperature instrumentation, atmospheric studies, and sonic boom characterization. The SR-71 was used in a program to study ways of reducing sonic booms or over pressures that are heard on the ground, much like sharp thunderclaps, when an aircraft exceeds the speed of sound. Data from this Sonic Boom Mitigation Study could eventually lead to aircraft designs that would reduce the 'peak' overpressures of sonic booms and minimize the startling effect they produce on the ground. One of the first major experiments to be flown in the NASA SR-71 program was a laser air data collection system. It used laser light instead of air pressure to produce airspeed and attitude reference data, such as angle of

  1. Cropping and noise resilient steganography algorithm using secret image sharing

    NASA Astrophysics Data System (ADS)

    Juarez-Sandoval, Oswaldo; Fierro-Radilla, Atoany; Espejel-Trujillo, Angelina; Nakano-Miyatake, Mariko; Perez-Meana, Hector

    2015-03-01

    This paper proposes an image steganography scheme, in which a secret image is hidden into a cover image using a secret image sharing (SIS) scheme. Taking advantage of the fault-tolerant property of the (k,n)-threshold SIS, in which the secret data can be recovered without any ambiguity from any k of the n shares (k≤n), the proposed steganography algorithm becomes resilient to cropping and impulsive noise contamination. Among the many SIS schemes proposed until now, Lin and Chan's scheme is selected as the SIS, due to its lossless recovery capability for a large amount of secret data. The proposed scheme is evaluated from several points of view, such as imperceptibility of the stegoimage with respect to its original cover image and robustness of the hidden data to cropping operations and impulsive noise contamination. The evaluation results show a high quality of the secret image extracted from the stegoimage even when it has suffered more than 20% cropping or high-density noise contamination.
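
    The record relies on a (k,n)-threshold secret sharing scheme. The sketch below is a generic Shamir-style construction over a small prime field, shown only to make the fault-tolerance property concrete; it is not Lin and Chan's lossless scheme used in the paper, and the field and function names are illustrative.

```python
import random

P = 257  # a prime just above the byte range; a toy field for illustration

def make_shares(secret_byte, k, n):
    """Split one secret byte into n shares; any k of them recover it (Shamir-style)."""
    coeffs = [secret_byte] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P) reconstructs the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123, k=3, n=5)
print(recover(shares[:3]))   # 123, recovered from any 3 of the 5 shares
```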

  2. Tightly Coupled Multiphysics Algorithm for Pebble Bed Reactors

    SciTech Connect

    HyeongKae Park; Dana Knoll; Derek Gaston; Richard Martineau

    2010-10-01

    We have developed a tightly coupled multiphysics simulation tool for the pebble-bed reactor (PBR) concept, a type of Very High-Temperature gas-cooled Reactor (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation Environment library, and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly with a Newton-based approach. Expensive Jacobian matrix formation is alleviated via the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to minimize Krylov iterations. Motivation for the work is provided via analysis and numerical experiments on simpler multiphysics reactor models. We then provide details of the physical models and numerical methods in PRONGHORN. Finally, PRONGHORN's algorithmic capability is demonstrated on a number of PBR test cases.
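
    The Jacobian-free Newton-Krylov idea mentioned above can be illustrated with SciPy's newton_krylov, which approximates Jacobian-vector products by finite differences of the residual instead of forming the Jacobian. The toy two-field system below only stands in for the coupled thermal-fluid/neutronics equations; it is not the PRONGHORN model.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy stand-in for a tightly coupled two-field problem on a 1-D grid:
#   -T''        = phi**2 + 1    (temperature driven by a "fission" source)
#   -phi'' + phi = T            (flux fed back by temperature)
# with zero Dirichlet boundary conditions; both fields are solved together.
N = 50
h = 1.0 / (N - 1)

def residual(u):
    T, phi = u[:N], u[N:]
    r = np.empty_like(u)
    r[1:N-1] = -(T[2:] - 2*T[1:-1] + T[:-2]) / h**2 - phi[1:-1]**2 - 1.0
    r[N+1:2*N-1] = -(phi[2:] - 2*phi[1:-1] + phi[:-2]) / h**2 + phi[1:-1] - T[1:-1]
    r[0], r[N-1] = T[0], T[N-1]          # boundary conditions
    r[N], r[2*N-1] = phi[0], phi[N-1]
    return r

# newton_krylov never forms the Jacobian explicitly: J*v is approximated by
# finite differences of `residual`, which is the essence of the JFNK approach.
solution = newton_krylov(residual, np.full(2 * N, 0.1), method='lgmres', f_tol=1e-9)
print("max |residual| =", np.abs(residual(solution)).max())
```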

  3. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E. ); Bohm, A.P.W. . Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id, which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics: the Fast Fourier Transform, whose characteristics are computational parallelism and data dependences between the butterfly shuffles.
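
    The butterfly structure and the data dependences mentioned for the FFT can be seen in a short recursive radix-2 transform. The sketch below is in Python rather than Id, purely to show where the parallelism and the dependences sit.

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT; the butterflies at each level are independent of one
    another, which is the parallelism a dataflow graph exposes (len(x) must be a
    power of 2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t            # butterfly: the only data dependence
        out[k + n // 2] = even[k] - t   # between the two half-size results
    return out

print([round(abs(v), 6) for v in fft([1, 0, 0, 0, 1, 0, 0, 0])])  # [2, 0, 2, 0, 2, 0, 2, 0]
```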

  4. [Biomarkers for liver fibrosis: advances, advantages and disadvantages].

    PubMed

    Cequera, A; García de León Méndez, M C

    2014-01-01

    Liver cirrhosis in Mexico is one of the most important causes of death in persons between the ages of 25 and 50 years. One of the reasons for therapeutic failure is the lack of knowledge about the molecular mechanisms that cause liver disorder and make it irreversible. One of its prevalent anatomical characteristics is an excessive deposition of fibrous tissue that takes different forms depending on etiology and disease stage. Liver biopsy, traditionally regarded as the gold standard of fibrosis staging, has been brought into question over the past decade, resulting in the proposal for developing non-invasive technologies based on different, but complementary, approaches: a biological one that takes the serum levels of products arising from the fibrosis into account, and a more physical one that evaluates scarring of the liver by methods such as ultrasound and magnetic resonance elastography; some of the methods were originally studied and validated in patients with hepatitis C. There is great interest in determining non-invasive markers for the diagnosis of liver fibrosis, since at present there is no panel or parameter efficient and reliable enough for diagnostic use. In this paper, we describe the biomarkers that are currently being used for studying liver fibrosis in humans, their advantages and disadvantages, as well as the implementation of new-generation technologies and the evaluation of their possible use in the diagnosis of fibrosis. PMID:24954541

  5. The Advantages of a Tapered Whisker

    PubMed Central

    Williams, Christopher M.; Kramer, Eric M.

    2010-01-01

    The role of facial vibrissae (whiskers) in the behavior of terrestrial mammals is principally as a supplement or substitute for short-distance vision. Each whisker in the array functions as a mechanical transducer, conveying forces applied along the shaft to mechanoreceptors in the follicle at the whisker base. Subsequent processing of mechanoreceptor output in the trigeminal nucleus and somatosensory cortex allows high accuracy discriminations of object distance, direction, and surface texture. The whiskers of terrestrial mammals are tapered and approximately circular in cross section. We characterize the taper of whiskers in nine mammal species, measure the mechanical deflection of isolated felid whiskers, and discuss the mechanics of a single whisker under static and oscillatory deflections. We argue that a tapered whisker provides some advantages for tactile perception (as compared to a hypothetical untapered whisker), and that this may explain why the taper has been preserved during the evolution of terrestrial mammals. PMID:20098714

  6. Provable quantum advantage in randomness processing.

    PubMed

    Dale, Howard; Jennings, David; Rudolph, Terry

    2015-01-01

    Quantum advantage is notoriously hard to find and even harder to prove. For example the class of functions computable with classical physics exactly coincides with the class computable quantum mechanically. It is strongly believed, but not proven, that quantum computing provides exponential speed-up for a range of problems, such as factoring. Here we address a computational scenario of randomness processing in which quantum theory provably yields, not only resource reduction over classical stochastic physics, but a strictly larger class of problems which can be solved. Beyond new foundational insights into the nature and malleability of randomness, and the distinction between quantum and classical information, these results also offer the potential of developing classically intractable simulations with currently accessible quantum technologies. PMID:26381816

  7. The advantages of a tapered whisker.

    PubMed

    Williams, Christopher M; Kramer, Eric M

    2010-01-01

    The role of facial vibrissae (whiskers) in the behavior of terrestrial mammals is principally as a supplement or substitute for short-distance vision. Each whisker in the array functions as a mechanical transducer, conveying forces applied along the shaft to mechanoreceptors in the follicle at the whisker base. Subsequent processing of mechanoreceptor output in the trigeminal nucleus and somatosensory cortex allows high accuracy discriminations of object distance, direction, and surface texture. The whiskers of terrestrial mammals are tapered and approximately circular in cross section. We characterize the taper of whiskers in nine mammal species, measure the mechanical deflection of isolated felid whiskers, and discuss the mechanics of a single whisker under static and oscillatory deflections. We argue that a tapered whisker provides some advantages for tactile perception (as compared to a hypothetical untapered whisker), and that this may explain why the taper has been preserved during the evolution of terrestrial mammals. PMID:20098714

  8. Advantages and Challenges of Superconducting Accelerators

    NASA Astrophysics Data System (ADS)

    Krischel, Detlef

    After a short review of the history toward high-energy superconducting (SC) accelerators for ion beam therapy (IBT), an overview is given of material properties and technical developments that enable the use of SC components in a medical accelerator for full-body cancer treatment. The design concept and the assembly of a commercially available SC cyclotron for proton therapy (PT) are described and the potential advantages of applying superconductivity are assessed. The discussion includes the first years of operating experience with regard to cryogenic and magnetic performance, automated beam control, and maintenance aspects. An outlook is given on alternative machine concepts for protons only or for heavier ions. Finally, it is discussed whether the application of superconductivity might be expanded in the future to a broader range of subsystems of clinical IBT accelerators, such as SC magnets for transfer beam lines or gantries.

  9. Ultraspectral imaging and the snapshot advantage

    NASA Astrophysics Data System (ADS)

    Kudenov, Michael W.; Gupta Roy, Subharup; Pantalone, Brett; Maione, Bryan

    2015-05-01

    Ultraspectral sensing has been investigated as a way to resolve terrestrial chemical fluorescence within solar Fraunhofer lines. Referred to as Fraunhofer Line Discriminators (FLDs), these sensors attempt to measure "band filling" of terrestrial fluorescence within these naturally dark regions of the spectrum. However, the method has challenging signal-to-noise ratio limitations due to the low fluorescence emission signal of the target, which is exacerbated by the high spectral resolution required by the sensor (<0.1 nm). Until now, many Fraunhofer line discriminators have been scanning sensors, either pushbroom or whiskbroom, which require temporal and/or spatial scanning to acquire an image. In this paper, we attempt to quantify the snapshot throughput advantage in ultraspectral imaging for FLD. This is followed by preliminary results of our snapshot FLD sensor. The system has a spatial resolution of 280x280 pixels and a spectral resolving power of approximately 10,000 at a 658 nm operating wavelength.

  10. Neural responses to advantageous and disadvantageous inequity.

    PubMed

    Fliessbach, Klaus; Phillipps, Courtney B; Trautner, Peter; Schnabel, Marieke; Elger, Christian E; Falk, Armin; Weber, Bernd

    2012-01-01

    In this paper we study neural responses to inequitable distributions of rewards despite equal performance. We specifically focus on differences between advantageous inequity (AI) and disadvantageous inequity (DI). AI and DI were realized in a hyperscanning functional magnetic resonance imaging (fMRI) experiment with pairs of subjects simultaneously performing a task in adjacent scanners and observing both subjects' rewards. Results showed (1) hypoactivation of the ventral striatum (VS) under DI but not under AI; (2) inequity-induced activation of the right dorsolateral prefrontal cortex (DLPFC) that was stronger under DI than under AI; (3) correlations between subjective evaluations of AI and bilateral ventrolateral prefrontal and left insular activity. Our study provides neurophysiological evidence for different cognitive processes that occur when exposed to DI and AI, respectively. One possible interpretation is that any form of inequity represents a norm violation, but that important differences between AI and DI emerge from an asymmetric involvement of status concerns. PMID:22701414

  11. Neural responses to advantageous and disadvantageous inequity

    PubMed Central

    Fliessbach, Klaus; Phillipps, Courtney B.; Trautner, Peter; Schnabel, Marieke; Elger, Christian E.; Falk, Armin; Weber, Bernd

    2012-01-01

    In this paper we study neural responses to inequitable distributions of rewards despite equal performance. We specifically focus on differences between advantageous inequity (AI) and disadvantageous inequity (DI). AI and DI were realized in a hyperscanning functional magnetic resonance imaging (fMRI) experiment with pairs of subjects simultaneously performing a task in adjacent scanners and observing both subjects' rewards. Results showed (1) hypoactivation of the ventral striatum (VS) under DI but not under AI; (2) inequity-induced activation of the right dorsolateral prefrontal cortex (DLPFC) that was stronger under DI than under AI; (3) correlations between subjective evaluations of AI and bilateral ventrolateral prefrontal and left insular activity. Our study provides neurophysiological evidence for different cognitive processes that occur when exposed to DI and AI, respectively. One possible interpretation is that any form of inequity represents a norm violation, but that important differences between AI and DI emerge from an asymmetric involvement of status concerns. PMID:22701414

  12. The kinematic advantage of electric cars

    NASA Astrophysics Data System (ADS)

    Meyn, Jan-Peter

    2015-11-01

    Acceleration of a common car with a turbocharged diesel engine is compared to the same type with an electric motor in terms of kinematics. Starting from a state of rest, the electric car reaches a distant spot earlier than the diesel car, even though the latter has a better specification for engine power and average acceleration from 0 to 100 km h-1. A three-phase model of acceleration as a function of time fits the data of the electric car accurately. The first phase is a quadratic growth of acceleration in time. It is shown that the tenfold higher coefficient for the first phase accounts for most of the kinematic advantage of the electric car.
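
    A minimal numerical sketch of that picture, with a quadratic-in-time first phase and purely illustrative (not measured) coefficients and caps, shows how a tenfold larger first-phase coefficient translates into distance covered from rest:

```python
import numpy as np

def distance(a_of_t, t_end, dt=1e-3):
    """Integrate acceleration twice to get the distance covered from rest."""
    t = np.arange(0.0, t_end, dt)
    a = np.array([a_of_t(ti) for ti in t])
    v = np.cumsum(a) * dt
    return np.sum(v) * dt

# Hypothetical phase-1 laws: acceleration grows quadratically in time,
# with a tenfold larger coefficient for the electric car (illustrative numbers).
electric = lambda t: min(5.0, 10.0 * t**2)   # a = c_e * t^2, capped at 5 m/s^2
diesel   = lambda t: min(5.0, 1.0 * t**2)    # a = c_d * t^2, same cap

for name, a in [("electric", electric), ("diesel", diesel)]:
    print(name, round(distance(a, 3.0), 2), "m after 3 s")
```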

  13. The mechanical defence advantage of small seeds.

    PubMed

    Fricke, Evan C; Wright, S Joseph

    2016-08-01

    Seed size and toughness affect seed predators, and size-dependent investment in mechanical defence could affect relationships between seed size and predation. We tested how seed toughness and mechanical defence traits (tissue density and protective tissue content) are related to seed size among tropical forest species. Absolute toughness increased with seed size. However, smaller seeds had higher specific toughness both within and among species, with the smallest seeds requiring over 2000 times more energy per gram to break than the largest seeds. Investment in mechanical defence traits varied widely but independently of the toughness-mass allometry. Instead, a physical scaling relationship confers a toughness advantage on small seeds independent of selection on defence traits and without a direct cost. This scaling relationship may contribute to seed size diversity by decreasing fitness differences among large and small seeds. Allometric scaling of toughness reconciles predictions and conflicting empirical relationships between seed size and predation. PMID:27324185

  14. [Risks and advantages of the vegetarian diet].

    PubMed

    Krajcovicová-Kudlácková, M; Simoncic, R; Béderová, A

    1997-12-01

    The authors summarize the health risks and advantages of alternative nutrition: lactovegetarian, lacto-ovo-vegetarian and vegan. These dietary patterns involve risk in particular during pregnancy and lactation and for the growing organism. Veganism, excluding all foods of animal origin, involves the greatest risk. General nutritional principles for the prevention of cardiovascular diseases, oncological diseases and diabetes are fully met by the vegetarian diet. Vegetarians and vegans have low risk factors for atherosclerosis and conversely higher levels of antisclerotic substances. Above-threshold values of essential antioxidants in vegetarians imply a protective action against reactive metabolic oxygen products and toxic products of lipid peroxidation and may reduce the incidence of free radical diseases. The authors also draw attention to some still open problems of vegetarianism (higher n-3 fatty acids, taurine, carnitine). In the conclusion, semivegetarianism is evaluated. PMID:9476373

  15. Steel monoleg design tries for concrete advantages

    SciTech Connect

    Not Available

    1984-11-01

    The conceptual design of a fixed steel monoleg structure designed for North Sea water depths of 80-250 meters is described. The design was commissioned by the Dutch Ministry of Economic Affairs and funded in part by the European Economic Community. The design aims to give a steel structure some of the advantages of concrete condeeps. Maintenance should be minimized by enclosing the risers inside a monopod. The single-shell, variable geometry of the column structure should also serve to equalize stresses, unlike a conventional space frame where stresses tend to concentrate around the nodes. Construction and installation could be vertical, as in condeep style, or horizontal, as in steel jackets. Thus the fixed steel platform could be either barge-towed and upended with ballast tanks or floated out vertically as built and towed, like a condeep, to a mating with an integrated deck before final tow and installation by simple ballasting.

  16. New Cirrus Retrieval Algorithms and Results from eMAS during SEAC4RS

    NASA Astrophysics Data System (ADS)

    Holz, R.; Platnick, S. E.; Meyer, K.; Wang, C.; Wind, G.; Arnold, T.; King, M. D.; Yorks, J. E.; McGill, M. J.

    2014-12-01

    The enhanced MODIS Airborne Simulator (eMAS) scanning imager was flown on the ER-2 during the SEAC4RS field campaign. The imager provides measurements in 38 spectral channels from the visible into the 13μm CO2 absorption bands at approximately 25 m nadir spatial resolution at cirrus altitudes, and with a swath width of about 18 km, provided substantial context and synergy for other ER-2 cirrus observations. The eMAS is an update to the original MAS scanner, having new midwave and IR spectrometers coupled with the previous VNIR/SWIR spectrometers. In addition to the standard MODIS-like cloud retrieval algorithm (MOD06/MYD06 for MODIS Terra/Aqua, respectively) that provides cirrus optical thickness (COT) and effective particle radius (CER) from several channel combinations, three new algorithms were developed to take advantage of unique aspects of eMAS and/or other ER-2 observations. The first uses a combination of two solar reflectance channels within the 1.88 μm water vapor absorption band, each with significantly different single scattering albedo, allowing for simultaneous COT and CER retrievals. The advantage of this algorithm is that the strong water vapor absorption can significantly reduce the sensitivity to lower level clouds and ocean/land surface properties thus better isolating cirrus properties. A second algorithm uses a suite of infrared channels in an optimal estimation algorithm to simultaneously retrieve COT, CER, and cloud-top pressure/temperature. Finally, a window IR algorithm is used to retrieve COT in synergy with the ER-2 Cloud Physics Lidar (CPL) cloud top/base boundary measurements. Using a variety of quantifiable error sources, uncertainties for all eMAS retrievals will be shown along with comparisons with CPL COT retrievals.

  17. ALE advantage in hypervelocity impact calculations

    SciTech Connect

    Gerassimenko, M.; Rathkopf, J.

    1998-10-01

    The ALE3D code is used to model experiments relevant to hypervelocity impact lethality, carried out in the 4-5 km/s velocity range. The code is run in the Eulerian and ALE modes. Zoning in the calculations is refined beyond the level found in most lethality calculations, but still short of convergence. The level of zoning refinement that produces equivalent results in uniformly zoned Eulerian calculations and ALE ones utilizing specialized zoning, weighting and relaxation techniques is established. It takes 11 times fewer zones and about 60% as many cycles when ALE capabilities are used. Calculations are compared to experimental results.

  18. Children's Understanding of Certainty and Evidentiality: Advantage of Grammaticalized Forms over Lexical Alternatives

    ERIC Educational Resources Information Center

    Matsui, Tomoko; Miura, Yui

    2009-01-01

    In verbal communication, the hearer takes advantage of the linguistic expressions of certainty and evidentiality to assess how committed the speaker might be to the truth of the informational content of the utterance. Little is known, however, about the precise developmental mechanism of this ability. In this chapter, we approach the question by…

  19. 78 FR 35826 - Unfair Competitive Advantages; Enhancement of the Formal Complaint Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-14

    ...The Commission is proposing rules to enhance the formal complaint process in cases involving alleged violations of a law that prohibits the Postal Service from taking certain actions that might provide it with unfair competitive advantages. The proposal provides an optional accelerated procedure that allows for adjudication of this type of complaint within 90 days. The Commission invites......

  20. Effects of Virtual Education on Academic Culture: Perceived Advantages and Disadvantages

    ERIC Educational Resources Information Center

    Jefferson, Renee N.; Arnold, Liz W.

    2009-01-01

    The perceived advantages and disadvantages of courses taught in online and face-to-face learning environments were explored for students taking an accounting and a data collection and analysis course. Both courses were taught in a face-to-face learning environment at the main or satellite campus. It was hypothesized that there would be…

  1. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
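
    For reference, the sketch below is the original HITS hub/authority power iteration that the record builds on; it does not include the trust-score or linkfarm-detection extensions proposed in the paper, and the tiny link graph is made up for illustration.

```python
import numpy as np

def hits(adjacency, iters=50):
    """Basic HITS: iterate hub and authority scores on a directed adjacency matrix
    (rows = source pages, columns = target pages), normalizing after each step."""
    n = adjacency.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        auths = adjacency.T @ hubs
        auths /= np.linalg.norm(auths)
        hubs = adjacency @ auths
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Links: 0 -> 1, 0 -> 2, 1 -> 2, 3 -> 2; page 2 should get the top authority score.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
print(hits(A))
```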

  2. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  3. A generalized LSTM-like training algorithm for second-order recurrent neural networks

    PubMed Central

    Monner, Derek; Reggia, James A.

    2011-01-01

    The Long Short Term Memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM's original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the Generalized Long Short-Term Memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks. PMID:21803542

  4. Distortion correction algorithm for UAV remote sensing image based on CUDA

    NASA Astrophysics Data System (ADS)

    Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu

    2014-03-01

    In China, natural disasters are characterized by wide distribution, severe destruction and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information can provide an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important technique of obtaining disaster information. UAV is equipped with a non-metric digital camera with lens distortion, resulting in larger geometric deformation for acquired images, and affecting the accuracy of subsequent processing. The slow speed of the traditional CPU-based distortion correction algorithm cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU, greatly improving the efficiency of distortion correction. Our experiments show that, compared with traditional CPU algorithms and regardless of image loading and saving times, the maximum acceleration ratio using our proposed algorithm reaches 58 times that using the traditional algorithm. Thus, data processing time can be reduced by one to two hours, thereby considerably improving disaster emergency response capability.
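
    The per-pixel mapping that such a distortion correction parallelizes can be sketched on the CPU with NumPy; a CUDA version would run the same mapping in a kernel with one thread per output pixel. The Brown-style radial model, the camera parameters, and the nearest-neighbour sampling below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def undistort(img, fx, fy, cx, cy, k1, k2):
    """Remove radial lens distortion by inverse mapping: for each output pixel,
    apply the distortion model to its ideal coordinates to find where to sample
    the distorted input image (nearest-neighbour sampling for brevity)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - cx) / fx                   # normalized ideal coordinates
    y = (ys - cy) / fy
    r2 = x**2 + y**2
    scale = 1 + k1 * r2 + k2 * r2**2     # Brown-style radial distortion factor
    u = np.clip((x * scale * fx + cx).round().astype(int), 0, w - 1)
    v = np.clip((y * scale * fy + cy).round().astype(int), 0, h - 1)
    return img[v, u]

# Illustrative camera parameters (not from any real UAV sensor)
img = np.random.rand(480, 640)
out = undistort(img, fx=800, fy=800, cx=320, cy=240, k1=-0.2, k2=0.05)
print(out.shape)
```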

  5. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), which is closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655
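
    As an example of one of the surveyed methods, a minimal global-best particle swarm optimizer on a standard benchmark function might look like the following; the hyperparameters are conventional defaults, not values taken from the review.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best particle swarm optimization minimizing f."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

sphere = lambda p: float(np.sum(p**2))   # one of the standard benchmark functions
print(pso(sphere, dim=5))
```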

  6. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), which is closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655

  7. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used in different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is to use cross correlation and take the highest value of the correlation image. The shift resolution is then given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique described before, but the memory needed by the system is significantly higher. To avoid this memory consumption we are implementing a subpixel shifting method based on the FFT. Starting from the original images, subpixel shifting can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is highly time-consuming because each candidate shift requires new calculations. The algorithm, being highly parallelizable, is very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by first obtaining a pixel-resolution estimate from FFT-based correlation and then refining it to subpixel resolution using the technique described above. We consider this a 'brute force' method. We therefore present a benchmark of the algorithm consisting of a first pixel-resolution approach followed by subpixel refinement, decreasing the shifting step in every loop to achieve high resolution in a few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
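
    A compact version of that 'brute force' approach, a pixel-level FFT correlation estimate followed by subpixel refinement with a shrinking step via the Fourier shift theorem, can be sketched as follows; the step sizes and search ranges are illustrative choices, not those benchmarked in the record.

```python
import numpy as np

def shift_image(img, dy, dx):
    """Shift an image by a (possibly fractional) amount using the Fourier shift theorem."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))

def register(ref, moved, steps=(0.5, 0.1, 0.02)):
    """Pixel-level estimate from the FFT cross-correlation peak, then 'brute force'
    subpixel refinement that shrinks the search step on every pass."""
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))))
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    dy, dx = [p - s if p > s // 2 else p for p, s in zip(peak, ref.shape)]
    for step in steps:
        offsets = np.arange(-5, 6) * step
        best = (-np.inf, dy, dx)
        for ddy in offsets:
            for ddx in offsets:
                cand = shift_image(moved, -(dy + ddy), -(dx + ddx))  # undo the trial shift
                score = float(np.sum(ref * cand))
                if score > best[0]:
                    best = (score, dy + ddy, dx + ddx)
        _, dy, dx = best
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moved = shift_image(ref, 2.3, -1.7)
print(register(ref, moved))   # approximately (2.3, -1.7)
```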

  8. Atmospheric Correction Algorithm for Hyperspectral Imagery

    SciTech Connect

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  9. Improved algorithm for processing grating-based phase contrast interferometry image sets

    SciTech Connect

    Marathe, Shashidhara; Assoufid, Lahsen; Xiao, Xianghui; Ham, Kyungmin; Johnson, Warren W.; Butler, Leslie G.

    2014-01-15

    Grating-based X-ray and neutron interferometry tomography using phase-stepping methods generates large data sets. An improved algorithm is presented for solving for the parameters to calculate transmissions, differential phase contrast, and dark-field images. The method takes advantage of the vectorization inherent in high-level languages such as Mathematica and MATLAB and can solve a 16 × 1k × 1k data set in less than a second. In addition, the algorithm can function with partial data sets. This is demonstrated with processing of a 16-step grating data set with partial use of the original data chosen without any restriction. Also, we have calculated the reduced chi-square for the fit and notice the effect of grating support structural elements upon the differential phase contrast image and have explored expanded basis set representations to mitigate the impact.
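
    The per-pixel phase-stepping fit can be vectorized along the stepping axis with an FFT, which is one standard way to recover transmission, differential phase contrast, and dark-field images from a stepping stack. The sketch below assumes equally spaced steps covering one full period and is not necessarily the improved algorithm of this record (which also handles partial data sets).

```python
import numpy as np

def stepping_fit(stack):
    """Fit I_k = a0 + a1*cos(2*pi*k/N + phi) per pixel for an (N, H, W) phase-stepping
    stack, vectorized with an FFT along the stepping axis."""
    N = stack.shape[0]
    spec = np.fft.fft(stack, axis=0)
    a0 = np.real(spec[0]) / N            # mean intensity
    a1 = 2.0 * np.abs(spec[1]) / N       # modulation amplitude
    phi = np.angle(spec[1])              # fringe phase
    return a0, a1, phi

def contrast_images(sample_stack, ref_stack):
    """Combine sample and reference (flat) scans into the three contrast images."""
    a0s, a1s, phis = stepping_fit(sample_stack)
    a0r, a1r, phir = stepping_fit(ref_stack)
    transmission = a0s / a0r
    dpc = np.angle(np.exp(1j * (phis - phir)))      # wrap to (-pi, pi]
    dark_field = (a1s / a0s) / (a1r / a0r)          # visibility reduction
    return transmission, dpc, dark_field
```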

  10. Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sémon, P.; Yee, Chuck-Hou; Haule, Kristjan; Tremblay, A.-M. S.

    2014-08-01

    The solution of a generalized impurity model lies at the heart of electronic structure calculations with dynamical mean field theory. In the strongly correlated regime, the method of choice for solving the impurity model is the hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB). Enhancements to the CT-HYB algorithm are critical for bringing new physical regimes within reach of current computational power. Taking advantage of the fact that the bottleneck in the algorithm is a product of hundreds of matrices, we present optimizations based on the introduction and combination of two concepts of more general applicability: (a) skip lists and (b) fast rejection of proposed configurations based on matrix bounds. Considering two very different test cases with d electrons, we find speedups of ~25 up to ~500 compared to the direct evaluation of the matrix product. Even larger speedups are likely with f electron systems and with clusters of correlated atoms.
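
    The fast-rejection idea, cheaply bounding the weight of a long matrix product before deciding whether to evaluate it, can be illustrated with a toy Metropolis-style step. The bound below (dimension times the product of spectral norms) is only a generic stand-in for the tighter bounds used in CT-HYB, and the sizes and thresholds are made up.

```python
import numpy as np

def trace_bound(matrices):
    """Cheap upper bound on |trace(M1 @ ... @ Mn)|: the trace of an n x n matrix is
    at most n times its spectral norm, and spectral norms are submultiplicative."""
    n = matrices[0].shape[0]
    bound = float(n)
    for m in matrices:
        bound *= np.linalg.norm(m, 2)
    return bound

def metropolis_step(matrices, acceptance_draw):
    """Reject a proposed configuration early when even the upper bound on its weight
    cannot beat the acceptance draw; only otherwise form the expensive product."""
    if trace_bound(matrices) < acceptance_draw:
        return False                                        # fast rejection, product never formed
    weight = abs(np.trace(np.linalg.multi_dot(matrices)))   # expensive full evaluation
    return weight >= acceptance_draw

rng = np.random.default_rng(0)
mats = [rng.normal(scale=0.2, size=(20, 20)) for _ in range(100)]
print(metropolis_step(mats, acceptance_draw=1e-3))
```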

  11. Modified hyperspheres algorithm to trace homotopy curves of nonlinear circuits composed by piecewise linear modelled devices.

    PubMed

    Vazquez-Leal, H; Jimenez-Fernandez, V M; Benhammouda, B; Filobello-Nino, U; Sarmiento-Reyes, A; Ramirez-Pinero, A; Marin-Hernandez, A; Huerta-Chua, J

    2014-01-01

    We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157

  12. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
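
    A simplified dense sketch of the modification discussed above: during the factorization, a tiny or negative pivot is replaced by a huge value instead of aborting, which effectively removes the corresponding component from the computed step. The thresholds and the LDL^T formulation below are illustrative, not the sparse production code analysed in the record.

```python
import numpy as np

def modified_cholesky(A, pivot_tol=1e-30, big_pivot=1e128):
    """Dense LDL^T factorization that, instead of failing on a tiny or negative pivot,
    replaces it with a huge value (a simplified version of the fix used in
    interior-point codes; the thresholds here are illustrative)."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        if d[j] <= pivot_tol:
            d[j] = big_pivot        # skip the unstable pivot rather than abort
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

# A matrix whose second pivot becomes negative after elimination:
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 1e-12, 1.0],
              [0.0, 1.0, 3.0]])
L, d = modified_cholesky(A)
print(d)   # the second entry has been replaced by the huge pivot
```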

  13. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    PubMed Central

    Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.

    2014-01-01

    We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157

  14. Improved algorithm for processing grating-based phase contrast interferometry image sets.

    PubMed

    Marathe, Shashidhara; Assoufid, Lahsen; Xiao, Xianghui; Ham, Kyungmin; Johnson, Warren W; Butler, Leslie G

    2014-01-01

    Grating-based X-ray and neutron interferometry tomography using phase-stepping methods generates large data sets. An improved algorithm is presented for solving for the parameters to calculate transmissions, differential phase contrast, and dark-field images. The method takes advantage of the vectorization inherent in high-level languages such as Mathematica and MATLAB and can solve a 16 × 1k × 1k data set in less than a second. In addition, the algorithm can function with partial data sets. This is demonstrated with processing of a 16-step grating data set with partial use of the original data chosen without any restriction. Also, we have calculated the reduced chi-square for the fit and notice the effect of grating support structural elements upon the differential phase contrast image and have explored expanded basis set representations to mitigate the impact. PMID:24517772

  15. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm: mobile users can be classified into Basic service, E-service, Plus service, and Total service user classes, and some rules about the mobile users can also be derived. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  16. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm: mobile users can be classified into Basic service, E-service, Plus service, and Total service user classes, and some rules about the mobile users can also be derived. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  17. Pharyngeal Packing during Rhinoplasty: Advantages and Disadvantages

    PubMed Central

    Razavi, Majid; Taghavi Gilani, Mehryar; Bameshki, Ali Reza; Behdani, Reza; Khadivi, Ehsan; Bakhshaee, Mahdi

    2015-01-01

    Introduction: Controversy remains as to the advantages and disadvantages of pharyngeal packing during septorhinoplasty. Our study investigated the effect of pharyngeal packing on postoperative nausea and vomiting and sore throat following this type of surgery. Materials and Methods: This clinical trial was performed on 90 American Society of Anesthesiologists (ASA) I or II patients who were candidates for septorhinoplasty. They were randomly divided into two groups. Patients in the study group received pharyngeal packing, while those in the control group did not. The incidence of nausea and vomiting and sore throat based on the visual analog scale (VAS) was evaluated postoperatively in the recovery room as well as at 2, 6 and 24 hours. Results: The incidence of postoperative nausea and vomiting (PONV) was 12.3%, with no significant difference between the study and control groups. Sore throat was reported in 50.5% of cases overall (56.8% in the packing group and 44.4% in the control group). Although the severity of pain was higher in the study group at all times, the incidence in the two groups did not differ significantly. Conclusion: The use of pharyngeal packing has no effect in reducing the incidence of nausea and vomiting and sore throat after surgery. Given that induced hypotension is used as the routine method of anesthesia in septorhinoplasty surgery, with a low incidence of hemorrhage and a high risk of unintended retention of pharyngeal packing, its routine use is not recommended for this procedure. PMID:26788486

  18. Clinical advantages of carbon-ion radiotherapy

    NASA Astrophysics Data System (ADS)

    Tsujii, Hirohiko; Kamada, Tadashi; Baba, Masayuki; Tsuji, Hiroshi; Kato, Hirotoshi; Kato, Shingo; Yamada, Shigeru; Yasuda, Shigeo; Yanagi, Takeshi; Kato, Hiroyuki; Hara, Ryusuke; Yamamoto, Naotaka; Mizoe, Junetsu

    2008-07-01

    Carbon-ion radiotherapy (C-ion RT) possesses physical and biological advantages. It was started at NIRS in 1994 using the Heavy Ion Medical Accelerator in Chiba (HIMAC); since then more than 50 protocol studies have been conducted on almost 4000 patients with a variety of tumors. Clinical experiences have demonstrated that C-ion RT is effective in such regions as the head and neck, skull base, lung, liver, prostate, bone and soft tissues, and pelvic recurrence of rectal cancer, as well as for histological types including adenocarcinoma, adenoid cystic carcinoma, malignant melanoma and various types of sarcomas, against which photon therapy could be less effective. Furthermore, when compared with photon and proton RT, a significant reduction of overall treatment time and fractions has been accomplished without enhancing toxicities. Currently, the number of irradiation sessions per patient averages 13 fractions spread over approximately three weeks. This means that in a carbon therapy facility a larger number of patients than is possible with other modalities can be treated over the same period of time.

  19. Vegetarian diets: what are the advantages?

    PubMed

    Leitzmann, Claus

    2005-01-01

    A growing body of scientific evidence indicates that wholesome vegetarian diets offer distinct advantages compared to diets containing meat and other foods of animal origin. The benefits arise from lower intakes of saturated fat, cholesterol and animal protein as well as higher intakes of complex carbohydrates, dietary fiber, magnesium, folic acid, vitamin C and E, carotenoids and other phytochemicals. Since vegetarians consume widely divergent diets, a differentiation between various types of vegetarian diets is necessary. Indeed, many contradictions and misunderstandings concerning vegetarianism are due to scientific data from studies without this differentiation. In the past, vegetarian diets have been described as being deficient in several nutrients including protein, iron, zinc, calcium, vitamin B12 and A, n-3 fatty acids and iodine. Numerous studies have demonstrated that the observed deficiencies are usually due to poor meal planning. Well-balanced vegetarian diets are appropriate for all stages of the life cycle, including children, adolescents, pregnant and lactating women, the elderly and competitive athletes. In most cases, vegetarian diets are beneficial in the prevention and treatment of certain diseases, such as cardiovascular disease, hypertension, diabetes, cancer, osteoporosis, renal disease and dementia, as well as diverticular disease, gallstones and rheumatoid arthritis. The reasons for choosing a vegetarian diet often go beyond health and well-being and include among others economical, ecological and social concerns. The influences of these aspects of vegetarian diets are the subject of the new field of nutritional ecology that is concerned with sustainable life styles and human development. PMID:15702597

  20. The academic advantage: gender disparities in patenting.

    PubMed

    Sugimoto, Cassidy R; Ni, Chaoqun; West, Jevin D; Larivière, Vincent

    2015-01-01

    We analyzed gender disparities in patenting by country, technological area, and type of assignee using the 4.6 million utility patents issued between 1976 and 2013 by the United States Patent and Trade Office (USPTO). Our analyses of fractionalized inventorships demonstrate that women's rate of patenting has increased from 2.7% of total patenting activity to 10.8% over the nearly 40-year period. Our results show that, in every technological area, female patenting is proportionally more likely to occur in academic institutions than in corporate or government environments. However, women's patents have a lower technological impact than that of men, and that gap is wider in the case of academic patents. We also provide evidence that patents to which women--and in particular academic women--contributed are associated with a higher number of International Patent Classification (IPC) codes and co-inventors than men. The policy implications of these disparities and academic setting advantages are discussed. PMID:26017626

  1. The Academic Advantage: Gender Disparities in Patenting

    PubMed Central

    Sugimoto, Cassidy R.; Ni, Chaoqun; West, Jevin D.; Larivière, Vincent

    2015-01-01

    We analyzed gender disparities in patenting by country, technological area, and type of assignee using the 4.6 million utility patents issued between 1976 and 2013 by the United States Patent and Trade Office (USPTO). Our analyses of fractionalized inventorships demonstrate that women’s rate of patenting has increased from 2.7% of total patenting activity to 10.8% over the nearly 40-year period. Our results show that, in every technological area, female patenting is proportionally more likely to occur in academic institutions than in corporate or government environments. However, women’s patents have a lower technological impact than that of men, and that gap is wider in the case of academic patents. We also provide evidence that patents to which women—and in particular academic women—contributed are associated with a higher number of International Patent Classification (IPC) codes and co-inventors than men. The policy implications of these disparities and academic setting advantages are discussed. PMID:26017626

  2. Advantageous grain boundaries in iron pnictide superconductors

    PubMed Central

    Katase, Takayoshi; Ishimaru, Yoshihiro; Tsukamoto, Akira; Hiramatsu, Hidenori; Kamiya, Toshio; Tanabe, Keiichi; Hosono, Hideo

    2011-01-01

    High critical temperature superconductors have zero power consumption and could be used to produce ideal electric power lines. The principal obstacle in fabricating superconducting wires and tapes is grain boundaries—the misalignment of crystalline orientations at grain boundaries, which is unavoidable for polycrystals, largely deteriorates critical current density. Here we report that high critical temperature iron pnictide superconductors have advantages over cuprates with respect to these grain boundary issues. The transport properties through well-defined bicrystal grain boundary junctions with various misorientation angles (θGB) were systematically investigated for cobalt-doped BaFe2As2 (BaFe2As2:Co) epitaxial films fabricated on bicrystal substrates. The critical current density through bicrystal grain boundary (JcBGB) remained high (>1 MA cm−2) and nearly constant up to a critical angle θc of ∼9°, which is substantially larger than the θc of ∼5° for YBa2Cu3O7–δ. Even at θGB>θc, the decay of JcBGB was much slower than that of YBa2Cu3O7–δ. PMID:21811238

  3. Prochlorococcus: Advantages and Limits of Minimalism

    NASA Astrophysics Data System (ADS)

    Partensky, Frédéric; Garczarek, Laurence

    2010-01-01

    Prochlorococcus is the key phytoplanktonic organism of tropical gyres, large ocean regions that are depleted of the essential macronutrients needed for photosynthesis and cell growth. This cyanobacterium has adapted itself to oligotrophy by minimizing the resources necessary for life through a drastic reduction of cell and genome sizes. This rarely observed strategy in free-living organisms has conferred on Prochlorococcus a considerable advantage over other phototrophs, including its closest relative Synechococcus, for life in this vast yet little variable ecosystem. However, this strategy seems to reach its limits in the upper layer of the S Pacific gyre, the most oligotrophic region of the world ocean. By losing some important genes and/or functions during evolution, Prochlorococcus has seemingly become dependent on co-occurring microorganisms. In this review, we present some of the recent advances in the ecology, biology, and evolution of Prochlorococcus, which because of its ecological importance and tiny genome is rapidly imposing itself as a model organism in environmental microbiology.

  4. Prochlorococcus: advantages and limits of minimalism.

    PubMed

    Partensky, Frédéric; Garczarek, Laurence

    2010-01-01

    Prochlorococcus is the key phytoplanktonic organism of tropical gyres, large ocean regions that are depleted of the essential macronutrients needed for photosynthesis and cell growth. This cyanobacterium has adapted itself to oligotrophy by minimizing the resources necessary for life through a drastic reduction of cell and genome sizes. This rarely observed strategy in free-living organisms has conferred on Prochlorococcus a considerable advantage over other phototrophs, including its closest relative Synechococcus, for life in this vast yet little variable ecosystem. However, this strategy seems to reach its limits in the upper layer of the S Pacific gyre, the most oligotrophic region of the world ocean. By losing some important genes and/or functions during evolution, Prochlorococcus has seemingly become dependent on co-occurring microorganisms. In this review, we present some of the recent advances in the ecology, biology, and evolution of Prochlorococcus, which because of its ecological importance and tiny genome is rapidly imposing itself as a model organism in environmental microbiology. PMID:21141667

  5. Airtraq optical laryngoscope: advantages and disadvantages.

    PubMed

    Saracoglu, Kemal Tolga; Eti, Zeynep; Gogus, Fevzi Yilmaz

    2013-06-01

    Difficult or unsuccessful tracheal intubation is one of the important causes of morbidity and mortality in susceptible patients. Almost 30% of anesthesia-related deaths are caused by the complications of difficult airway management, and more than 85% of all respiratory-related complications result in brain injury or death. Nowadays, due to advances in technology, new videolaryngoscopic devices have become available. Airtraq is a novel single-use laryngoscope which provides a view of the glottis without any deviation from the normal position of the oral, pharyngeal or tracheal axes. With the help of the display lens, the glottis and the surrounding structures are visualised, and the tracheal tube is introduced between the vocal cords under direct view of its tip. In patients having restricted neck motion or limited mouth opening (provided that it is greater than 3 cm), Airtraq offers the advantage of a better display. Moreover, the video image can be transferred to an external monitor, so an experienced specialist can provide assistance and an educational course can be conducted simultaneously. On the other hand, the Airtraq videolaryngoscopic devices possess certain disadvantages, including the experience and time required for the operator to learn how to use them properly, the rapid deterioration of their display in the presence of swelling or secretions, and the fact that they are rather complicated and expensive devices. The Airtraq device has already documented benefits in the management of difficult airways; however, serial utilization obviously necessitates experience. PMID:24180160

  6. Advantages of a leveled commitment contracting protocol

    SciTech Connect

    Sandholm, T.W.; Lesser, V.R.

    1996-12-31

    In automated negotiation systems consisting of self-interested agents, contracts have traditionally been binding. Such contracts do not allow agents to efficiently accommodate future events. Game theory has proposed contingency contracts to solve this problem. Among computational agents, contingency contracts are often impractical due to the large number of interdependent and unanticipated future events to be conditioned on, and because some events are not mutually observable. This paper proposes a leveled commitment contracting protocol that allows self-interested agents to efficiently accommodate future events by having the possibility of unilaterally decommitting from a contract based on local reasoning. A decommitment penalty is assigned to both agents in a contract: an agent need only pay this penalty to the other party to be freed from the contract. It is shown through formal analysis of several contracting settings that this leveled commitment feature in a contracting protocol increases the Pareto efficiency of deals and can make contracts individually rational when no full commitment contract can. This advantage holds even if the agents decommit manipulatively.
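
    The core decommitment rule can be illustrated with a toy sketch (a minimal illustration of the local-reasoning decision only, not the paper's game-theoretic analysis; the function name and payoff numbers are hypothetical):

```python
def should_decommit(contract_payoff: float,
                    outside_payoff: float,
                    decommit_penalty: float) -> bool:
    """Local reasoning of a self-interested agent under leveled commitment:
    breach the contract only if the outside opportunity still pays off after
    the agreed decommitment penalty has been handed to the other party."""
    return outside_payoff - decommit_penalty > contract_payoff

# A contract worth 10 is abandoned for an outside offer worth 18 when the
# penalty is 5, but kept when the penalty is 9.
assert should_decommit(10.0, 18.0, 5.0)
assert not should_decommit(10.0, 18.0, 9.0)
```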

  7. Take Your Leadership Role Seriously.

    ERIC Educational Resources Information Center

    School Administrator, 1986

    1986-01-01

    The principal authors of a new book, "Profiling Excellence in America's Schools," state that leadership is the single most important element for effective schools. The generic skills of leaders are flexibility, autonomy, risk taking, innovation, and commitment. Exceptional principals and teachers take their leadership and management roles…

  8. Taking Over a Broken Program

    ERIC Educational Resources Information Center

    Grabowski, Carl

    2008-01-01

    Taking over a broken program can be one of the hardest tasks to take on. However, working towards a vision and a common goal--and eventually getting there--makes it all worth it in the end. In this article, the author shares the lessons she learned as the new director for the Bright Horizons Center in Ashburn, Virginia. She suggests that new…

  9. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
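
    As a rough illustration of the iterated "function gas" idea, the sketch below uses plain Python closures on integers as a stand-in for the paper's lambda-calculus-derived language; the collision rule (compose two randomly chosen functions and keep the ensemble size fixed) is the only feature it shares with the original model, and all names are invented:

```python
import random

# Toy "function gas": objects are unary functions on integers; an interaction
# composes two randomly chosen functions and the product replaces a random
# member, keeping the ensemble size fixed. Purely schematic.
def make_adder(k):
    return lambda x: x + k

def make_scaler(k):
    return lambda x: x * k

population = [make_adder(random.randint(1, 5)) for _ in range(20)] + \
             [make_scaler(random.randint(2, 3)) for _ in range(20)]

def collide(pop):
    f, g = random.sample(pop, 2)
    pop[random.randrange(len(pop))] = lambda x, f=f, g=g: f(g(x))

for _ in range(1000):
    collide(population)

print(population[0](1))   # the gas now contains deeply composed functions
```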

  10. Virtual online consultations: advantages and limitations (VOCAL) study

    PubMed Central

    Greenhalgh, Trisha; Vijayaraghavan, Shanti; Wherton, Joe; Shaw, Sara; Byrne, Emma; Campbell-Richards, Desirée; Bhattacharya, Satya; Hanson, Philippa; Ramoutar, Seendy; Gutteridge, Charles; Hodkinson, Isabel; Collard, Anna; Morris, Joanne

    2016-01-01

    Introduction Remote video consultations between clinician and patient are technically possible and increasingly acceptable. They are being introduced in some settings alongside (and occasionally replacing) face-to-face or telephone consultations. Methods To explore the advantages and limitations of video consultations, we will conduct in-depth qualitative studies of real consultations (microlevel) embedded in an organisational case study (mesolevel), taking account of national context (macrolevel). The study is based in 2 contrasting clinical settings (diabetes and cancer) in a National Health Service (NHS) acute trust in London, UK. Main data sources are: microlevel—audio, video and screen capture to produce rich multimodal data on 45 remote consultations; mesolevel—interviews, ethnographic observations and analysis of documents within the trust; macrolevel—key informant interviews of national-level stakeholders and document analysis. Data will be analysed and synthesised using a sociotechnical framework developed from structuration theory. Ethics approval City Road and Hampstead NHS Research Ethics Committee, 9 December 2014, reference 14/LO/1883. Planned outputs We plan outputs for 5 main audiences: (1) academics: research publications and conference presentations; (2) service providers: standard operating procedures, provisional operational guidance and key safety issues; (3) professional bodies and defence societies: summary of relevant findings to inform guidance to members; (4) policymakers: summary of key findings; (5) patients and carers: ‘what to expect in your virtual consultation’. Discussion The research literature on video consultations is sparse. Such consultations offer potential advantages to patients (who are spared the cost and inconvenience of travel) and the healthcare system (eg, they may be more cost-effective), but fears have been expressed that they may be clinically risky and/or less acceptable to patients or staff, and they

  11. Copper-phosphorus alloys offer advantages in brazing copper

    SciTech Connect

    Rupert, W.D.

    1996-05-01

    Copper-phosphorus brazing alloys are used extensively for joining copper, especially refrigeration and air-conditioning copper tubing and electrical conductors. What is the effect of phosphorus when alloyed with copper? The following are some of the major effects: (1) It lowers the melt temperature of copper (a temperature depressant). (2) It increases the fluidity of the copper when in the liquid state. (3) It acts as a deoxidant or a fluxing agent with copper. (4) It lowers the ductility of copper (embrittles). There is a misconception that silver improves the ductility of the copper-phosphorus alloys. In reality, silver added to copper acts in a similar manner as phosphorus. The addition of silver to copper lowers the melt temperature (temperature depressant) and decreases the ductility. Fortunately, the rate and amount at which silver lowers copper ductility is significantly less than that of phosphorus. Therefore, taking advantage of the temperature depressant property of silver, a Ag-Cu-P alloy can be selected at approximately the same melt temperature as a Cu-P alloy, but at a lower phosphorus content. The lowering of the phosphorus content actually makes the alloy more ductile, not the silver addition. A major advantage of the copper-phosphorus alloys is the self-fluxing characteristic when joining copper to copper. They may also be used with the addition of a paste flux on brass, bronze, and specialized applications on silver, tungsten and molybdenum. Whether it is selection of the proper BCuP alloy or troubleshooting an existing problem, the suggested approach is a review of the desired phosphorus content in the liquid metal and how it is being altered during application. In torch brazing, a slight change in the oxygen-fuel ratio can affect the joint quality or leak tightness.

  12. Competitive Advantage of PET/MRI

    PubMed Central

    Jadvar, Hossein; Colletti, Patrick M.

    2013-01-01

    Multimodality imaging has made great strides in the imaging evaluation of patients with a variety of diseases. Positron emission tomography/computed tomography (PET/CT) is now established as the imaging modality of choice in many clinical conditions, particularly in oncology. While the initial development of combined PET/magnetic resonance imaging (PET/MRI) was in the preclinical arena, hybrid PET/MR scanners are now available for clinical use. PET/MRI combines the unique features of MRI including excellent soft tissue contrast, diffusion-weighted imaging, dynamic contrast-enhanced imaging, fMRI and other specialized sequences as well as MR spectroscopy with the quantitative physiologic information that is provided by PET. Most evidence for the potential clinical utility of PET/MRI is based on studies performed with side-by-side comparison or software-fused MRI and PET images. Data on the distinctive utility of hybrid PET/MRI are rapidly emerging. There are potential competitive advantages of PET/MRI over PET/CT. In general, PET/MRI may be preferred over PET/CT where the unique features of MRI provide more robust imaging evaluation in certain clinical settings. The exact role and potential utility of simultaneous data acquisition in specific research and clinical settings will need to be defined. It may be that simultaneous PET/MRI will be best suited for clinical situations that are disease-specific, organ-specific, related to diseases of children, or involving patients undergoing repeated imaging for whom cumulative radiation dose must be kept as low as reasonably achievable. PET/MRI also offers interesting opportunities for use of dual modality probes. Upon clear definition of clinical utility, other important and practical issues related to business operational model, clinical workflow and reimbursement will also be resolved. PMID:23791129

  13. Searching for the Advantages of Virus Sex

    NASA Astrophysics Data System (ADS)

    Turner, Paul E.

    2003-02-01

    Sex (genetic exchange) is a nearly universal phenomenon in biological populations. But this is surprising given the costs associated with sex. For example, sex tends to break apart co-adapted genes, and sex causes a female to inefficiently contribute only half the genes to her offspring. Why then did sex evolve? One famous model posits that sex evolved to combat Muller's ratchet, the mutational load that accrues when harmful mutations drift to high frequencies in populations of small size. In contrast, the Fisher-Muller Hypothesis predicts that sex evolved to promote genetic variation that speeds adaptation in novel environments. Sexual mechanisms occur in viruses, which feature high rates of deleterious mutation and frequent exposure to novel or changing environments. Thus, confirmation of one or both hypotheses would shed light on the selective advantages of virus sex. Experimental evolution has been used to test these classic models in the RNA bacteriophage φ6, a virus that experiences sex via reassortment of its chromosomal segments. Empirical data suggest that sex might have originated in φ6 to assist in purging deleterious mutations from the genome. However, results do not support the idea that sex evolved because it provides beneficial variation in novel environments. Rather, experiments show that too much sex can be bad for φ6: promiscuity allows selfish viruses to evolve and spread their inferior genes to subsequent generations. Here I discuss various explanations for the evolution of segmentation in RNA viruses, and the added cost of sex when large numbers of viruses co-infect the same cell.

  14. Fluorescence advantages with microscopic spatiotemporal control

    NASA Astrophysics Data System (ADS)

    Goswami, Debabrata; Roy, Debjit; De, Arijit K.

    2013-03-01

    We present a design concept for using femtosecond laser pulses in microscopy to selectively excite or de-excite one fluorophore over another, spectrally overlapping one. Using either a simple pair of femtosecond pulses with variable delay or a train of laser pulses at 20-50 GHz, we show controlled fluorescence excitation or suppression of one fluorophore with respect to the other through wave-packet interference, an effect that persists beyond the fluorophore coherence timescale. Such an approach can be used under both single-photon and multi-photon excitation conditions, resulting in effectively higher spatial resolution. This spatial resolution advantage with broadband pulsed excitation is of immense benefit to multi-photon microscopy and can also serve as an effective detection scheme for nanoparticles trapped with near-infrared light. Such sub-diffraction-limit trapping of nanoparticles is challenging, and two-photon fluorescence diagnostics allow direct observation of a single nanoparticle in a femtosecond high-repetition-rate laser trap, which promises new directions for spectroscopy at the single-molecule level in solution. The enormous peak power of femtosecond laser pulses at high repetition rates, even at low average powers, provides a large instantaneous gradient force that results in a stable optical trap for spatial control below the diffraction limit. Such studies have also enabled us to explore simultaneous control of internal and external degrees of freedom, which requires coupling of various control parameters to achieve spatiotemporal control, a capability that promises to be a versatile tool for the microscopic world.

  15. Competitive advantage of PET/MRI.

    PubMed

    Jadvar, Hossein; Colletti, Patrick M

    2014-01-01

    Multimodality imaging has made great strides in the imaging evaluation of patients with a variety of diseases. Positron emission tomography/computed tomography (PET/CT) is now established as the imaging modality of choice in many clinical conditions, particularly in oncology. While the initial development of combined PET/magnetic resonance imaging (PET/MRI) was in the preclinical arena, hybrid PET/MR scanners are now available for clinical use. PET/MRI combines the unique features of MRI including excellent soft tissue contrast, diffusion-weighted imaging, dynamic contrast-enhanced imaging, fMRI and other specialized sequences as well as MR spectroscopy with the quantitative physiologic information that is provided by PET. Most evidence for the potential clinical utility of PET/MRI is based on studies performed with side-by-side comparison or software-fused MRI and PET images. Data on the distinctive utility of hybrid PET/MRI are rapidly emerging. There are potential competitive advantages of PET/MRI over PET/CT. In general, PET/MRI may be preferred over PET/CT where the unique features of MRI provide more robust imaging evaluation in certain clinical settings. The exact role and potential utility of simultaneous data acquisition in specific research and clinical settings will need to be defined. It may be that simultaneous PET/MRI will be best suited for clinical situations that are disease-specific, organ-specific, related to diseases of children, or involving patients undergoing repeated imaging for whom cumulative radiation dose must be kept as low as reasonably achievable. PET/MRI also offers interesting opportunities for use of dual modality probes. Upon clear definition of clinical utility, other important and practical issues related to business operational model, clinical workflow and reimbursement will also be resolved. PMID:23791129

  16. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053
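
    The computational advantage claimed in point 2 rests on the fact that filter-bank preprocessing collapses to a single diagonal weighting in the Fourier domain, a consequence of Parseval's theorem and the convolution theorem. Below is a toy numerical check of that identity on random 1-D signals, assuming circular convolution (my own sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
source = rng.standard_normal(n)
template = rng.standard_normal(n)
filters = rng.standard_normal((3, n))            # a toy "bank of filters"

def circ_conv(a, b):
    """Circular convolution via the FFT (both inputs are real)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Spatial domain: sum over filters of the filtered squared differences.
ssd_spatial = sum(np.sum(circ_conv(f, source - template) ** 2) for f in filters)

# Fourier domain: one diagonal weighting S = sum_i |G_i|^2 on the transformed
# difference (the 1/n factor comes from Parseval's theorem).
G = np.fft.fft(filters, axis=1)
S = np.sum(np.abs(G) ** 2, axis=0)
diff = np.fft.fft(source - template)
ssd_fourier = np.sum(S * np.abs(diff) ** 2) / n

assert np.allclose(ssd_spatial, ssd_fourier)
```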

  17. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
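
    As a rough sketch of what a third-order-polynomial nonlinear gain can look like, the snippet below maps aircraft inputs through an odd cubic whose coefficient is chosen so that the largest expected input lands exactly on the motion limit; the limits and coefficients are invented for illustration and are not those used in the NASA study:

```python
import numpy as np

def cubic_gain(x, x_max=10.0, y_max=4.0, k1=0.6):
    """Odd third-order polynomial y = k1*x + k3*x**3 with k3 chosen so that the
    largest expected aircraft input x_max maps exactly onto the simulator limit
    y_max. Choosing k1 <= 1.5 * y_max / x_max keeps the curve monotone, so small
    cues pass through at roughly gain k1 while large ones are compressed."""
    k3 = (y_max - k1 * x_max) / x_max ** 3
    return k1 * x + k3 * x ** 3

print(cubic_gain(np.linspace(-10.0, 10.0, 5)))   # end points land on +/- 4.0
```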

  18. Calculating Home Advantage in the First Decade of the 21st Century UEFA Soccer Leagues

    PubMed Central

    García, Miguel Saavedra; Aguilar, Óscar Gutiérrez; Marques, Paulo Sa; Tobío, Gabriel Torres; Fernández Romero, Juan J.

    2013-01-01

    Home advantage has been studied in different sports, establishing its existence and its possible causes. This article analyzes the home advantage in soccer leagues of UEFA countries in the first part of the 21st century. The sample of 52 countries monitored during a period of 10 years allows us to study 520 leagues and 111,030 matches of the highest level in each country associated with UEFA. Home advantage exists and is significant in 32 of the 52 UEFA countries, where it equals 55.6%. A decrease can be observed in the tendency towards home advantage between the years 2000 and 2010. Values between 55% and 56% were observed for home advantage in the top ten leagues in Europe. It has also been observed that home advantage depends on the level of the league, evaluated using UEFA's 2010/11 Country coefficients. The home advantage is calculated taking into account the teams' position and the points obtained in each of the leagues. A direct relationship was observed with the number of points gained and an inverse relationship was observed with the team position. PMID:24235990
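
    The abstract does not spell out its formula, but the percentages quoted are consistent with the usual convention of expressing home advantage as the share of all points won that were won at home (50% meaning no advantage). A minimal sketch under that assumption, with made-up season totals:

```python
def home_advantage(points_won_at_home: float, points_won_away: float) -> float:
    """Home advantage as the percentage of all points won that were won at home
    (50% means no advantage); a hypothetical league season is shown below."""
    return 100.0 * points_won_at_home / (points_won_at_home + points_won_away)

print(home_advantage(620, 495))   # ~55.6%, the league-wide figure quoted above
```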

  19. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome when estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
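
    For orientation, the sketch below shows a standard EM loop for a two-component exponential survival mixture in which hard labels, where available, simply pin the E-step responsibilities; it is a simplified stand-in (no censoring, no survival trees) and not an implementation of the proposed EM-OCML/PCML/HCML/CPCML variants:

```python
import numpy as np

def em_exponential_mixture(t, labels, n_iter=200):
    """EM for a 2-component exponential mixture of survival times t.
    labels[i] in {0, 1} fixes the class of a labeled sample; -1 means unlabeled.
    Partial labels simply pin the E-step responsibilities of those samples."""
    pi = np.array([0.5, 0.5])
    lam = np.array([1.0, 0.1])                         # initial rate parameters
    for _ in range(n_iter):
        # E-step: responsibilities under the current parameters
        dens = pi * lam * np.exp(-np.outer(t, lam))    # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        resp[labels == 0] = [1.0, 0.0]                 # pinned by the labels
        resp[labels == 1] = [0.0, 1.0]
        # M-step: weighted MLEs of the mixing weights and exponential rates
        pi = resp.mean(axis=0)
        lam = resp.sum(axis=0) / (resp * t[:, None]).sum(axis=0)
    return pi, lam

# Synthetic check: two classes with rates 2.0 and 0.2, 10% of samples labeled.
rng = np.random.default_rng(1)
t = np.concatenate([rng.exponential(1 / 2.0, 500), rng.exponential(1 / 0.2, 500)])
labels = np.full(1000, -1)
labels[:50], labels[500:550] = 0, 1
print(em_exponential_mixture(t, labels))    # rates recovered near (2.0, 0.2)
```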

  20. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
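
    For readers unfamiliar with PSO, a bare-bones global-best PSO loop on a toy objective is sketched below; the inertia and acceleration constants are common textbook values, and nothing here comes from the PSO-Snake implementation itself:

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best Particle Swarm Optimization (minimization)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

print(pso(lambda p: np.sum((p - 1.0) ** 2)))   # converges near [1. 1.]
```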

  1. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. In first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence. PMID:17649913

  2. How Users Take Advantage of Different Forms of Interactivity on Online News Sites: Clicking, E-Mailing, and Commenting

    ERIC Educational Resources Information Center

    Boczkowski, Pablo J.; Mitchelstein, Eugenia

    2012-01-01

    This study examines the uptake of multiple interactive features on news sites. It looks at the thematic composition of the most clicked, most e-mailed, and most commented stories during periods of heightened and routine political activity. Results show that (a) during the former period, the most commented stories were more likely to be focused on…

  3. Taking ad-Vantage of lax advertising regulation in the USA and Canada: reassuring and distracting health-concerned smokers.

    PubMed

    Anderson, Stacey J; Pollay, Richard W; Ling, Pamela M

    2006-10-01

    We explored the evolution from cigarette product attributes to psychosocial needs in advertising campaigns for low-tar cigarettes. Analysis of previously secret tobacco industry documents and print advertising images indicated that low-tar brands targeted smokers who were concerned about their health with advertising images intended to distract them from the health hazards of smoking. Advertising first emphasized product characteristics (filtration, low tar) that implied health benefits. Over time, advertising emphasis shifted to salient psychosocial needs of the target markets. A case study of Vantage cigarettes in the USA and Canada showed that advertising presented images of intelligent, upward-striving people who had achieved personal success and intentionally excluded the act of smoking from the imagery, while minimal product information was provided. This illustrates one strategy to appeal to concerned smokers by not describing the product itself (which may remind smokers of the problems associated with smoking), but instead using evocative imagery to distract smokers from these problems. Current advertising for potential reduced-exposure products (PREPs) emphasizes product characteristics, but these products have not delivered on the promise of a healthier alternative cigarette. Our results suggest that the tobacco control community should be on the alert for a shift in advertising focus for PREPs to the image of the user rather than the cigarette. Global Framework Convention on Tobacco Control-style advertising bans that prohibit all user imagery in tobacco advertising could preempt a psychosocial needs-based advertising strategy for PREPs and maintain public attention on the health hazards of smoking. PMID:16843578

  4. Taking Advantage of the Strengths of 2 Different Dietary Assessment Instruments to Improve Intake Estimates for Nutritional Epidemiology

    PubMed Central

    Carroll, Raymond J.; Midthune, Douglas; Subar, Amy F.; Shumakovich, Marina; Freedman, Laurence S.; Thompson, Frances E.; Kipnis, Victor

    2012-01-01

    With the advent of Internet-based 24-hour recall (24HR) instruments, it is now possible to envision their use in cohort studies investigating the relation between nutrition and disease. Understanding that all dietary assessment instruments are subject to measurement errors and correcting for them under the assumption that the 24HR is unbiased for usual intake, here the authors simultaneously address precision, power, and sample size under the following 3 conditions: 1) 1–12 24HRs; 2) a single calibrated food frequency questionnaire (FFQ); and 3) a combination of 24HR and FFQ data. Using data from the Eating at America’s Table Study (1997–1998), the authors found that 4–6 administrations of the 24HR is optimal for most nutrients and food groups and that combined use of multiple 24HR and FFQ data sometimes provides data superior to use of either method alone, especially for foods that are not regularly consumed. For all food groups but the most rarely consumed, use of 2–4 recalls alone, with or without additional FFQ data, was superior to use of FFQ data alone. Thus, if self-administered automated 24HRs are to be used in cohort studies, 4–6 administrations of the 24HR should be considered along with administration of an FFQ. PMID:22273536

  5. How can we take advantage of halophyte properties to cope with heavy metal toxicity in salt-affected areas?

    PubMed Central

    Lutts, Stanley; Lefèvre, Isabelle

    2015-01-01

    Background Many areas throughout the world are simultaneously contaminated by high concentrations of soluble salts and by high concentrations of heavy metals that constitute a serious threat to human health. The use of plants to extract or stabilize pollutants is an interesting alternative to classical expensive decontamination procedures. However, suitable plant species still need to be identified for reclamation of substrates presenting a high electrical conductivity. Scope Halophytic plant species are able to cope with several abiotic constraints occurring simultaneously in their natural environment. This review considers their putative interest for remediation of polluted soil in relation to their ability to sequester absorbed toxic ions in trichomes or vacuoles, to perform efficient osmotic adjustment and to limit the deleterious impact of oxidative stress. These physiological adaptations are considered in relation to the impact of salt on heavy metal bioavailability in two types of ecosystem: (1) salt marshes and mangroves, and (2) mine tailings in semi-arid areas. Conclusions Numerous halophytes exhibit a high level of heavy metal accumulation and external NaCl may directly influence heavy metal speciation and absorption rate. Maintenance of biomass production and plant water status makes some halophytes promising candidates for further management of heavy-metal-polluted areas in both saline and non-saline environments. PMID:25672360

  6. MICA: A fast short-read aligner that takes full advantage of Many Integrated Core Architecture (MIC)

    PubMed Central

    2015-01-01

    Background Short-read aligners have recently gained a lot of speed by exploiting the massive parallelism of GPU. A rising alternative to GPU is Intel MIC; supercomputers like Tianhe-2, currently at the top of the TOP500, are built with 48,000 MIC boards to offer ~55 PFLOPS. The CPU-like architecture of MIC allows CPU-based software to be parallelized easily; however, the performance is often inferior to GPU counterparts as an MIC card contains only ~60 cores (while a GPU card typically has over a thousand cores). Results To better utilize MIC-enabled computers for NGS data analysis, we developed a new short-read aligner MICA that is optimized in view of MIC's limitations and the extra parallelism inside each MIC core. By utilizing the 512-bit vector units in the MIC and implementing a new seeding strategy, experiments on aligning 150 bp paired-end reads show that MICA using one MIC card is 4.9 times faster than BWA-MEM (using 6 cores of a top-end CPU), and slightly faster than SOAP3-dp (using a GPU). Furthermore, MICA's simplicity allows very efficient scale-up when multiple MIC cards are used in a node (3 cards give a 14.1-fold speedup over BWA-MEM). Summary MICA can be readily used by MIC-enabled supercomputers for production purposes. We have tested MICA on Tianhe-2 with 90 WGS samples (17.47 Tera-bases), which can be aligned in an hour using 400 nodes. MICA has impressive performance even though MIC is only in its initial stage of development. Availability and implementation MICA's source code is freely available at http://sourceforge.net/projects/mica-aligner under GPL v3. Supplementary information Supplementary information is available as "Additional File 1". Datasets are available at www.bio8.cs.hku.hk/dataset/mica. PMID:25952019

  7. Taking Advantage of the "Big Mo"--Momentum in Everyday English and Swedish and in Physics Teaching

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Ahrenberg, Lars

    2015-01-01

    Science education research suggests that our everyday intuitions of motion and interaction of physical objects fit well with how physicists use the term "momentum". Corpus linguistics provides an easily accessible approach to study language in different domains, including everyday language. Analysis of language samples from English text…

  8. Taking Advantage of the "Big Mo"—Momentum in Everyday English and Swedish and in Physics Teaching

    NASA Astrophysics Data System (ADS)

    Haglund, Jesper; Jeppsson, Fredrik; Ahrenberg, Lars

    2015-06-01

    Science education research suggests that our everyday intuitions of motion and interaction of physical objects fit well with how physicists use the term "momentum". Corpus linguistics provides an easily accessible approach to study language in different domains, including everyday language. Analysis of language samples from English text corpora reveals a trend of increasing metaphorical use of "momentum" in non-science domains, and through conceptual metaphor analysis, we show that the use of the word in everyday language, as opposed to, for instance, "force", is largely adequate from a physics point of view. In addition, "momentum" has recently been borrowed into Swedish as a metaphor in domains such as sports, politics and finance, with meanings similar to those in physics. As an implication for educational practice, we find support for the suggestion to introduce the term "momentum" to English-speaking pupils at an earlier age than what is typically done in the educational system today, thereby capitalising on their intuitions and experiences of everyday language. For Swedish-speaking pupils, and possibly also relevant to other languages, the parallel between "momentum" and the corresponding physics term in the students' mother tongue could be made explicit.

  9. Host manipulation by an ichneumonid spider ectoparasitoid that takes advantage of preprogrammed web-building behaviour for its cocoon protection.

    PubMed

    Takasuka, Keizo; Yasui, Tomoki; Ishigami, Toru; Nakata, Kensuke; Matsumoto, Rikio; Ikeda, Kenichi; Maeto, Kaoru

    2015-08-01

    Host manipulation by parasites and parasitoids is a fascinating phenomenon within evolutionary ecology, representing an example of extended phenotypes. To elucidate the mechanism of host manipulation, revealing the origin and function of the invoked actions is essential. Our study focused on the ichneumonid spider ectoparasitoid Reclinervellus nielseni, which turns its host spider (Cyclosa argenteoalba) into a drugged navvy, to modify the web structure into a more persistent cocoon web so that the wasp can pupate safely on this web after the spider's death. We focused on whether the cocoon web originated from the resting web that an unparasitized spider builds before moulting, by comparing web structures, building behaviour and silk spectral/tensile properties. We found that both resting and cocoon webs have reduced numbers of radii decorated by numerous fibrous threads and specific decorating behaviour was identical, suggesting that the cocoon web in this system has roots in the innate resting web and ecdysteroid-related components may be responsible for the manipulation. We also show that these decorations reflect UV light, possibly to prevent damage by flying web-destroyers such as birds or large insects. Furthermore, the tensile test revealed that the spider is induced to repeat certain behavioural steps in addition to resting web construction so that many more threads are laid down for web reinforcement. PMID:26246608

  10. Mentoring the Next Generation of AACRAO Leaders: Taking Advantage of Routines, Exceptions, and Challenges for Developing Leadership Skills

    ERIC Educational Resources Information Center

    Cramer, Sharon F.

    2012-01-01

    As members of enrollment management units look ahead to the next few years, they anticipate many institution-wide challenges: (1) implementation of a new student information system; (2) major upgrade of an existing system; and (3) re-configuring an existing system to reflect changes in academic policies or to accommodate new federal or state…

  11. A Content Analysis of Kindergarten-12th Grade School-Based Nutrition Interventions: Taking Advantage of Past Learning

    ERIC Educational Resources Information Center

    Roseman, Mary G.; Riddell, Martha C.; Haynes, Jessica N.

    2011-01-01

    Objective: To review the literature, identifying proposed recommendations for school-based nutrition interventions, and evaluate kindergarten through 12th grade school-based nutrition interventions conducted from 2000-2008. Design: Proposed recommendations from school-based intervention reviews were developed and used in conducting a content…

  12. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per
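
    The data transformation itself is straightforward; the snippet below converts complex EM responses to log10 amplitude and phase (the data form recommended above), with the usual first-order error propagation from a relative amplitude error (the error-propagation choice is my assumption, not taken from the paper):

```python
import numpy as np

def to_log_amp_phase(z, rel_err):
    """Convert complex EM responses z into log10 amplitude and phase (degrees).
    rel_err is the relative standard error of |z|; the propagated errors are the
    usual first-order approximations (an assumption, not taken from the paper)."""
    log_amp = np.log10(np.abs(z))
    phase = np.degrees(np.angle(z))
    log_amp_err = rel_err / np.log(10.0)     # d(log10|z|) = d|z| / (|z| ln 10)
    phase_err = np.degrees(rel_err)          # d(phase) ~ d|z| / |z|, in radians
    return log_amp, phase, log_amp_err, phase_err

z = np.array([3e-9 + 4e-9j, 1e-12 - 2e-12j])   # hypothetical field values
print(to_log_amp_phase(z, rel_err=0.05))
```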

  13. Advantages of using flat-panel LCD for projection displays

    NASA Astrophysics Data System (ADS)

    Wu, Dean C.

    1995-04-01

    The advantages of applying flat-panel Liquid Crystal Displays (LCDs) in projection displays will be extensively discussed. The selection and fabrication of flat-panel LCDs to meet the specific requirements of projection displays through various technologies will be suggested and explored in detail. The compact, flexible size and easy portability of flat-panel LCDs are well known. For practical reasons, it is desirable to take advantage of some of these useful properties in projection displays. The recent popularity of large display formats, high information content and practicality all increase the demand for projection enlargement with high-level performance and comfortable viewing. As a result, projection displays are becoming the chosen technological option for effective presentation of visual information. In general, the Liquid Crystal Light Valves (LCLVs) used in projection displays are simply transmissive flat-panel liquid crystal displays. For example, at the low end, monochromatic LCD projection panels are simply transmissive LCDs to be used in combination with laptops or PCs and light sources such as overhead projectors. These projection panels are becoming popular for their portability, readability and low cost. However, due to the passive nature of the LCDs used in these projection panels, the response time, contrast ratio and color gamut are relatively limited. Whether the newly developed Active Addressing technology will be able to improve the response time, contrast ratio and color gamut of these passive-matrix LCDs remains to be proven. In the middle range of projection displays, Liquid Crystal Light Valves using color Active Matrix LCDs are rapidly replacing the once-dominant CRT-based projectors. LCLVs have a number of advantages, including portability, easy set-up and data readability. There are several new developments using single crystal, polysilicon as active matrix for LCDs with improved performance. Since single crystal active matrix

  14. The competitive advantage of corporate philanthropy.

    PubMed

    Porter, Michael E; Kramer, Mark R

    2002-12-01

    When it comes to philanthropy, executives increasingly see themselves as caught between critics demanding ever higher levels of "corporate social responsibility" and investors applying pressure to maximize short-term profits. In response, many companies have sought to make their giving more strategic, but what passes for strategic philanthropy is almost never truly strategic, and often isn't particularly effective as philanthropy. Increasingly, philanthropy is used as a form of public relations or advertising, promoting a company's image through high-profile sponsorships. But there is a more truly strategic way to think about philanthropy. Corporations can use their charitable efforts to improve their competitive context--the quality of the business environment in the locations where they operate. Using philanthropy to enhance competitive context aligns social and economic goals and improves a company's long-term business prospects. Addressing context enables a company to not only give money but also leverage its capabilities and relationships in support of charitable causes. This produces social benefits far exceeding those provided by individual donors, foundations, or even governments. Taking this new direction requires fundamental changes in the way companies approach their contribution programs. For example, philanthropic investments can improve education and local quality of life in ways that will benefit the company. Such investments can also improve the company's competitiveness by contributing to expanding the local market and helping to reduce corruption in the local business environment. Adopting a context-focused approach goes against the grain of current philanthropic practice, and it requires a far more disciplined approach than is prevalent today. But it can make a company's philanthropic activities far more effective. PMID:12510538

  15. Building and Accessing Clausal Representations: The Advantage of First Mention versus the Advantage of Clause Recency

    PubMed Central

    Gernsbacher, Morton Ann; Hargreaves, David J.; Beeman, Mark

    2014-01-01

    We investigated two seemingly contradictory phenomena: the Advantage of the First-Mentioned Participant (participants mentioned first in a sentence are more accessible than participants mentioned second) and the Advantage of the Most Recent Clause (concepts mentioned in the most recent clause are more accessible than concepts mentioned in an earlier clause). We resolved this contradiction by measuring how quickly comprehenders accessed participants mentioned in the first versus second clauses of two-clause sentences. Our data supported the following hypotheses: Comprehenders represent each clause of a two-clause sentence in its own mental substructure. Comprehenders have greatest access to information in the substructure that they are currently developing; that is, they have greatest access to the most recent clause. However, at some point, the first clause becomes more accessible because the substructure representing the first clause of a two-clause sentence serves as a foundation for the whole sentence-level representation. PMID:25505819

  16. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  17. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

    An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared; unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution
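
    A minimal sketch of the fully constrained mixing step described above (fractions bounded to [0, 1] and summing to one), using SciPy's SLSQP on synthetic endmembers; this is a generic formulation of constrained unmixing, not the paper's optimization code:

```python
import numpy as np
from scipy.optimize import minimize

def unmix_fully_constrained(pixel, endmembers):
    """Least-squares endmember fractions for one low-resolution pixel, under the
    fully constrained model: 0 <= f_i <= 1 and sum(f) == 1."""
    n = endmembers.shape[1]
    res = minimize(
        lambda f: np.sum((endmembers @ f - pixel) ** 2),
        x0=np.full(n, 1.0 / n),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda f: f.sum() - 1.0}],
    )
    return res.x

rng = np.random.default_rng(0)
E = rng.random((6, 3))                            # 6 bands, 3 endmember spectra
true_f = np.array([0.6, 0.3, 0.1])
print(unmix_fully_constrained(E @ true_f, E))     # recovers ~[0.6, 0.3, 0.1]
```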

  18. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories. The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories (EAM) between a codebook generated by the LBG algorithm and a training set. This associative network is named the EAM-codebook and represents a new codebook which is used in the next stage. The EAM-codebook establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and a low demand on resources (system memory); image compression and quality results are presented.
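
    For reference, the conventional LBG baseline from which the EAM-codebook is built can be sketched as k-means-style codebook training followed by nearest-codeword assignment; this illustrates the baseline only, not the associative-memory recall stage:

```python
import numpy as np

def lbg_codebook(vectors, n_codewords=8, n_iter=20):
    """Plain LBG (k-means style) codebook training on flattened image blocks."""
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), n_codewords, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        for k in range(n_codewords):
            if np.any(idx == k):
                codebook[k] = vectors[idx == k].mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Vector quantization: index of the nearest codeword for each input."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

blocks = np.random.default_rng(1).random((500, 16))   # 4x4 blocks, flattened
cb = lbg_codebook(blocks)
print(quantize(blocks[:5], cb))
```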

  19. Taking medicines to treat tuberculosis

    MedlinePlus

    ... drugs. This is called directly observed therapy. Side Effects and Other Problems Women who may be pregnant, who are pregnant, or who are breastfeeding should talk to their provider before taking these ...

  20. LRO Takes the Moon's Temperature

    NASA Video Gallery

    During the June 2011 lunar eclipse, scientists will be able to get a unique view of the moon. While the sun is blocked by the Earth, LRO's Diviner instrument will take the temperature on the lunar ...

  1. LRO Takes the Moon's Temperature

    NASA Video Gallery

    During the December 2011 lunar eclipse, LRO's Diviner instrument will take the temperature on the lunar surface. Since different rock sizes cool at different rates, scientists will be able to infer...

  2. Brazilian physicists take centre stage

    NASA Astrophysics Data System (ADS)

    Curtis, Susan

    2014-06-01

    With the FIFA World Cup taking place in Brazil this month, Susan Curtis travels to South America's richest nation to find out how its physicists are exploiting recent big increases in science funding.

  3. Taking America To New Heights

    NASA Video Gallery

    NASA's Commercial Crew Program (CCP) is taking America to new heights with its Commercial Crew Development Round 2 (CCDev2) partners. In 2011, NASA entered into funded Space Act Agreements (SAAs) w...

  4. [Communication server in the hospital--advantages, expenses and limitations].

    PubMed

    Jendrysiak, U

    1997-01-01

    The common situation in a hospital with multiple departments is a heterogeneous set of subsystems, one or more for each department. Today, we have a rising number of requests for information interchange between these independent systems. The exchange of patient data has a technical and a conceptual part. Establishing a connection between more than two subsystems requires links from one system to all the others, each of them with its own code translation, interface and message transfer. A communication server is an important tool for significantly reducing the amount of work needed for the technical realisation. It reduces the number of interfaces, facilitates the definition, maintenance and documentation of the message structure and translation tables, and helps to keep control of the message pipelines. Existing interfaces can be adapted for similar purposes. However, a communication server needs a lot of configuration, and it is necessary to know about low-level internetworking on different hardware and software to take advantage of its features. The code for writing files on a remote system and for process communication via TCP/IP sockets or similar techniques has to be written specifically for each communication task. Initial experience in setting up a communication server to connect different departments has been gained at the university school of medicine in Mainz. We also provide a checklist for the selection of such a product. PMID:9381841

  5. Does Medicare Advantage Cost Less Than Traditional Medicare?

    PubMed

    Biles, Brian; Casillas, Giselle; Guterman, Stuart

    2016-01-01

    The costs of providing benefits to enrollees in private Medicare Advantage (MA) plans are slightly less, on average, than what traditional Medicare spends per beneficiary in the same county. However, MA plans that are able to keep their costs comparatively low are concentrated in a fairly small number of U.S. counties. In the 25 counties where the cost differences between MA plans and traditional Medicare are largest, MA plans spent a total of $5.2 billion less than what traditional Medicare would have been expected to spend on the same beneficiaries, with health maintenance organizations (HMOs) accounting for all of that difference. In the rest of the country, MA plans spent $4.8 billion above the expected costs under traditional Medicare. Broad determinations about the relative efficiency of MA plans and traditional Medicare can therefore be misleading, as they fail to take into account local conditions and individual plans' performance. PMID:26934756

  6. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: Can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, the binocular summation and singleness of vision are similar as image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which is comprised of two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from left camera and right camera). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logical regression). The system performance is measured by probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC, meanwhile reduce the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image

  7. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
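
    As a point of reference (not the paper's implementation), a symmetric positive definite Toeplitz system can be solved with SciPy's Levinson-type fast solver and checked against a general dense solver; the first column and right-hand side below are made-up illustrative data.

      import numpy as np
      from scipy.linalg import toeplitz, solve_toeplitz, solve

      # Illustrative SPD Toeplitz system: first column c defines T, b is the RHS.
      n = 200
      c = 0.5 ** np.arange(n)        # decaying first column keeps T well conditioned
      b = np.random.default_rng(0).standard_normal(n)

      x_lev = solve_toeplitz(c, b)   # O(n^2) Levinson-type solver exploiting Toeplitz structure
      x_gen = solve(toeplitz(c), b)  # O(n^3) general dense solver for comparison

      print("max difference:", np.max(np.abs(x_lev - x_gen)))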

  8. The exposure advantage: Early exposure to a multilingual environment promotes effective communication

    PubMed Central

    Fan, Samantha P.; Liberman, Zoe; Keysar, Boaz; Kinzler, Katherine D.

    2016-01-01

    Early language exposure is essential to developing a formal language system, but may not be sufficient for communicating effectively. To understand a speaker’s intention, one must take the speaker’s perspective. Multilingual exposure may promote effective communication by enhancing perspective taking. We tested children on a task that required perspective taking to interpret a speaker’s intended meaning. Monolingual children failed to interpret the speaker’s meaning dramatically more often than bilingual children and children who were exposed to a multilingual environment but were not bilinguals themselves. Children who were merely exposed to a second language performed as well as bilingual children, despite having lower executive function scores. Thus, communicative advantages may be social in origin, and not due to enhanced executive control. For millennia, multilingual exposure has been the norm. Our study shows that such an environment may facilitate the development of perspective-taking tools that are critical for effective communication. PMID:25956911

  9. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
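
    For readers unfamiliar with the algorithm being visualized, a compact textbook implementation of Dijkstra's shortest-path algorithm is sketched below; the small example graph is made up and has nothing to do with the tool itself.

      import heapq

      def dijkstra(graph, source):
          """Shortest-path distances from source in a graph given as
          {node: [(neighbor, edge_weight), ...]} with non-negative weights."""
          dist = {source: 0}
          heap = [(0, source)]                      # (distance, node) priority queue
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):     # stale queue entry, skip
                  continue
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist

      graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)], "D": []}
      print(dijkstra(graph, "A"))                   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}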

  10. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retains the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.

  11. Accelerated ray tracing algorithm under urban macro cell

    NASA Astrophysics Data System (ADS)

    Liu, Z.-Y.; Guo, L.-X.; Guan, X.-W.

    2015-10-01

    In this study, a ray-tracing propagation prediction model, which is based on creating a virtual source tree, is used because of its high efficiency and reliable prediction accuracy. In addition, several acceleration techniques are also adopted to improve the efficiency of ray-tracing-based prediction over large areas. However, in the process of employing the ray tracing method for coverage zone prediction, runtime is linearly proportional to the total number of prediction points, leading to large and sometimes prohibitive computation time requirements under complex geographical urban macrocell environments. In order to overcome this bottleneck, the compute unified device architecture (CUDA), which provides fine-grained data parallelism and thread parallelism, is implemented to accelerate the calculation. Taking full advantage of the tens of thousands of threads in a CUDA program, the decomposition of the coverage prediction problem is first conducted by partitioning the image tree and the visible prediction points to different sources. Then, every thread calculates the electromagnetic field of one propagation path, and the results are collected. Comparing this parallel algorithm with the traditional sequential algorithm shows that computational efficiency is improved.
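
    The CUDA implementation is not reproduced here, but the decomposition idea (one independent field evaluation per prediction point) can be sketched on the CPU with Python's multiprocessing; field_at_point and the prediction grid below are hypothetical placeholders.

      from multiprocessing import Pool
      import numpy as np

      def field_at_point(p):
          # Placeholder for the per-point ray-tracing field summation.
          x, y = p
          return np.hypot(x, y)

      if __name__ == "__main__":
          # Hypothetical grid of prediction points covering the macrocell area.
          xs, ys = np.meshgrid(np.linspace(0, 500, 100), np.linspace(0, 500, 100))
          points = list(zip(xs.ravel(), ys.ravel()))

          with Pool() as pool:                      # one task per prediction point
              field = pool.map(field_at_point, points, chunksize=256)
          print(len(field), "points evaluated")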

  12. The Chorus Conflict and Loss of Separation Resolution Algorithms

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.

    2013-01-01

    The Chorus software is designed to investigate near-term, tactical conflict and loss of separation detection and resolution concepts for air traffic management. This software is currently being used in two different problem domains: en-route self-separation and sense and avoid for unmanned aircraft systems. This paper describes the core resolution algorithms that are part of Chorus. The combination of several features of the Chorus program distinguishes this software from other approaches to conflict and loss of separation resolution. First, the program stores a history of state information over time, which enables it to handle communication dropouts and take advantage of previous input data. Second, the underlying conflict algorithms find resolutions that solve the most urgent conflict, but also seek to prevent secondary conflicts with the other aircraft. Third, if the program is run on multiple aircraft and two aircraft maneuver at the same time, the result will be implicitly coordinated. This implicit coordination property is established by ensuring that a resolution produced by Chorus will comply with a mathematically defined criterion whose correctness has been formally verified. Fourth, the program produces both instantaneous solutions and kinematic solutions, which are based on simple acceleration models. Finally, the program provides resolutions for recovery from loss of separation. Different versions of this software are implemented as Java and C++ software programs, respectively.

  13. New Algorithms for Large-scale 3D Radiation Transport

    NASA Astrophysics Data System (ADS)

    Lentz, Eric J.

    2009-05-01

    Radiation transport is critical not only for analysis of astrophysical objects but also for the dynamical transport of energy within. Increased fidelity and dimensionality of the other components of such models requires a similar improvement in the radiation transport. Modern astrophysical simulations can be large enough that the values for a single variable for the entire computational domain cannot be stored on a single compute node. The natural solution is to decompose the physical domain into pieces with each node responsible for a single sub-domain. Using localized plus "ghost" zone data works well for problems like explicit hydrodynamics or nuclear reaction networks with modest impact from inter-process communication. Unfortunately, radiation transport is an inherently non-local process that couples the entire model domain together and efficient algorithms are needed to conquer this problem. In this poster, I present the early development of a new parallel, 3-D transport code using ray tracing to formally solve the transport equation across numerically decomposed domains. The algorithm model takes advantage of one-sided communication to develop a scalable, parallel formal solver. Other aspects and future direction of the parallel code development such as scalability and the inclusion of scattering will also be discussed.

  14. Feature Subset Selection, Class Separability, and Genetic Algorithms

    SciTech Connect

    Cantu-Paz, E

    2004-01-21

    The performance of classification algorithms in machine learning is affected by the features used to describe the labeled examples presented to the inducers. Therefore, the problem of feature subset selection has received considerable attention. Genetic approaches to this problem usually follow the wrapper approach: treat the inducer as a black box that is used to evaluate candidate feature subsets. The evaluations might take a considerable time and the traditional approach might be impractical for large data sets. This paper describes a hybrid of a simple genetic algorithm and a method based on class separability applied to the selection of feature subsets for classification problems. The proposed hybrid was compared against each of its components and two other feature selection wrappers that are used widely. The objective of this paper is to determine if the proposed hybrid presents advantages over the other methods in terms of accuracy or speed in this problem. The experiments used a Naive Bayes classifier and public-domain and artificial data sets. The experiments suggest that the hybrid usually finds compact feature subsets that give the most accurate results, while beating the execution time of the other wrappers.
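
    A toy wrapper-style genetic algorithm for feature subset selection, in the spirit of (but far simpler than) the hybrid described above, might look as follows; it assumes scikit-learn is available, and the synthetic data, population size and rates are arbitrary choices.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

      def fitness(mask):
          if not mask.any():
              return 0.0
          return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

      pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)   # random bitstring population
      for _ in range(15):
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[-10:]]                    # keep the fittest half
          children = []
          for _ in range(len(pop) - len(parents)):
              a, b = parents[rng.integers(len(parents), size=2)]
              cut = rng.integers(1, X.shape[1])                      # one-point crossover
              child = np.concatenate([a[:cut], b[cut:]])
              flip = rng.random(X.shape[1]) < 0.05                   # bit-flip mutation
              children.append(child ^ flip)
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))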

  15. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis. There is a need for (i) event detection algorithms that can scale with the size of the data; (ii) algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; and (iii) algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
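
    A minimal sketch of the stated idea (not the GAEDA code): reduce each sliding window of a multi-dimensional sequence to its leading singular value, then flag anomalies in the resulting univariate series with a simple z-score rule; the synthetic stream and the threshold are made up.

      import numpy as np

      rng = np.random.default_rng(1)
      data = rng.standard_normal((1000, 8))          # synthetic multi-dimensional stream
      data[600:620] *= 6.0                           # injected anomaly

      win = 50
      series = []
      for start in range(0, data.shape[0] - win, win // 2):     # overlapping windows
          window = data[start:start + win]
          s = np.linalg.svd(window, compute_uv=False)           # singular values only
          series.append(s[0])                                   # leading singular value per window
      series = np.array(series)

      z = (series - series.mean()) / series.std()
      print("anomalous windows:", np.flatnonzero(np.abs(z) > 3))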

  16. Versatility of the CFR algorithm for limited angle reconstruction

    SciTech Connect

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-04-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  17. Toward a practical ultrasound waveform tomography algorithm for improving breast imaging

    NASA Astrophysics Data System (ADS)

    Li, Cuiping; Sandhu, Gursharan S.; Roy, Olivier; Duric, Neb; Allada, Veerendra; Schmidt, Steven

    2014-03-01

    Ultrasound tomography is an emerging modality for breast imaging. However, most current ultrasonic tomography imaging algorithms, historically hindered by the limited memory and processor speed of computers, are based on ray theory and assume a homogeneous background which is inaccurate for complex heterogeneous regions. Therefore, wave theory, which accounts for diffraction effects, must be used in ultrasonic imaging algorithms to properly handle the heterogeneous nature of breast tissue in order to accurately image small lesions. However, application of waveform tomography to medical imaging has been limited by extreme computational cost and convergence. By taking advantage of the computational architecture of Graphic Processing Units (GPUs), the intensive processing burden of waveform tomography can be greatly alleviated. In this study, using breast imaging methods, we implement a frequency domain waveform tomography algorithm on GPUs with the goal of producing high-accuracy and high-resolution breast images on clinically relevant time scales. We present some simulation results and assess the resolution and accuracy of our waveform tomography algorithms based on the simulation data.

  18. Through-Wall Multiple Targets Vital Signs Tracking Based on VMD Algorithm.

    PubMed

    Yan, Jiaming; Hong, Hong; Zhao, Heng; Li, Yusheng; Gu, Chen; Zhu, Xiaohua

    2016-01-01

    Targets located at the same distance are easily neglected in most through-wall multiple-target detection applications which use the single-input single-output (SISO) ultra-wideband (UWB) radar system. In this paper, a novel multiple-target vital signs tracking algorithm for through-wall detection using SISO UWB radar has been proposed. Taking advantage of the high-resolution decomposition of the Variational Mode Decomposition (VMD) based algorithm, the respiration signals of different targets can be decomposed into different sub-signals, and we can then track the time-varying respiration signals accurately when human targets are located at the same distance. Intensive evaluation has been conducted to show the effectiveness of our scheme with a 0.15 m thick concrete brick wall. Constant, piecewise-constant and time-varying vital signs could be separated and tracked successfully with the proposed VMD based algorithm for two targets, even up to three targets. For multiple-target vital signs tracking tasks such as urban search and rescue missions, our algorithm has superior capability in most detection applications. PMID:27537880

  19. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™

    PubMed Central

    Gomes, Jeremias M.; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H.

    2016-01-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP’s irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations. PMID:27298591
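
    Morphological reconstruction, one of the use cases above, is available off the shelf in scikit-image; the short sketch below shows it on synthetic data and is not the vectorized Xeon Phi implementation.

      import numpy as np
      from skimage.morphology import reconstruction

      # Synthetic image with two bright blobs.
      y, x = np.mgrid[0:128, 0:128]
      image = np.exp(-((x - 40) ** 2 + (y - 40) ** 2) / 200.0) \
            + np.exp(-((x - 90) ** 2 + (y - 90) ** 2) / 200.0)

      # Reconstruction by dilation of (image - h) under image clips every peak by h;
      # subtracting the result leaves the "h-domes" (regional maxima of height h).
      h = 0.3
      background = reconstruction(image - h, image, method='dilation')
      domes = image - background
      print("tallest dome:", round(float(domes.max()), 3))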

  20. An ant colony optimization based algorithm for identifying gene regulatory elements.

    PubMed

    Liu, Wei; Chen, Hanwu; Chen, Ling

    2013-08-01

    Identifying the regulatory elements in gene sequences is one of the most important tasks in bioinformatics. Most of the existing algorithms for identifying regulatory elements are inclined to converge to a local optimum and have high time complexity. Ant Colony Optimization (ACO) is a meta-heuristic method based on swarm intelligence and is derived from a model inspired by the collective foraging behavior of real ants. Taking advantage of ACO traits such as self-organization and robustness, this paper designs and implements an ACO based algorithm named ACRI (ant-colony-regulatory-identification) for identifying all possible transcription factor binding sites in the upstream regions of co-expressed genes. To accelerate the ants' searching process, a strategy of local optimization is presented to adjust the ants' start positions on the searched sequences. By exploiting the powerful optimization ability of ACO, the ACRI algorithm can not only improve the precision of the results, but also achieve a very high speed. Experimental results on real-world datasets show that ACRI can outperform other traditional algorithms in terms of speed and quality of solutions. PMID:23746735

  1. An events based algorithm for distributing concurrent tasks on multi-core architectures

    NASA Astrophysics Data System (ADS)

    Holmes, David W.; Williams, John R.; Tilke, Peter

    2010-02-01

    In this paper, a programming model is presented which enables scalable parallel performance on multi-core shared memory architectures. The model has been developed for application to a wide range of numerical simulation problems. Such problems involve time stepping or iteration algorithms where synchronization of multiple threads of execution is required. It is shown that traditional approaches to parallelism including message passing and scatter-gather can be improved upon in terms of speed-up and memory management. Using spatial decomposition to create orthogonal computational tasks, a new task management algorithm called H-Dispatch is developed. This algorithm makes efficient use of memory resources by limiting the need for garbage collection and takes optimal advantage of multiple cores by employing a "hungry" pull strategy. The technique is demonstrated on a simple finite difference solver and results are compared to traditional MPI and scatter-gather approaches. The H-Dispatch approach achieves near linear speed-up with results for efficiency of 85% on a 24-core machine. It is noted that the H-Dispatch algorithm is quite general and can be applied to a wide class of computational tasks on heterogeneous architectures involving multi-core and GPGPU hardware.
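
    The H-Dispatch code itself is not shown here; a minimal Python illustration of the underlying "hungry" pull idea, with idle workers pulling the next spatial task from a shared queue rather than receiving a fixed scatter, might look like this, using a placeholder computation per sub-domain.

      import queue
      import threading

      tasks = queue.Queue()
      for block_id in range(32):            # spatially decomposed sub-domains
          tasks.put(block_id)

      results = []
      lock = threading.Lock()

      def worker():
          while True:
              try:
                  block = tasks.get_nowait()   # "hungry" pull: grab work as soon as idle
              except queue.Empty:
                  return
              value = sum(i * i for i in range(10000 + block))   # placeholder computation
              with lock:
                  results.append((block, value))

      threads = [threading.Thread(target=worker) for _ in range(4)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(len(results), "blocks processed")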

  2. jClustering, an open framework for the development of 4D clustering algorithms.

    PubMed

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913

  3. Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM

    SciTech Connect

    Lin, Jian; Hamidouche, Khaled; Zheng, Jie; Lu, Xiaoyi; Vishnu, Abhinav; Panda, Dhabaleswar

    2015-08-05

    Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm through taking advantage of scalable programming models. To improve the performance of k-NN on large-scale environment with InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systemic evaluation and analysis on typical workloads. The hybrid designs leverage the one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for small workload with balanced communication and computation. Experiments of running with varied number of cores show that our design can maintain good scalability.

  4. Staged optimization algorithms based MAC dynamic bandwidth allocation for OFDMA-PON

    NASA Astrophysics Data System (ADS)

    Liu, Yafan; Qian, Chen; Cao, Bingyao; Dun, Han; Shi, Yan; Zou, Junni; Lin, Rujian; Wang, Min

    2016-06-01

    Orthogonal frequency division multiple access passive optical network (OFDMA-PON) is being considered as a promising solution for next generation PONs due to its high spectral efficiency and flexible bandwidth allocation scheme. In order to take full advantage of these merits of OFDMA-PON, a high-efficiency medium access control (MAC) dynamic bandwidth allocation (DBA) scheme is needed. In this paper, we propose two DBA algorithms which act on two different stages of a resource allocation process. To achieve higher bandwidth utilization and ensure fairness among ONUs, we propose a DBA algorithm based on frame structure for the stage of physical layer mapping. Targeting the global quality of service (QoS) of OFDMA-PON, we propose a full-range DBA algorithm with service level agreement (SLA) and class of service (CoS) for the stage of bandwidth allocation arbitration. The performance of the proposed MAC DBA scheme containing these two algorithms is evaluated using numerical simulations. Simulations of a 15 Gbps network with 1024 sub-carriers and 32 ONUs demonstrate a maximum network throughput of 14.87 Gbps and a maximum packet delay of 1.45 ms for the highest priority CoS under high-load conditions.

  5. Back to Basics: A Bilingual Advantage in Infant Visual Habituation

    ERIC Educational Resources Information Center

    Singh, Leher; Fu, Charlene S. L.; Rahman, Aishah A.; Hameed, Waseem B.; Sanmugam, Shamini; Agarwal, Pratibha; Jiang, Binyan; Chong, Yap Seng; Meaney, Michael J.; Rifkin-Graboi, Anne

    2015-01-01

    Comparisons of cognitive processing in monolinguals and bilinguals have revealed a bilingual advantage in inhibitory control. Recent studies have demonstrated advantages associated with exposure to two languages in infancy. However, the domain specificity and scope of the bilingual advantage in infancy remain unclear. In the present study,…

  6. Did Babe Ruth Have a Comparative Advantage as a Pitcher?

    ERIC Educational Resources Information Center

    Scahill, Edward M.

    1990-01-01

    Advocates using baseball statistics to illustrate the advantages of specialization in production. Using Babe Ruth's record as an analogy, suggests a methodology for determining a player's comparative advantage as a teaching illustration. Includes the team's statistical profile in five tables to explain comparative advantage and profit maximizing.…

  7. The new Medicare Advantage: a disadvantage for providers?

    PubMed

    O'Hare, Patrick K

    2004-03-01

    The Medicare Advantage program, a provision of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003, encourages providers to think about dealing with Medicare Advantage plans the way they now deal with commercial payers. Consequently, the Medicare Advantage program could either benefit or harm providers. PMID:15029797

  8. Double regions growing algorithm for automated satellite image mosaicking

    NASA Astrophysics Data System (ADS)

    Tan, Yihua; Chen, Chen; Tian, Jinwen

    2011-12-01

    Feathering is the most widely used method for seamless satellite image mosaicking. A simple but effective algorithm, the double regions growing (DRG) algorithm, which utilizes the shape content of images' valid regions, is proposed for generating a robust feathering line before feathering. It works without any human intervention, and experiments on real satellite images show the advantages of the proposed method.

  9. NASA Team 2 Sea Ice Concentration Algorithm Retrieval Uncertainty

    NASA Technical Reports Server (NTRS)

    Brucker, Ludovic; Cavalieri, Donald J.; Markus, Thorsten; Ivanoff, Alvaro

    2014-01-01

    Satellite microwave radiometers are widely used to estimate sea ice cover properties (concentration, extent, and area) through the use of sea ice concentration (IC) algorithms. Rare are the algorithms providing associated IC uncertainty estimates. Algorithm uncertainty estimates are needed to assess accurately global and regional trends in IC (and thus extent and area), and to improve sea ice predictions on seasonal to interannual timescales using data assimilation approaches. This paper presents a method to provide relative IC uncertainty estimates using the enhanced NASA Team (NT2) IC algorithm. The proposed approach takes advantage of the NT2 calculations and solely relies on the brightness temperatures (TBs) used as input. NT2 IC and its associated relative uncertainty are obtained for both the Northern and Southern Hemispheres using the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) TB. NT2 IC relative uncertainties estimated on a footprint-by-footprint swath-by-swath basis were averaged daily over each 12.5-km grid cell of the polar stereographic grid. For both hemispheres and throughout the year, the NT2 relative uncertainty is less than 5%. In the Southern Hemisphere, it is low in the interior ice pack, and it increases in the marginal ice zone up to 5%. In the Northern Hemisphere, areas with high uncertainties are also found in the high IC area of the Central Arctic. Retrieval uncertainties are greater in areas corresponding to NT2 ice types associated with deep snow and new ice. Seasonal variations in uncertainty show larger values in summer as a result of melt conditions and greater atmospheric contributions. Our analysis also includes an evaluation of the NT2 algorithm sensitivity to AMSR-E sensor noise. There is a 60% probability that the IC does not change (to within the computed retrieval precision of 1%) due to sensor noise, and the cumulated probability shows that there is a 90% chance that the IC varies by less than

  10. The Advantages of Fixed Facilities in Characterizing TRU Wastes

    SciTech Connect

    FRENCH, M.S.

    2000-02-08

    In May 1998 the Hanford Site started developing a program for characterization of transuranic (TRU) waste for shipment to the Waste Isolation Pilot Plant (WIPP) in New Mexico. After less than two years, Hanford will have a program certified by the Carlsbad Area Office (CAO). By picking a simple waste stream, taking advantage of lessons learned at the other sites, as well as communicating effectively with the CAO, Hanford was able to achieve certification in record time. This effort was further simplified by having a centralized program centered on the Waste Receiving and Processing (WRAP) Facility that contains most of the equipment required to characterize TRU waste. The use of fixed facilities for the characterization of TRU waste at sites with a long-term clean-up mission can be cost effective for several reasons. These include the ability to control the environment in which sensitive instrumentation is required to operate and ensuring that calibrations and maintenance activities are scheduled and performed as an operating routine. Other factors contributing to cost effectiveness include providing approved procedures and facilities for handling hazardous materials and anticipated contingencies and performing essential evolutions, and regulating and smoothing the work load and environmental conditions to provide maximal efficiency and productivity. Another advantage is the ability to efficiently provide characterization services to other sites in the Department of Energy (DOE) Complex that do not have the same capabilities. The Waste Receiving and Processing (WRAP) Facility is a state-of-the-art facility designed to consolidate the operations necessary to inspect, process and ship waste to facilitate verification of contents for certification to established waste acceptance criteria. The WRAP facility inspects, characterizes, treats, and certifies transuranic (TRU), low-level and mixed waste at the Hanford Site in Washington state. Fluor Hanford operates the $89

  11. Fast three-step phase-shifting algorithm

    SciTech Connect

    Huang, Peisen S.; Zhang, Song

    2006-07-20

    We propose a new three-step phase-shifting algorithm, which is much faster than the traditional three-step algorithm. We achieve the speed advantage by using a simple intensity ratio function to replace the arctangent function in the traditional algorithm. The phase error caused by this new algorithm is compensated for by use of a lookup table. Our experimental results show that both the new algorithm and the traditional algorithm generate similar results, but the new algorithm is 3.4 times faster. By implementing this new algorithm in a high-resolution, real-time three-dimensional shape measurement system, we were able to achieve a measurement speed of 40 frames per second at a resolution of 532×500 pixels, all with an ordinary personal computer.
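
    For context, the traditional three-step algorithm (phase shifts of -120°, 0°, +120°) recovers the wrapped phase with an arctangent, as sketched below on synthetic fringes; the paper's intensity-ratio replacement and lookup-table correction are not reproduced here.

      import numpy as np

      def three_step_phase(i1, i2, i3):
          """Wrapped phase from three fringe images shifted by -2*pi/3, 0, +2*pi/3."""
          return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

      # Synthetic fringes: background a, modulation b, true phase phi.
      x = np.linspace(0, 4 * np.pi, 500)
      a, b, phi = 128.0, 100.0, x
      i1 = a + b * np.cos(phi - 2 * np.pi / 3)
      i2 = a + b * np.cos(phi)
      i3 = a + b * np.cos(phi + 2 * np.pi / 3)

      wrapped = three_step_phase(i1, i2, i3)
      print("max error:", np.max(np.abs(np.angle(np.exp(1j * (wrapped - phi))))))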

  12. Take Steps to Prevent Type 2 Diabetes

    MedlinePlus

    ... I at Risk? 4 of 9 sections Take Action! Take Action: Talk to Your Doctor Take these steps to ... Previous section Signs 5 of 9 sections Take Action: Cost and Insurance What about cost? Thanks to ...

  13. Fever and Taking Your Child's Temperature

    MedlinePlus

    ... About Zika & Pregnancy Fever and Taking Your Child's Temperature KidsHealth > For Parents > Fever and Taking Your Child's ... a mercury thermometer.) previous continue Tips for Taking Temperatures As any parent knows, taking a squirming child's ...

  14. Associated petroleum gas utilization in Tomsk Oblast: energy efficiency and tax advantages

    NASA Astrophysics Data System (ADS)

    Vazim, A.; Romanyuk, V.; Ahmadeev, K.; Matveenko, I.

    2015-11-01

    This article deals with oil production companies' activities in increasing the utilization volume of associated petroleum gas (APG) in Tomsk Oblast. A cost-effectiveness analysis of associated petroleum gas utilization was carried out using the example of the implementation of the gas engine power station AGP-350 at the Yuzhno-Cheremshanskoye field, Tomsk Oblast. The authors calculated the effectiveness taking into account the tax advantages of 2012. The implementation of this facility shows high profitability, the payback period being less than 2 years.

  15. Taking a History of Childhood Trauma in Psychotherapy

    PubMed Central

    SAPORTA, JOSÉ A.; GANS, JEROME S.

    1995-01-01

    The authors examine the process of taking an initial history of childhood abuse and trauma in psychodynamic psychotherapy. In exploring the advantages, complexities, and potential complications of this practice, they hope to heighten the sensitivities of clinicians taking trauma histories. Emphasis on the need to be active in eliciting important historical material is balanced with discussion of concepts that can help therapists avoid interpersonal dynamics that reenact and perpetuate the traumas the therapy seeks to treat. Ensuring optimal psychotherapeutic treatment for patients who have experienced childhood trauma requires attention to the following concepts: a safe holding environment, destabilization, compliance, the repetition compulsion, and projective identification. PMID:22700250

  16. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
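
    A bare-bones conventional simulated-annealing loop of the kind described above (the recursive branching of RBSA is not shown) might look as follows; the objective, cooling schedule and step size are illustrative only.

      import math
      import random

      def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
          x, fx = x0, objective(x0)
          best, fbest = x, fx
          t = t0
          for _ in range(iters):
              cand = x + random.uniform(-step, step)        # random neighbor in parameter space
              fc = objective(cand)
              # Always accept improvements; accept worse moves with temperature-dependent probability.
              if fc < fx or random.random() < math.exp((fx - fc) / t):
                  x, fx = cand, fc
                  if fx < fbest:
                      best, fbest = x, fx
              t *= cooling                                   # lower the annealing temperature
          return best, fbest

      # Example: a one-dimensional multimodal objective with global minimum near x = 0.
      f = lambda x: x * x + 3.0 * math.sin(5.0 * x) ** 2
      print(simulated_annealing(f, x0=4.0))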

  17. [A Brillouin Scattering Spectrum Feature Extraction Based on Flies Optimization Algorithm with Adaptive Mutation and Generalized Regression Neural Network].

    PubMed

    Zhang, Yan-jun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong

    2015-10-01

    To meet the requirement of high-precision extraction of the scattering spectrum in a Brillouin optical time domain reflection optical fiber sensing system, this paper proposes a new algorithm based on a flies optimization algorithm with adaptive mutation and a generalized regression neural network. The method takes advantage of the generalized regression neural network's approximation ability, learning speed and model generalization. Moreover, by using the strong search ability of the flies optimization algorithm with adaptive mutation, it can enhance the learning ability of the neural network. Thus the fitting degree of the Brillouin scattering spectrum and the extraction accuracy of the frequency shift are improved. Models of the actual Brillouin spectrum are constructed by adding Gaussian white noise to the theoretical spectrum, whose center frequency is 11.213 GHz and whose linewidths are 40-50, 30-60 and 20-70 MHz, respectively. Comparing the algorithm with the Levenberg-Marquardt fitting method based on finite element analysis, the hybrid particle swarm optimization and Levenberg-Marquardt algorithm, and the least squares method, the maximum frequency shift error of the new algorithm is 0.4 MHz, the fitting degree is 0.9912 and the root mean square error is 0.0241. The simulation results show that the proposed algorithm has a good fitting degree and the minimum absolute error. Therefore, the algorithm can be used in distributed optical fiber sensing systems based on Brillouin optical time domain reflection, and can effectively improve the fitting of the Brillouin scattering spectrum and the precision of frequency shift extraction. PMID:26904844

  18. College Presidents Take on 21

    ERIC Educational Resources Information Center

    Fain, Paul

    2008-01-01

    College presidents have long gotten flak for refusing to take controversial stands on national issues. A large group of presidents opened an emotionally charged national debate on the drinking age. In doing so, they triggered an avalanche of news-media coverage and a fierce backlash. While the criticism may sting, the prime-time fracas may help…

  19. Synthesis Can Take Many Forms

    ERIC Educational Resources Information Center

    Darrow, Rob

    2005-01-01

    Synthesis can take many forms at the high school level and from a Big6 perspective. Synthesis means purposeful, valuable and interesting assignments. It is very important for a classroom teacher to recognize that students can synthesize information several times during a project and that there are many different ways to present information.

  20. Take Charge of Your Career

    ERIC Educational Resources Information Center

    Brown, Marshall A.

    2013-01-01

    Today's work world is full of uncertainty. Every day, people hear about another organization going out of business, downsizing, or rightsizing. To prepare for these uncertain times, one must take charge of their own career. This article presents some tips for surviving in today's world of work: (1) Be self-managing; (2) Know what you…

  1. Taking Stock and Standing down

    ERIC Educational Resources Information Center

    Peeler, Tom

    2009-01-01

    Standing down is an action the military takes to review, regroup, and reorganize. Unfortunately, it often comes after an accident or other tragic event. To stop losses, the military will "stand down" until they are confident they can resume safe operations. Standing down is good for everyone, not just the military. In today's fast-paced world,…

  2. Taking your carotid pulse (image)

    MedlinePlus

    ... take oxygenated blood from the heart to the brain. The pulse from the carotids may be felt on either side of the front of the neck just below the angle of the jaw. This rhythmic "beat" is caused by varying volumes of blood being pushed out of the heart ...

  3. Aspiring Teachers Take up Residence

    ERIC Educational Resources Information Center

    Honawar, Vaishall

    2008-01-01

    The Boston Teacher Residency program is a yearlong, selective preparation route that trains aspiring teachers, many of them career-changers, to take on jobs in some of the city's highest-needs schools. The program, which fits neither of the two most common types of teacher preparation--alternative routes and traditional teacher education…

  4. Pair take top science posts

    NASA Astrophysics Data System (ADS)

    Pockley, Peter

    2008-11-01

    Australia's science minister Kim Carr has appointed physical scientists to key posts. Penny Sackett, an astronomer, takes over as the government's chief scientist this month, while in January geologist Megan Clark will become chief executive of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the country's largest research agency. Both five-year appointments have been welcomed by researchers.

  5. Taking Stands for Social Justice

    ERIC Educational Resources Information Center

    Lindley, Lorinda; Rios, Francisco

    2004-01-01

    In this paper the authors describe efforts to help students take a stand for social justice in the College of Education at one predominantly White institution in the western Rocky Mountain region. The authors outline the theoretical frameworks that inform this work and the context of their work. The focus is on specific pedagogical strategies used…

  6. Four Takes on Tough Times

    ERIC Educational Resources Information Center

    Rebell, Michael A.; Odden, Allan; Rolle, Anthony; Guthrie, James W.

    2012-01-01

    Educational Leadership talks with four experts in the fields of education policy and finance about how schools can weather the current financial crisis. Michael A. Rebell focuses on the recession and students' rights; Allan Odden suggests five steps schools can take to improve in tough times; Anthony Rolle describes the tension between equity and…

  7. Intuitive Risk Taking during Adolescence

    ERIC Educational Resources Information Center

    Holland, James D.; Klaczynski, Paul A.

    2009-01-01

    Adolescents frequently engage in risky behaviors that endanger both themselves and others. Critical to the development of effective interventions is an understanding of the processes adolescents go through when deciding to take risks. This article explores two information processing systems; a slow, deliberative, analytic system and a quick,…

  8. Professionalism: Teachers Taking the Reins

    ERIC Educational Resources Information Center

    Helterbran, Valeri R.

    2008-01-01

    It is essential that teachers take a proactive look at their profession and themselves to strengthen areas of professionalism over which they have control. In this article, the author suggests strategies that include collaborative planning, reflectivity, growth in the profession, and the examination of certain personal characteristics.

  9. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.

  10. A joint effort to deliver satellite retrieved atmospheric CO2 concentrations for surface flux inversions: the ensemble median algorithm EMMA

    NASA Astrophysics Data System (ADS)

    Reuter, M.; Bösch, H.; Bovensmann, H.; Bril, A.; Buchwitz, M.; Butz, A.; Burrows, J. P.; O'Dell, C. W.; Guerlet, S.; Hasekamp, O.; Heymann, J.; Kikuchi, N.; Oshchepkov, S.; Parker, R.; Pfeifer, S.; Schneising, O.; Yokota, T.; Yoshida, Y.

    2012-09-01

    We analyze an ensemble of seven XCO2 retrieval algorithms for SCIAMACHY and GOSAT. The ensemble spread can be interpreted as regional uncertainty and can help to identify locations for new TCCON validation sites. Additionally, we introduce the ensemble median algorithm EMMA combining individual soundings of the seven algorithms into one new dataset. The ensemble takes advantage of the algorithms' independent developments. We find ensemble spreads being often <1 ppm but rising up to 2 ppm especially in the tropics and East Asia. On the basis of gridded monthly averages, we compare EMMA and all individual algorithms with TCCON and CarbonTracker model results (potential outliers, north/south gradient, seasonal (peak-to-peak) amplitude, standard deviation of the difference). Our findings show that EMMA is a promising candidate for inverse modeling studies. Compared to CarbonTracker, the satellite retrievals find consistently larger north/south gradients (by 0.3 ppm-0.9 ppm) and seasonal amplitudes (by 1.5 ppm-2.0 ppm).

  11. A joint effort to deliver satellite retrieved atmospheric CO2 concentrations for surface flux inversions: the ensemble median algorithm EMMA

    NASA Astrophysics Data System (ADS)

    Reuter, M.; Bösch, H.; Bovensmann, H.; Bril, A.; Buchwitz, M.; Butz, A.; Burrows, J. P.; O'Dell, C. W.; Guerlet, S.; Hasekamp, O.; Heymann, J.; Kikuchi, N.; Oshchepkov, S.; Parker, R.; Pfeifer, S.; Schneising, O.; Yokota, T.; Yoshida, Y.

    2013-02-01

    We analyze an ensemble of seven XCO2 retrieval algorithms for SCIAMACHY (scanning imaging absorption spectrometer of atmospheric chartography) and GOSAT (greenhouse gases observing satellite). The ensemble spread can be interpreted as regional uncertainty and can help to identify locations for new TCCON (total carbon column observing network) validation sites. Additionally, we introduce the ensemble median algorithm EMMA combining individual soundings of the seven algorithms into one new data set. The ensemble takes advantage of the algorithms' independent developments. We find ensemble spreads being often < 1 ppm but rising up to 2 ppm especially in the tropics and East Asia. On the basis of gridded monthly averages, we compare EMMA and all individual algorithms with TCCON and CarbonTracker model results (potential outliers, north/south gradient, seasonal (peak-to-peak) amplitude, standard deviation of the difference). Our findings show that EMMA is a promising candidate for inverse modeling studies. Compared to CarbonTracker, the satellite retrievals find consistently larger north/south gradients (by 0.3-0.9 ppm) and seasonal amplitudes (by 1.5-2.0 ppm).
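
    EMMA itself combines individual soundings; purely as an illustration of the median idea on gridded fields, one might combine per-algorithm monthly means as below, with made-up arrays standing in for the seven retrievals.

      import numpy as np

      rng = np.random.default_rng(0)
      true_field = 395.0 + rng.standard_normal((36, 72))           # hypothetical monthly XCO2 grid (ppm)

      # Seven "algorithms": the true field plus algorithm-specific bias and noise.
      retrievals = np.stack([true_field + rng.normal(b, 0.5, true_field.shape)
                             for b in (-0.8, -0.3, 0.0, 0.1, 0.4, 0.6, 1.0)])

      emma = np.median(retrievals, axis=0)                          # ensemble median field
      spread = retrievals.max(axis=0) - retrievals.min(axis=0)      # ensemble spread as an uncertainty proxy

      print("median RMS error:", round(float(np.sqrt(((emma - true_field) ** 2).mean())), 3))
      print("mean spread:", round(float(spread.mean()), 2), "ppm")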

  12. When perspective taking increases taking: reactive egoism in social interaction.

    PubMed

    Epley, Nicholas; Caruso, Eugene; Bazerman, Max H

    2006-11-01

    Group members often reason egocentrically, believing that they deserve more than their fair share of group resources. Leading people to consider other members' thoughts and perspectives can reduce these egocentric (self-centered) judgments such that people claim that it is fair for them to take less; however, the consideration of others' thoughts and perspectives actually increases egoistic (selfish) behavior such that people actually take more of available resources. A series of experiments demonstrates this pattern in competitive contexts in which considering others' perspectives activates egoistic theories of their likely behavior, leading people to counter by behaving more egoistically themselves. This reactive egoism is attenuated in cooperative contexts. Discussion focuses on the implications of reactive egoism in social interaction and on strategies for alleviating its potentially deleterious effects. PMID:17059307

  13. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in High-Speed Noninvasive Eye-Tracking System (NPO-30700) NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eyetracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eyetracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea

  14. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  15. A well-separated pairs decomposition algorithm for k-d trees implemented on multi-core architectures

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Reid, Ivan D.; Hobson, Peter R.

    2014-06-01

    Variations of k-d trees represent a fundamental data structure used in Computational Geometry with numerous applications in science. For example particle track fitting in the software of the LHC experiments, and in simulations of N-body systems in the study of dynamics of interacting galaxies, particle beam physics, and molecular dynamics in biochemistry. The many-body tree methods devised by Barnes and Hut in the 1980s and the Fast Multipole Method introduced in 1987 by Greengard and Rokhlin use variants of k-d trees to reduce the computation time upper bounds to O(n log n) and even O(n) from O(n²). We present an algorithm that uses the principle of well-separated pairs decomposition to always produce compressed trees in O(n log n) work. We present and evaluate parallel implementations for the algorithm that can take advantage of multi-core architectures.
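
    k-d tree construction and neighbor queries are available directly in SciPy; the brief usage sketch below runs on random points and is not the parallel well-separated pairs implementation described above.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      points = rng.random((100000, 3))          # e.g. particle positions in a unit box

      tree = cKDTree(points)                    # O(n log n) construction
      dist, idx = tree.query(points[:5], k=4)   # 4 nearest neighbors of the first five points
      print(idx)

      # Pairs of points closer than a given radius, useful for short-range interactions.
      close_pairs = tree.query_pairs(r=0.01)
      print(len(close_pairs), "pairs within r = 0.01")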

  16. [Conclusions. The precautionary principle: its advantages and risks].

    PubMed

    Tubiana, M

    2000-01-01

    The proposed extension to health of the precautionary principle is the reaction to two social demands: the desire for greater health safety and for more transparency in the decision making process by associating the public. In medical care, all decisions are based on the balance between cost (dangers induced by the treatment) and benefit (the therapeutic effect). It is as dangerous to overestimate the cost, in other words the risks, as it is to underestimate them. The same problem is encountered in public health. If a vaccination is to be prescribed, the beneficial effects must outweigh the risks; however, these risks are inevitable and have been known to exist since the 18th century, but they have been accepted for the public good. It takes courage to make a vaccination mandatory because those who benefit from it will never know, while those who suffer from its ill effects could take legal action. In order to counter accusations, an evaluation must be made beforehand of the risks and benefits, which underlines the important role of expert opinion. Within the framework of the precautionary principle, actions cannot be taken in ignorance and, at the very least, plausible estimations must be made. The analysis of several recent events (contaminated blood, BSE, growth hormone and Creutzfeldt-Jacob disease) shows that the precautionary principle would have had a very limited impact and that only once there was sufficient knowledge was action made possible. The same is true concerning current debates (the possible risks associated with electromagnetic fields, mobile phones and radon); in these three cases, no country in the world has invoked the precautionary principle, but rather the priority has been given to research. The public understands quite readily the cost/benefit relationship. In the case of oral contraceptives, or hormone replacement therapy the public was aware of their possible health risks but judged that the advantages outweighed the risks. The

  17. The Optimization of Trained and Untrained Image Classification Algorithms for Use on Large Spatial Datasets

    NASA Technical Reports Server (NTRS)

    Kocurek, Michael J.

    2005-01-01

    The HARVIST project seeks to automatically provide an accurate, interactive interface to predict crop yield over the entire United States. In order to accomplish this goal, large images must be quickly and automatically classified by crop type. Current trained and untrained classification algorithms, while accurate, are highly inefficient when operating on large datasets. This project sought to develop new variants of two standard trained and untrained classification algorithms that are optimized to take advantage of the spatial nature of image data. The first algorithm, harvist-cluster, utilizes divide-and-conquer techniques to precluster an image in the hopes of increasing overall clustering speed. The second algorithm, harvistSVM, utilizes support vector machines (SVMs), a type of trained classifier. It seeks to increase classification speed by applying a "meta-SVM" to a quick (but inaccurate) SVM to approximate a slower, yet more accurate, SVM. Speedups were achieved by tuning the algorithm to quickly identify when the quick SVM was incorrect, and then reclassifying low-confidence pixels as necessary. Comparing the classification speeds of both algorithms to known baselines showed a slight speedup for large values of k (the number of clusters) for harvist-cluster, and a significant speedup for harvistSVM. Future work aims to automate the parameter tuning process required for harvistSVM, and further improve classification accuracy and speed. Additionally, this research will move documents created in Canvas into ArcGIS. The launch of the Mars Reconnaissance Orbiter (MRO) will provide a wealth of image data such as global maps of Martian weather and high resolution global images of Mars. The ability to store this new data in a georeferenced format will support future Mars missions by providing data for landing site selection and the search for water on Mars.

  18. A seed expanding cluster algorithm for deriving upwelling areas on sea surface temperature images

    NASA Astrophysics Data System (ADS)

    Nascimento, Susana; Casca, Sérgio; Mirkin, Boris

    2015-12-01

    In this paper a novel clustering algorithm is proposed as a version of the seeded region growing (SRG) approach for the automatic recognition of coastal upwelling from sea surface temperature (SST) images. The new algorithm, one seed expanding cluster (SEC), takes advantage of the concept of approximate clustering due to Mirkin (1996, 2013) to derive a homogeneity criterion in the format of a product rather than the conventional difference between a pixel value and the mean of values over the region of interest. It involves a boundary-oriented pixel labeling so that the cluster growing is performed by expanding its boundary iteratively. The starting point is a cluster consisting of just one seed, the pixel with the coldest temperature. The baseline version of the SEC algorithm uses Otsu's thresholding method to fine-tune the homogeneity threshold. Unfortunately, this method does not always lead to a satisfactory solution. Therefore, we introduce a self-tuning version of the algorithm in which the homogeneity threshold is locally derived from the approximation criterion over a window around the pixel under consideration. The window serves as a boundary regularizer. These two unsupervised versions of the algorithm have been applied to a set of 28 SST images of the western coast of mainland Portugal, and compared against a supervised version fine-tuned by maximizing the F-measure with respect to manually labeled ground-truth maps. The areas built by the unsupervised versions of the SEC algorithm are significantly coincident over the ground-truth regions in the cases at which the upwelling areas consist of a single continuous fragment of the SST map.
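
    The core of the SEC approach, growing a single cluster outward from the coldest pixel by repeatedly testing boundary neighbours, can be sketched as follows. For brevity the sketch uses the conventional difference-to-mean homogeneity test with a fixed threshold; the SEC algorithm itself uses a product-form criterion with an Otsu-based or locally self-tuned threshold. The synthetic image and threshold value are illustrative assumptions.

    import numpy as np
    from collections import deque

    def grow_from_coldest(sst, tau=1.5):
        seed = np.unravel_index(np.argmin(sst), sst.shape)
        in_region = np.zeros(sst.shape, dtype=bool)
        in_region[seed] = True
        total, count = float(sst[seed]), 1
        frontier = deque([seed])
        while frontier:                                  # boundary-oriented expansion
            r, c = frontier.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < sst.shape[0] and 0 <= nc < sst.shape[1] and not in_region[nr, nc]:
                    if abs(sst[nr, nc] - total / count) <= tau:   # homogeneity test
                        in_region[nr, nc] = True
                        total += float(sst[nr, nc])
                        count += 1
                        frontier.append((nr, nc))
        return in_region

    sst = 18 + 4 * np.random.rand(64, 64)   # synthetic "SST" field
    sst[:20, :15] -= 5                      # a colder upwelling patch
    upwelling_mask = grow_from_coldest(sst)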

  19. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming that embeds both heuristic and exact components. To validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations) and must receive the required quantity of product at each visit, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP solution and applies techniques of fixing and releasing columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report results on realistic instances of the PVRP introduced in Pacheco et al. (2011) [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems. PMID:24748696
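
    The set-covering core of such a formulation can be illustrated with a toy LP relaxation: columns are candidate routes, each covering a subset of customers at a given cost, and fractional column values guide which columns to fix. The routes, costs and fixing rule below are made-up examples; the full algorithm above also generates columns by iterated local search and fixes/releases them under a tabu list.

    import numpy as np
    from scipy.optimize import linprog

    # rows = customers, columns = candidate routes; A[i, j] = 1 if route j visits customer i
    A = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 1]])
    cost = np.array([4.0, 3.0, 5.0, 2.5])

    # LP relaxation of set covering: minimise cost @ x  subject to  A x >= 1, 0 <= x <= 1
    res = linprog(cost, A_ub=-A, b_ub=-np.ones(A.shape[0]), bounds=[(0, 1)] * A.shape[1])
    print("LP value:", res.fun, "column values:", res.x)

    fixed = np.where(res.x > 0.9)[0]   # simple fixing heuristic on near-integral columns
    print("columns tentatively fixed to 1:", fixed)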

  20. Detection Algorithms of the Seismic Alert System of Mexico (SASMEX)

    NASA Astrophysics Data System (ADS)

    Cuellar Martinez, A.; Espinosa Aranda, J.; Ramos Perez, S.; Ibarrola Alvarez, G.; Zavala Guerrero, M.; Sasmex

    2013-05-01

    Rapid and reliable detection of an earthquake allows warnings to reach the population with a longer lead time. Detection algorithms at the sensing field stations (FS) of an earthquake early warning system must therefore have a high rate of correct detection; this condition allows the numerical processing needed to obtain appropriate parameters for alert activation. Over more than 23 years of continuous operation, the Mexican Seismic Alert System (SASMEX) has used various detection methodologies to obtain the largest warning time when an earthquake occurs and is alerted. In addition to the characteristics of the acceleration signal observed at the sensing field stations, site conditions with low urban noise are needed; such conditions may hold during the first years of operation, but urban growth near a FS introduces urban noise, which must be tolerated until the station can be relocated, so the algorithm design must include enough robustness to reduce possible errors and false detections. This work presents results on detection algorithms used in Mexico for earthquake early warning, considering recent events and the different warning times obtained depending on whether the P or S phase of the earthquake is detected at the station. Several methodologies are reviewed and described in detail, together with the main features implemented in the Seismic Alert System of Mexico City (SAS), in continuous operation since 1991, and the Seismic Alert System of Oaxaca City (SASO); today both comprise SASMEX.
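
    As a point of reference for what a detection stage at a field station does, the sketch below implements a textbook short-term/long-term average (STA/LTA) trigger on an acceleration trace. It is a generic illustration only, not the SASMEX detection algorithm; the window lengths, threshold and sampling rate are assumed values.

    import numpy as np

    def sta_lta_triggers(acc, fs, sta_win=1.0, lta_win=30.0, threshold=4.0):
        sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
        csum = np.cumsum(acc.astype(float) ** 2)      # cumulative signal energy
        times = []
        for i in range(lta_n, len(acc)):
            sta = (csum[i] - csum[i - sta_n]) / sta_n
            lta = (csum[i] - csum[i - lta_n]) / lta_n
            if lta > 0 and sta / lta > threshold:
                times.append(i / fs)                  # trigger time in seconds
        return times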

  1. Algorithm for obtaining angular fluxes in a cell for the LUCKY and LUCKY_C multiprocessor programs

    SciTech Connect

    Moryakov, A. V.

    2012-12-15

    Basic formulas for solving the transport equation in a cell are presented. The algorithm has been implemented in the LUCKY and LUCKY_C programs. The advantages of the proposed algorithm are described.

  2. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
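
    A least-squares cubic spline fit of the kind described above can be reproduced with SciPy; the noisy data and the interior knot ("junction") positions below are illustrative, and in the interactive workflow the authors describe, those junctions would be adjusted by the user to improve the fit.

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    x = np.linspace(0.0, 10.0, 200)
    y = np.sin(x) + 0.1 * np.random.randn(x.size)      # noisy data to be smoothed

    knots = np.linspace(1.0, 9.0, 7)                   # interior junction points
    spline = LSQUnivariateSpline(x, y, knots, k=3)     # k=3 -> cubic, least-squares fit

    y_smooth = spline(x)
    dy = spline.derivative(1)(x)                       # first derivative, continuous
    d2y = spline.derivative(2)(x)                      # second derivative, continuous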

  3. Genetic algorithms and supernovae type Ia analysis

    SciTech Connect

    Bogdanos, Charalampos; Nesseris, Savvas E-mail: nesseris@nbi.dk

    2009-05-15

    We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model.

  4. Sleep Deprivation and Advice Taking.

    PubMed

    Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie

    2016-01-01

    Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by - more or less qualified - advisors. We asked if sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep deprived participants show increased advice taking. An interaction of condition and competency of advisor and further post-hoc analyses revealed that this effect was more pronounced for the medium competency advisor compared to the high competency advisor. Furthermore, sleep deprived participants benefited more from an advisor of high competency in terms of stronger improvement in judgmental accuracy than well-rested participants. PMID:27109507

  5. Sleep Deprivation and Advice Taking

    PubMed Central

    Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie

    2016-01-01

    Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by – more or less qualified – advisors. We asked if sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep deprived participants show increased advice taking. An interaction of condition and competency of advisor and further post-hoc analyses revealed that this effect was more pronounced for the medium competency advisor compared to the high competency advisor. Furthermore, sleep deprived participants benefited more from an advisor of high competency in terms of stronger improvement in judgmental accuracy than well-rested participants. PMID:27109507

  6. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
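
    For reference, the serial form of the Thomas algorithm consists of exactly the two sweeps that the pipelined variants distribute across processors: a forward elimination followed by a backward substitution. The sketch below is the textbook single-processor version for one tridiagonal line; it is not the pipelined scheduling described in the article.

    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system; a, b, c, d are length-n arrays holding the
        sub-, main and super-diagonals and the right-hand side (a[0], c[-1] unused)."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                          # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):                 # backward substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x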

  7. Take the monkey and run

    PubMed Central

    Phillips, Kimberley A.; Hambright, M. Karen; Hewes, Kelly; Schilder, Brian M.; Ross, Corinna N.; Tardif, Suzette D.

    2015-01-01

    Background The common marmoset (Callithrix jacchus) is a small, New World primate that is used extensively in biomedical and behavioral research. This short-lived primate, with its small body size, ease of handling, and docile temperament, has emerged as a valuable model for aging and neurodegenerative research. A growing body of research has indicated exercise, aerobic exercise especially, imparts beneficial effects to normal aging. Understanding the mechanisms underlying these positive effects of exercise, and the degree to which exercise has neurotherapeutic effects, is an important research focus. Thus, developing techniques to engage marmosets in aerobic exercise would have great advantages. New method Here we describe the marmoset exercise ball (MEB) paradigm: a safe (for both experimenter and subjects), novel and effective means to engage marmosets in aerobic exercise. We trained young adult male marmosets to run on treadmills for 30 min a day, 3 days a week. Results Our training procedures allowed us to engage male marmosets in this aerobic exercise within 4 weeks, and subjects maintained this frequency of exercise for 3 months. Comparison with existing methods To our knowledge, this is the first described method to engage marmosets in aerobic exercise. A major advantage of this exercise paradigm is that while it was technically forced exercise, it did not appear to induce stress in the marmosets. Conclusions These techniques should be useful to researchers wishing to address physiological responses of exercise in a marmoset model. PMID:25835199

  8. Reproductive fitness advantage of BCR-ABL expressing leukemia cells.

    PubMed

    Traulsen, Arne; Pacheco, Jorge M; Dingli, David

    2010-08-01

    Mutations in oncogenes and tumor suppressor genes confer a fitness advantage to cells that can lead to cancer. The tumor phenotype normally results from the interaction of many mutant genes making it difficult to estimate the fitness advantage provided by any oncogene, except when tumors depend on one oncogene only. We utilize a model of chronic myeloid leukemia (CML), to quantitate the fitness advantage conferred by expression of BCR-ABL in hematopoietic cells from in vivo patient data. We show that BCR-ABL expression provides a high fitness advantage, which explains why this single mutation drives the chronic phase of CML. PMID:20153920

  9. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or this task is solved unseen by the users while they are working with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed. The larger the shift of the pattern relative to the string in case of a mismatch between pattern and string characters, the higher the algorithm's running speed. This article offers a combined algorithm, developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm achieves a larger shift in case of a mismatch between pattern and string characters. The article provides an example that illustrates the results of the Boyer-Moore, Knuth-Morris-Pratt and combined algorithms and shows the advantage of the latter in solving the string searching problem.

  10. A procedure and program to calculate shuttle mask advantage

    NASA Astrophysics Data System (ADS)

    Balasinski, A.; Cetin, J.; Kahng, A.; Xu, X.

    2006-10-01

    A well-known recipe for reducing mask cost component in product development is to place non-redundant elements of layout databases related to multiple products on one reticle plate [1,2]. Such reticles are known as multi-product, multi-layer, or, in general, multi-IP masks. The composition of the mask set should minimize not only the layout placement cost, but also the cost of the manufacturing process, design flow setup, and product design and introduction to market. An important factor is the quality check which should be expeditious and enable thorough visual verification to avoid costly modifications once the data is transferred to the mask shop. In this work, in order to enable the layer placement and quality check procedure, we proposed an algorithm where mask layers are first lined up according to the price and field tone [3]. Then, depending on the product die size, expected fab throughput, and scribeline requirements, the subsequent product layers are placed on the masks with different grades. The actual reduction of this concept to practice allowed us to understand the tradeoffs between the automation of layer placement and setup related constraints. For example, the limited options of the numbers of layer per plate dictated by the die size and other design feedback, made us consider layer pairing based not only on the final price of the mask set, but also on the cost of mask design and fab-friendliness. We showed that it may be advantageous to introduce manual layer pairing to ensure that, e.g., all interconnect layers would be placed on the same plate, allowing for easy and simultaneous design fixes. Another enhancement was to allow some flexibility in mixing and matching of the layers such that non-critical ones requiring low mask grade would be placed in a less restrictive way, to reduce the count of orphan layers. In summary, we created a program to automatically propose and visualize shuttle mask architecture for design verification, with

  11. Does “game participation cost” affect the advantage of heterogeneous networks for evolving cooperation?

    NASA Astrophysics Data System (ADS)

    Tanimoto, Jun; Yamauchi, Atsuo

    2010-06-01

    Masuda [N. Masuda, Participation costs dismiss the advantage of heterogeneous networks in evolution of cooperation, Proceedings of the Royal Society B 274 (2007) 1815-1821] reported that a game participation cost (expressed by adding the same negative value to all four elements of a 2 × 2 payoff matrix) affects the advantage of heterogeneous networks in the evolution of cooperation. We show that this finding is not always true, depending on the features of the network, indicating that participation costs help cooperation in certain situations rather than destroy it. In a weaker dilemma game on a scale-free network derived from the Barabasi & Albert algorithm with a larger average degree, the game participation cost helps rather than destroys network reciprocity.

  12. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  13. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777

  14. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777

  15. A novel stochastic optimization algorithm.

    PubMed

    Li, B; Jiang, W

    2000-01-01

    This paper presents a new stochastic approach SAGACIA based on proper integration of simulated annealing algorithm (SAA), genetic algorithm (GA), and chemotaxis algorithm (CA) for solving complex optimization problems. SAGACIA combines the advantages of SAA, GA, and CA together. It has the following features: (1) it is not the simple mix of SAA, GA, and CA; (2) it works from a population; (3) it can be easily used to solve optimization problems either with continuous variables or with discrete variables, and it does not need coding and decoding; and (4) it can easily escape from local minima and converge quickly. Good solutions can be obtained in a very short time. The search process of SAGACIA can be explained with Markov chains. In this paper, it is proved that SAGACIA has the property of global asymptotical convergence. SAGACIA has been applied to solve such problems as scheduling, the training of artificial neural networks, and the optimizing of complex functions. In all the test cases, the performance of SAGACIA is better than that of SAA, GA, and CA. PMID:18244742

  16. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements among other causes, and is manifested as a change in the pattern recorded in the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method we obtain an equation with which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes from other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (like THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.

  17. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms are compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
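
    A minimal particle swarm optimiser is sketched below to make the method being compared concrete; the inertia and acceleration coefficients and the toy objective are standard textbook choices, not the SPEAR3 setup, which uses multi-objective variants to optimise dynamic aperture and Touschek lifetime.

    import numpy as np

    def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
        lo, hi = bounds
        x = np.random.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
            x = np.clip(x + v, lo, hi)
            vals = np.apply_along_axis(f, 1, x)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    best_x, best_f = pso(lambda z: np.sum(z ** 2), dim=4)   # toy objective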

  18. Algorithm For Hypersonic Flow In Chemical Equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.

  19. Risk taking among diabetic clients.

    PubMed

    Joseph, D H; Schwartz-Barcott, D; Patterson, B

    1992-01-01

    Diabetic clients must make daily decisions about their health care needs. Observational and anecdotal evidence suggests that vast differences exist between the kinds of choices diabetic clients make and the kinds of chances they are willing to take. The purpose of this investigation was to develop a diabetic risk-assessment tool. This instrument, which is based on subjective expected utility theory, measures risk-prone and risk-averse behavior. Initial findings from a pilot study of 18 women clients who are on insulin indicate that patterns of risk behavior exist in the areas of exercise, skin care, and diet. PMID:1729123

  20. Rover Takes a Sunday Drive

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation, made with images from the Mars Exploration Rover Spirit hazard-identification camera, shows the rover's perspective of its first post-egress drive on Mars Sunday. Engineers drove Spirit approximately 3 meters (10 feet) toward its first rock target, a football-sized, mountain-shaped rock called Adirondack. The drive took approximately 30 minutes to complete, including time stopped to take images. Spirit first made a series of arcing turns totaling approximately 1 meter (3 feet). It then turned in place and made a series of short, straightforward movements totaling approximately 2 meters (6.5 feet).

  1. Community detection based on modularity and an improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Bai, Jing; Jiao, Licheng; Jin, Chao

    2013-03-01

    Complex networks are widely applied in every aspect of human society, and community detection is a research hotspot in complex networks. Many algorithms use modularity as the objective function, which can simplify the algorithm. In this paper, a community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward. MIGA takes the modularity Q as the objective function, which can simplify the algorithm, and uses prior information (the number of community structures), which makes the algorithm more targeted and improves the stability and accuracy of community detection. Meanwhile, MIGA takes the simulated annealing method as the local search method, which can improve the ability of local search by adjusting the parameters. Compared with the state-of-art algorithms, simulation results on computer-generated and four real-world networks reflect the effectiveness of MIGA.
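
    The fitness evaluation at the heart of such modularity-based methods can be sketched with NetworkX: a candidate partition into a known number of communities (the prior information MIGA uses) is scored by the modularity Q. The graph and the random partition below are illustrative; MIGA itself evolves partitions with genetic operators and refines them with simulated annealing.

    import random
    import networkx as nx
    from networkx.algorithms.community import modularity

    G = nx.karate_club_graph()
    k = 2                                             # assumed number of communities
    assignment = [random.randrange(k) for _ in G]     # one label per node (a candidate "chromosome")
    communities = [{n for n in G if assignment[n] == c} for c in range(k)]

    Q = modularity(G, communities)                    # objective to be maximised
    print("modularity of the candidate partition:", Q)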

  2. Comparative Advantage, Relative Wages, and the Accumulation of Human Capital.

    ERIC Educational Resources Information Center

    Teulings, Coen N.

    2005-01-01

    I apply Ricardo's principle of comparative advantage to a theory of factor substitutability in a model with a continuum of worker and job types. Highly skilled workers have a comparative advantage in complex jobs. The model satisfies the distance-dependent elasticity of substitution (DIDES) characteristic: substitutability between types declines…

  3. Reasoning about Other People's Beliefs: Bilinguals Have an Advantage

    ERIC Educational Resources Information Center

    Rubio-Fernandez, Paula; Glucksberg, Sam

    2012-01-01

    Bilingualism can have widespread cognitive effects. In this article we investigate whether bilingualism might have an effect on adults' abilities to reason about other people's beliefs. In particular, we tested whether bilingual adults might have an advantage over monolingual adults in false-belief reasoning analogous to the advantage that has…

  4. Information Technology, Core Competencies, and Sustained Competitive Advantage.

    ERIC Educational Resources Information Center

    Byrd, Terry Anthony

    2001-01-01

    Presents a model that depicts a possible connection between competitive advantage and information technology. Focuses on flexibility of the information technology infrastructure as an enabler of core competencies, especially mass customization and time-to-market, that have a relationship to sustained competitive advantage. (Contains 82…

  5. Polysemy Advantage with Abstract but Not Concrete Words

    ERIC Educational Resources Information Center

    Jager, Bernadet; Cleland, Alexandra A.

    2016-01-01

    It is a robust finding that ambiguous words are recognized faster than unambiguous words. More recent studies (e.g., Rodd et al. in "J Mem Lang" 46:245-266, 2002) now indicate that this "ambiguity advantage" may in reality be a "polysemy advantage": caused by related senses (polysemy) rather than unrelated meanings…

  6. Distance Education in Rural Schools: Advantages and Disadvantages.

    ERIC Educational Resources Information Center

    Barker, Bruce A.

    1990-01-01

    Reviews distance-education technologies, their uses for rural schools, and related issues. Lists advantages and disadvantages for satellite TV teaching, microcomputer networks, and two-way TV instruction. Concludes that technologies' advantages outweigh disadvantages. Suggests final decision making and assessment depend on local program content…

  7. Advantage of resonant power conversion in aerospace applications

    NASA Technical Reports Server (NTRS)

    Hansen, I. G.

    1983-01-01

    An ultrasonic, sinusoidal aerospace power distribution system is shown to have many advantages over other candidate power systems. These advantages include light weight, ease of fault clearing, versatility in handling many loads including motors, and the capability of production within the limits of present technology. References are cited that demonstrate the state of resonant converter technology and support these conclusions.

  8. Aging and Text Comprehension: Interpretation and Domain Knowledge Advantage

    ERIC Educational Resources Information Center

    Jeong, Heisawn; Kim, Hyo Sik

    2009-01-01

    In this study, young, middle-aged, and elderly adults read two different history texts. In the "knowledge advantage" condition, readers read a history text about an event that was well-known to readers of all ages but most familiar to elderly adults. In the "no advantage" condition, readers read a history text about a political situation of a…

  9. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tesselating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the recurrence-equation representation of an algorithm. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  10. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration, (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  11. Take Care of Your Teeth and Gums

    MedlinePlus

    Brushing tips: follow these tips for healthy teeth and gums. Flossing tips: floss every day.

  12. Global convergence analysis of a discrete time nonnegative ICA algorithm.

    PubMed

    Ye, Mao

    2006-01-01

    When the independent sources are known to be nonnegative and well-grounded, which means that they have a nonzero pdf in the region of zero, Oja and Plumbley have proposed a "Nonnegative principal component analysis (PCA)" algorithm to separate these positive sources. Generally, it is very difficult to prove the convergence of a discrete-time independent component analysis (ICA) learning algorithm. However, by using the skew-symmetry property of this discrete-time "Nonnegative PCA" algorithm, the global convergence of the discrete-time algorithm can be proven, provided the learning rate satisfies a suitable condition. Simulation results are employed to further illustrate the advantages of this theory. PMID:16526495

  13. Identification of Traceability Barcode Based on Phase Correlation Algorithm

    NASA Astrophysics Data System (ADS)

    Lang, Liying; Zhang, Xiaofang

    In this paper the phase correlation algorithm based on the Fourier transform, a widely used method of image registration, is applied to traceability barcode identification. A rotation-invariant phase correlation algorithm, which combines a polar coordinate transform with phase correlation, can recognize barcodes that are partly damaged or rotated. The paper provides an analysis and simulation of the algorithm using Matlab; the results show that the algorithm has good real-time performance and high accuracy. Optimizing the rotation-invariant phase correlation further improves the matching precision and reduces the amount of calculation.
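
    The core phase correlation step can be sketched in a few lines: the normalised cross-power spectrum of two images, transformed back to the spatial domain, shows a sharp peak at their relative translation. The synthetic images below are illustrative; the rotation-invariant variant discussed above additionally applies a log-polar transform before this step.

    import numpy as np

    def phase_correlation(a, b):
        FA, FB = np.fft.fft2(a), np.fft.fft2(b)
        R = FA * np.conj(FB)
        R /= np.abs(R) + 1e-12                        # keep only the phase difference
        corr = np.fft.ifft2(R).real
        return np.unravel_index(np.argmax(corr), corr.shape)   # (row, col) shift

    a = np.zeros((64, 64)); a[20:30, 15:40] = 1.0     # synthetic "barcode" stripe
    b = np.roll(np.roll(a, 5, axis=0), 9, axis=1)     # same pattern, shifted
    print(phase_correlation(b, a))                    # -> (5, 9)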

  14. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  15. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  16. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.

  17. Beyond the University of Racial Diversity: Some Remarks on Race, Diversity, (Dis)Advantage and Affirmative Action

    ERIC Educational Resources Information Center

    Waghid, Y.

    2010-01-01

    The compelling essays in this issue of the journal take on the often contentious and complex issue of racial affirmative action. I do not wish to repeat the arguments authors offer either in defence or against student admissions to a university on the grounds of race, (dis)advantage, class, gender, and so on. Rather, I wish to respond to a…

  18. Optimal configuration algorithm of a satellite transponder

    NASA Astrophysics Data System (ADS)

    Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.

    2016-04-01

    This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is shown as a weighted oriented graph, represented as a plexus in the program view. The paper considers an example of applying the algorithm to a typical transparent repeater scheme. In addition, the complexity of the algorithm has been calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment and input-output ports ranked in accordance with their priority. All described limitations allow a significant decrease in the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions promptly.

  19. SOM-based algorithms for qualitative variables.

    PubMed

    Cottrell, Marie; Ibbou, Smaïl; Letrémy, Patrick

    2004-01-01

    It is well known that the SOM algorithm achieves a clustering of data which can be interpreted as an extension of Principal Component Analysis, because of its topology-preserving property. But the SOM algorithm can only process real-valued data. In previous papers, we have proposed several methods based on the SOM algorithm to analyze categorical data, which is the case in survey data. In this paper, we present these methods in a unified manner. The first one (Kohonen Multiple Correspondence Analysis, KMCA) deals only with the modalities, while the two others (Kohonen Multiple Correspondence Analysis with individuals, KMCA_ind, Kohonen algorithm on DISJonctive table, KDISJ) can take into account the individuals, and the modalities simultaneously. PMID:15555858

  20. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) are playing a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator drivers compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters, while respecting all motion platform physical limitations, and minimising human perception error between real and simulator driver. One of the main limitations of the classical washout filters is that it is attuned by the worst-case scenario tuning method. This is based on trial and error, and is effected by driving and programmers experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. Production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible for designers for this reason. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking vestibular sensation error into account between real and simulated cases, as well as main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  1. Public and expert collaborative evaluation model and algorithm for enterprise knowledge

    NASA Astrophysics Data System (ADS)

    Le, Chengyi; Gu, Xinjian; Pan, Kai; Dai, Feng; Qi, Guoning

    2013-08-01

    Knowledge is becoming the most important resource for more and more enterprises and grows exponentially, but there is no effective method to evaluate it convincingly. Based on Web 2.0, this article first builds an enterprise knowledge sharing model. Taking advantage of both the convenience and low cost of public evaluation and the expertise of peer review, a public and expert collaborative evaluation (PECE) model and algorithm for enterprise knowledge are put forward. By analyzing the interaction between users' domain weights and the scores of knowledge points, the PECE model and algorithm serve to recognise valuable knowledge and domain experts efficiently and therefore improve the ordering and utilisation of knowledge. The article also studies malicious and casual evaluation by users, and a method is proposed to update users' domain weights. Finally, a knowledge sharing system for a manufacturing enterprise is developed and realised. Users' behaviour of publishing and evaluating knowledge is simulated and then analysed to verify the feasibility of the PECE algorithm based on the system.

  2. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, using both spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is formed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and the same idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection result.
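
    The orthogonal subspace projection step shared by LOSP and 3D-LOSP can be sketched as follows: local background pixels define a subspace via their principal eigenvectors, the pixel under test is projected onto the orthogonal complement of that subspace, and the residual energy serves as the anomaly score. The data and subspace dimension below are illustrative; 3D-LOSP repeats this along the height, width and spectral directions and combines the three scores.

    import numpy as np

    def osp_score(pixel, background, n_components=3):
        """pixel: (bands,) vector; background: (n_samples, bands) local pixels."""
        centered = background - background.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        B = vt[:n_components].T                              # background subspace basis
        P_perp = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)  # orthogonal projector
        residual = P_perp @ pixel
        return float(residual @ residual)                    # large value => anomalous

    bands = 50
    background = np.random.randn(200, bands)                 # synthetic local background
    anomaly = np.random.randn(bands) + 3.0                   # spectrally distinct pixel
    print(osp_score(anomaly, background), osp_score(background[0], background))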

  3. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. In order to speed up the algorithm, the code was parallelized with the OpenMP standard, along with an implementation designed to take advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.

  4. A hybrid, self-adjusting search algorithm for optimal space trajectory design

    NASA Astrophysics Data System (ADS)

    Bolle, Andrea; Circi, Christian

    2012-08-01

    The aim of the present paper is to propose a hybrid, self adjusting, search algorithm for space trajectory optimization. By taking advantage of both direct and indirect methods, the present algorithm allows the finding of the optimal solution through the introduction of some new control parameters, whose number is smaller than that of the Lagrange multipliers, and whose range is bounded. Eventually, the optimal solution is determined by means of an iterative self-adjustment of the search domain occurring at "runtime", without any correction by an external user. This new set of parameters can be found through a reduction process of the degrees of freedom, obtained through the transversality conditions before entering the search loop. Furthermore, such a process shows that Lagrange multipliers are subject to a deep symmetry mirroring the features of the state vector. The algorithm reliability and efficiency is assessed through some test cases, and by reproducing some optimal transfer trajectories: a full three-dimensional, minimum time Mars mission, an optimal transfer to Jupiter, and finally an injection into a circular Moon orbit.

  5. CLPM: A Cross-Linked Peptide Mapping Algorithm for Mass Spectrometric Analysis

    PubMed Central

    Tang, Yong; Chen, Yingfeng; Lichti, Cheryl F; Hall, Roger A; Raney, Kevin D; Jennings, Steven F

    2005-01-01

    Background Protein-protein, protein-DNA and protein-RNA interactions are of central importance in biological systems. Quadrapole Time-of-flight (Q-TOF) mass spectrometry is a sensitive, promising tool for studying these interactions. Combining this technique with chemical crosslinking, it is possible to identify the sites of interactions within these complexes. Due to the complexities of the mass spectrometric data of crosslinked proteins, new software is required to analyze the resulting products of these studies. Result We designed a Cross-Linked Peptide Mapping (CLPM) algorithm which takes advantage of all of the information available in the experiment including the amino acid sequence from each protein, the identity of the crosslinker, the identity of the digesting enzyme, the level of missed cleavage, and possible chemical modifications. The algorithm does in silico digestion and crosslinking, calculates all possible mass values and matches the theoretical data to the actual experimental data provided by the mass spectrometry analysis to identify the crosslinked peptides. Conclusion Identifying peptides by their masses can be an efficient starting point for direct sequence confirmation. The CLPM algorithm provides a powerful tool in identifying these potential interaction sites in combination with chemical crosslinking and mass spectrometry. Through this cost-effective approach, subsequent efforts can quickly focus attention on investigating these specific interaction sites. PMID:16026606
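
    The matching step can be illustrated with a toy in-silico digestion and crosslink search: peptides are generated by cutting after K or R, and each candidate peptide pair plus the crosslinker mass is compared with an observed mass within a tolerance. The residue masses are approximate monoisotopic values, the crosslinker mass and sequences are made up, and real data would also require missed cleavages and chemical modifications, which CLPM handles.

    import re

    MONO = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'V': 99.06841,
            'L': 113.08406, 'E': 129.04259, 'K': 128.09496, 'R': 156.10111}
    WATER = 18.01056

    def tryptic_peptides(seq):
        return [p for p in re.split(r'(?<=[KR])', seq) if p]   # cut after K or R

    def mass(peptide):
        return sum(MONO[aa] for aa in peptide) + WATER

    def match_crosslinks(seq_a, seq_b, linker_mass, observed, tol=0.02):
        hits = []
        for pa in tryptic_peptides(seq_a):
            for pb in tryptic_peptides(seq_b):
                m = mass(pa) + mass(pb) + linker_mass
                if abs(m - observed) <= tol:
                    hits.append((pa, pb, m))
        return hits

    target = mass("VLER") + mass("GLES") + 138.068
    print(match_crosslinks("GASKVLER", "AVKGLES", linker_mass=138.068, observed=target))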

  6. [Non-contact heart rate estimation based on joint approximate diagonalization of eigenmatrices algorithm].

    PubMed

    Wang Yinazhi; Han Tailin

    2014-08-01

    Based on imaging photoplethysmography (iPPG) and blind source separation (BSS) theory, the authors propose a method for non-contact heart rate estimation. Video images of a human face were recorded in ambient light with a webcam; the face was detected in software and the facial image separated into its three RGB channels. After preprocessing (normalization, whitening, etc.) of a segment of RGB data, independent component analysis (ICA) with the joint approximate diagonalization of eigenmatrices (JADE) algorithm was applied, and the heart rate was estimated by spectral analysis. Bland-Altman analysis of the agreement with a commercial pulse oximetry sensor gave a root mean square error of 2.06 beats/min. The results indicate that the algorithm can achieve non-contact measurement of heart rate and lays a foundation for remote, non-contact measurement of multiple physiological parameters. PMID:25464777

  7. [Non-contact heart rate estimation based on joint approximate diagonalization of eigenmatrices algorithm].

    PubMed

    Wang Yinazhi; Han Tailin

    2014-08-01

    Based on imaging photoplethysmography (iPPG) and blind source separation (BSS) theory, the authors propose a method for non-contact heart rate estimation. Video images of a human face were recorded in ambient light with a webcam; the face was detected in software and the facial image separated into its three RGB channels. After preprocessing (normalization, whitening, etc.) of a segment of RGB data, independent component analysis (ICA) with the joint approximate diagonalization of eigenmatrices (JADE) algorithm was applied, and the heart rate was estimated by spectral analysis. Bland-Altman analysis of the agreement with a commercial pulse oximetry sensor gave a root mean square error of 2.06 beats/min. The results indicate that the algorithm can achieve non-contact measurement of heart rate and lays a foundation for remote, non-contact measurement of multiple physiological parameters. PMID:25508408
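
    The processing chain described above can be sketched with scikit-learn's FastICA standing in for the JADE algorithm, which is not part of the standard scientific Python stack. The RGB traces below are synthetic; in practice they would be the mean face-region intensities of each video frame, and the frame rate is an assumed value.

    import numpy as np
    from sklearn.decomposition import FastICA

    fs = 30.0                                      # assumed camera frame rate (Hz)
    t = np.arange(0, 30, 1 / fs)
    pulse = 0.05 * np.sin(2 * np.pi * 1.2 * t)     # 1.2 Hz -> 72 beats/min
    rgb = np.vstack([pulse + 0.2 * np.random.randn(t.size) for _ in range(3)]).T

    rgb = (rgb - rgb.mean(axis=0)) / rgb.std(axis=0)            # normalisation
    sources = FastICA(n_components=3, random_state=0).fit_transform(rgb)

    # pick the component with the strongest spectral peak in a plausible heart-rate band
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    band = (freqs > 0.7) & (freqs < 3.0)
    power = np.abs(np.fft.rfft(sources, axis=0)) ** 2
    best = np.argmax(power[band].max(axis=0))
    hr_hz = freqs[band][np.argmax(power[band, best])]
    print("estimated heart rate: %.1f beats/min" % (60 * hr_hz))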

  8. A Prototype Algorithm for Land Surface Temperature Retrieval from Sentinel-3 Mission

    NASA Astrophysics Data System (ADS)

    Sobrino, Jose A.; Jimenez-Munoz, Juan C.; Soria, Guillem; Brockmann, Carsten; Ruescas, Ana; Danne, Olaf; North, Peter; Phillipe, Pierre; Berger, Michel; Merchant, Chris; Ghent, Darren; Remedios, John

    2015-12-01

    In this work we present a prototype algorithm to retrieve Land Surface Temperature (LST) from the OLCI and SLSTR instruments on board the Sentinel-3 platform, developed in the framework of the SEN4LST project. For this purpose, data acquired with the ENVISAT MERIS and AATSR instruments are used as a benchmark. The objective is to improve the LST standard product (level 2) currently derived from the single AATSR instrument by taking advantage of the improved characteristics of the future OLCI and SLSTR instruments. The high spectral resolution of the OLCI instrument and the dual view and thermal bands available on SLSTR have the potential to improve the characterization of the atmosphere and therefore the atmospheric correction and cloud mask. Bands in the solar domain available on both instruments allow the retrieval of the surface emissivity, a key input to the LST algorithm. Pairs of MERIS/AATSR scenes are processed over different sites and validated with in situ measurements using the LST processor included in the BEAM software. Results showed that the proposed LST algorithm improves LST retrievals relative to the standard level-2 product.

  9. Taking charge: a personal responsibility.

    PubMed Central

    Newman, D M

    1987-01-01

    Women can adopt health practices that will help them to maintain good health throughout their various life stages. Women can take charge of their health by maintaining a nutritionally balanced diet, exercising, and using common sense. Women can also employ known preventive measures against osteoporosis, stroke, lung and breast cancer and accidents. Because women experience increased longevity and may require long-term care with age, the need for restructuring the nation's care system for the elderly becomes an important women's health concern. Adult day care centers, home health aides, and preventive education will be necessary, along with sufficient insurance to maintain quality care and self-esteem without depleting a person's resources. PMID:3120224

  10. A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT under a circular source trajectory

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Hagiwara, Akira; Nilsen, Roy A.; Thibault, Jean-Baptiste; Drapkin, Evgeny

    2005-08-01

    can be implemented in either the native CB geometry or the so-called cone-parallel geometry. By taking the cone-parallel geometry as an example, the experimental evaluation shows that, up to a moderate cone angle corresponding to a detector dimension of 64 × 0.625 mm, the CB artefacts can be substantially suppressed by the proposed algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering and data manipulation efficiency, are maintained.

  11. Reducing PAPR of optical OFDM system based on PTS and companding joint algorithm

    NASA Astrophysics Data System (ADS)

    Jia, Yangjing; Li, Ping; Lei, Dongming; Chen, Ailin; Wang, Jinpeng; Zou, Nianyu

    2015-10-01

    Optical orthogonal frequency division multiplexing (OFDM) combines the advantages of wireless OFDM and optical fiber technology: it has high spectral efficiency and can effectively resist polarization mode dispersion and chromatic dispersion in the fiber link. However, a high peak-to-average power ratio (PAPR) is one of the important shortcomings of optical OFDM systems; it requires amplifiers with a greater dynamic range and also leads to serious fiber nonlinear effects. How to reduce the PAPR of an optical OFDM system is therefore a crucial issue. This work, aiming to reduce PAPR and improve system BER, analyzes PAPR suppression techniques for optical OFDM. First, to improve BER, we use the Partial Transmit Sequence (PTS) algorithm, which multiplies the IFFT-converted signals by phase factors b(v) and searches for the b(v) that minimizes PAPR; this method, however, requires a large amount of computation. We also exploit companding, which compresses the amplitude of large OFDM signals and expands small ones. Simulating the two algorithms separately shows that both can suppress PAPR, but the effect leaves room for improvement. Therefore, a joint PTS and companding algorithm is proposed, simulated, and added to the optical OFDM system. A system was set up with a fiber length of 10 km, a Mach-Zehnder (MZM) modulator and a distributed feedback laser, using 4QAM and a 512-point IFFT. The results show that the joint algorithm reduces PAPR from about 12 dB to 8 dB, mitigating the high-PAPR problem, improving constellation convergence and enhancing the transmission performance of the optical OFDM system.
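
    To make the companding half of the joint scheme concrete, the assumed sketch below computes the PAPR of a 4QAM, 512-point-IFFT OFDM symbol before and after mu-law companding; the mu value and function names are illustrative only and are not taken from the paper, and the PTS phase-factor search is not shown.

        # Assumed sketch: PAPR of an OFDM symbol, before and after mu-law companding.
        import numpy as np

        def papr_db(x):
            """Peak-to-average power ratio of a complex baseband signal, in dB."""
            power = np.abs(x) ** 2
            return 10.0 * np.log10(power.max() / power.mean())

        def mu_law_compand(x, mu=8.0):
            """Compress large amplitudes and expand small ones, preserving phase."""
            peak = np.abs(x).max()
            compressed = peak * np.log1p(mu * np.abs(x) / peak) / np.log1p(mu)
            return compressed * np.exp(1j * np.angle(x))

        # One 4QAM OFDM symbol with a 512-point IFFT, as in the abstract's setup.
        rng = np.random.default_rng(0)
        qam = (rng.choice([-1, 1], 512) + 1j * rng.choice([-1, 1], 512)) / np.sqrt(2)
        ofdm_symbol = np.fft.ifft(qam)
        print(papr_db(ofdm_symbol), papr_db(mu_law_compand(ofdm_symbol)))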

  12. OPERA: Objective Prism Enhanced Reduction Algorithms

    NASA Astrophysics Data System (ADS)

    Universidad Complutense de Madrid Astrophysics Research Group

    2015-09-01

    OPERA (Objective Prism Enhanced Reduction Algorithms) automatically analyzes astronomical images using the objective-prism (OP) technique to register thousands of low resolution spectra in large areas. It detects objects in an image, extracts one-dimensional spectra, and identifies the emission line feature. The main advantages of this method are: 1) to avoid subjectivity inherent to visual inspection used in past studies; and 2) the ability to obtain physical parameters without follow-up spectroscopy.

  13. Hair testing is taking root.

    PubMed

    Cooper, Gail Audrey Ann

    2011-11-01

    An increasing number of toxicology laboratories are choosing to expand the services they offer to include hair testing in response to customer demands. Hair provides the toxicologist with many advantages over conventional matrices in that it is easy to collect, is a robust and stable matrix that does not require refrigeration, and most importantly, provides a historical profile of an individual's exposure to drugs or analytes of interest. The establishment of hair as a complementary technique in forensic toxicology is a direct result of the success of the matrix in medicolegal cases and the wide range of applications. However, before introducing hair testing, laboratories must consider what additional requirements they will need that extend beyond simply adapting methodologies already validated for blood or urine. Hair presents many challenges with respect to the lack of available quality control materials, extensive sample handling protocols and low drug concentrations requiring greater instrument sensitivity. Unfortunately, a common pitfall involves over-interpretation of the findings and must be avoided. PMID:21868416

  14. Does being female provide a neuroprotective advantage following spinal cord injury?

    PubMed

    Datto, Jeffrey P; Yang, Jackie; Dietrich, W Dalton; Pearse, Damien D

    2015-10-01

    It has been controversial whether gender has any effect on recovery following spinal cord injury (SCI). Past experimental and clinical research aimed at addressing this subject has led to contrasting findings on whether females hold any advantage in locomotor recovery. Additionally, for studies supporting the notion of a female gender-related advantage, a definite cause has not been explained. In a recent study, using large sample sizes for comparative male and female spinal cord injury cohorts, we reported that a significant gender advantage favoring females existed in both tissue preservation and functional recovery after taking into consideration discrepancies in age and weight of the animals across sexes. Prior animal research frequently used sample sizes that were too small to determine significance with certainty and also did not account for two other factors that influence locomotor performance: age and weight. Our finding is important in light of the controversy surrounding the effect of gender on outcome and the fact that SCI affects more than ten thousand new individuals annually, a population that is disproportionately male. By deepening our understanding of why a gender advantage exists, potential new therapeutics can be designed to improve recovery for the male population following the initial trauma or putatively augment the neuroprotective privilege in females for enhanced outcomes. PMID:26692831

  15. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  16. IDP++: signal and image processing algorithms in C++ version 4.1

    SciTech Connect

    Lehman, S.K.

    1996-11-01

    IDP++ (Image and Data Processing in C++) is a collection of signal and image processing algorithms written in C++. It is a compiled signal processing environment which supports four data types of up to four dimensions. It is developed within Lawrence Livermore National Laboratory's Image and Data Processing group as a partial replacement for View. IDP++ takes advantage of the latest, implemented and actually working, object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable with a defined set of operators and functions in an intuitive manner. IDP++ is designed for real-time environments where interpreted processing packages are less efficient. IDP++ exists for both SUNs and Silicon Graphics using their most current compilers.

  17. On extracting brightness temperature maps from scanning radiometer data. [techniques for algorithm design

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Garza-Robles, R.

    1980-01-01

    The extraction of brightness temperature maps from scanning radiometer data is described as a typical linear inverse problem. Spatial quantization and parameter estimation are described and suggested as an advantageous approach to a solution. Since this approach explicitly takes into account the multivariate nature of the problem, it permits an accurate determination of the most detailed resolution extractable from the data as well as explicitly defining the possible compromises between accuracy and resolution. To illustrate the usefulness of the method for algorithm design and accuracy prediction, it was applied to the problem of providing brightness temperature maps during the NOSS flight segment. The most detailed possible resolution was determined, and a curve displaying the possible compromises between accuracy and resolution was provided.

  18. [An improved N-FINDR endmember extraction algorithm based on manifold learning and spatial information].

    PubMed

    Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo

    2013-09-01

    An improved N-FINDR endmember extraction algorithm combining manifold learning and spatial information is presented under nonlinear mixing assumptions. Firstly, adaptive local tangent space alignment is adopted to seek potential intrinsic low-dimensional structures of high-dimensional hyperspectral data and reduce the original data into a low-dimensional space. Secondly, spatial preprocessing is used, enhancing each pixel vector in spatially homogeneous areas according to the continuity of the spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method can increase the precision of endmember extraction by accounting for the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperforms the geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and spatial preprocessing N-FINDR (SPPNFINDR) methods. PMID:24369664

  19. Seeking the competitive advantage: it's more than cost reduction.

    PubMed

    South, S F

    1999-01-01

    Most organizations focus considerable time and energy on reducing operating costs as a way to attain marketplace advantage. This strategy was not inappropriate in the past. To be competitive in the future, however, focus must be placed on other issues, not just cost reduction. The near future will be dominated by service industries, knowledge management, and virtual partnerships, with production optimization and flexibility, innovation, and strong partnerships defining those organizations that attain competitive advantage. Competitive advantage will reside in clarifying the vision and strategic plan, reviewing and redesigning work processes to optimize resources and value-added work, and creating change-ready environments and empowered workforces. PMID:10557880

  20. Algorithm validation using multicolor phantoms.

    PubMed

    Samarov, Daniel V; Clarke, Matthew L; Lee, Ji Youn; Allen, David W; Litorja, Maritoni; Hwang, Jeeseong

    2012-06-01

    We present a framework for hyperspectral image (HSI) analysis validation, specifically abundance fraction estimation based on HSI measurements of water soluble dye mixtures printed on microarray chips. In our work we focus on the performance of two algorithms, the Least Absolute Shrinkage and Selection Operator (LASSO) and the Spatial LASSO (SPLASSO). The LASSO is a well known statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundance fractions in a HSI scene, the "sparse" representations provided by the LASSO are appropriate as not every pixel will be expected to contain every endmember. The SPLASSO is a novel approach we introduce here for HSI analysis which takes the framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. In our work here we introduce the dye mixture platform as a new benchmark data set for hyperspectral biomedical image processing and show our algorithm's improvement over the standard LASSO. PMID:22741077
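
    For readers unfamiliar with the LASSO step, a minimal per-pixel unmixing sketch using scikit-learn is given below. It covers only the plain LASSO, not the spatial SPLASSO extension, and the endmember matrix, alpha value and function name are assumptions for illustration rather than the authors' pipeline.

        # Assumed sketch: sparse, non-negative abundance estimation per pixel with the LASSO.
        import numpy as np
        from sklearn.linear_model import Lasso

        def lasso_abundances(cube, endmembers, alpha=0.01):
            """cube: (rows, cols, bands) HSI data; endmembers: (bands, n_endmembers) spectra."""
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands)
            model = Lasso(alpha=alpha, positive=True, max_iter=5000)  # abundances >= 0
            coefs = np.array([model.fit(endmembers, p).coef_ for p in pixels])
            return coefs.reshape(rows, cols, endmembers.shape[1])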

  1. Algorithms for Planning Multi-Object Spectroscopy Observations with the JWST Near-Infrared Spectrograph

    NASA Astrophysics Data System (ADS)

    Karakla, Diane M.; Pontoppidan, K.; Shyrokov, A.; Beck, T. L.; Valenti, J. A.; Soderblom, D. R.; Tumlinson, J.; Muzerolle, J.

    2014-01-01

    Planning observations for JWST NIRSpec Multi-Object Spectroscopy will be complex because of the fixed-grid nature of the Micro-Shutter Arrays (MSAs) used for this instrument mode. Two algorithms have been incorporated into the 'MSA Planning Tool' (MPT) in the Astronomer's Proposal Tool (APT) for this NIRSpec observation planning process. The 'Basic Algorithm' and the 'Constrained Algorithm' both determine a set of on-sky pointing positions which yield an optimal number of science sources observed per MSA shutter configuration, but these algorithms have different strategies for generating their observing plans. The Basic algorithm uses a defined set of fixed dithers specified by the observer, while the Constrained algorithm can more flexibly define dithers by merely constraining offsets from one pointing position to the next. Each algorithm offers advantages for different observing cases. This poster describes the two algorithms and their products, and clarifies observing cases where clear planning advantages are offered by each.

  2. A Sparse Self-Consistent Field Algorithm and Its Parallel Implementation: Application to Density-Functional-Based Tight Binding.

    PubMed

    Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias

    2014-06-10

    We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to exploit efficiently modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion. PMID:26580754

  3. A real-time GPU implementation of the SIFT algorithm for large-scale video analysis tasks

    NASA Astrophysics Data System (ADS)

    Fassold, Hannes; Rosner, Jakub

    2015-02-01

    The SIFT algorithm is one of the most popular feature extraction methods and is therefore widely used in all sorts of video analysis tasks like instance search and duplicate/near-duplicate detection. We present an efficient GPU implementation of the SIFT descriptor extraction algorithm using CUDA. The major steps of the algorithm are presented, and for each step we describe how to parallelize it massively and efficiently, how to take advantage of unique GPU capabilities like shared memory and texture memory, and how to avoid or minimize common GPU performance pitfalls. We compare the GPU implementation with the reference CPU implementation in terms of runtime and quality and achieve a speedup factor of approximately 3-5 for SD and 5-6 for Full HD video with respect to a multi-threaded CPU implementation, allowing us to run the SIFT descriptor extraction algorithm in real time on SD video. Furthermore, quality tests show that the GPU implementation gives the same quality as the reference CPU implementation from the HessSIFT library. We further describe the benefits of GPU-accelerated SIFT descriptor calculation for video analysis applications such as near-duplicate video detection.
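
    As a point of reference for the comparison described above, a CPU-side SIFT extraction with OpenCV looks roughly like the sketch below; this is only an assumed baseline illustration, not the HessSIFT reference library or the authors' CUDA implementation.

        # Assumed sketch: SIFT keypoints and descriptors for one video frame with OpenCV.
        import cv2

        def sift_descriptors(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            sift = cv2.SIFT_create()
            keypoints, descriptors = sift.detectAndCompute(gray, None)
            return keypoints, descriptors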

  4. Hardware acceleration of lucky-region fusion (LRF) algorithm for high-performance real-time video processing

    NASA Astrophysics Data System (ADS)

    Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2015-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.

  5. Taking Care of Your Diabetes Means Taking Care of Your Heart

    MedlinePlus

    ENGLISH Taking Care of Your Diabetes Means Taking Care of Your Heart Diabetes and Heart Disease For people with diabetes, heart ... such as a heart attack or stroke. Taking care of your diabetes can also help you take ...

  6. Feature Selection via Modified Gravitational Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2015-03-01

    Feature selection is the process of selecting a subset of relevant and most informative features which efficiently represents the input data. We propose a feature selection algorithm based on an n-dimensional gravitational optimization algorithm (NGOA), which rests on the principle of gravitational fields. The objective function of the optimization algorithm is a non-linear function of variables, called masses, that are defined based on the extracted features. The forces between the masses, as well as their new locations, are calculated from the value of the objective function and the values of the masses. We extracted a variety of features by applying different wavelet transforms and statistical methods to FLAIR and T1-weighted MR brain images, with two classes of normal and abnormal tissue. The extracted features are divided into groups of five, and the best feature in each group is selected using the n-dimensional gravitational optimization algorithm and a support vector machine classifier. The selected features from each group are then regrouped into new groups of five, and the process repeats until the desired number of features has been selected. The advantage of the NGOA algorithm is that the probability of being drawn into a locally optimal solution is very low. The experimental results show that our method outperforms some standard feature selection algorithms on both real data and simulated brain tumor data.

  7. A segmentation algorithm for automated tracking of fast swimming unlabelled cells in three dimensions.

    PubMed

    Pimentel, J A; Carneiro, J; Darszon, A; Corkidi, G

    2012-01-01

    Recent advances in microscopy and cytolabelling methods enable the real-time imaging of cells as they move and interact in their real physiological environment. Scenarios in which multiple cells move autonomously in all directions are not uncommon in biology. A remarkable example is the swimming of marine spermatozoa in search of the conspecific oocyte. Imaging cells in these scenarios, particularly when they move fast and are poorly labelled or even unlabelled, requires very fast three-dimensional time-lapse (3D+t) imaging. This 3D+t imaging poses challenges not only to the acquisition systems but also to the image analysis algorithms. It is in this context that this work describes an original automated multiparticle segmentation method to analyse motile translucent cells in 3D microscopical volumes. The proposed segmentation technique takes advantage of the way the cell appearance changes with the distance to the focal plane position. The cells' translucent properties and their interaction with light produce a specific pattern: when the cell is within or close to the focal plane, its two-dimensional (2D) appearance matches a bright spot surrounded by a dark ring, whereas when it is farther from the focal plane the cell contrast is inverted, looking like a dark spot surrounded by a bright ring. The proposed method analyses the acquired video sequence frame by frame, taking advantage of 2D image segmentation algorithms to identify and select candidate cellular sections. The crux of the method is the sequential filtering of the candidate sections, first by template matching against the in-focus and out-of-focus templates and second by considering adjacent candidate sections in 3D. These sequential filters effectively narrow down the number of segmented candidate sections, making the automatic tracking of cells in three dimensions a straightforward operation. PMID:21999166
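
    A minimal sketch of the two-template filtering step described above might look like the following; the OpenCV-based scoring, the inverted-template construction and the threshold decision are assumptions for illustration, not the authors' implementation.

        # Assumed sketch: score a candidate 2D section against an in-focus template
        # (bright spot, dark ring) and its contrast-inverted out-of-focus counterpart.
        import cv2

        def template_score(candidate_patch, in_focus_template):
            """Both inputs are 8-bit grayscale images; the patch must be at least template-sized."""
            out_of_focus_template = 255 - in_focus_template   # inverted-contrast appearance
            return max(
                cv2.matchTemplate(candidate_patch, t, cv2.TM_CCOEFF_NORMED).max()
                for t in (in_focus_template, out_of_focus_template)
            )

        # A candidate section is kept when template_score(...) exceeds a chosen threshold.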

  8. Review of ADHD Pharmacotherapies: Advantages, Disadvantages, and Clinical Pearls

    ERIC Educational Resources Information Center

    Daughton, Joan M.; Kratochvil, Christopher J.

    2009-01-01

    The advantages, disadvantages, as well as helpful hints on when to use several drug therapies against attention deficit hyperactivity disorder are discussed. The drugs discussed are methylphenidate, atomoxetine, clonidine, and bupropion.

  9. Back to basics: a bilingual advantage in infant visual habituation.

    PubMed

    Singh, Leher; Fu, Charlene S L; Rahman, Aishah A; Hameed, Waseem B; Sanmugam, Shamini; Agarwal, Pratibha; Jiang, Binyan; Chong, Yap Seng; Meaney, Michael J; Rifkin-Graboi, Anne

    2015-01-01

    Comparisons of cognitive processing in monolinguals and bilinguals have revealed a bilingual advantage in inhibitory control. Recent studies have demonstrated advantages associated with exposure to two languages in infancy. However, the domain specificity and scope of the infant bilingual advantage in infancy remains unclear. In the present study, 114 monolingual and bilingual infants were compared in a very basic task of information processing-visual habituation-at 6 months of age. Bilingual infants demonstrated greater efficiency in stimulus encoding as well as in improved recognition memory for familiar stimuli as compared to monolinguals. Findings reveal a generalized cognitive advantage in bilingual infants that is broad in scope, early to emerge, and not specific to language. PMID:25074016

  10. Cognitive advantage in bilingualism: an example of publication bias?

    PubMed

    de Bruin, Angela; Treccani, Barbara; Della Sala, Sergio

    2015-01-01

    It is a widely held belief that bilinguals have an advantage over monolinguals in executive-control tasks, but is this what all studies actually demonstrate? The idea of a bilingual advantage may result from a publication bias favoring studies with positive results over studies with null or negative effects. To test this hypothesis, we looked at conference abstracts from 1999 to 2012 on the topic of bilingualism and executive control. We then determined which of the studies they reported were subsequently published. Studies with results fully supporting the bilingual-advantage theory were most likely to be published, followed by studies with mixed results. Studies challenging the bilingual advantage were published the least. This discrepancy was not due to differences in sample size, tests used, or statistical power. A test for funnel-plot asymmetry provided further evidence for the existence of a publication bias. PMID:25475825

  11. A Cross Unequal Clustering Routing Algorithm for Sensor Network

    NASA Astrophysics Data System (ADS)

    Tong, Wang; Jiyi, Wu; He, Xu; Jinghua, Zhu; Munyabugingo, Charles

    2013-08-01

    In clustering routing protocols for wireless sensor networks, the cluster size is generally fixed, which can easily lead to the "hot spot" problem. Furthermore, most routing algorithms barely consider the high energy consumption caused by long-distance communication between adjacent cluster heads. This paper therefore proposes a new cross unequal clustering routing algorithm based on the EEUC algorithm. To address the defects of EEUC, the calculation of the competition radius takes the node's position and remaining energy into account so that the load on cluster heads is more balanced. At the same time, a node adjacent to the cluster is used to relay data, reducing the energy loss of cluster heads. Simulation experiments show that, compared with LEACH and EEUC, the proposed algorithm can effectively reduce the energy loss of cluster heads, balance the energy consumption among all nodes in the network and improve the network lifetime.

  12. Online Planning Algorithms for POMDPs

    PubMed Central

    Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim

    2009-01-01

    Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080

  13. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
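
    The assumed sketch below illustrates the basic mechanics of bootstrapping a statistic from a small number of stochastic-optimizer runs; the chosen statistic, sample sizes and synthetic numbers are illustrative only and do not reproduce the authors' profiles.

        # Assumed sketch: bootstrap the sampling distribution of a statistic (here the
        # median final objective value over runs) for two stochastic optimizers.
        import numpy as np

        def bootstrap_statistic(run_values, statistic=np.median, n_boot=2000, seed=0):
            rng = np.random.default_rng(seed)
            run_values = np.asarray(run_values)
            idx = rng.integers(0, len(run_values), size=(n_boot, len(run_values)))
            return statistic(run_values[idx], axis=1)   # one value per bootstrap resample

        # Thirty final objective values per algorithm from repeated runs (synthetic numbers).
        alg_a = np.random.default_rng(1).normal(1.00, 0.05, 30)
        alg_b = np.random.default_rng(2).normal(1.02, 0.20, 30)
        dist_a, dist_b = bootstrap_statistic(alg_a), bootstrap_statistic(alg_b)
        print(np.percentile(dist_a, [2.5, 97.5]), np.percentile(dist_b, [2.5, 97.5]))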

  14. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations often consume the most CPU time, such as computational fluid dynamics. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.

  15. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    NASA Astrophysics Data System (ADS)

    Asebedo, Antonio Ray

    through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.

  16. Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance

    NASA Astrophysics Data System (ADS)

    Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.

    2015-04-01

    Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover

  17. Take-all or nothing.

    PubMed

    Hernández-Restrepo, M; Groenewald, J Z; Elliott, M L; Canning, G; McMillan, V E; Crous, P W

    2016-01-01

    Take-all disease of Poaceae is caused by Gaeumannomyces graminis (Magnaporthaceae). Four varieties are recognised in G. graminis based on ascospore size, hyphopodial morphology and host preference. The aim of the present study was to clarify boundaries among species and varieties in Gaeumannomyces by combining morphology and multi-locus phylogenetic analyses based on partial gene sequences of ITS, LSU, tef1 and rpb1. Two new genera, Falciphoriella and Gaeumannomycella were subsequently introduced in Magnaporthaceae. The resulting phylogeny revealed several cryptic species previously overlooked within Gaeumannomyces. Isolates of Gaeumannomyces were distributed in four main clades, from which 19 species could be delimited, 12 of which were new to science. Our results show that the former varieties Gaeumannomyces graminis var. avenae and Gaeumannomyces graminis var. tritici represent species phylogenetically distinct from G. graminis, for which the new combinations G. avenae and G. tritici are introduced. Based on molecular data, morphology and host preferences, Gaeumannomyces graminis var. maydis is proposed as a synonym of G. radicicola. Furthermore, an epitype for Gaeumannomyces graminis var. avenae was designated to help stabilise the application of that name. PMID:27504028

  18. Microgravity Smoldering Combustion Takes Flight

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The Microgravity Smoldering Combustion (MSC) experiment lifted off aboard the Space Shuttle Endeavour in September 1995 on the STS-69 mission. This experiment is part of series of studies focused on the smolder characteristics of porous, combustible materials in a microgravity environment. Smoldering is a nonflaming form of combustion that takes place in the interior of combustible materials. Common examples of smoldering are nonflaming embers, charcoal briquettes, and cigarettes. The objective of the study is to provide a better understanding of the controlling mechanisms of smoldering, both in microgravity and Earth gravity. As with other forms of combustion, gravity affects the availability of air and the transport of heat, and therefore, the rate of combustion. Results of the microgravity experiments will be compared with identical experiments carried out in Earth's gravity. They also will be used to verify present theories of smoldering combustion and will provide new insights into the process of smoldering combustion, enhancing our fundamental understanding of this frequently encountered combustion process and guiding improvement in fire safety practices.

  19. HOW MUCH FAVORABLE SELECTION IS LEFT IN MEDICARE ADVANTAGE?

    PubMed Central

    PRICE, MARY; MCWILLIAMS, J. MICHAEL; HSU, JOHN; MCGUIRE, THOMAS G.

    2015-01-01

    The health economics literature contains two models of selection, one with endogenous plan characteristics to attract good risks and one with fixed plan characteristics; neither model contains a regulator. Medicare Advantage, a principal example of selection in the literature, is, however, subject to anti-selection regulations. Because selection causes economic inefficiency and because the historically favorable selection into Medicare Advantage plans increased government cost, the effectiveness of the anti-selection regulations is an important policy question, especially since the Medicare Advantage program has grown to comprise 30 percent of Medicare beneficiaries. Moreover, similar anti-selection regulations are being used in health insurance exchanges for those under 65. Contrary to earlier work, we show that the strengthened anti-selection regulations that Medicare introduced starting in 2004 markedly reduced government overpayment attributable to favorable selection in Medicare Advantage. At least some of the remaining selection is plausibly related to fixed plan characteristics of Traditional Medicare versus Medicare Advantage rather than changed selection strategies by Medicare Advantage plans. PMID:26389127

  20. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  1. Contrasting behavior between dispersive seismic velocity and attenuation: advantages in subsoil characterization.

    PubMed

    Zhubayev, Alimzhan; Ghose, Ranajit

    2012-02-01

    A careful look into the pertinent models of poroelasticity reveals that in water-saturated sediments or soils, the seismic (P and S wave) velocity dispersion and attenuation in the low field-seismic frequency band (20-200 Hz) have a contrasting behavior in the porosity-permeability domain. Taking advantage of this nearly orthogonal behavior, a new approach has been proposed, which leads to unique estimates of both porosity and permeability simultaneously. Through realistic numerical tests, the effect of maximum frequency content in data and the integration of P and S waves on the accuracy and robustness of the estimates are demonstrated. PMID:22352618

  2. A Danger-Theory-Based Immune Network Optimization Algorithm

    PubMed Central

    Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times. PMID:23483853

  3. Rayleigh wave nonlinear inversion based on the Firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou

    2014-06-01

    Rayleigh waves have high amplitude, low frequency, and low velocity, which are treated as strong noise to be attenuated in reflected seismic surveys. This study addresses how to identify useful shear wave velocity profile and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, has the advantages of being robust, highly effective, and allows global searching. This algorithm is feasible and has advantages for use in Rayleigh wave inversion with both synthetic models and field data. The results show that the Firefly algorithm, which is a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.

  4. Advantages of a Grazing Incidence Monochromator in the Extreme Ultraviolet

    NASA Astrophysics Data System (ADS)

    Barton, Sarah; Turley, R. Steven

    2006-10-01

    One of the main goals of the BYU Thin Films group is to find optical constants for materials in the Extreme Ultraviolet. This is accomplished by taking reflection and transmission measurements. The addition of a Grazing Incidence Monochromator to our current system allows us to take reflectance measurements at wavelengths currently unavailable on the Normal Incidence Monochromator (Monarch).

  5. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time. PMID:22874883

  6. Does living donation have advantages over deceased donation in liver transplantation?

    PubMed

    Kaido, Toshimi; Uemoto, Shinji

    2010-10-01

    Liver transplantation (LT) is the best treatment option for patients with end-stage liver disease. Living donor LT (LDLT) has developed as an alternative to deceased donor LT (DDLT) in order to overcome the critical shortage of deceased organ donations, particularly in Asia. LDLT offers several advantages over DDLT. The major advantage of LDLT is the reduction in waiting time mortality. Especially among patients with hepatocellular carcinoma (HCC), LDLT can shorten the waiting time and lower the dropout rate. The Hong Kong group reported that median waiting time was significantly shorter for LDLT than for DDLT. Intention-to-treat survival rates of HCC patients with voluntary live donors were significantly higher than those of patients without voluntary live donors. In contrast, a multicenter adult-to-adult LDLT retrospective cohort study reported that LDLT recipients displayed a significantly higher rate of HCC recurrence than DDLT recipients, although LDLT recipients had shorter waiting times than DDLT recipients. The advantage of LDLT involves the more liberal criteria for HCC compared with those for DDLT. Various preoperative interventions including nutritional treatment can also be planned for both the donor and recipient in LDLT. Conversely, LDLT has marked unfavorable characteristics in terms of donor risks. Donor morbidity is not infrequent and the donor mortality rate is estimated at around 0.1-0.3%. In conclusion, living donation is not necessarily advantageous over deceased donation in LT. Taking the advantages and disadvantages of each option into consideration, LDLT and DDLT should both be used to facilitate effective LT for patients requiring transplant. PMID:20880167

  7. Simple algorithm for improved security in the FDDI protocol

    NASA Astrophysics Data System (ADS)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties in a fiber optic ring network. This method differs from conventional methods in that end to end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit stream cipher method. The transmitting station takes the intended confidential message and uses a simple modulo two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support realtime communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
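
    The modulo-two addition at the heart of the proposal is essentially a one-time-pad style XOR. A minimal, assumed sketch of that single operation is given below; the IV generation and names are illustrative, and the FDDI MAC-sublayer integration and IV distribution scheme are of course not shown.

        # Assumed sketch: XOR (modulo-two addition) of a payload with a per-transmission
        # initialization vector; without the IV the ciphertext is effectively unreadable.
        import secrets

        def xor_bytes(data: bytes, keystream: bytes) -> bytes:
            return bytes(d ^ k for d, k in zip(data, keystream))

        message = b"confidential frame payload"
        iv = secrets.token_bytes(len(message))        # unique per confidential transmission
        ciphertext = xor_bytes(message, iv)           # what travels on the ring
        assert xor_bytes(ciphertext, iv) == message   # receiver, which also holds the IV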

  8. Algorithm-dependent fault tolerance for distributed computing

    SciTech Connect

    P. D. Hough; M. e. Goldsby; E. J. Walsh

    2000-02-01

    Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.

  9. Improved interpretation of satellite altimeter data using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Messa, Kenneth; Lybanon, Matthew

    1992-01-01

    Genetic algorithms (GA) are optimization techniques that are based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves in simulation of Darwin's 'survival of the fittest'. GA's have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves the representation of the ocean surface model as a string of parameters or coefficients from the model. The GA searches in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
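
    As an illustration of the generic GA loop the passage describes, the assumed sketch below evolves a population of parameter vectors toward higher fitness; the selection, crossover and mutation choices and the toy fitness function are generic assumptions, not the ocean-surface model used in the study.

        # Assumed sketch: a basic genetic algorithm over real-valued parameter vectors.
        import numpy as np

        def genetic_search(fitness, n_params, pop_size=60, generations=200, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.normal(size=(pop_size, n_params))
            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fittest half
                mates = parents[rng.integers(0, len(parents), size=(pop_size, 2))]
                cut = rng.integers(1, n_params, size=pop_size)[:, None]
                mask = np.arange(n_params) < cut                     # one-point crossover
                pop = np.where(mask, mates[:, 0], mates[:, 1])
                pop += rng.normal(scale=0.05, size=pop.shape)        # mutation
            return pop[np.argmax([fitness(ind) for ind in pop])]

        best = genetic_search(lambda p: -np.sum((p - 1.5) ** 2), n_params=4)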

  10. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  11. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  12. A Review of Control Algorithms for Autonomous Quadrotors

    NASA Astrophysics Data System (ADS)

    Zulu, Andrew; John, Samuel

    The quadrotor unmanned aerial vehicle is a great platform for control systems research as its nonlinear nature and under-actuated configuration make it ideal to synthesize and analyze control algorithms. After a brief explanation of the system, several algorithms have been analyzed including their advantages and disadvantages: PID, Linear Quadratic Regulator (LQR), Sliding mode, Backstepping, Feedback linearization, Adaptive, Robust, Optimal, L1, H-infinity, Fuzzy logic and Artificial neural networks. The conclusion of this work is a proposal of hybrid systems to be considered, as they combine advantages from more than one control philosophy.

  13. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  14. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. By comparing with other well known scoring functions, the results show that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. Artificial Neural Network (ANN) was also used in order to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101

  15. A prescription of Winograd's discrete Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1979-01-01

    A detailed and complete description of Winograd's discrete Fourier transform (DFT) algorithm is presented, omitting all proofs and derivations. The algorithm begins by transferring data from the input vector array to a working array in which the actual transformation takes place; these transfers constitute the input scrambling and output unscrambling stages. A third array holds constants required in the transformation stage, which are evaluated in a precomputation stage. The algorithm is presented as several FORTRAN subroutines that should not be mistaken for a practical software implementation, since they are written for clarity rather than speed.
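
    The overall flow described above (scramble the input into a working array, apply precomputed constants in a transformation stage, and unscramble the output) can be sketched as follows. The transformation stage here is a naive matrix DFT placeholder, not Winograd's reduced-multiplication kernel, and the permutations default to identity for illustration.

    ```python
    import numpy as np

    # Structural sketch of the three stages: input scrambling, transformation using
    # precomputed constants, and output unscrambling. The transform is a naive DFT,
    # not Winograd's reduced-multiplication kernel.
    def precompute_constants(n):
        k = np.arange(n)
        return np.exp(-2j * np.pi * np.outer(k, k) / n)   # DFT matrix as the "constants"

    def three_stage_dft(x, perm_in=None, perm_out=None):
        n = len(x)
        perm_in = np.arange(n) if perm_in is None else perm_in
        perm_out = np.arange(n) if perm_out is None else perm_out
        work = np.asarray(x, dtype=complex)[perm_in]       # input scrambling
        transformed = precompute_constants(n) @ work       # transformation stage
        return transformed[perm_out]                       # output unscrambling

    spectrum = three_stage_dft([1.0, 2.0, 3.0, 4.0])
    ```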

  16. Inhomogeneous phase shifting: an algorithm for nonconstant phase displacements

    SciTech Connect

    Tellez-Quinones, Alejandro; Malacara-Doblado, Daniel

    2010-11-10

    In this work, we have developed an algorithm that differs from the classical ones used in phase-shifting interferometry. Classical algorithms typically use constant or homogeneous phase displacements; with appropriate weight factors in the formula that recovers the wrapped phase, they can be quite accurate and insensitive to detuning. However, they have not been formulated for variable or inhomogeneous displacements. We generalize these formulas and obtain expressions for an implementation with variable displacements, along with ways to make the algorithms partially insensitive to these arbitrary shift errors.
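
    For reference, the classical constant-displacement case that this work generalizes can be written in a few lines. The sketch below is the standard four-step algorithm with 90-degree shifts, not the variable-displacement formulas derived in the paper.

    ```python
    import numpy as np

    # Classical constant-shift case: the four-step algorithm with phase displacements
    # of 0, 90, 180, and 270 degrees recovers the wrapped phase from four frames.
    def four_step_wrapped_phase(i1, i2, i3, i4):
        return np.arctan2(i4 - i2, i1 - i3)   # wrapped to (-pi, pi]

    # Synthetic check: a known phase of 0.7 rad is recovered.
    phi = 0.7
    frames = [1.0 + 0.5 * np.cos(phi + d) for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    print(four_step_wrapped_phase(*frames))   # ~0.7
    ```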

  17. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. It runs in time linear in the length of the trace, with a constant factor that depends on the size of the LTL formula, and it needs only constant memory, again determined by the size of the formula.
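
    The dynamic-programming idea can be illustrated directly in code: traverse the trace backwards and, at each event, compute the truth value of every subformula from its value at the next event, so only one layer of results is kept in memory. The formula encoding and finite-trace conventions below (for example, treating "next" as false past the end of the trace) are illustrative assumptions, not the paper's generated code.

    ```python
    # Dynamic-programming check of an LTL formula over a finite trace, traversing the
    # trace backwards; memory use is independent of the trace length.
    def holds(formula, trace):
        """formula: nested tuples, e.g. ('G', ('->', 'p', ('F', 'q')));
        trace: list of sets of atomic propositions."""
        nxt = {}                                   # subformula values at the next event
        for event in reversed(trace):
            cur = {}
            def ev(f):
                if f in cur:
                    return cur[f]
                if isinstance(f, str):             # atomic proposition
                    r = f in event
                else:
                    op = f[0]
                    if op == '!':
                        r = not ev(f[1])
                    elif op == '->':
                        a, b = ev(f[1]), ev(f[2])  # evaluate both so values persist to nxt
                        r = (not a) or b
                    elif op == 'X':
                        ev(f[1])                   # record the operand for the previous step
                        r = nxt.get(f[1], False)   # 'next' is false past the end of the trace
                    elif op == 'F':
                        r = ev(f[1]) or nxt.get(f, False)
                    elif op == 'G':
                        r = ev(f[1]) and nxt.get(f, True)
                    elif op == 'U':
                        a, b = ev(f[1]), ev(f[2])
                        r = b or (a and nxt.get(f, False))
                    else:
                        raise ValueError(op)
                cur[f] = r
                return r
            ev(formula)
            nxt = cur
        return nxt.get(formula, False)

    # Example: "always (p implies eventually q)" on a three-event trace.
    print(holds(('G', ('->', 'p', ('F', 'q'))), [{'p'}, set(), {'q'}]))   # True
    ```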

  18. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
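
    For context, the basic single-channel Sudakov veto algorithm that the paper builds on can be sketched as follows: the next emission scale is drawn from an analytically invertible overestimate of the splitting kernel and accepted with probability equal to the ratio of the true kernel to the overestimate. The kernels, scales, and cutoff below are illustrative choices, not those used in the paper's tests.

    ```python
    import random

    # Minimal Sudakov veto sketch: generate the next emission scale t below t_start.
    def f(t):
        return 0.5 / t        # stand-in for a kernel whose Sudakov cannot be inverted analytically

    def g(t):
        return 1.0 / t        # overestimate g >= f with an easily inverted Sudakov factor

    def next_emission(t_start, t_cutoff):
        t = t_start
        while True:
            t = t * random.random()            # invert the overestimate's Sudakov: Delta_g = t/t_prev
            if t < t_cutoff:
                return None                    # no emission above the cutoff
            if random.random() < f(t) / g(t):
                return t                       # accept; otherwise veto and continue downward from t

    print(next_emission(t_start=100.0, t_cutoff=1.0))
    ```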

  19. Dynamic multi DAG scheduling algorithm for optical grid environment

    NASA Astrophysics Data System (ADS)

    Zhu, Liying; Sun, Zhenyu; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng

    2007-11-01

    With the evolution of Optical Grid technology, dynamic task scheduling can greatly improve the efficiency of a Grid environment under realistic conditions. We propose a Serve On Time (SOT) algorithm, based on the idea of combining all dynamically arriving tasks so that every task obtains the right to be served as soon as possible. We then introduce the basic First Come First Serve (FCFS) algorithm for comparison, and a simulation shows the advantage of SOT.
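
    The FCFS baseline used for comparison is straightforward to state in code. The single-resource sketch below, with made-up arrival times and durations, illustrates only that baseline; the abstract does not give enough detail to sketch SOT itself.

    ```python
    # First Come First Serve (FCFS) baseline: tasks are served on a single resource
    # strictly in arrival order. Task data below are illustrative.
    def fcfs_schedule(tasks):
        """tasks: list of (arrival_time, duration); returns (task_index, start, finish)."""
        schedule, free_at = [], 0.0
        for i, (arrival, duration) in enumerate(sorted(tasks, key=lambda t: t[0])):
            start = max(arrival, free_at)
            free_at = start + duration
            schedule.append((i, start, free_at))
        return schedule

    for entry in fcfs_schedule([(0.0, 5.0), (1.0, 2.0), (2.0, 1.0)]):
        print(entry)
    ```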

  20. Internal quantum efficiency analysis of solar cell by genetic algorithm

    SciTech Connect

    Xiong, Kanglin; Yang, Hui; Lu, Shulong; Zhou, Taofei; Wang, Rongxin; Qiu, Kai; Dong, Jianrong; Jiang, Desheng

    2010-11-15

    To investigate the factors limiting the performance of a GaAs solar cell, a genetic algorithm is employed to fit the experimentally measured internal quantum efficiency (IQE) over the full spectral range. Device parameters such as diffusion lengths and surface recombination velocities are extracted. Electron beam induced current (EBIC) measurements were performed in the base region of the cell, and the diffusion length obtained agrees with the fitted value. The advantage of the genetic algorithm is illustrated. (author)
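
    A genetic-algorithm fit of this kind amounts to evolving a population of candidate parameter sets against a squared-error objective. The sketch below shows that loop with a placeholder model() callable and parameter bounds, which are illustrative assumptions rather than the device physics used in the paper.

    ```python
    import random

    # Minimal genetic-algorithm fit sketch: evolve parameter vectors to minimize the
    # squared error between a model IQE curve and measured data. The model and bounds
    # are supplied by the caller and are placeholders here.
    def fitness(params, wavelengths, measured_iqe, model):
        return -sum((model(w, params) - m) ** 2 for w, m in zip(wavelengths, measured_iqe))

    def genetic_fit(wavelengths, measured_iqe, model, bounds, pop=50, generations=100):
        population = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
        for _ in range(generations):
            population.sort(key=lambda p: fitness(p, wavelengths, measured_iqe, model),
                            reverse=True)
            parents = population[: pop // 2]
            children = []
            while len(children) < pop - len(parents):
                a, b = random.sample(parents, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]             # crossover
                child = [min(max(x + random.gauss(0, 0.05 * (hi - lo)), lo), hi)
                         for x, (lo, hi) in zip(child, bounds)]         # bounded mutation
                children.append(child)
            population = parents + children
        return max(population, key=lambda p: fitness(p, wavelengths, measured_iqe, model))
    ```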