Science.gov

Sample records for algorithm takes advantage

  1. Taking advantage of natural biodegradation

    SciTech Connect

    Butler, W.A.; Bartlett, C.L.

    1995-12-31

    A chemical manufacturing facility in central New Jersey evaluated alternatives to address low levels of volatile organic compounds (VOCs) in groundwater. Significant natural attenuation of VOCs was observed in groundwater, and is believed to be the result of natural biodegradation, commonly referred to as intrinsic bioremediation. A study consisting of groundwater sampling and analysis, field monitoring, and transport modeling was conducted to evaluate and confirm this phenomenon. The primary conclusion that can be drawn from the study is that observed natural attenuation of VOCs in groundwater is due to natural biodegradation. Based on the concept that natural biodegradation will minimize contaminant migration, bioventing has been implemented to remove the vadose-zone source of VOCs to groundwater. Taking advantage of natural biodegradation has resulted in significant cost savings compared to implementing a conventional groundwater pump-and-treat system, while still protecting human health and the environment.

  2. TAKING SCIENTIFIC ADVANTAGE OF A DISASTROUS OIL SPILL

    EPA Science Inventory

    On 19 January 1996, the North Cape barge ran aground on Moonstone Beach in southern Rhode Island, releasing 828,000 gallons of refined oil. This opportunistic study was designed to take scientific advantage of the most severely affected seabird, the common loon (Gavia immer). As...

  3. Neural Correlates of Traditional Chinese Medicine Induced Advantageous Risk-Taking Decision Making

    ERIC Educational Resources Information Center

    Lee, Tiffany M. Y.; Guo, Li-guo; Shi, Hong-zhi; Li, Yong-zhi; Luo, Yue-jia; Sung, Connie Y. Y.; Chan, Chetwyn C. H.; Lee, Tatia M. C.

    2009-01-01

    This fMRI study examined the neural correlates of the observed improvement in advantageous risk-taking behavior, as measured by the number of adjusted pumps in the Balloon Analogue Risk Task (BART), following a 60-day course of a Traditional Chinese Medicine (TCM) recipe, specifically designed to regulate impulsiveness in order to modulate…

  4. Taking advantage of unspecific interactions to produce highly active magnetic nanoparticle-antibody conjugates.

    PubMed

    Puertas, Sara; Batalla, Pilar; Moros, María; Polo, Ester; Del Pino, Pablo; Guisan, José M; Grazú, Valeria; de la Fuente, Jesús M

    2011-06-28

    Several strategies for linking antibodies (Abs) through their Fc region in an oriented manner have been proposed to date. With these strategies, the Fab region of the Ab remains available for antigen recognition, leading to more efficient interaction. Most of these strategies are complex processes optimized mainly for the functionalization of surfaces or microbeads. They require, however, modification of the Ab through several purification steps or the use of expensive immobilized proteins. Moreover, the functionalization of magnetic nanoparticles (MNPs) has turned out to be much more complex than expected because most MNPs are unstable at high ionic strength and non-neutral pH. An efficient, easy, and universal methodology for immobilizing unmodified Abs onto MNPs without involving their Fab regions is therefore still missing. Herein, we propose the functionalization of MNPs via a two-step strategy that takes advantage of reversible ionic interactions between the Ab and the MNP. These interactions orient the Ab on the MNP surface before it is attached irreversibly via covalent bonds. Three Abs (immunoglobulin G class) with very different isoelectric points (against peroxidase, carcinoembryonic antigen, and human chorionic gonadotropin hormone) were used to prove the general applicability of the proposed strategy and its utility for the development of more bioactive NPs.

  5. ATTRACT--applications in telemedicine taking rapid advantage of cable television network evolution.

    PubMed

    Anogianakis, G; Maglavera, S; Pomportsis, A

    1998-01-01

    ATTRACT is a project that intends to provide telemedicine services over cable television networks. It is a European Commission-funded project (Healthcare Telematics). The main objective of ATTRACT is to take advantage of emerging European cable television network infrastructures and offer cost-effective care to patients at home. This will be achieved through a set of broadband network applications that provide low-cost interactive health-care services at home. The applications will be based on existing or developing European cable television network infrastructures in order to provide all kinds of users with affordable homecare services. ATTRACT's intention is that citizens and users benefit from high-quality access to home telemedicine services, which also implies cost savings for patients, their families, and the already overburdened health institutions. In addition, European industries will have extensive opportunities to develop, evaluate, and validate broadband network infrastructures providing multimedia and interactive telemedical services at home. ATTRACT contributes to the EU telecommunications and telematics policy objectives that promote the development and validation of "applications and services" which "provide an intelligent telematic environment for the patient in institutions and other points of care that helps the patient to continue, as far as possible, normal activities and external communication".

  6. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects; in practice, that goal is rarely met, and success comes in degrees. With the emphasis on building low-cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test, and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center, characterized by distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses, and it also reuses simulator software in the mission-specific versions of the TASS. Very little new software needs to be developed, mainly mission-specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  7. Taking Advantage of STEM (Science, Technology, Engineering, and Math) Popularity to Enhance Student/Public Engagement

    NASA Astrophysics Data System (ADS)

    Dittrich, T. M.

    2011-12-01

    ...Our goal is to expand the use of these modules to a broader public audience, including at a future campus/public event known as "All Things Water". We have also organized a walking tour/demo with 3rd-5th graders in a small mining town west of Boulder, where we hiked to an old historical mine site, measured water quality (pH, dissolved lead, conductivity), and coated the inside of small bottles with silver. Organizing and hosting a conference can also be a great way to facilitate a discussion of ideas within the community. "All Things STEM" organized a broad student research conference related to water quality and water treatment, which included research from 22 students from 11 different countries. We worked with 12 local engineering consultants, municipalities, and local businesses to provide $2,000 for student awards. Our presentation will focus on lessons we have learned about taking advantage of student energy, excitement, and time on campus to secure funding for events that engage the public. We will also talk about our experiences in using student energy to develop partnerships with K-12 schools, community groups, and industry professionals.

  8. The pen is mightier than the keyboard: advantages of longhand over laptop note taking.

    PubMed

    Mueller, Pam A; Oppenheimer, Daniel M

    2014-06-01

    Taking notes on laptops rather than in longhand is increasingly common. Many researchers have suggested that laptop note taking is less effective than longhand note taking for learning. Prior studies have primarily focused on students' capacity for multitasking and distraction when using laptops. The present research suggests that even when laptops are used solely to take notes, they may still be impairing learning because their use results in shallower processing. In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand. We show that whereas taking more notes can be beneficial, laptop note takers' tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.

  9. Advantages of multiple algorithm support in treatment planning system for external beam dose calculations.

    PubMed

    2005-01-01

    The complexity of interactions and the nature of the approximations made in the formulation of a dose calculation algorithm require that the user be familiar with the limitations of the various models. As computer power keeps growing, calculation algorithms are tending more towards physically based models. The nature and quantity of the data required vary according to the model, which may be either measurement-based or physically based. The multiple dose calculation algorithm support found in the XiO treatment planning system can be used to advantage when a choice must be made between speed and accuracy. XiO thus allows end users to generate plans both accurately and quickly, optimizing the delivery of radiation therapy.

  10. Military Base Realignments and Closures: More Guidance and Information Needed to Take Advantage of Opportunities to Consolidate Training

    DTIC Science & Technology

    2016-02-01

    Report to Congressional Committees, February 2016, GAO-16-45, United States Government Accountability Office. MILITARY BASE REALIGNMENTS AND CLOSURES: More Guidance and Information Needed to Take Advantage of Opportunities to Consolidate Training. Why GAO Did This Study: ...evaluates the extent to which DOD has (1) implemented the recommendations requiring the services to relocate select training functions to increase

  11. Perspective-Taking Ability in Bilingual Children: Extending Advantages in Executive Control to Spatial Reasoning

    ERIC Educational Resources Information Center

    Greenberg, Anastasia; Bellana, Buddhika; Bialystok, Ellen

    2013-01-01

    Monolingual and bilingual 8-year-olds performed a computerized spatial perspective-taking task. Children were asked to decide how an observer saw a four-block array from one of three different positions (90 degrees, 180 degrees, and 270 degrees counter-clockwise from the child's position) by selecting one of four responses--the correct response,…

  12. Cell-Mediated Delivery of Nanoparticles: Taking Advantage of Circulatory Cells to Target Nanoparticles

    PubMed Central

    Anselmo, Aaron C.; Mitragotri, Samir

    2014-01-01

    Cellular hitchhiking leverages the use of circulatory cells to enhance the biological outcome of nanoparticle drug delivery systems, which often suffer from poor circulation time and limited targeting. Cellular hitchhiking utilizes the natural abilities of circulatory cells to: (i) navigate the vasculature while avoiding immune system clearance, (ii) remain relatively inert until needed and (iii) perform specific functions, including nutrient delivery to tissues, clearance of pathogens, and immune system surveillance. A variety of synthetic nanoparticles attempt to mimic these functional attributes of circulatory cells for drug delivery purposes. By combining the advantages of circulatory cells and synthetic nanoparticles, many advanced drug delivery systems have been developed that adopt the concept of cellular hitchhiking. Here, we review the development and specific applications of cellular hitchhiking-based drug delivery systems. PMID:24747161

  13. Social customer relationship management: taking advantage of Web 2.0 and Big Data technologies.

    PubMed

    Orenga-Roglá, Sergio; Chalmeta, Ricardo

    2016-01-01

    The emergence of Web 2.0 and Big Data technologies has enabled a new customer relationship strategy based on interactivity and collaboration, called Social Customer Relationship Management (Social CRM), which enhances customer engagement and satisfaction. The implementation of Social CRM is a complex task that involves different organisational, human, and technological aspects, yet there is a lack of methodologies to assist companies in the process. This paper presents a novel methodology that helps companies implement Social CRM, taking into account aspects such as the social customer strategy, the Social CRM performance measurement system, the Social CRM business processes, and the Social CRM computer system. The methodology was applied to one company in order to validate and refine it.

  14. IMAGE-BASED VERIFICATION: SOME ADVANTAGES, CHALLENGES, AND ALGORITHM-DRIVEN REQUIREMENTS

    SciTech Connect

    Seifert, Allen; McDonald, Benjamin S.; Jarman, Kenneth D.; Robinson, Sean M.; Misner, Alex C.; Miller, Erin A.; White, Timothy A.; Pitts, William K.

    2011-06-10

    Imaging technologies may be particularly useful in supporting monitoring and verification of deployed and stockpiled nuclear weapons and dismantlement components. However, protecting sensitive design information requires processing the image behind an information barrier and reporting only non-sensitive attributes related to the image. Reducing images to attributes may destroy some sensitive information, but challenges remain. For example, reducing the measurement to an attribute such as the defined shape and X-ray transmission of an edge might still reveal sensitive information relating to shape, size, and material composition; if enough additional information is available to analyze alongside the attribute, it may still be possible to extract sensitive design information. In spite of these difficulties, the implementation of new treaty requirements may demand imaging technology as an option. Two fundamental questions are raised: what (minimal) information is needed from imaging to enable verification, and what imaging technologies are appropriate? PNNL is currently developing a suite of image analysis algorithms to define and extract attributes from images for dismantlement and warhead verification and counting scenarios. In this talk, we discuss imaging requirements from the perspective of algorithms operating behind information barriers and review imaging technologies and their potential advantages for verification. Companion talks will concentrate on the technical aspects of the algorithms.

  15. The development of a charge protocol to take advantage of off- and on-peak demand economics at facilities

    SciTech Connect

    Jeffrey Wishart

    2012-02-01

    This document reports the work performed under Task 1.2.1.1, 'The development of a charge protocol to take advantage of off- and on-peak demand economics at facilities'. The task involved understanding the experimental results of the other tasks of SOW-5799 in order to exploit the difference in electricity prices between on- and off-peak hours, given the demonstrated charging and facility energy demand profiles. To undertake this task and to demonstrate the feasibility of plug-in hybrid electric vehicle (PHEV) and electric vehicle (EV) bi-directional electricity exchange, BEA subcontracted Electric Transportation Applications (now known as ECOtality North America, hereafter ECOtality NA) to use the data from the demand and energy study to focus on reducing the electrical power demand of the charging facility. Delayed charging as well as vehicle-to-grid (V2G) and vehicle-to-building (V2B) operations were to be considered.

  16. 5 CFR 792.207 - When does the child care subsidy program law become effective and how may agencies take advantage...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 5, Administrative Personnel, § 792.207 (2010-01-01 edition): When does the child care subsidy program law become effective and how may agencies take advantage of this law?

  17. 5 CFR 792.207 - When does the child care subsidy program law become effective and how may agencies take advantage...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Title 5, Administrative Personnel, § 792.207 (2011-01-01 edition): When does the child care subsidy program law become effective and how may agencies take advantage of this law?

  18. Size and shape of Brain may be such as to take advantage of two Dimensions of Time

    NASA Astrophysics Data System (ADS)

    Kriske, Richard

    2014-03-01

    This author had previously theorized that there are two non-commuting dimensions of time. One is clock time and the other is information time (which we generally refer to as information, like spin up or spin down). When one dimension of time does not commute with another, taking the clock time at one point in space leaves the information time unknown; that is different from taking the information time at that point and leaving the clock time unknown. This is not explicitly about time but rather about space. An example of this non-commutation is that if one knows the spin and the time at one point of space, then simultaneously one knows the spin at another point of space and the time there (it is the same time); this is a restatement of the EPR paradox. As a matter of fact, two dimensions of time would prove the EPR paradox. It is obvious from that argument that if one needed to take advantage of information, then a fairly large space needs to be used, a large amount of energy needs to be generated, and a symmetry needs to be established in space, like the lobes of a brain, in order to detect the fact that Tclock and Tinfo do not commute. This non-commutation deposits a large amount of information simultaneously in that space and synchronizes the time there.

  19. Building Models in the Classroom: Taking Advantage of Sophisticated Geomorphic Numerical Tools Using a Simple Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.

    2014-12-01

    Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphical user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. The GUI is intended for inquiry- and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university levels. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.

  20. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    PubMed

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.

  1. A symbiotic gas exchange between bioreactors enhances microalgal biomass and lipid productivities: taking advantage of complementary nutritional modes.

    PubMed

    Santos, C A; Ferreira, M E; da Silva, T Lopes; Gouveia, L; Novais, J M; Reis, A

    2011-08-01

    This paper describes the association of two bioreactors, one photoautotrophic and the other heterotrophic, connected by the gas phase and allowing an exchange of O2 and CO2 between them, benefiting from a symbiotic effect. The association was proposed with the aim of improving microalgal oil productivity for biodiesel production. The outlet gas flow from the autotrophic (O2-enriched) bioreactor was used as the inlet gas flow for the heterotrophic bioreactor; in parallel, the outlet gas flow from another heterotrophic (CO2-enriched) bioreactor was used as the inlet gas flow for the autotrophic bioreactor. Aside from the auto- and heterotrophic bioreactors supplied with air as controls, one mixotrophic bioreactor was also studied and used as a model, for its claimed advantage of assimilating CO2 and organic carbon simultaneously. The microalga Chlorella protothecoides was chosen as a model due to its ability to grow under different nutritional modes (auto-, hetero-, and mixotrophic) and to attain a high biomass productivity and lipid content suitable for biodiesel production. The comparison between heterotrophic, autotrophic, and mixotrophic Chlorella protothecoides growth for lipid production revealed that heterotrophic growth achieved the highest biomass productivity and lipid content (>22%), and furthermore showed that these lipids had the most suitable fatty acid profile for producing high-quality biodiesel. Both associations showed a higher biomass productivity (10-20%) than the two separately operated bioreactors (controls), with the gain occurring on the fourth day. A more remarkable result would have been obtained if the two bioreactors had actually been inter-connected in a closed loop: the biomass productivity gain would have been 30% and the lipid productivity gain would have been 100%, as seen by comparing the productivities of the symbiotic assemblage with the sum of the two

  2. Take Advantage of Constitution Day

    ERIC Educational Resources Information Center

    McCune, Bonnie F.

    2008-01-01

    The announcement of the mandate for Constitution and Citizenship Day shortly before September, 2005, probably led to groans of dismay. Not another "must-do" for teachers and schools already stressed by federal and state requirements for standardized tests, increasingly rigid curricula, and scrutiny from the public and officials. But the…

  3. Optimized MPPT algorithm for boost converters taking into account the environmental variables

    NASA Astrophysics Data System (ADS)

    Petit, Pierre; Sawicki, Jean-Paul; Saint-Eve, Frédéric; Maufay, Fabrice; Aillerie, Michel

    2016-07-01

    This paper presents a study of the specific behavior of the boost DC-DC converters generally used for power conversion of PV panels connected to an HVDC (High Voltage Direct Current) bus. It follows earlier work pointing out that the converter's MPPT (Maximum Power Point Tracker) is severely perturbed by output voltage variations, owing to the physical dependency among parameters such as the input voltage, the output voltage, and the duty cycle of the PWM switching control. As a direct consequence, converters connected to the same load perturb each other through the output voltage variations induced by fluctuations on the HVDC bus, which are essentially due to a non-negligible bus impedance. In this paper we show that it is possible to include an internally computed variable whose role is to compensate for local and external variations, thereby taking the environmental variables into account.
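
    To make the control idea concrete, here is a minimal perturb-and-observe MPPT loop in Python with a bus-voltage compensation term folded into the duty-cycle update. This is a sketch of the general approach, not the authors' controller; the function name, gains, and measured values are all hypothetical.

      # Minimal perturb-and-observe MPPT sketch (hypothetical, not the paper's controller).
      def mppt_step(state, v_in, i_in, v_bus, v_bus_ref, k_comp=0.01, delta=0.005):
          """One P&O iteration with a simple HVDC-bus compensation term."""
          p = v_in * i_in
          if p < state["p_prev"]:              # classic P&O: reverse course if power fell
              state["direction"] *= -1
          # Hypothetical compensation: correct the duty cycle for bus drift so that
          # neighbouring converters on the same HVDC bus perturb the tracker less.
          compensation = k_comp * (v_bus - v_bus_ref)
          state["duty"] += state["direction"] * delta - compensation
          state["duty"] = min(max(state["duty"], 0.05), 0.95)   # keep PWM in a safe range
          state["p_prev"] = p
          return state["duty"]

      state = {"duty": 0.5, "direction": 1, "p_prev": 0.0}
      duty = mppt_step(state, v_in=30.2, i_in=5.1, v_bus=402.0, v_bus_ref=400.0)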

  4. Intra-host competition between co-infecting digeneans within a bivalve second intermediate host: dominance by priority-effect or taking advantage of others?

    PubMed

    Leung, Tommy L F; Poulin, Robert

    2011-03-01

    We experimentally investigated the interactions between two parasites known to manipulate their host's phenotype, the trematodes Acanthoparyphium sp. and Curtuteria australis, which infect the cockle Austrovenus stutchburyi. The larval stages of both species encyst within the tissue of the bivalve's muscular foot, with a preference for the tip of the foot. As more individuals accumulate at that site, they impair the burrowing behaviour of cockles and increase the probability of the parasites' transmission to a bird definitive host. However, individuals at the foot tip are also vulnerable to non-host predators in the form of foot-cropping fish which selectively bite off the foot tip of exposed cockles. Parasites encysted at the foot base are safe from such predators although they do not contribute to altering host behaviour, but nevertheless benefit from host manipulation as all parasites within the cockle are transmitted if it is ingested by a bird. Experimental infection revealed that Acanthoparyphium sp. and C. australis have different encystment patterns within the host, with proportionally fewer Acanthoparyphium metacercariae encysting at the foot tip than C. australis. This indicates that Acanthoparyphium may benefit indirectly from C. australis and incur a lower risk of non-host predation. However, in co-infections, not only did C. australis have higher infectivity than Acanthoparyphium, it also severely affected the latter's infection success. The asymmetrical strategies and interactions between the two species suggest that the advantages obtained from exploiting the host manipulation efforts of another parasite might be offset by traits such as reduced competitiveness in co-infections.

  5. Taking Full Advantage of Children's Literature

    ERIC Educational Resources Information Center

    Serafini, Frank

    2012-01-01

    Teachers need a deeper understanding of the texts being discussed, in particular the various textual and visual aspects of picturebooks themselves, including the images, written text and design elements, to support how readers made sense of these texts. As teachers become familiar with aspects of literary criticism, art history, visual grammar,…

  6. Educational advantage.

    PubMed

    2006-06-01

    What special advantage does JERHRE offer to research ethics education? Empirical research employs concepts and methods for understanding and addressing problems; the methods employed can be generalized to related problems in new contexts. Research published in JERHRE uses concepts and methods designed to understand and solve ethical problems in human research. These tools can be reused by JERHRE's readership as part of their learning and problem solving. Instead of telling scientists, students, ethics committee members and others what they ought to do, educators can use curriculum based on the empirical articles contained in JERHRE to enable learners to solve the particular research-related problems they confront. Each issue of JERHRE publishes curriculum based on articles published therein. The lesson plans are deliberately general so that they can be adapted to the particular learners.

  7. Educational advantage.

    PubMed

    2006-03-01

    What special advantage does JERHRE offer to research ethics education? Empirical research employs concepts and methods for understanding and addressing problems; the methods employed can be generalized to related problems in new contexts. Research published in JERHRE uses concepts and methods designed to understand and solve ethical problems in human research. These tools can be reused by JERHRE's readership as part of their learning and problem solving. Instead of telling scientists, students, ethics committee members and others what they ought to do, educators can use curriculum based on the empirical articles contained in JERHRE to enable learners to solve the particular research-related problems they confront. Each issue of JERHRE publishes curriculum based on articles published therein. The lesson plans are deliberately general so that they can be adapted to the particular learners.

  8. Practical advantages of evolutionary computation

    NASA Astrophysics Data System (ADS)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.

  9. [Validation of the modified algorithm for predicting host susceptibility to viruses taking into account susceptibility parameters of primary target cell cultures and natural immunity factors].

    PubMed

    Zhukov, V A; Shishkina, L N; Safatov, A S; Sergeev, A A; P'iankov, O V; Petrishchenko, V A; Zaĭtsev, B N; Toporkov, V S; Sergeev, A N; Nesvizhskiĭ, Iu V; Vorob'ev, A A

    2010-01-01

    The paper presents results of testing a modified algorithm for predicting virus ID50 values in a host of interest by extrapolation from a model host taking into account immune neutralizing factors and thermal inactivation of the virus. The method was tested for A/Aichi/2/68 influenza virus in SPF Wistar rats, SPF CD-1 mice and conventional ICR mice. Each species was used as a host of interest while the other two served as model hosts. Primary lung and trachea cells and secretory factors of the rats' airway epithelium were used to measure parameters needed for the purpose of prediction. Predicted ID50 values were not significantly different (p = 0.05) from those experimentally measured in vivo. The study was supported by ISTC/DARPA Agreement 450p.

  10. An improved simulated annealing algorithm for standard cell placement

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1988-01-01

    Simulated annealing is a general-purpose Monte Carlo optimization technique that was applied to the problem of placing standard logic cells in a VLSI chip so that the total interconnection wire length is minimized. An improved standard cell placement algorithm that takes advantage of the performance enhancements that appear to come from parallelizing the uniprocessor simulated annealing algorithm is presented. An outline of this algorithm is given.
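
    For readers unfamiliar with the technique, the following toy Python sketch anneals a one-dimensional placement so that total wire length over two-pin nets shrinks. It is the classic serial baseline the paper starts from, not the improved parallel version, and all parameters are illustrative.

      import math
      import random

      # Toy serial simulated-annealing placer: cells occupy slots in a row,
      # nets are cell pairs, and the cost is total wire length.
      def wirelength(pos, nets):
          return sum(abs(pos[a] - pos[b]) for a, b in nets)

      def anneal(n_cells, nets, t0=10.0, cooling=0.995, steps=20000):
          pos = list(range(n_cells))                   # slot index of each cell
          cost, t = wirelength(pos, nets), t0
          for _ in range(steps):
              a, b = random.sample(range(n_cells), 2)
              pos[a], pos[b] = pos[b], pos[a]          # propose a swap
              new = wirelength(pos, nets)
              if new <= cost or random.random() < math.exp((cost - new) / t):
                  cost = new                           # accept, uphill moves included
              else:
                  pos[a], pos[b] = pos[b], pos[a]      # undo the swap
              t *= cooling
          return pos, cost

      nets = [(0, 5), (1, 2), (3, 7), (4, 6), (2, 5)]
      placement, cost = anneal(8, nets)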

  11. Multimodal Estimation of Distribution Algorithms.

    PubMed

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternating between these two distributions, which can likewise balance exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, a result supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
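
    A minimal sketch of the niche-level Gaussian/Cauchy sampling idea follows; k-means speciation, the toy objective, and all constants are assumptions, not the paper's settings.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      # Niche-level EDA sketch: speciate the population, fit a simple Gaussian
      # model per niche, and alternate Gaussian/Cauchy sampling across generations.
      def f(x):                                        # multimodal toy objective
          return np.sum(np.sin(5 * x) ** 2 - 0.1 * x ** 2, axis=1)

      rng = np.random.default_rng(0)
      pop = rng.uniform(-3, 3, size=(60, 2))
      for gen in range(100):
          _, labels = kmeans2(pop, 4, minit="points")      # speciate into niches
          offspring = []
          for c in range(4):
              niche = pop[labels == c]
              if len(niche) < 2:
                  continue
              mu, sigma = niche.mean(axis=0), niche.std(axis=0) + 1e-6
              noise = (rng.normal(size=niche.shape) if gen % 2 == 0
                       else 0.1 * rng.standard_cauchy(size=niche.shape))
              offspring.append(mu + sigma * noise)         # sample the niche model
          cand = np.vstack([pop] + offspring)
          pop = cand[np.argsort(f(cand))[-60:]]            # truncation selection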

  12. Taking Advantage of Murder and Mayhem for Social Studies.

    ERIC Educational Resources Information Center

    Harden, G. Daniel

    1991-01-01

    Suggests the use of key historical antisocial acts to teach social studies concepts as a means of arousing the interest of adolescents. Recommends overcoming initial sensationalism by shifting emphasis to more appropriate interests. Includes discussion of the Abraham Lincoln and John F. Kennedy assassinations and the Rosenberg spy case. Suggests…

  13. Gate-all-around technology: Taking advantage of ballistic transport?

    NASA Astrophysics Data System (ADS)

    Huguenin, J. L.; Bidal, G.; Denorme, S.; Fleury, D.; Loubet, N.; Pouydebasque, A.; Perreau, P.; Leverd, F.; Barnola, S.; Beneyton, R.; Orlando, B.; Gouraud, P.; Salvetat, T.; Clement, L.; Monfray, S.; Ghibaudo, G.; Boeuf, F.; Skotnicki, T.

    2010-09-01

    This work presents an experimental study evaluating the quality of transport, in terms of performance, in the most advanced state-of-the-art gate-all-around devices. Experiments were carried out on silicon-channel devices with a metal/high-k gate-all-around stack at aggressive dimensions (L × W × TSi = 25 nm × 20 nm × 10 nm). We investigated the mobility and the limiting velocity in depth in order to evaluate the possible occurrence of ballistic transport. The interest of the gate-all-around geometry in terms of effective current and parasitic capacitance was then studied from the perspective of elementary circuits.

  14. Taking advantage of Google's Web-based applications and services.

    PubMed

    Brigham, Tara J

    2014-01-01

    Google is a company that is constantly expanding and growing its services and products. While most librarians possess a "love/hate" relationship with Google, there are a number of reasons you should consider exploring some of the tools Google has created and made freely available. Applications and services such as Google Docs, Slides, and Google+ are functional and dynamic without the cost of comparable products. This column will address some of the issues users should be aware of before signing up to use Google's tools, and a description of some of Google's Web applications and services, plus how they can be useful to librarians in health care.

  15. Taking Advantage of Alice to Teach Programming Concepts

    ERIC Educational Resources Information Center

    Biju, Soly Mathew

    2013-01-01

    Learning the fundamentals of programming languages has always been a difficult task for students. It is equally challenging for lecturers to teach these concepts. A number of methods have been deployed by teachers to teach these concepts. This article analyses the result of a class test to identify fundamental programming concepts that students…

  16. Creating Collaborative Advantage.

    ERIC Educational Resources Information Center

    Huxham, Chris, Ed.

    Although interorganizational collaboration is becoming increasingly significant as a means of achieving organizational objectives, it is not an easy process to implement. Drawing on the work of authors with extensive experience, an accessible introduction to the theory and practice of creating collaborative advantage is presented in this volume.…

  17. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
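
    To make the vector compare-exchange idea concrete, the sketch below runs an odd-even transposition sort with NumPy, in which each phase is a single whole-vector comparison. Batcher's odd-even merge network is the more sophisticated relative with the N(log N)-squared complexity discussed above; NumPy array operations stand in here for STAR's hardware vector instructions.

      import numpy as np

      # Vectorized odd-even transposition sort: a simpler relative of Batcher's
      # network, shown only to illustrate how compare-exchange steps become
      # whole-vector operations.
      def odd_even_sort(v):
          v = v.copy()
          n = len(v)
          for phase in range(n):                    # n phases guarantee sortedness
              s = phase % 2
              a, b = v[s:n-1:2], v[s+1:n:2]         # adjacent pairs of this parity
              lo, hi = np.minimum(a, b), np.maximum(a, b)
              v[s:n-1:2], v[s+1:n:2] = lo, hi       # one vector compare-exchange
          return v

      print(odd_even_sort(np.array([5, 2, 9, 1, 7, 3, 8, 4])))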

  18. Parallelism of the SANDstorm hash algorithm.

    SciTech Connect

    Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree

    2009-09-01

    Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed, prevents them from taking advantage of the current trend toward multi-core platforms, and in turn limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
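
    SANDstorm's internals are not given in this abstract, so the Python sketch below shows only the generic reason a tree-structured hash parallelizes: independent leaf blocks can be hashed on separate cores and combined serially at the root. SHA-256 is a stand-in compression function, and the chunk size and two-level structure are assumptions.

      import hashlib
      from concurrent.futures import ProcessPoolExecutor

      # Generic two-level tree hash: parallel leaves, serial root combine.
      CHUNK = 1 << 20  # 1 MiB leaves (an arbitrary choice)

      def leaf_hash(chunk: bytes) -> bytes:
          return hashlib.sha256(chunk).digest()

      def tree_hash(data: bytes) -> str:
          chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
          with ProcessPoolExecutor() as pool:       # leaves hash on separate cores
              leaves = pool.map(leaf_hash, chunks)
          return hashlib.sha256(b"".join(leaves)).hexdigest()

      if __name__ == "__main__":
          print(tree_hash(b"x" * (8 * CHUNK)))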

  19. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  20. The SWIR advantage

    NASA Astrophysics Data System (ADS)

    Lane, Richard N.

    1995-09-01

    The advantage of panchromatic imaging at wavelengths between 1.1-2.5 micrometers [short-wave infrared (SWIR)] over that at 0.5-1.0 micrometers [visible and near-wave infrared (NWIR)] is shown by analysis and experiment in this paper. At long ranges and under low-visibility conditions, the signal-to-noise ratio and image quality in the SWIR are significantly better than in the NWIR and visible spectral bands. This effect can be used to great advantage in airborne reconnaissance to extend the range of coverage and to improve the interpretability of the product; such improvements apply to ground-based and spaceborne systems as well. Other system benefits come from using SWIR in place of the NWIR wavelength region: stabilization requirements can be relaxed; larger optical fabrication, alignment, environmental, and boundary-layer wavefront errors can be tolerated; and less degradation occurs due to atmospheric turbulence and dispersion error. SWIR systems can be fabricated with some of the same optical materials available for NWIR and visible systems. Together, these effects lead to a simpler, less expensive, and more capable imaging system that comprises the SWIR Advantage.

  1. Taking Medication

    MedlinePlus

    ... remembering to take them. Some over-the-counter products, supplements, or natural remedies can interfere with the effectiveness of your prescribed medicines. Tell your diabetes educator about ANY supplements you are taking so ...

  2. Creating corporate advantage.

    PubMed

    Collis, D J; Montgomery, C A

    1998-01-01

    What differentiates truly great corporate strategies from the merely adequate? How can executives at the corporate level create tangible advantage for their businesses that makes the whole more than the sum of the parts? This article presents a comprehensive framework for value creation in the multibusiness company. It addresses the most fundamental questions of corporate strategy: What businesses should a company be in? How should it coordinate activities across businesses? What role should the corporate office play? How should the corporation measure and control performance? Through detailed case studies of Tyco International, Sharp, the Newell Company, and Saatchi and Saatchi, the authors demonstrate that the answers to all those questions are driven largely by the nature of a company's special resources--its assets, skills, and capabilities. These range along a continuum from the highly specialized at one end to the very general at the other. A corporation's location on the continuum constrains the set of businesses it should compete in and limits its choices about the design of its organization. Applying the framework, the authors point out the common mistakes that result from misaligned corporate strategies. Companies mistakenly enter businesses based on similarities in products rather than the resources that contribute to competitive advantage in each business. Instead of tailoring organizational structures and systems to the needs of a particular strategy, they create plain-vanilla corporate offices and infrastructures. The company examples demonstrate that one size does not fit all. One can find great corporate strategies all along the continuum.

  3. Taking Risks.

    ERIC Educational Resources Information Center

    Merson, Martha, Ed.; Reuys, Steve, Ed.

    1999-01-01

    Following an introduction on "Taking Risks" (Martha Merson), this journal contains 11 articles on taking risks in teaching adult literacy, mostly by educators in the Boston area. The following are included: "My Dreams Are Bigger than My Fears Now" (Sharon Carey); "Making a Pitch for Poetry in ABE [Adult Basic…

  4. A limited-memory algorithm for bound-constrained optimization

    SciTech Connect

    Byrd, R.H.; Peihuang, L.; Nocedal, J.

    1996-03-01

    An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
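
    SciPy exposes an L-BFGS-B solver based on this algorithm, so a short usage example is easy to give: minimizing the Rosenbrock function under box bounds. The test function and bounds are illustrative choices, not from the report.

      import numpy as np
      from scipy.optimize import minimize

      # Bound-constrained minimization with SciPy's L-BFGS-B solver.
      def rosen(x):
          return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

      def rosen_grad(x):
          return np.array([
              -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
              200 * (x[1] - x[0] ** 2),
          ])

      res = minimize(rosen, x0=[0.0, 0.0], jac=rosen_grad,
                     method="L-BFGS-B", bounds=[(0.0, 0.5), (0.0, 0.5)])
      print(res.x, res.fun)   # optimum sits on the active bound at (0.5, 0.25)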

  5. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
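
    A naive baseline helps locate the savings: below, each observation vector gets its own scipy.optimize.nnls solve, whereas the combinatorial algorithm reorganizes the work so that columns sharing the same active set reuse factorizations. The sketch shows the problem shape only, not the fast algorithm itself; matrix sizes are arbitrary.

      import numpy as np
      from scipy.optimize import nnls

      # Naive baseline: one independent non-negative least squares solve per
      # observation vector, all sharing the same design matrix A.
      rng = np.random.default_rng(0)
      A = rng.random((50, 8))                  # common design matrix
      B = rng.random((50, 1000))               # many observation vectors

      X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
      print(X.shape)                           # (8, 1000), all entries >= 0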

  6. Taking antacids

    MedlinePlus

    ... magnesium may cause diarrhea. Brands with calcium or aluminum may cause constipation. Rarely, brands with calcium may ... you take large amounts of antacids that contain aluminum, you may be at risk for calcium loss, ...

  7. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for the reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing, with the addition of a post-reconstruction filter to remove out-of-focus particles. The new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as on a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of reconstructed particle position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume: it takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
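
    A schematic of the two stages in Python/NumPy: shift-and-add refocusing of sub-aperture views onto a chosen depth plane, followed by an intensity filter that keeps the bright in-focus particle cores. The function names, the integer-shift approximation, and the relative threshold are assumptions, not the authors' implementation.

      import numpy as np

      # Shift-and-add computational refocusing plus a post-reconstruction filter.
      def refocus(subaperture_imgs, offsets, alpha):
          """Sum (K, H, W) views shifted in proportion to their (du, dv) aperture
          offsets; alpha selects which depth plane lands in focus."""
          acc = np.zeros_like(subaperture_imgs[0], dtype=float)
          for img, (du, dv) in zip(subaperture_imgs, offsets):
              shift = (int(round(alpha * du)), int(round(alpha * dv)))
              acc += np.roll(img, shift, axis=(0, 1))
          return acc / len(subaperture_imgs)

      def filter_particles(plane, rel_threshold=0.5):
          # Out-of-focus particles smear into low-intensity haze; keep bright cores.
          return np.where(plane > rel_threshold * plane.max(), plane, 0.0)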

  8. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
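
    In the spirit of the sampling-based stopping rules described above, the sketch below estimates the expected cost of a candidate first-stage decision by Monte Carlo and forms a confidence interval; a decomposition algorithm would stop once such an interval is tight against a lower bound. The two-stage cost model is a toy assumption.

      import numpy as np

      # Monte Carlo bound estimate for a candidate first-stage decision x.
      rng = np.random.default_rng(0)

      def scenario_cost(x, demand):
          return x + 5.0 * max(demand - x, 0.0)   # build cost + recourse penalty

      def confidence_interval(x, n=2000, z=1.96):
          demands = rng.normal(10.0, 3.0, n)
          costs = np.array([scenario_cost(x, d) for d in demands])
          half = z * costs.std(ddof=1) / np.sqrt(n)
          return costs.mean() - half, costs.mean() + half

      print(confidence_interval(12.0))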

  9. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels which play a vital role in reducing the proliferative diabetic retinopathy and for preventing the loss of visual capability. The proposed algorithm which takes advantage of the powerful preprocessing techniques such as the contrast enhancement and thresholding offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments are conducted on 40 images collected from DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm being simple and easy to implement, is best suited for fast processing applications.
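
    A pipeline in the spirit of the paper's stages (contrast enhancement, then automatic threshold selection), written with OpenCV; CLAHE in place of plain histogram equalization, the median-filter background estimate, and Otsu's threshold are assumptions standing in for the exact preprocessing described.

      import cv2

      # Schematic vessel segmentation: enhance contrast, then threshold automatically.
      img = cv2.imread("retina.png")                 # e.g. a DRIVE fundus image
      green = img[:, :, 1]                           # vessels contrast best in green
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      enhanced = clahe.apply(green)
      background = cv2.medianBlur(enhanced, 25)      # estimate slow background variation
      vessels = cv2.subtract(background, enhanced)   # vessels are darker than surround
      _, mask = cv2.threshold(vessels, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)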

  10. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding.

    PubMed

    Saleh, Marwan D; Eswaran, C

    2012-01-01

    Retinal blood vessel detection and analysis play vital roles in early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtration and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.
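
    The h-maxima step has a standard construction via grayscale reconstruction (the h-dome transform), sketched below with scikit-image; the value of h and the two threshold levels used for the multilevel classification are illustrative assumptions.

      import numpy as np
      from skimage import io
      from skimage.morphology import reconstruction

      # h-dome form of the h-maxima transform via grayscale reconstruction.
      green = io.imread("retina.png")[:, :, 1].astype(float)
      inverted = green.max() - green                 # make dark vessels bright
      h = 20.0
      dome = inverted - reconstruction(inverted - h, inverted, method="dilation")
      # Multilevel thresholding: map the response into three coarse classes.
      levels = np.digitize(dome, bins=[0.2 * dome.max(), 0.6 * dome.max()])
      mask = levels >= 1                             # vessel candidates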

  11. Double Take

    ERIC Educational Resources Information Center

    Educational Leadership, 2011

    2011-01-01

    This paper begins by discussing the results of two studies recently conducted in Australia. According to the two studies, taking a gap year between high school and college may help students complete a degree once they return to school. The gap year can involve such activities as travel, service learning, or work. Then, the paper presents links to…

  12. Taking Turns

    ERIC Educational Resources Information Center

    Hopkins, Brian

    2010-01-01

    Two people take turns selecting from an even number of items. Their relative preferences over the items can be described as a permutation, then tools from algebraic combinatorics can be used to answer various questions. We describe each person's optimal selection strategies including how each could make use of knowing the other's preferences. We…

  13. A Novel Algorithm Combining Finite State Method and Genetic Algorithm for Solving Crude Oil Scheduling Problem

    PubMed Central

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the GA's weakness in local search. The heuristic returned by the FSM can guide the GA towards good solutions; the idea behind this is that promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operating individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  14. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    PubMed

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the GA's weakness in local search. The heuristic returned by the FSM can guide the GA towards good solutions; the idea behind this is that promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operating individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method.
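
    The combination pattern itself is easy to sketch: a constructive heuristic (a stand-in for the paper's finite state method) seeds a GA population and supplies building blocks, while the GA recombines them globally. The toy fitness, crossover, and schedule representation below are assumptions; the real crude-oil constraints are omitted.

      import random

      # Heuristic-seeded GA sketch of the FSM+GA combination pattern.
      def heuristic_solution(n):                 # FSM-style constructed schedule
          return sorted(random.sample(range(100), n))

      def fitness(sol):                          # toy: prefer tightly packed schedules
          return -(max(sol) - min(sol))

      def crossover(a, b):
          cut = random.randrange(1, len(a))
          child = a[:cut] + [g for g in b if g not in a[:cut]]
          return child[:len(a)]

      pop = [heuristic_solution(10) for _ in range(20)]   # heuristic seeding
      for gen in range(50):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:10]
          pop = parents + [crossover(random.choice(parents), random.choice(parents))
                           for _ in range(10)]
      best = max(pop, key=fitness)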

  15. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a randomized linear method; and the tabu search is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, overcoming the shortcomings of any single algorithm and giving full play to the advantages of each. The method is verified on the currently universal standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, proving it an effective way to predict the structure of proteins.
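
    Of the three components, the PSO-with-disturbance tweak is the simplest to show; below is a minimal PSO loop in which a small random term is added to each position update, with a toy quadratic standing in for the off-lattice protein energy. All coefficients are illustrative assumptions.

      import numpy as np

      # Minimal PSO with a stochastic disturbance on each position update.
      def energy(x):
          return np.sum(x ** 2, axis=1)          # stand-in for the off-lattice energy

      rng = np.random.default_rng(1)
      X = rng.uniform(-5, 5, (30, 4))
      V = np.zeros_like(X)
      pbest, pval = X.copy(), energy(X)
      for it in range(200):
          g = pbest[pval.argmin()]                       # global best particle
          r1, r2 = rng.random(X.shape), rng.random(X.shape)
          V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
          X = X + V + 0.01 * rng.normal(size=X.shape)    # stochastic disturbance
          val = energy(X)
          better = val < pval
          pbest[better], pval[better] = X[better], val[better]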

  16. An Algorithm for Linearly Constrained Nonlinear Programming Problems.

    DTIC Science & Technology

    1980-01-01

    ALGORITHM FOR LINEARLY CONSTRAINED NONLINEAR PROGRAMMING PROBLEMS. Mokhtar S. Bazaraa and Jamie J. Goode. In this paper an algorithm for solving a linearly... distance programming, as in the works of Bazaraa and Goode [2], and Wolfe [16], can be used for solving this problem. Special methods that take advantage of... Pacific Journal of Mathematics, Volume 16, pp. 1-3, 1966. 2. M. S. Bazaraa and J. J. Goode, "An Algorithm for Finding the Shortest Element of a

  17. A new algorithm for agile satellite-based acquisition operations

    NASA Astrophysics Data System (ADS)

    Bunkheila, Federico; Ortore, Emiliano; Circi, Christian

    2016-06-01

    Taking advantage of the high manoeuvrability and the accurate pointing of the so-called agile satellites, an algorithm is described that allows efficient management of the operations concerning optical acquisitions. Fundamentally, this algorithm can be subdivided into two parts: in the first, the algorithm performs a geometric classification of the areas of interest and partitions these areas into stripes that develop along the optimal scan directions; in the second, it computes the succession of time windows in which the acquisition operations for the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and performance of the proposed algorithm have been determined and discussed for the case of Periodic Sun-Synchronous Orbits.

  18. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntax and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  19. Modeling words with subword units in an articulatorily constrained speech recognition algorithm

    SciTech Connect

    Hogden, J.

    1997-11-20

    The goal of speech recognition is to find the most probable word given the acoustic evidence, i.e. a string of VQ codes or acoustic features. Speech recognition algorithms typically take advantage of the fact that the probability of a word, given a sequence of VQ codes, can be calculated.
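
    The rule being referred to is Bayes' theorem: the recognizer picks the word w maximizing P(w | codes), which is proportional to P(codes | w) P(w). A toy Python illustration, with made-up log-likelihoods standing in for an acoustic model:

      import math

      # Hypothetical acoustic log-likelihoods log P(codes | word) and word priors.
      loglik   = {"yes": -12.3, "no": -15.1, "maybe": -13.0}
      logprior = {"yes": math.log(0.5), "no": math.log(0.3), "maybe": math.log(0.2)}

      # Bayes: argmax_w log P(codes|w) + log P(w); P(codes) is constant and drops out.
      best = max(loglik, key=lambda w: loglik[w] + logprior[w])
      print(best)   # -> "yes"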

  20. Flow mediated endothelium function: advantages of an automatic measuring technique

    NASA Astrophysics Data System (ADS)

    Maio, Yamila; Casciaro, Mariano E.; José Urcola y, Maria; Craiem, Damian

    2007-11-01

    The objective of this work is to show the advantages of a non-invasive automated method for measuring flow mediated dilation (FMD) in the forearm. This dilation takes place in response to a shear stress, sensed by the endothelium, that is generated by the increase in blood flow after the release of a sustained occlusion. The method consists of three stages: continuous acquisition of images of the brachial artery using ultrasound techniques, pulse-to-pulse measurement of the vessel's diameter by means of a border detection algorithm, and subsequent analysis of the results. With this technique one can obtain not only the maximum dilation percentage (FMD%), but a continuous diameter curve that allows other relevant aspects to be evaluated, such as dilation speed, dilation sustained over time, and overall maneuver performance. The simplicity of this method, the robustness of the technique, and the accessibility of the required elements make it a viable alternative of great clinical value for diagnosis in the early detection of numerous cardiovascular pathologies.

  1. Home advantage in Greek football.

    PubMed

    Armatas, Vasilis; Pollard, Richard

    2014-01-01

    Home advantage as it relates to team performance at football was examined in Superleague Greece using nine seasons of game-by-game performance data, a total of 2160 matches. After adjusting for team ability and annual fluctuations in home advantage, there were significant differences between teams. Previous findings regarding the role of territorial protection were strengthened by the fact that home advantage was above average for the team from Xanthi (P = 0.015), while lower for teams from the capital city Athens (P = 0.008). There were differences between home and away teams in the incidence of most of the 13 within-game match variables, but associated effect sizes were only moderate. In contrast, outcome ratios derived from these variables, and measuring shot success, had negligible effect sizes. This supported a previous finding that home and away teams differed in the incidence of on-the-ball behaviours, but not in their outcomes. By far the most important predictor of home advantage, as measured by goal difference, was the difference between home and away teams in terms of kicked shots from inside the penalty area. Other types of shots had little effect on the final score. The absence of a running track between spectators and the playing field was also a significant predictor of goal difference, worth an average of 0.102 goals per game to the home team. Travel distance did not affect home advantage.

  2. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
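
    As an illustration of the allocation step, here is a simplified bounded least-squares stand-in in Python; the report's method solves two sequential semidefinite programs instead, and the geometry matrix and limits below are made up.

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(0)
      B = rng.standard_normal((6, 10))      # maps 10 actuator efforts to a 6-D force/torque
      wrench_cmd = rng.standard_normal(6)   # total force/torque commanded by the controller
      lb = np.zeros(10)                     # e.g., thrusters cannot pull
      ub = np.full(10, 2.0)                 # per-actuator saturation limits

      # Minimize ||B u - wrench_cmd|| subject to lb <= u <= ub.
      res = lsq_linear(B, wrench_cmd, bounds=(lb, ub))
      u = res.x                             # allocated efforts; res.cost is the unmet error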

  3. Distributed sensor data compression algorithm

    NASA Astrophysics Data System (ADS)

    Ambrose, Barry; Lin, Freddie

    2006-04-01

    Theoretically it is possible for two sensors to reliably send data at rates smaller than the sum of the necessary data rates for sending the data independently, essentially taking advantage of the correlation of sensor readings to reduce the data rate. In 2001, Caltech researchers Michelle Effros and Qian Zhao developed new techniques for data compression code design for correlated sensor data, which were published in a paper at the 2001 Data Compression Conference (DCC 2001). These techniques take advantage of correlations between two or more closely positioned sensors in a distributed sensor network. Given two signals, X and Y, the X signal is sent using standard data compression. The goal is to design a partition tree for the Y signal. The Y signal is sent using a code based on the partition tree. At the receiving end, if ambiguity arises when using the partition tree to decode the Y signal, the X signal is used to resolve the ambiguity. We have extended this work to increase the efficiency of the code search algorithms. Our results have shown that development of a highly integrated sensor network protocol that takes advantage of a correlation in sensor readings can result in 20-30% sensor data transport cost savings. In contrast, the best possible compression using state-of-the-art compression techniques that did not take into account the correlation of the incoming data signals achieved only 9-10% compression at most. This work was sponsored by MDA, but has very widespread applicability to ad hoc sensor networks, hyperspectral imaging sensors and vehicle health monitoring sensors for space applications.
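
    A minimal coset-style illustration of the idea in Python (this is not the Effros-Zhao partition-tree code; the small-noise correlation model and the 16-way coset are our assumptions): Y is described with half the bits, and the decoder uses the correlated X to resolve the remaining ambiguity.

      import numpy as np, zlib

      rng = np.random.default_rng(1)
      x = rng.integers(0, 256, 1000).astype(np.uint8)
      y = ((x.astype(int) + rng.integers(-3, 4, x.size)) % 256).astype(np.uint8)

      # Encoder: X is sent with standard compression; for Y only a 4-bit coset
      # index (y mod 16) is sent -- half the rate of sending Y outright.
      x_stream = zlib.compress(x.tobytes())
      y_coset = y % 16

      # Decoder: among the 16 values sharing the announced coset, pick the one
      # closest (mod 256) to the decoded X; the correlation resolves the ambiguity.
      x_dec = np.frombuffer(zlib.decompress(x_stream), dtype=np.uint8).astype(int)
      cands = y_coset[None, :].astype(int) + 16 * np.arange(16)[:, None]
      diff = np.abs(cands - x_dec)
      dist = np.minimum(diff, 256 - diff)               # circular distance
      y_dec = cands[dist.argmin(axis=0), np.arange(y.size)]
      assert np.array_equal(y_dec, y.astype(int))       # Y recovered exactly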

  4. Advantages of proteins being disordered

    PubMed Central

    Liu, Zhirong; Huang, Yongqi

    2014-01-01

    The past decade has witnessed great advances in our understanding of protein structure-function relationships in terms of the ubiquitous existence of intrinsically disordered proteins (IDPs) and intrinsically disordered regions (IDRs). The structural disorder of IDPs/IDRs enables them to play essential functions that are complementary to those of ordered proteins. In addition, IDPs/IDRs are persistent in evolution. Therefore, they are expected to possess some advantages over ordered proteins. In this review, we summarize and survey nine possible advantages of IDPs/IDRs: economizing genome/protein resources, overcoming steric restrictions in binding, achieving high specificity with low affinity, increasing binding rate, facilitating posttranslational modifications, enabling flexible linkers, preventing aggregation, providing resistance to non-native conditions, and allowing compatibility with more available sequences. Some potential advantages of IDPs/IDRs are not well understood and require both experimental and theoretical approaches to decipher. The connection with protein design is also briefly discussed. PMID:24532081

  5. Energy Advantages for Green Schools

    ERIC Educational Resources Information Center

    Griffin, J. Tim

    2012-01-01

    Because of many advantages associated with central utility systems, school campuses, from large universities to elementary schools, have used district energy for decades. District energy facilities enable thermal and electric utilities to be generated with greater efficiency and higher system reliability, while requiring fewer maintenance and…

  6. Selective advantage for sexual reproduction

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel

    2006-03-01

    We develop a simplified model for sexual replication within the quasispecies formalism. We assume that the genomes of the replicating organisms are two-chromosomed and diploid, and that the fitness is determined by the number of chromosomes that are identical to a given master sequence. We also assume that there is a cost to sexual replication, given by a characteristic time τseek during which haploid cells seek out a mate with which to recombine. If the mating strategy is such that only viable haploids can mate, then when τseek = 0, it is possible to show that sexual replication will always outcompete asexual replication. However, as τseek increases, sexual replication only becomes advantageous at progressively higher mutation rates. Once the time cost for sex reaches a critical threshold, the selective advantage for sexual replication disappears entirely. The results of this talk suggest that sexual replication is not advantageous in small populations per se, but rather in populations with low replication rates. In this regime, the cost for sex is sufficiently low that the selective advantage obtained through recombination leads to the dominance of the strategy. In fact, at a given replication rate and for a fixed environment volume, sexual replication is selected for in large populations because of the reduced time spent finding a reproductive partner.

  7. Selective advantage for sexual reproduction

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel

    2006-06-01

    This paper develops a simplified model for sexual reproduction within the quasispecies formalism. The model assumes a diploid genome consisting of two chromosomes, where the fitness is determined by the number of chromosomes that are identical to a given master sequence. We also assume that there is a cost to sexual reproduction, given by a characteristic time τseek during which haploid cells seek out a mate with which to recombine. If the mating strategy is such that only viable haploids can mate, then when τseek = 0, it is possible to show that sexual reproduction will always outcompete asexual reproduction. However, as τseek increases, sexual reproduction only becomes advantageous at progressively higher mutation rates. Once the time cost for sex reaches a critical threshold, the selective advantage for sexual reproduction disappears entirely. The results of this paper suggest that sexual reproduction is not advantageous in small populations per se, but rather in populations with low replication rates. In this regime, the cost for sex is sufficiently low that the selective advantage obtained through recombination leads to the dominance of the strategy. In fact, at a given replication rate and for a fixed environment volume, sexual reproduction is selected for in large populations because of the reduced time spent finding a reproductive partner.

  8. An Experiment in Comparative Advantage.

    ERIC Educational Resources Information Center

    Haupert, Michael J.

    1996-01-01

    Describes an undergraduate economics course experiment designed to teach the concepts of comparative advantage and opportunity costs. Students have a limited number of labor hours and can choose to produce either wheat or steel. As the project progresses, the students trade commodities in an attempt to maximize use of their labor hours. (MJP)

  9. Achieving a sustainable service advantage.

    PubMed

    Coyne, K P

    1993-01-01

    Many managers believe that superior service should play little or no role in competitive strategy; they maintain that service innovations are inherently copiable. However, the author states that this view is too narrow. For a company to achieve a lasting service advantage, it must base a new service on a capability gap that competitors cannot or will not copy.

  10. Competitive Intelligence and Social Advantage.

    ERIC Educational Resources Information Center

    Davenport, Elisabeth; Cronin, Blaise

    1994-01-01

    Presents an overview of issues concerning civilian competitive intelligence (CI). Topics discussed include competitive advantage in academic and research environments; public domain information and libraries; covert and overt competitive intelligence; data diversity; use of the Internet; cooperative intelligence; and implications for library and…

  11. Nonlocal advantage of quantum coherence

    NASA Astrophysics Data System (ADS)

    Mondal, Debasis; Pramanik, Tanumoy; Pati, Arun Kumar

    2017-01-01

    A bipartite state is said to be steerable if and only if it does not have a single-system description, i.e., the bipartite state cannot be explained by a local hidden state model. Several steering inequalities have been derived using different local uncertainty relations to verify the ability to control the state of one subsystem by the other party. Here, we derive complementarity relations between coherences measured on mutually unbiased bases using various coherence measures such as the l1-norm, relative entropy, and skew information. Using these relations, we derive conditions under which a nonlocal advantage of quantum coherence can be achieved and the state is steerable. We show that not all steerable states can achieve such an advantage.

  12. Implicit, nonswitching, vector-oriented algorithm for steady transonic flow

    NASA Technical Reports Server (NTRS)

    Lottati, I.

    1983-01-01

    A rapid computation of a sequence of transonic flow solutions must be performed in many areas of aerodynamic technology. The use of low-cost vector array processors makes such calculations economically feasible. However, to fully utilize the new hardware, the algorithms developed must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.

  13. March 2013: Medicare Advantage update.

    PubMed

    Sayavong, Sarah; Kemper, Leah; Barker, Abigail; McBride, Timothy

    2013-09-01

    Key Data Findings. (1) From March 2012 to March 2013, rural enrollment in Medicare Advantage (MA) and other prepaid plans increased by over 200,000 enrollees, to more than 1.9 million. (2) Preferred provider organization (PPO) plan enrollment increased to nearly one million enrollees, accounting for more than 51% of the rural MA market (up from 48% in March 2012). (3) Health maintenance organization (HMO) enrollment continued to grow in 2013, with over 31% of the rural MA market, while private fee-for-service (PFFS) plan enrollment decreased to less than 10% of market share. (4) Despite recent changes to MA payment, rural MA enrollment continues to increase.

  14. Optimization of circuits using a constructive learning algorithm

    SciTech Connect

    Beiu, V.

    1997-05-01

    The paper presents an application of a constructive learning algorithm to the optimization of circuits. For a given Boolean function f, a fresh constructive learning algorithm builds circuits belonging to the smallest F_{n,m} class of functions (n inputs and m groups of ones in the truth table). The constructive proofs, which show how arbitrary Boolean functions can be implemented by this algorithm, are briefly enumerated. An interesting aspect is that the algorithm can be used for generating both classical Boolean circuits and threshold gate circuits (i.e., analogue inputs and digital outputs), or a mixture of them, thus taking advantage of mixed analogue/digital technologies. One illustrative example is detailed. The size and the area of the different circuits are compared (special cost functions can be used to more closely estimate the area and delay of VLSI implementations). Conclusions and further directions of research end the paper.

  15. Applying Planning Algorithms to Argue in Cooperative Work

    NASA Astrophysics Data System (ADS)

    Monteserin, Ariel; Schiaffino, Silvia; Amandi, Analía

    Negotiation is typically utilized in cooperative work scenarios for solving conflicts. Anticipating possible arguments in this negotiation step is a key factor, since it lets us make decisions about our participation in the cooperation process. In this context, we present a novel application of planning algorithms to argument generation, where the actions of a plan represent the arguments that a person might use during the argumentation process. In this way, we can plan how to persuade the other participants in cooperative work into reaching an expected agreement in terms of our interests. This approach gives us an advantage, since we can test anticipated argumentative solutions in advance.

  16. A parallel encryption algorithm for dual-core processor based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Jiahui; Song, Dahua; Xu, Yiqiu

    2011-12-01

    In this paper, we propose a parallel chaos-based encryption scheme in order to take advantage of the dual-core processor. The chaos-based cryptosystem is combinatorially generated by the logistic map and the Fibonacci sequence. The Fibonacci sequence is employed to convert the value of the logistic map to integer data. The parallel algorithm is designed with a master/slave communication model using the Message Passing Interface (MPI). The experimental results show that the chaotic cryptosystem possesses good statistical properties, and the parallel algorithm provides enhanced performance over the serial version of the algorithm. It is suitable for encrypting/decrypting large volumes of sensitive data or multimedia.
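
    A serial, single-core Python sketch of the keystream idea; the byte quantization below is a simplification (the paper maps the chaotic values to integers via the Fibonacci sequence), and the MPI master/slave partitioning is omitted.

      import numpy as np

      def keystream(n, x0=0.613, r=3.999):
          ks, x = np.empty(n, dtype=np.uint8), x0
          for i in range(n):
              x = r * x * (1.0 - x)           # logistic map, chaotic for r near 4
              ks[i] = int(x * 256) & 0xFF     # crude byte quantization (our assumption)
          return ks

      def crypt(data: bytes, x0=0.613) -> bytes:
          ks = keystream(len(data), x0)       # receiver regenerates the stream from (x0, r)
          return (np.frombuffer(data, dtype=np.uint8) ^ ks).tobytes()

      ct = crypt(b"attack at dawn")
      assert crypt(ct) == b"attack at dawn"   # XOR keystream: decrypting = encrypting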

  17. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix

    NASA Astrophysics Data System (ADS)

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-06-01

    We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
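
    The PPCG method itself is not packaged in standard libraries, but the LOBPCG baseline it is compared against is; a minimal block-solve sketch in Python (the diagonal test matrix and preconditioner are toy choices):

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lobpcg

      n, k = 2000, 100                      # matrix size, number of wanted eigenpairs
      d = np.arange(1, n + 1, dtype=float)
      A = sp.diags(d)                       # toy SPD stand-in for a DFT Hamiltonian
      M = sp.diags(1.0 / d)                 # preconditioner approximating A^{-1}
      X = np.random.default_rng(0).standard_normal((n, k))   # initial block

      vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-6, maxiter=300)
      # the block products A @ X map onto BLAS3 and expose the concurrency noted above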

  18. Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Ren, Shengwei; Zhang, Li; Zhang, Shibing

    2016-10-01

    Cognitive radio networks have wide applications in the smart home, personal communications, and other wireless communication. Spectrum sensing is the main challenge in cognitive radios. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratio. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm, and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has an advantage of 2-11 dB.
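
    One plausible reading of the statistic, sketched in Python (the exact statistic, normalization, and threshold calibration in the paper may differ): the energy of the sample autocorrelation over nonzero lags is near zero for white noise but large for a correlated primary signal.

      import numpy as np

      def autocorr_energy(x, max_lag=20):
          x = x - x.mean()
          n = len(x)
          r = np.array([x[:-l] @ x[l:] / (n - l) for l in range(1, max_lag + 1)])
          return float(np.sum(r**2)) / (np.var(x)**2 + 1e-30)

      def sense(x, threshold=0.5):
          return autocorr_energy(x) > threshold     # True: primary user detected

      rng = np.random.default_rng(0)
      noise = rng.standard_normal(4000)                                        # H0
      signal = np.sin(0.2 * np.arange(4000)) + 0.5 * rng.standard_normal(4000) # H1
      print(autocorr_energy(noise), autocorr_energy(signal))   # small vs large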

  19. Transitional Division Algorithms.

    ERIC Educational Resources Information Center

    Laing, Robert A.; Meyer, Ruth Ann

    1982-01-01

    A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…

  20. Travel and the home advantage.

    PubMed

    Pace, A; Carron, A V

    1992-03-01

    The purpose of the present study was to examine the relative contributions of various travel related variables to visiting team success in the National Hockey League. A multiple regression design was used with game outcome as the dependent variable. The independent variables of interest included, as main effects and interactions, number of time zones crossed, direction of travel, distance traveled, preparation/adjustment time, time of season, game number on the road trip, and the home stand. Visiting team success was negatively associated with the interaction of number of time zones crossed and increased preparation time between games, and was positively associated with game number on the road. It was concluded that only a small portion of the variance in the home advantage/visitor disadvantage can be explained by travel related factors.

  1. Advantages of Oscillatory Hydraulic Tomography

    NASA Astrophysics Data System (ADS)

    Kitanidis, P. K.; Bakhos, T.; Cardiff, M. A.; Barrash, W.

    2012-12-01

    Characterizing the subsurface is significant for most hydrogeologic studies, such as those involving site remediation and groundwater resource exploration. A variety of hydraulic and geophysical methods have been developed to estimate hydraulic conductivity and specific storage. Hydraulic methods based on the analysis of conventional pumping tests allow the estimation of conductivity and storage without need for approximate petrophysical relations, which is an advantage over most geophysical methods that first estimate other properties and then infer values of hydraulic parameters. However, hydraulic methods have the disadvantage that the head-change signal decays with distance from the pumping well and thus becomes difficult to separate from noise except in close proximity to the source. Oscillatory hydraulic tomography (OHT) is an emerging technology to image the subsurface. This method utilizes the idea of imposing sinusoidally varying pressure or discharge signals at several points, collecting head observations at several other points, and then processing these data in a tomographic fashion to estimate conductivity and storage coefficients. After an overview of the methodology, including a description of the most important potential advantages and challenges associated with this approach, two key promising features of the approach will be discussed. First, the signal at an observation point is orthogonal to and thus can be separated from nuisance inputs like head fluctuation from production wells, evapotranspiration, irrigation, and changes in the level of adjacent streams. Second, although the signal amplitude may be weak, one can extract the phase and amplitude of the oscillatory signal by collecting measurements over a longer time, thus compensating for the effect of large distance through longer sampling period.

  2. Advantages of GPU technology in DFT calculations of intercalated graphene

    NASA Astrophysics Data System (ADS)

    Pešić, J.; Gajić, R.

    2014-09-01

    Over the past few years, the expansion of general-purpose graphic-processing unit (GPGPU) technology has had a great impact on computational science. GPGPU is the utilization of a graphics-processing unit (GPU) to perform calculations in applications usually handled by the central processing unit (CPU). Use of GPGPUs as a way to increase computational power in the material sciences has significantly decreased computational costs in already highly demanding calculations. The level of acceleration and parallelization depends on the problem itself. Some problems can benefit from GPU acceleration and parallelization, such as the finite-difference time-domain algorithm (FDTD) and density-functional theory (DFT), while others cannot take advantage of these modern technologies. A number of GPU-supported applications have emerged in the past several years (www.nvidia.com/object/gpu-applications.html). Quantum Espresso (QE) is an integrated suite of open source computer codes for electronic-structure calculations and materials modeling at the nano-scale. It is based on DFT, the use of a plane-waves basis and a pseudopotential approach. Since the QE 5.0 version, GPU support has been implemented as a plug-in component for standard QE packages that allows exploiting the capabilities of Nvidia GPU graphics cards (www.qe-forge.org/gf/proj). In this study, we have examined the impact of GPU acceleration and parallelization on the numerical performance of DFT calculations. Graphene has been attracting attention worldwide and has already shown some remarkable properties. We have studied intercalated graphene, using the QE package PHonon, which employs the GPU. The term ‘intercalation’ refers to a process whereby foreign adatoms are inserted onto a graphene lattice. In addition, by intercalating different atoms between graphene layers, it is possible to tune their physical properties. Our experiments have shown there are benefits from using GPUs, and we reached an

  3. A model selection algorithm for a posteriori probability estimation with neural networks.

    PubMed

    Arribas, Juan Ignacio; Cid-Sueiro, Jesús

    2005-07-01

    This paper proposes a novel algorithm to jointly determine the structure and the parameters of an a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP), whose outputs can be understood as probabilities, although the results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes.

  4. Upward Wealth Mobility: Exploring the Roman Catholic Advantage

    ERIC Educational Resources Information Center

    Keister, Lisa A.

    2007-01-01

    Wealth inequality is among the most extreme forms of stratification in the United States, and upward wealth mobility is not common. Yet mobility is possible, and this paper takes advantage of trends among a unique group to explore the processes that generate mobility. I show that non-Hispanic whites raised in Roman Catholic families have been…

  5. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into locally optimal solutions and have low search efficiency for solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new optimization strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are reserved in the process of evolving adaptive networks. The optimized algorithms, denoted as PMACO algorithms, enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than the traditional ACO algorithms, and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP.
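
    The modified update can be written compactly; in the Python sketch below, the evaporation and deposit lines are the standard ACO rule, while the last line is the paper's extra reinforcement of the Physarum critical edges (critical_mask is assumed to come from the PMM, and the gain eps is illustrative).

      import numpy as np

      def update_pheromone(tau, paths, lengths, critical_mask, rho=0.1, Q=1.0, eps=0.5):
          tau *= (1.0 - rho)                       # evaporation
          for path, L in zip(paths, lengths):      # deposit Q / tour-length
              for i, j in zip(path, path[1:]):
                  tau[i, j] += Q / L
                  tau[j, i] += Q / L               # symmetric TSP
          tau[critical_mask] *= (1.0 + eps)        # reinforce PMM critical edges
          return tau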

  6. Algorithms to Automate LCLS Undulator Tuning

    SciTech Connect

    Wolf, Zachary

    2010-12-03

    Automation of the LCLS undulator tuning offers many advantages to the project. Automation can make a substantial reduction in the amount of time the tuning takes. Undulator tuning is fairly complex and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written for LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 × 10⁻⁶ Tm. The second field integral must be below 50 × 10⁻⁶ Tm². The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value of 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 × 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements. The field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing tuning algorithms is described. The algorithms to determine the required number and placement of the shims are discussed in detail. This note describes the operation of a computer program which was written to automate LCLS undulator tuning. The algorithms used to compute the shim sizes and locations are discussed.

  7. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks occur, such as memory latencies, that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  8. Acoustic multiple scattering using recursive algorithms

    NASA Astrophysics Data System (ADS)

    Amirkulova, Feruza A.; Norris, Andrew N.

    2015-10-01

    Acoustic multiple scattering by a cluster of cylinders in an acoustic medium is considered. A fast recursive technique is described which takes advantage of the multilevel Block Toeplitz structure of the linear system. A parallelization technique is described that enables efficient application of the proposed recursive algorithm for solving multilevel Block Toeplitz systems on high performance computer clusters. Numerical comparisons of CPU time and total elapsed time taken to solve the linear system using the direct LAPACK and TOEPLITZ libraries on Intel FORTRAN, show the advantage of the TOEPLITZ solver. Computations are optimized by multi-threading which displays improved efficiency of the TOEPLITZ solver with the increase of the number of scatterers and frequency.

  9. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond-square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region, and the regional impact on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. The algorithms used to generate these natural phenomena provide scientists with a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves, but are also capable of simulating weather patterns.
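
    A compact Python version of the diamond-square step described above (Perlin or Simplex layers for moisture and temperature would be generated separately and draped over this heightmap; roughness controls how quickly the noise amplitude decays):

      import numpy as np

      def diamond_square(n, roughness=0.6, seed=0):
          """Heightmap on a (2**n + 1) x (2**n + 1) grid."""
          size = 2**n + 1
          rng = np.random.default_rng(seed)
          h = np.zeros((size, size))
          h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seed corners
          step, scale = size - 1, 1.0
          while step > 1:
              half = step // 2
              # diamond step: each square centre <- mean of its 4 corners + noise
              for y in range(half, size, step):
                  for x in range(half, size, step):
                      avg = (h[y - half, x - half] + h[y - half, x + half] +
                             h[y + half, x - half] + h[y + half, x + half]) / 4
                      h[y, x] = avg + rng.uniform(-scale, scale)
              # square step: each edge midpoint <- mean of its diamond neighbours + noise
              for y in range(0, size, half):
                  for x in range((y + half) % step, size, step):
                      s, c = 0.0, 0
                      for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                          yy, xx = y + dy, x + dx
                          if 0 <= yy < size and 0 <= xx < size:
                              s += h[yy, xx]
                              c += 1
                      h[y, x] = s / c + rng.uniform(-scale, scale)
              step, scale = half, scale * roughness
          return h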

  10. CAST: Contraction Algorithm for Symmetric Tensors

    SciTech Connect

    Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-09-22

    Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.

  11. Medicare Advantage Enrollment Update 2016.

    PubMed

    Ullrich, Fred; Mueller, Keith

    2016-09-01

    Purpose. The RUPRI Center for Rural Health Policy Analysis reports annually on rural beneficiary enrollment in Medicare Advantage (MA) plans, noting any trends or new developments evident in the data. These reports are based on data through March of each year, capturing results of open enrollment periods. Key Findings. (1) The number of non-metropolitan beneficiaries enrolled in MA and other prepaid plans increased to 2,189,300 as of March 2016, representing 21.8 percent of all non-metropolitan Medicare beneficiaries compared with 31.5 percent of beneficiaries enrolled in MA and other prepaid plans nationally. (2) While non-metropolitan enrollment continued to increase through March 2016, the annual growth rate slowed to 5.5 percent, compared to 6.8 percent between March 2014 and March 2015. (3) Enrollment in private fee-for-service MA plans continued to decline, both nationally and in non-metropolitan counties, while enrollment in other types of MA plans increased. (4) The states with the highest percentage of non-metropolitan beneficiaries enrolled in MA plans continued to be Minnesota, Hawaii, Pennsylvania, Wisconsin, and New York, ranging from a high of 53.4 percent in Minnesota to 32.6 percent in New York. (5) Non-metropolitan beneficiary enrollment (counts) in MA plans declined in five states: Hawaii, Idaho, Ohio, Washington, and Wyoming.

  12. Evolutionary advantages of adaptive rewarding

    NASA Astrophysics Data System (ADS)

    Szolnoki, Attila; Perc, Matjaž

    2012-09-01

    Our well-being depends on both our personal success and the success of our society. The realization of this fact makes cooperation an essential trait. Experiments have shown that rewards can elevate our readiness to cooperate, but since giving a reward inevitably entails paying a cost for it, the emergence and stability of such behavior remains elusive. Here we show that allowing for the act of rewarding to self-organize in dependence on the success of cooperation creates several evolutionary advantages that instill new ways through which collaborative efforts are promoted. Ranging from indirect territorial battle to the spontaneous emergence and destruction of coexistence, phase diagrams and the underlying spatial patterns reveal fascinatingly rich social dynamics that explain why this costly behavior has evolved and persevered. Comparisons with adaptive punishment, however, uncover an Achilles heel of adaptive rewarding, coming from over-aggression, which in turn hinders optimal utilization of network reciprocity. This may explain why, despite its success, rewarding is not as firmly embedded into our societal organization as punishment.

  13. Advantages and Uses of AMTEC

    NASA Astrophysics Data System (ADS)

    Lodhi, M. A. K.

    2012-10-01

    Static conversion systems are gaining importance in recent times because of newer applications of electricity like in spacecraft, hybrid-electric vehicles, military uses and domestic purposes. Of the many new static energy conversion systems that are being considered, one is the Alkali Metal Thermal Electric Converter (AMTEC). It is a thermally regenerative, electrochemical device for the direct conversion of heat to electrical power. As the name suggests, this system uses an alkali metal in its process. The electrochemical process involved in the working of AMTEC is ionization of alkali metal atoms at the interface of electrode and electrolyte. The electrons produced as a result flow through the external load thus doing work, and finally recombine with the metal ions at the cathode. AMTECs convert the work done during the nearly isothermal expansion of metal vapor to produce a high current and low voltage electron flow. Due to its principle of working it has many inherent advantages over other conventional generators. These will be discussed briefly.

  14. Quantum Algorithms, Symmetry, and Fourier Analysis

    NASA Astrophysics Data System (ADS)

    Denney, Aaron

    I describe the role of symmetry in two quantum algorithms, with a focus on how that symmetry is made manifest by the Fourier transform. The Fourier transform can be considered in a wider context than the familiar one of functions on Rn or Z/nZ; instead it can be defined for an arbitrary group, where it is known as representation theory. The first quantum algorithm solves an instance of the hidden subgroup problem—distinguishing conjugates of the Borel subgroup from each other in groups related to PSL(2; q). I use the symmetry of the subgroups under consideration to reduce the problem to a mild extension of a previously solved problem. This generalizes a result of Moore, Rockmore, Russell and Schulman by switching to a more natural measurement that also applies to prime powers. In contrast to the first algorithm, the second quantum algorithm is an attempt to use naturally continuous spaces. Quantum walks have proved to be a useful tool for designing quantum algorithms. The natural equivalent to continuous-time quantum walks is evolution with the Schrödinger equation, under the kinetic energy Hamiltonian for a massive particle. I take advantage of quantum interference to find the center of spherical shells in high dimensions. Any implementation would be likely to take place on a discrete grid, using the ability of a digital quantum computer to simulate the evolution of a quantum system. In addition, I use ideas from the second algorithm on a different set of starting states, and find that quantum evolution can be used to sample from the evolute of a plane curve. The method of stationary phase is used to determine scaling exponents characterizing the precision and probability of success for this procedure.

  15. How Successful Is Medicare Advantage?

    PubMed Central

    Newhouse, Joseph P; McGuire, Thomas G

    2014-01-01

    Context Medicare Part C, or Medicare Advantage (MA), now almost 30 years old, has generally been viewed as a policy disappointment. Enrollment has vacillated but has never come close to the penetration of managed care plans in the commercial insurance market or in Medicaid, and because of payment policy decisions and selection, the MA program is viewed as having added to cost rather than saving funds for the Medicare program. Recent changes in Medicare policy, including improved risk adjustment, however, may have changed this picture. Methods This article summarizes findings from our group's work evaluating MA's recent performance and investigating payment options for improving its performance even more. We studied the behavior of both beneficiaries and plans, as well as the effects of Medicare policy. Findings Beneficiaries make “mistakes” in their choice of MA plan options that can be explained by behavioral economics. Few beneficiaries make an active choice after they enroll in Medicare. The high prevalence of “zero-premium” plans signals inefficiency in plan design and in the market's functioning. That is, Medicare premium policies interfere with economically efficient choices. The adverse selection problem, in which healthier, lower-cost beneficiaries tend to join MA, appears much diminished. The available measures, while limited, suggest that, on average, MA plans offer care of equal or higher quality and for less cost than traditional Medicare (TM). In counties, greater MA penetration appears to improve TM's performance. Conclusions Medicare policies regarding lock-in provisions and risk adjustment that were adopted in the mid-2000s have mitigated the adverse selection problem previously plaguing MA. On average, MA plans appear to offer higher value than TM, and positive spillovers from MA into TM imply that reimbursement should not necessarily be neutral. Policy changes in Medicare that reform the way that beneficiaries are charged for MA plan

  16. Nurses’ Creativity: Advantage or Disadvantage

    PubMed Central

    Shahsavari Isfahani, Sara; Hosseini, Mohammad Ali; Fallahi Khoshknab, Masood; Peyrovi, Hamid; Khanke, Hamid Reza

    2015-01-01

    Background Recently, global nursing experts have been aggressively encouraging nurses to pursue creativity and innovation in nursing to improve nursing outcomes. Nurses’ creativity plays a significant role in health and well-being. In most health systems across the world, nurses provide up to 80% of the primary health care; therefore, they are critically positioned to provide creative solutions for current and future global health challenges. Objectives The purpose of this study was to explore Iranian nurses’ perceptions and experiences toward the expression of creativity in clinical settings and the outcomes of their creativity for health care organizations. Patients and Methods A qualitative approach using content analysis was adopted. Data were collected through in-depth semistructured interviews with 14 nurses who were involved in the creative process in educational hospitals affiliated to Jahrom and Tehran Universities of Medical Sciences in Iran. Results Four themes emerged from the data analysis, including a) Improvement in quality of patient care, b) Improvement in nurses’ quality of work, personal and social life, c) Promotion of organization, and d) Unpleasant outcomes. Conclusions The findings indicated that nurses’ creativity in health care organizations can lead to major changes of nursing practice, improvement of care and organizational performance. Therefore, policymakers, nurse educators, nursing and hospital managers should provide a nurturing environment that is conducive to creative thinking, giving the nurses opportunity for flexibility, creativity, support for change, and risk taking. PMID:25793116

  17. Evaluating the power of GPU acceleration for IDW interpolation algorithm.

    PubMed

    Mei, Gang

    2014-01-01

    We first present two GPU implementations of the standard Inverse Distance Weighting (IDW) interpolation algorithm: the tiled version, which takes advantage of shared memory, and the CDP version, which is implemented using CUDA Dynamic Parallelism (CDP). Then we evaluate the power of GPU acceleration for the IDW interpolation algorithm by comparing the performance of the CPU implementation with three GPU implementations, that is, the naive version, the tiled version, and the CDP version. Experimental results show that the tiled version achieves speedups of 120x and 670x over the CPU version when the power parameter p is set to 2 and 3.0, respectively. In addition, compared to the naive GPU implementation, the tiled version is about two times faster. However, the CDP version is 4.8x ∼ 6.0x slower than the naive GPU version, and therefore does not have any potential advantages in practical applications.
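
    For reference, the formula all the implementations compute is z(q) = sum_i w_i z_i / sum_i w_i with w_i = 1/d(q, i)^p; a plain NumPy (CPU) baseline follows, while the tiled GPU version computes the same sums with blocks of known points staged through shared memory.

      import numpy as np

      def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
          # pairwise distances between each query point and each known point
          d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
          w = 1.0 / (d**p + eps)                 # eps guards the zero-distance case
          return (w @ z_known) / w.sum(axis=1)

      rng = np.random.default_rng(0)
      pts, z = rng.random((500, 2)), rng.random(500)
      grid = rng.random((10000, 2))              # query locations
      zi = idw(pts, z, grid, p=2.0)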

  18. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera locations during our tests, and the resulting geometric distortion and interference from trees, this result can be considered passable. Correlations between source data, such as license plate dimensions and texture or camera location, and the parameters of the algorithm were also defined.
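
    A much-simplified Python illustration of the transition-density idea (rows crossing a plate contain many strong horizontal intensity transitions; the paper's full localization pipeline and parameter tuning are not reproduced):

      import numpy as np

      def candidate_band(gray, grad_thresh=30, half_height=15):
          g = np.abs(np.diff(gray.astype(int), axis=1))   # horizontal transitions
          rows = (g > grad_thresh).sum(axis=1)            # strong transitions per row
          k = 2 * half_height + 1
          smooth = np.convolve(rows, np.ones(k) / k, mode="same")
          c = int(smooth.argmax())                        # densest transition band
          return max(0, c - half_height), min(gray.shape[0], c + half_height)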

  19. IPP leveraged financing: Unfair advantage

    SciTech Connect

    Naill, R.F.; Dudley, W.C. )

    1992-01-15

    IPPs normally employ project financing - in which the loans for a project are secured primarily by the assets of the project (and not by the assets of the parent or owner of the project). To support project financing, the IPP developer puts together a package that includes a site, a signed electric contract, a steam contract (if the plant is to be a qualifying facility (QF) under PURPA), a construction contract, and all the necessary environmental permits. The developer then usually attempts to borrow as much of the project's capital costs as possible - ergo the term highly leveraged financing. This is because debt is cheaper than equity (equity is a riskier investment and requires a return significantly higher than debt), and cheaper still when its preferential tax treatment is considered. For this reason, equity is typically used by IPPs only to take risks that lenders are unwilling to assume, and to assure lenders that the developer will not walk away if a project becomes less profitable. In contrast, a utility finances construction - and all its other capital requirements - by issuing debt or selling equity from the parent company. Its capital needs are typically financed by issuing equity and debt, secured by the assets on the balance sheet, in roughly a 50:50 ratio. The cost of debt depends on the utility's bond rating - with the more risky utilities rated lower and, therefore, paying more for debt. If borrowing new capital would cause the utility to exceed its allowed debt-to-equity ratio, the utility will have to sell equity to raise part of its capital requirements. In the case of utility financing, the debt is secured by all the utility's assets - not just those of the particular construction project needing the investment.

  20. Inferring causal structure: a quantum advantage

    NASA Astrophysics Data System (ADS)

    Ried, Katja; Spekkens, Robert

    2014-03-01

    The problem of inferring causal relations from observed correlations is central to science, and extensive study has yielded both important conceptual insights and widely used practical applications. Yet some of the simplest questions are impossible to answer classically: for instance, if one observes correlations between two variables (such as taking a new medical treatment and the subject's recovery), does this show a direct causal influence, or is it due to some hidden common cause? We develop a framework for quantum causal inference, and show how quantum theory provides a unique advantage in this decision problem. The key insight is that certain quantum correlations can only arise from specific causal structures, whereas pairs of classical variables can exhibit any pattern of correlation regardless of whether they have a common cause or a direct-cause relation. For example, suppose one measures the same Pauli observable on two qubits. If they share a common cause, such as being prepared in an entangled state, then one never finds perfect (positive) correlations in every basis, whereas perfect anticorrelations are possible (if one prepares the singlet state). Conversely, if a channel connects the qubits, hence a direct causal influence, perfect anticorrelations are impossible.

  1. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix

    SciTech Connect

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.

  2. Tomography reconstruction using the Learn and Apply algorithm

    NASA Astrophysics Data System (ADS)

    Vidal, F.; Gendron, E.; Brangier, M.; Sevin, A.; Rousset, G.; Hubert, Z.

    In the framework of the MOAO demonstrator CANARY, we developed a new tomography algorithm concept that allows the tomographic reconstructor to be measured directly on-sky, with or without a priori knowledge of the turbulence profile. This simple algorithm, working in open loop, uses the measured covariance of slopes between all the wavefront sensors (WFS) to deduce the geometric and atmospheric parameters that are used to compute the tomographic reconstructor. This method, called “Learn and Apply” (L&A), also has the advantage of measuring and taking into account all the misalignments between the WFSs in order to calibrate any MOAO instrument. We present the main principle of the algorithm and the latest experimental results obtained in the MOAO scheme on the SESAME bench.
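
    The Learn step reduces to a covariance-based least-squares reconstructor; a minimal Python sketch (the fitting of misalignment parameters and turbulence-profile priors described above is omitted):

      import numpy as np

      def learn(s_off, s_on):
          # s_off: (n_frames, n_off) open-loop off-axis WFS slopes
          # s_on:  (n_frames, n_on) simultaneous truth-sensor slopes
          c_off = s_off.T @ s_off / len(s_off)      # <off, off> covariance
          c_cross = s_on.T @ s_off / len(s_off)     # <on, off> cross-covariance
          return c_cross @ np.linalg.pinv(c_off)    # reconstructor W, least squares

      def apply_reconstructor(W, s_off_now):
          return W @ s_off_now                      # predicted on-axis slopes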

  3. Enhanced Deep Blue Aerosol Retrieval Algorithm: The Second Generation

    NASA Technical Reports Server (NTRS)

    Hsu, N. C.; Jeong, M.-J.; Bettenhausen, C.; Sayer, A. M.; Hansell, R.; Seftor, C. S.; Huang, J.; Tsay, S.-C.

    2013-01-01

    The aerosol products retrieved using the MODIS collection 5.1 Deep Blue algorithm have provided useful information about aerosol properties over bright-reflecting land surfaces, such as desert, semi-arid, and urban regions. However, many components of the C5.1 retrieval algorithm needed to be improved; for example, the use of a static surface database to estimate surface reflectances. This is particularly important over regions of mixed vegetated and non-vegetated surfaces, which may undergo strong seasonal changes in land cover. In order to address this issue, we develop a hybrid approach, which takes advantage of the combination of a pre-calculated surface reflectance database and the normalized difference vegetation index in determining the surface reflectance for aerosol retrievals. As a result, the spatial coverage of aerosol data generated by the enhanced Deep Blue algorithm has been extended from the arid and semi-arid regions to the entire land areas.

  4. A self-tuning phase-shifting algorithm for interferometry.

    PubMed

    Estrada, Julio C; Servin, Manuel; Quiroga, Juan A

    2010-02-01

    In Phase Stepping Interferometry (PSI), an interferogram sequence with a known, constant phase shift between the interferograms is required. Here we take the case where this constant phase shift is unknown, and the only assumption is that the interferograms do have a temporal carrier. To recover the modulating phase from the interferograms, we propose a self-tuning phase-shifting algorithm. Our algorithm first estimates the temporal frequency, and then uses this knowledge to estimate the modulating phase of interest. Several well-known iterative schemes have been published before, but our approach has the unique advantage of being very fast: the new temporal-carrier and phase estimator obtains a very good approximation of the temporal carrier in a single iteration. Numerical experiments are given to show the performance of this simple yet powerful self-tuning phase-shifting algorithm.
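
    To make the setting concrete, here is a generic least-squares phase-shifting demodulator combined with a brute-force carrier search. It is a stand-in under stated assumptions (at least four frames, one global carrier), not the authors' single-iteration estimator.

```python
import numpy as np

def demodulate(frames, delta):
    """Least-squares PSI demodulation for a trial phase step delta.
    frames: (K, H, W) stack of K >= 4 interferograms, I = a + b cos(phi + k delta)."""
    k = np.arange(frames.shape[0])
    A = np.stack([np.ones(len(k)), np.cos(k * delta), np.sin(k * delta)], axis=1)
    coef, resid, *_ = np.linalg.lstsq(A, frames.reshape(frames.shape[0], -1), rcond=None)
    a, c, s = coef                      # background, b*cos(phi), -b*sin(phi)
    phase = np.arctan2(-s, c).reshape(frames.shape[1:])
    return phase, resid.sum()

def self_tune(frames, deltas=np.linspace(0.1, np.pi, 256)):
    """Pick the carrier that minimizes the fit residual, then recover the phase."""
    best = min(deltas, key=lambda d: demodulate(frames, d)[1])
    return best, demodulate(frames, best)[0]
```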

  5. Medicare Advantage Plans Pay Hospitals Less Than Traditional Medicare Pays.

    PubMed

    Baker, Laurence C; Bundorf, M Kate; Devlin, Aileen M; Kessler, Daniel P

    2016-08-01

    There is ongoing debate about how prices paid to providers by Medicare Advantage plans compare to prices paid by fee-for-service Medicare. We used data from Medicare and the Health Care Cost Institute to identify the prices paid for hospital services by fee-for-service (FFS) Medicare, Medicare Advantage plans, and commercial insurers in 2009 and 2012. We calculated the average price per admission, and its trend over time, in each of the three types of insurance for fixed baskets of hospital admissions across metropolitan areas. After accounting for differences in hospital networks, geographic areas, and case-mix between Medicare Advantage and FFS Medicare, we found that Medicare Advantage plans paid 5.6 percent less for hospital services than FFS Medicare did. Without taking into account the narrower networks of Medicare Advantage, the program paid 8.0 percent less than FFS Medicare. We also found that the rates paid by commercial plans were much higher than those of either Medicare Advantage or FFS Medicare, and growing. At least some of this difference comes from the much higher prices that commercial plans pay for profitable service lines.

  6. 76 FR 9626 - Community Advantage Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-18

    ... ADMINISTRATION Community Advantage Pilot Program AGENCY: U.S. Small Business Administration (SBA). ACTION: Notice... Advantage'' to provide 7(a) loan guaranties to small businesses in underserved markets, including Veterans and members of the military community. The Community Advantage Pilot Program will allow...

  7. 77 FR 67433 - Community Advantage Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-09

    ... ADMINISTRATION Community Advantage Pilot Program AGENCY: U.S. Small Business Administration. ACTION: Notice of extension of and changes to Community Advantage Pilot Program and request for comments. SUMMARY: The Community Advantage (``CA'') Pilot Program is a pilot program to increase SBA-guaranteed loans to...

  8. 76 FR 56262 - Community Advantage Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... ADMINISTRATION Community Advantage Pilot Program AGENCY: U.S. Small Business Administration (SBA). ACTION: Notice of change to Community Advantage Pilot Program. SUMMARY: On February 18, 2011, SBA published a notice and request for comments introducing the Community Advantage Pilot Program. In that notice,...

  9. 77 FR 6619 - Community Advantage Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-08

    ... ADMINISTRATION Community Advantage Pilot Program AGENCY: U.S. Small Business Administration. ACTION: Notice of changes to Community Advantage Pilot Program. SUMMARY: On February 18, 2011, SBA published a notice introducing the Community Advantage Pilot Program. In that notice, SBA provided an overview of the...

  10. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of an asymmetric dual-core processor, comprising an ARM core and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems, which has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard, with 256 MB of NAND flash and 256 MB of SDRAM. The basic image correlation algorithm is chosen for benchmarking, as it finds widespread application in template matching tasks such as face recognition. The algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can easily be ported to the ARM core, which runs a popular operating system such as Linux or Windows CE; the DSP, however, is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained from the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks while the DSP addresses performance-hungry algorithms.
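
    The benchmark task, normalized cross-correlation template matching, looks like this with the OpenCV API mentioned above; the file names are placeholders.

```python
import cv2

# Normalized cross-correlation template matching (the benchmark task);
# image and template file names are placeholders
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)

# The DFT-heavy inner loops of this call are what a DSP core handles well
scores = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
print(f"best match at {max_loc}, score {max_val:.3f}")
```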

  11. Using integration technology as a strategic advantage.

    PubMed

    Fry, P A

    1993-08-01

    The underlying premise of the Managed Competition Act previously cited is that through managed competition providers will be forced to lower care costs while increasing the level of positive care outcomes. Because it may also be that tomorrow's hospitals will find a severe rationing of technology, what can they do to prepare? Most of the systems in place today already have built within them all the necessary potential to address this premise and technology requirement with no change, no conversion, no expense for new equipment and software, and no disruption in day-to-day operations, just a little re-engineering. Today, however, these systems are similar to a 20-mule team pulling in different directions: all the power is there, but the wagon remains motionless and totally unable to reach its objective. It takes a skilled wagonmaster to bring them together, to make the mules work as a cohesive unit, to make the power of 20 mules greater than the sum of 20 mules. So it is and will be for the hospital of tomorrow. System integration is no longer a question of whether but of when. Those hospitals that use it today as a strategic advantage will be in a better position tomorrow to use it as a competitive strategic advantage in an environment that will reward low cost and high positive care outcomes and will penalize those that cannot compete. The technology is already here and economically within reach of nearly every hospital, just waiting to be used. The question that must nag all of us who want to make the health care system of America better is, Why not make the when now? Rich Helppie, president of Superior Consultant Company, summarized the solution well: The old ways will not give way to the new overnight. The re-engineering process in healthcare must evolve. Compared to the last 20 years, however, such evolution may appear to be a massive, forthright, complete, comprehensive, drastic and rapid revolution. Survival is the name of the game, and for healthcare

  12. An algorithm for the solution of dynamic linear programs

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLPs) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation
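
    To see the staircase structure the algorithm exploits, consider a toy three-stage LP with dynamics x_{t+1} = x_t + u_t: stacking the stages makes the equality-constraint matrix block bidiagonal. The sketch below solves it with SciPy's generic solver, which a staircase-aware factorization would outperform at scale; the problem data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-stage DLP with dynamics x_{t+1} = x_t + u_t, x0 = 1, x3 >= 2.
# Variables: [x1, u0, x2, u1, x3, u2]; note the staircase (block-bidiagonal)
# pattern of A_eq that a staircase QL factorization would exploit.
c = np.array([0, 1, 0, 1, 0, 1], dtype=float)   # minimize total control effort
A_eq = np.array([
    [ 1, -1,  0,  0,  0,  0],   #  x1 - u0           = x0
    [-1,  0,  1, -1,  0,  0],   # -x1 + x2 - u1      = 0
    [ 0,  0, -1,  0,  1, -1],   #      -x2 + x3 - u2 = 0
], dtype=float)
b_eq = np.array([1.0, 0.0, 0.0])
bounds = [(0, None)] * 6
bounds[4] = (2, None)            # terminal condition on x3

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)                     # one unit of total control effort suffices
```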

  13. Competitive advantage on a warming planet.

    PubMed

    Lash, Jonathan; Wellington, Fred

    2007-03-01

    Whether you're in a traditional smokestack industry or a "clean" business like investment banking, your company will increasingly feel the effects of climate change. Even people skeptical about global warming's dangers are recognizing that, simply because so many others are concerned, the phenomenon has wide-ranging implications. Investors already are discounting share prices of companies poorly positioned to compete in a warming world. Many businesses face higher raw material and energy costs as more and more governments enact policies placing a cost on emissions. Consumers are taking into account a company's environmental record when making purchasing decisions. There's also a burgeoning market in greenhouse gas emission allowances (the carbon market), with annual trading in these assets valued at tens of billions of dollars. Companies that manage and mitigate their exposure to the risks associated with climate change while seeking new opportunities for profit will generate a competitive advantage over rivals in a carbon-constrained future. This article offers a systematic approach to mapping and responding to climate change risks. According to Jonathan Lash and Fred Wellington of the World Resources Institute, an environmental think tank, the risks can be divided into six categories: regulatory (policies such as new emissions standards), products and technology (the development and marketing of climate-friendly products and services), litigation (lawsuits alleging environmental harm), reputational (how a company's environmental policies affect its brand), supply chain (potentially higher raw material and energy costs), and physical (such as an increase in the incidence of hurricanes). The authors propose a four-step process for responding to climate change risk: Quantify your company's carbon footprint; identify the risks and opportunities you face; adapt your business in response; and do it better than your competitors.

  14. Harnessing complexity: taking advantage of context and relationships in dissemination of school-based interventions.

    PubMed

    Butler, Helen; Bowes, Glenn; Drew, Sarah; Glover, Sara; Godfrey, Celia; Patton, George; Trafford, Lea; Bond, Lyndal

    2010-03-01

    Schools and school systems are increasingly asked to use evidence-based strategies to promote the health and well-being of students. The dissemination of school-based health promotion research, however, offers particular challenges to conventional approaches to dissemination. Schools and education systems are multifaceted organizations that sit within constantly shifting broader contexts. This article argues that health promotion dissemination needs to be rethought for school communities as complex systems and that this requires understanding and harnessing the dynamic ecology of the sociopolitical context. In developing this argument, the authors draw on their experience of the dissemination process of a multilevel school-based intervention in a complex educational context. Building on this experience, they argue for the need to move beyond conventional dissemination strategies to a focus on active partnerships between developers and users of school-based intervention research and offer a conceptual tool for planning dissemination.

  15. When curiosity breeds intimacy: Taking advantage of intimacy opportunities and transforming boring conversations

    PubMed Central

    Kashdan, Todd B.; McKnight, Patrick E.; Fincham, Frank D.; Rose, Paul

    2012-01-01

    Curious people seek knowledge and new experiences. In three studies, we examined whether, when, and how curiosity contributes to positive social outcomes between unacquainted strangers. Study 1 showed that curious people expect to generate closeness during intimate conversations but not during small talk; less curious people anticipated poor outcomes in both situations. We hypothesized that curious people underestimate their ability to bond with unacquainted strangers during mundane conversations. Studies 2 and 3 showed that curious people felt close to partners during both intimate and small-talk conversations; less curious people only felt close when the situation offered relationship-building exercises. Surprise at the pleasure felt during this novel, uncertain situation partially mediated the benefits linked to curiosity. We found evidence of slight asymmetry between self and partner reactions. Results could not be attributed to physical attraction or positive affect. Collectively, the results suggest that positive social interactions benefit from an open and curious mindset. PMID:22092143

  16. Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index!

    PubMed

    Ruscio, John

    2016-11-01

    Professional decisions about hiring, tenure, promotion, funding, and honors are informed by assessments of scholarly impact. As a measure of influence, citations are produced by experts but accessible to nonexperts. The h index is the largest number h such that an individual has published at least h works cited at least h times apiece. It is easy to understand and calculate, as reliable and valid as, or more so than, alternative citation measures, and highly robust to missing or messy data. Striving for a large h index requires both productivity and influence, which provides healthy incentives for researchers aiming for eminence through scientific impact. A number of factors that can influence h are discussed, to promote the mindful use of what might otherwise be an ambiguous or misleading measure. The h index adds a transparent, objective component to assessments of scholarly impact, and even academic eminence, that merits at least two cheers.
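
    The definition translates directly into code; a minimal implementation:

```python
def h_index(citations):
    """Largest h such that at least h works have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

assert h_index([10, 8, 5, 4, 3]) == 4   # four works with >= 4 citations each
assert h_index([25, 8, 5, 3, 3]) == 3   # a single blockbuster does not raise h
```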

  17. Taking Advantage of a Corrosion Problem to Solve a Pollution Problem

    ERIC Educational Resources Information Center

    Palomar-Ramirez, Carlos F.; Bazan-Martinez, Jose A.; Palomar-Pardave, Manuel E.; Romero-Romo, Mario A.; Ramirez-Silva, Maria Teresa

    2011-01-01

    Some simple chemistry is used to demonstrate how Fe(II) ions, formed during iron corrosion in acid aqueous solution, can reduce toxic Cr(VI) species, forming soluble Cr(III) and Fe(III) ions. These ions, in turn, can be precipitated by neutralizing the solution. The procedure provides a treatment for industrial wastewaters commonly found in…

  18. Taking advantage of the ESA G-POD service to study deformation processes in mountain areas

    NASA Astrophysics Data System (ADS)

    Manconi, Andrea; Cignetti, Martina; Ardizzone, Francesca; Giordan, Daniele; Allasia, Paolo; De Luca, Claudio; Manunta, Michele; Casu, Francesco

    2015-04-01

    In mountain environments, the analysis of surface displacements is extremely important for better understanding the effects of mass-wasting phenomena, such as landslides, rock glaciers, and glacier activity. In this scenario, straightforward tools and approaches for monitoring surface displacements at high spatial and temporal resolution are a real need. Here we use the Parallel-SBAS service recently released within ESA's Grid Processing On Demand environment (G-POD, http://gpod.eo.esa.int/) to generate Earth-surface deformation time series and interferometric products. This service performs the full SBAS-DInSAR chain starting from Level 0 data and generates displacement time series. We use the data available on the Virtual Archive 4 (http://eo-virtual-archive4.esa.int/), in the framework of the Supersite initiative. In the framework of the HAMMER project (part of the NextData initiative, http://www.nextdataproject.it/), we produced mean deformation velocity maps, as well as deformation time series, at regional scale (Aosta Valley Region, northern Italy) and at local landslide scale (Puy landslide, Piedmont, northern Italy). The possibility of gathering the final results in less than 24 h (processing an average of about 30 SAR images per frame) allowed a large number of attempts to be performed in a relatively short time. By "tuning" the processing, we maximized the final coverage of coherent points for both datasets, analysing the effect of SAR images acquired in the winter season, as well as the impact of perpendicular and temporal baseline constraints. The results obtained with the P-SBAS G-POD service for the Aosta Valley region have been compared with the Deep Seated Gravitational Slope Deformations (DGSD, reference IFFI project), finding a good correlation between the anomalous areas of surface deformation and the catalogued DGSDs. In addition, the results obtained for the Aosta Valley and Piedmont regions show good agreement with the mean velocity maps retrieved from the "Portale Cartografico Nazionale" (http://www.pcn.minambiente.it/GN/), which were instead processed with the PSInSAR technique on the same Envisat ASAR dataset. Finally, we discuss possible future developments of the P-SBAS G-POD service in the Sentinel-1 scenario, when a large number of SAR images will be available to a greater audience, and how this may affect the analysis of surface deformation at different spatial and temporal scales.

  19. Taking advantage of reduced droplet-surface interaction to optimize transport of bioanalytes in digital microfluidics.

    PubMed

    Freire, Sergio L S; Thorne, Nathaniel; Wutkowski, Michael; Dao, Selina

    2014-11-10

    Digital microfluidics (DMF), a technique for manipulation of droplets, is a promising alternative for the development of "lab-on-a-chip" platforms. Often, droplet motion relies on the wetting of a surface, directly associated with the application of an electric field; surface interactions, however, make motion dependent on droplet contents, limiting the breadth of applications of the technique. Some alternatives have been presented to minimize this dependence. However, they rely on the addition of extra chemical species to the droplet or its surroundings, which could potentially interact with droplet moieties. Addressing this challenge, our group recently developed Field-DW devices to allow the transport of cells and proteins in DMF, without extra additives. Here, the protocol for device fabrication and operation is provided, including the electronic interface for motion control. We also continue the studies with the devices, showing that multicellular, relatively large, model organisms can also be transported, arguably unaffected by the electric fields required for device operation.

  20. Taking Advantage of Reduced Droplet-surface Interaction to Optimize Transport of Bioanalytes in Digital Microfluidics

    PubMed Central

    Freire, Sergio L. S.; Thorne, Nathaniel; Wutkowski, Michael; Dao, Selina

    2014-01-01

    Digital microfluidics (DMF), a technique for manipulation of droplets, is a promising alternative for the development of “lab-on-a-chip” platforms. Often, droplet motion relies on the wetting of a surface, directly associated with the application of an electric field; surface interactions, however, make motion dependent on droplet contents, limiting the breadth of applications of the technique. Some alternatives have been presented to minimize this dependence. However, they rely on the addition of extra chemical species to the droplet or its surroundings, which could potentially interact with droplet moieties. Addressing this challenge, our group recently developed Field-DW devices to allow the transport of cells and proteins in DMF, without extra additives. Here, the protocol for device fabrication and operation is provided, including the electronic interface for motion control. We also continue the studies with the devices, showing that multicellular, relatively large, model organisms can also be transported, arguably unaffected by the electric fields required for device operation. PMID:25407533

  1. Taking Advantage of Agile while Minimizing Risk: User Stories and Other Fables

    DTIC Science & Technology

    2013-11-20

    Delivery: RUP, XP, Scrum, DSDM; a popular scaling approach in Europe. https://www.thecsiac.com/spruce/resources/ref_documents/agile-scale-aas... (SPRUCE, SEI). © 2013 Carnegie Mellon University. "It's not about the practices and methods; it's about the principles. 1. Highest priority is to satisfy..."

  2. Taking Advantage of Automated Assessment of Student-Constructed Graphs in Science

    ERIC Educational Resources Information Center

    Vitale, Jonathan M.; Lai, Kevin; Linn, Marcia C.

    2015-01-01

    We present a new system for automated scoring of graph construction items that address complex science concepts, feature qualitative prompts, and support a range of possible solutions. This system utilizes analysis of spatial features (e.g., slope of a line) to evaluate potential student ideas represented within graphs. Student ideas are then…

  3. Take advantage of mycorrhizal fungi for improved soil fertility and plant health

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Arbuscular mycorrhizal [AM] fungi are naturally-occurring soil fungi that form a beneficial symbiosis with the roots of most crops. The plants benefit because the symbiosis increases mineral nutrient uptake, drought resistance, and disease resistance. These characteristics make utilization of AM f...

  4. Extend Instruction outside the Classroom: Take Advantage of Your Learning Management System

    ERIC Educational Resources Information Center

    Jensen, Lauren A.

    2010-01-01

    Numerous institutions of higher education have implemented a learning management system (LMS) or are considering doing so. This web-based software package provides self-service and quick (often personalized) access to content in a dynamic environment. Learning management systems support administrative, reporting, and documentation activities. LMSs…

  5. Taking Advantage of Nature’s Gift: Can Endogenous Neural Stem Cells Improve Myelin Regeneration?

    PubMed Central

    Akkermann, Rainer; Jadasz, Janusz Joachim; Azim, Kasum; Küry, Patrick

    2016-01-01

    Irreversible functional deficits in multiple sclerosis (MS) are directly correlated to axonal damage and loss. Neurodegeneration results from immune-mediated destruction of myelin sheaths and subsequent axonal demyelination. Importantly, oligodendrocytes, the myelinating glial cells of the central nervous system, can be replaced to some extent to generate new myelin sheaths. This endogenous regeneration capacity has so far mainly been attributed to the activation and recruitment of resident oligodendroglial precursor cells. As this self-repair process is limited and increasingly fails while MS progresses, much interest has evolved regarding the development of remyelination-promoting strategies and the presence of alternative cell types, which can also contribute to the restoration of myelin sheaths. The adult brain comprises at least two neurogenic niches harboring life-long adult neural stem cells (NSCs). An increasing number of investigations are beginning to shed light on these cells under pathological conditions and revealed a significant potential of NSCs to contribute to myelin repair activities. In this review, these emerging investigations are discussed with respect to the importance of stimulating endogenous repair mechanisms from germinal sources. Moreover, we present key findings of NSC-derived oligodendroglial progeny, including a comprehensive overview of factors and mechanisms involved in this process. PMID:27854261

  6. Physical parameter determinations of young Ms. Taking advantage of the Virtual Observatory to compare methodologies

    NASA Astrophysics Data System (ADS)

    Bayo, A.; Rodrigo, C.; Barrado, D.; Allard, F.

    One of the very first steps astronomers working in stellar physics perform to advance in their studies is to determine the most common/relevant physical parameters of the objects of study (effective temperature, bolometric luminosity, surface gravity, etc.). Different methodologies exist, depending on the nature of the data, the intrinsic properties of the objects, and so on. One common approach is to compare the observational data with theoretical models passed through a simulator that leaves in the synthetic data the same imprint that the observational data carry, and see which set of parameters reproduces the observations best. Even in this case, the methodology changes slightly depending on the kind of data at hand. Once parameters are published, the community tends to quote, praise, and criticize them, sometimes paying little attention to whether possible discrepancies come from the theoretical models, the data themselves, or just the methodology used in the analysis. In this work we perform the simple, yet interesting, exercise of comparing the effective temperatures obtained via SED fitting and via more detailed spectral fitting (to the same grid of models) for a sample of well-known and well-characterized young M-type objects belonging to different star-forming regions, and show how differences in temperature of up to 350 K can be expected just from the difference in methodology/data used. On the other hand, we show that these differences are smaller for colder objects, even when the complexity of the fit increases, for example when introducing differential extinction. To perform this exercise we benefit greatly from the framework offered by the Virtual Observatory.
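
    A minimal version of the SED-fitting side of this comparison is a chi-square grid search with an analytic scale factor; the model grid and band set below are placeholder assumptions rather than any specific synthetic-spectra family.

```python
import numpy as np

def fit_teff(obs_flux, obs_err, model_grid):
    """Grid-based chi-square fit of an observed SED to synthetic photometry.
    model_grid: dict mapping Teff -> model fluxes in the same bands (a
    stand-in for a real synthetic grid); the analytic scale factor absorbs
    the distance/radius dilution."""
    best_teff, best_chi2 = None, np.inf
    for teff, model in model_grid.items():
        scale = np.sum(obs_flux * model / obs_err**2) / np.sum(model**2 / obs_err**2)
        chi2 = np.sum(((obs_flux - scale * model) / obs_err) ** 2)
        if chi2 < best_chi2:
            best_teff, best_chi2 = teff, chi2
    return best_teff, best_chi2
```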

  7. Take advantage of opportunities to reduce ED violence, recidivism among children and young adults.

    PubMed

    2013-05-01

    There is evidence that ED-based interventions can make a difference in short-circuiting the cycle of violence that often impacts children and young adults. Since the Violence Intervention Advocacy Program was launched at Boston Medical Center in 2006, recidivism to the ED among gunshot victims is down by 30%, and recidivism among stabbing victims is down by about one-half. At Denver Health Medical Center, the At Risk Intervention and Monitoring (AIM) project just launched in June, but thus far none of the patients being followed in the program have reappeared in the ED with a violent injury. The U.S. Centers for Disease Control and Prevention reports that 700,000 people between the ages of 10 and 24 were treated in EDs for injuries caused by violence in 2009. To effectively intervene with victims of violence, experts recommend that EDs partner with community groups that have deep ties to the neighborhoods most impacted by violence. To avoid re-traumatizing victims of violence, health care personnel need to be trained in how to provide "trauma-informed care," a method of speaking to patients so that they feel empowered and safe. With young victims of violence, the biggest issues requiring attention are mental health, safety, and housing.

  8. Cognitive strategies take advantage of the cooperative potential of heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Vukov, Jeromos; Santos, Francisco C.; Pacheco, Jorge M.

    2012-06-01

    Understanding the emergence and maintenance of cooperation is one of the most challenging topics of our time. Evolutionary game theory offers a very flexible framework within which to address this challenge. Here we use the prisoner's dilemma game to investigate the performance of individuals who are capable of adopting reactive strategies in communities structurally organized by means of Barabási-Albert scale-free networks. We find that basic cognitive abilities, such as the capability to distinguish their partners and act according to their previous actions, enable cooperation to thrive. This result is particularly significant whenever fear is the leading social tension, as this fosters retaliation, thus enforcing and sustaining cooperation. Being able to simultaneously reward fellow cooperators and punish defectors proves instrumental in achieving cooperation and the welfare of the community. As a result, central individuals can successfully lead the community and turn defective players into cooperative ones. Finally, even when participation costs—known to be detrimental to cooperation in scale-free networks—are explicitly included, we find that basic cognitive abilities have enough potential to help cooperation to prevail.
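
    A toy version of the setup: reactive players on a Barabási-Albert graph who remember each neighbor's last move. This sketch (using networkx) shows the mechanics only; the paper's evolutionary strategy update and parameterization are not reproduced, and the payoff values are illustrative.

```python
import random
import networkx as nx

# Prisoner's dilemma payoffs with T > R > P > S (illustrative values)
PAYOFF = {("C", "C"): (1.0, 1.0), ("C", "D"): (0.0, 1.5),
          ("D", "C"): (1.5, 0.0), ("D", "D"): (0.1, 0.1)}

def move(strategy, partner, memory):
    # Reactive players cooperate unless this partner defected last time
    return memory.get(partner, "C") if strategy == "reactive" else "D"

g = nx.barabasi_albert_graph(200, 3, seed=42)
strategy = {n: random.choice(["reactive", "D"]) for n in g}
memory = {n: {} for n in g}      # per-neighbor recollection of the last move
payoff = {n: 0.0 for n in g}

for _ in range(50):              # repeated pairwise games along the edges
    for u, v in g.edges():
        au, av = move(strategy[u], v, memory[u]), move(strategy[v], u, memory[v])
        pu, pv = PAYOFF[(au, av)]
        payoff[u] += pu
        payoff[v] += pv
        memory[u][v], memory[v][u] = av, au

for s in ("reactive", "D"):
    group = [payoff[n] for n in g if strategy[n] == s]
    print(s, sum(group) / len(group))   # retaliation lets cooperation pay off
```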

  9. How HIV Takes Advantage of the Cytoskeleton in Entry and Replication

    PubMed Central

    Stolp, Bettina; Fackler, Oliver T.

    2011-01-01

    The host cell cytoskeleton plays a key role in the life cycle of viral pathogens whose propagation depends on mandatory intracellular steps. Accordingly, also the human immunodeficiency virus type 1 (HIV-1) has evolved strategies to exploit and modulate in particular the actin cytoskeleton for its purposes. This review will recapitulate recent findings on how HIV-1 hijacks the cytoskeleton to facilitate entry into, transport within and egress from host cells as well as to commandeer communication of infected with uninfected bystander cells. PMID:21994733

  10. Taking advantage of hyperspectral imaging classification of urinary stones against conventional infrared spectroscopy.

    PubMed

    Blanco, Francisco; Lumbreras, Felipe; Serrat, Joan; Siener, Roswitha; Serranti, Silvia; Bonifazi, Giuseppe; López-Mesas, Montserrat; Valiente, Manuel

    2014-12-01

    The analysis of urinary stones is mandatory for the best management of the disease after stone passage, in order to prevent further stone episodes. The use of an appropriate methodology for individualized stone analysis thus becomes a key factor in giving the patient the most suitable treatment. A recently developed hyperspectral imaging methodology, based on pixel-to-pixel analysis of near-infrared spectral images, is compared to the reference technique in stone analysis, infrared (IR) spectroscopy. The developed classification model yields a >90% correct classification rate when compared to IR, and is able to precisely locate stone components within the structure of the stone at 15 µm resolution. Because sample pretreatment is minimal, analysis time is short, the model performs well, and the measurements are automated (and therefore analyst-independent), this methodology is a candidate to become a routine analysis for clinical laboratories.

  11. Taking advantage of selective change driven processing for 3D scanning.

    PubMed

    Vegara, Francisco; Zuccarello, Pedro; Boluda, Jose A; Pardo, Fernando

    2013-09-27

    This article deals with the application of the principles of SCD (Selective Change Driven) vision to 3D laser scanning. Two experimental sets have been implemented: one with a classical CMOS (Complementary Metal-Oxide Semiconductor) sensor, and the other one with a recently developed CMOS SCD sensor for comparative purposes, both using the technique known as Active Triangulation. An SCD sensor only delivers the pixels that have changed most, ordered by the magnitude of their change since their last readout. The 3D scanning method is based on the systematic search through the entire image to detect pixels that exceed a certain threshold, showing the SCD approach to be ideal for this application. Several experiments for both capturing strategies have been performed to try to find the limitations in high speed acquisition/processing. The classical approach is limited by the sequential array acquisition, as predicted by the Nyquist-Shannon sampling theorem, and this has been experimentally demonstrated in the case of a rotating helix. These limitations are overcome by the SCD 3D scanning prototype achieving a significantly higher performance. The aim of this article is to compare both capturing strategies in terms of performance in the time and frequency domains, so they share all the static characteristics including resolution, 3D scanning method, etc., thus yielding the same 3D reconstruction in static scenes.

  12. SeDiCi: An Authentication Service Taking Advantage of Zero-Knowledge Proofs

    NASA Astrophysics Data System (ADS)

    Grzonkowski, Sławomir

    Transmitting users' profiles over insecure communication channels is a crucial task in today's e-commerce applications. In addition, users have to create many profiles and remember many credentials, so they retype the same information over and over again. Each time users type their credentials, they expose them to phishing or eavesdropping attempts. These problems could be solved by using Single Sign-On (SSO): users keep using the same set of credentials when visiting different websites. For web applications, OpenID is the most prominent solution that partially implements SSO. However, OpenID is prone to phishing attempts and does not preserve users' privacy [1].
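
    As background on the primitive, a toy Schnorr identification round shows the zero-knowledge idea of proving knowledge of a secret without ever sending it. This is the generic textbook protocol, not SeDiCi's specific construction, and the parameters are illustratively small.

```python
import secrets

# Toy Schnorr identification round: prove knowledge of the secret x behind
# y = g^x mod p without revealing x. Real systems use standardized groups.
p = 2**127 - 1                        # a Mersenne prime
g = 3
x = secrets.randbelow(p - 1)          # prover's secret
y = pow(g, x, p)                      # prover's public key

r = secrets.randbelow(p - 1)          # 1. prover commits
commitment = pow(g, r, p)
challenge = secrets.randbelow(2**32)  # 2. verifier challenges
response = (r + challenge * x) % (p - 1)  # 3. prover responds

# 4. verifier checks g^response == commitment * y^challenge (mod p)
assert pow(g, response, p) == (commitment * pow(y, challenge, p)) % p
```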

  13. Driving HIV-1 into a Vulnerable Corner by Taking Advantage of Viral Adaptation and Evolution

    PubMed Central

    Harada, Shigeyoshi; Yoshimura, Kazuhisa

    2017-01-01

    Anti-retroviral therapy (ART) is crucial for controlling human immunodeficiency virus type-1 (HIV-1) infection. Recently, progress in identifying and characterizing highly potent broadly neutralizing antibodies has provided valuable templates for HIV-1 therapy and vaccine design. Nevertheless, HIV-1, like many RNA viruses, exhibits genetically diverse populations known as quasispecies. Evolution of quasispecies can occur rapidly in response to selective pressures, such as that exerted by ART and the immune system. Hence, rapid viral evolution leading to drug resistance and/or immune evasion is a significant barrier to the development of effective HIV-1 treatments and vaccines. Here, we describe our recent investigations into evolutionary pressure exerted by anti-retroviral drugs and monoclonal neutralizing antibodies (NAbs) on HIV-1 envelope sequences. We also discuss sensitivities of HIV-1 escape mutants to maraviroc, a CCR5 inhibitor, and HIV-1 sensitized to NAbs by small-molecule CD4-mimetic compounds. These studies help to develop an understanding of viral evolution and escape from both anti-retroviral drugs and the immune system, and also provide fundamental insights into the combined use of NAbs and entry inhibitors. These findings of the adaptation and evolution of HIV in response to drug and immune pressure will inform the development of more effective antiviral therapeutic strategies. PMID:28360890

  14. New frontiers for diagnostic testing: taking advantage of forces changing health care.

    PubMed

    Allawi, S J; Hill, B T; Shah, N R

    1998-01-01

    The transformation of the health-care industry holds great economic potential for laboratory diagnostic testing providers who understand the five market forces driving change and who are shaping their own roles in the emerging market. Because of these trends, provider-based laboratories (PBLs) are competing with independent laboratories (ILs) for the latter's traditional client base--outpatients and nonpatients. PBLs will continue to service acute care patients while becoming more IL-like in logistics, sales, customer service, and marketing. Forced to compete on price, ILs have engaged in mega-mergers and will try to break into acute care via joint ventures. The ILs will need to choose their markets carefully, solidly integrate with parent organizations, and find ways to be profit centers. Consumers' demands also are forcing change. Consumers want accurate, legible bills and simplified eligibility determination and registration. They want an emphasis on prevention and wellness, which means that diagnostic testing must address early identification and monitoring of high-risk groups. To realize cost-efficiencies under whole-life capitation, laboratory networks must be part of a completely integrated health-care system. The laboratory of the future will be multicentered, without walls, and with quick access to information through technology.

  15. Point-of-sale marketing of tobacco products: taking advantage of the socially disadvantaged?

    PubMed

    John, Robert; Cheney, Marshall K; Azad, M Raihan

    2009-05-01

    With increasing regulation of tobacco industry marketing practices, point-of-sale advertising has become an important channel for promoting tobacco products. One hundred and ten convenience stores in Oklahoma County were surveyed for tobacco-related advertising. There were significantly more point-of-sale tobacco advertisements in low-income and minority neighborhoods than in better educated, higher-income, predominantly White neighborhoods. Storeowners or managers were also interviewed to determine who has decision-making power regarding store signage and placement, and to elicit perceptions of industry tactics. Contracts with tobacco companies leave storeowners with little or no control over promotion of tobacco products within their store, and many are unaware of the implications of the tobacco industry point-of-sale practices. Local ordinances that regulated outdoor signage reduced outdoor tobacco advertisements, as well as tobacco signage and promotions within the store. Policy change, rather than education targeting storeowners, is recommended as the most effective strategy for reducing point-of-sale tobacco advertising.

  16. Taking Advantage of Model-Driven Engineering Foundations for Mixed Interaction Design

    NASA Astrophysics Data System (ADS)

    Gauffre, Guillaume; Dubois, Emmanuel

    New forms of interactive systems, hereafter referred to as Mixed Interactive Systems (MIS), are based on the use of physical artefacts present in the environment. Mixing the digital and physical worlds affects the development of interactive systems, especially from the point of view of the design resources, which need to express new dimensions. Consequently, there is a crucial need to clearly describe the content and utility of the recent models associated with these new interaction forms. Based on existing initiatives in the field of HCI, this chapter first highlights the value of using a Model-Driven Engineering (MDE) approach for the design of MIS. The chapter then retraces the application of an MDE approach to a specific mixed interaction design resource. The resulting contribution is a motivated, explicit, complete, and standardized definition of the ASUR model, a model for mixed interaction design. This definition constitutes a basis for promoting the use of the model, supporting its diffusion, and deriving design tools from it. The model-driven development of a flexible ASUR editor is finally introduced, facilitating the insertion of model extensions and articulations.

  17. Taking Advantage of Selective Change Driven Processing for 3D Scanning

    PubMed Central

    Vegara, Francisco; Zuccarello, Pedro; Boluda, Jose A.; Pardo, Fernando

    2013-01-01

    This article deals with the application of the principles of SCD (Selective Change Driven) vision to 3D laser scanning. Two experimental sets have been implemented: one with a classical CMOS (Complementary Metal-Oxide Semiconductor) sensor, and the other one with a recently developed CMOS SCD sensor for comparative purposes, both using the technique known as Active Triangulation. An SCD sensor only delivers the pixels that have changed most, ordered by the magnitude of their change since their last readout. The 3D scanning method is based on the systematic search through the entire image to detect pixels that exceed a certain threshold, showing the SCD approach to be ideal for this application. Several experiments for both capturing strategies have been performed to try to find the limitations in high speed acquisition/processing. The classical approach is limited by the sequential array acquisition, as predicted by the Nyquist–Shannon sampling theorem, and this has been experimentally demonstrated in the case of a rotating helix. These limitations are overcome by the SCD 3D scanning prototype achieving a significantly higher performance. The aim of this article is to compare both capturing strategies in terms of performance in the time and frequency domains, so they share all the static characteristics including resolution, 3D scanning method, etc., thus yielding the same 3D reconstruction in static scenes. PMID:24084110

  18. Taking multiple medicines safely

    MedlinePlus

    Taking multiple medicines safely (https://medlineplus.gov/ency/patientinstructions/000883.htm): "... directed. Why you may need more than one medicine: You may take more than one medicine to ..."

  19. Fast autodidactic adaptive equalization algorithms

    NASA Astrophysics Data System (ADS)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic-gradient Bussgang-type algorithm, is given to derive two low-computation-cost algorithms: one equivalent to the initial algorithm, and the other with improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-controlled algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm that retains the advantages of both initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-controlled algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulation of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms; the improvement in residual error was much smaller. These performances come close to making autodidactic equalization usable in mobile radio systems.
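
    The Godard (p = 2) family referenced above is commonly known as the constant-modulus algorithm. A minimal sketch of the stochastic-gradient update, without the normalization and block variants studied in the thesis:

```python
import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3, r2=1.0):
    """Constant-modulus (Godard, p = 2) blind equalizer: no training
    sequence; the error is the deviation of |y|^2 from the constant r2."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                    # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]           # regressor, most recent first
        y[n] = np.conj(w) @ u
        e = y[n] * (np.abs(y[n])**2 - r2)     # CMA error term
        w -= mu * np.conj(e) * u              # stochastic-gradient update
    return y, w
```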

  20. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    PubMed

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
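
    A toy skeleton of the optimization loop: a genetic algorithm over the three input-pulse parameters named above, with a placeholder fitness standing in for the supercontinuum simulation that, in the paper, makes each evaluation expensive enough to justify Grid distribution. Bounds and GA settings are illustrative.

```python
import random

# Search space: the three input-pulse parameters optimized in the paper
BOUNDS = {"wavelength_nm": (800, 1300), "width_fs": (50, 500), "power_W": (1e3, 1e5)}

def random_pulse():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def fitness(p):
    # Placeholder: in the paper this would be a full supercontinuum
    # simulation scoring how well the two target spectral peaks appear
    return -abs(p["wavelength_nm"] - 1100) - abs(p["width_fs"] - 120)

def evolve(pop_size=40, generations=60, mutation_rate=0.1):
    pop = [random_pulse() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in BOUNDS}  # crossover
            if random.random() < mutation_rate:                       # mutation
                k = random.choice(list(BOUNDS))
                child[k] = random.uniform(*BOUNDS[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```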

  1. Optimization of an algorithm for measurements of velocity vector components using a three-wire sensor.

    PubMed

    Ligeza, P; Socha, K

    2007-10-01

    Hot-wire measurements of velocity vector components use a sensor with three orthogonal wires, taking advantage of the anisotropic effect of wire sensitivity. The sensor is connected to a three-channel anemometric circuit and a data acquisition and processing system. Velocity vector components are obtained from the measurement signals using the modified algorithm described in this paper, which enables the minimization of measurement errors. The standard deviation of the relative error was significantly reduced in comparison with the classical algorithm.
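
    One way to phrase the inversion problem, under an assumed simple cosine-law sensitivity model; the k² value and the model form are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
from scipy.optimize import least_squares

K2 = 0.04   # squared tangential-sensitivity coefficient (illustrative value)

def effective_velocities(u):
    """Cosine-law sensitivity for three mutually orthogonal wires: the
    component along each wire contributes only with weight k^2."""
    u = np.asarray(u, dtype=float)
    return np.sqrt(np.sum(u**2) - (1 - K2) * u**2)

def solve_components(ueff_measured, guess=(1.0, 1.0, 1.0)):
    """Invert the three sensor equations for the velocity components."""
    res = least_squares(lambda u: effective_velocities(u) - ueff_measured, guess)
    return res.x

true_u = np.array([3.0, 2.0, 1.0])
print(solve_components(effective_velocities(true_u)))  # recovers the magnitudes
```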

  2. Normal adhesive contact on rough surfaces: efficient algorithm for FFT-based BEM resolution

    NASA Astrophysics Data System (ADS)

    Rey, Valentine; Anciaux, Guillaume; Molinari, Jean-François

    2017-03-01

    We introduce a numerical methodology to compute the solution of an adhesive normal contact problem on rough surfaces with the Boundary Element Method. Based on the Fast Fourier Transform and Westergaard's fundamental solution, the proposed algorithm efficiently solves the constrained minimization problem: the numerical solution strictly satisfies contact orthogonality, and the algorithm takes advantage of the constraints to speed up the minimization. Comparisons with the analytical solution of the Hertz case confirm the accuracy of the numerical computation. The method is also used to compute normal adhesive contact between rough surfaces made of multiple asperities.
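
    The FFT side of such a solver rests on the Westergaard-type relation between a periodic pressure field and the normal surface displacement, u(q) = 2 p(q) / (E* |q|). A sketch of the forward operator on a square periodic grid follows; the constrained minimization wrapped around it is omitted.

```python
import numpy as np

def normal_displacement(pressure, lx, e_star):
    """Forward operator of an FFT-based BEM solver: elastic normal surface
    displacement from a periodic pressure field on a square n-by-n grid of
    side lx, using the continuous-kernel relation u(q) = 2 p(q) / (E* |q|)."""
    n = pressure.shape[0]
    q = 2 * np.pi * np.fft.fftfreq(n, d=lx / n)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    qn = np.hypot(qx, qy)
    kernel = np.zeros_like(qn)
    kernel[qn > 0] = 2.0 / (e_star * qn[qn > 0])  # q = 0 (mean) mode dropped
    return np.real(np.fft.ifft2(np.fft.fft2(pressure) * kernel))
```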

  3. Fast conjugate gradient algorithm extension for analyzer-based imaging reconstruction

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Brankov, Jovan G.

    2016-04-01

    This paper presents an extension of the classic Conjugate Gradient algorithm. Motivated by the Analyzer-Based Imaging inverse problem, the novel method maximizes the Poisson regularized log-likelihood, under a non-linear transformation of the parameters, faster than other solutions. The new approach takes advantage of the special properties of the Poisson log-likelihood to conjugate each ascent direction with respect to all the previous directions taken by the algorithm. Our solution is compared with the general solution for non-quadratic unconstrained problems, the Polak-Ribiere formula. Both methods are applied to the ABI reconstruction problem.
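
    For reference, the Polak-Ribiere coefficient used as the baseline, here in its common nonnegative "PR+" form:

```python
import numpy as np

def polak_ribiere_beta(g_new, g_old):
    """Polak-Ribiere conjugation coefficient (nonnegative "PR+" variant):
    beta = g1 . (g1 - g0) / (g0 . g0)."""
    return max(0.0, g_new @ (g_new - g_old) / (g_old @ g_old))

# Inside a CG loop (minimization form):
#   d_new = -g_new + polak_ribiere_beta(g_new, g_old) * d_old
```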

  4. Transport implementation of the Bernstein-Vazirani algorithm with ion qubits

    NASA Astrophysics Data System (ADS)

    Fallek, S. D.; Herold, C. D.; McMahon, B. J.; Maller, K. M.; Brown, K. R.; Amini, J. M.

    2016-08-01

    Using trapped ion quantum bits in a scalable microfabricated surface trap, we perform the Bernstein-Vazirani algorithm. Our architecture takes advantage of the ion transport capabilities of such a trap. The algorithm is demonstrated using two- and three-ion chains. For three ions, an improvement is achieved compared to a classical system using the same number of oracle queries. For two ions and one query, we correctly determine an unknown bit string with probability 97.6(8)%. For three ions, we succeed with probability 80.9(3)%.
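
    A statevector sketch of the algorithm being demonstrated: one phase-oracle query followed by Hadamards reveals the hidden string, which classically requires n single-bit queries. This simulates the ideal circuit, not the trapped-ion transport implementation.

```python
import numpy as np

def bernstein_vazirani(s_bits):
    """Statevector simulation: one phase-oracle query plus Hadamards recovers
    the hidden string s, versus n classical single-bit queries."""
    n = len(s_bits)
    s_int = int("".join(map(str, s_bits)), 2)
    state = np.ones(2**n) / np.sqrt(2**n)        # H^n applied to |0...0>
    for idx in range(2**n):                      # phase oracle (-1)^(s.x)
        if bin(idx & s_int).count("1") % 2:
            state[idx] = -state[idx]
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    for qubit in range(n):                       # final Hadamard on every qubit
        state = state.reshape([2] * n)
        state = np.tensordot(H, state, axes=([1], [qubit]))
        state = np.moveaxis(state, 0, qubit).reshape(-1)
    return format(int(np.argmax(np.abs(state))), f"0{n}b")

print(bernstein_vazirani([1, 0, 1]))             # -> "101" after one query
```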

  5. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations, together with the algorithm of a high-performance memory compiler, are presented.

  6. An Approach to the Programming of Biased Regression Algorithms.

    DTIC Science & Technology

    1978-11-01

    Due to the near nonexistence of computer algorithms for calculating estimators and ancillary statistics that are needed for biased regression methodologies, many users of these methodologies are forced to write their own programs. Brute-force coding of such programs can result in a great waste of computer core and computing time, as well as inefficient and inaccurate computing techniques. This article proposes some guides to more efficient programming by taking advantage of mathematical similarities among several of the more popular biased regression estimators.
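
    For instance, ridge regression, probably the most popular biased regression estimator, shares the cross-product matrix X'X across all choices of the biasing parameter; reusing such intermediate quantities is the kind of economy the article recommends. A minimal sketch:

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator beta = (X'X + k I)^(-1) X'y. The cross-product
    matrix X'X can be formed once and reused across many values of k."""
    XtX = X.T @ X
    return np.linalg.solve(XtX + k * np.eye(X.shape[1]), X.T @ y)
```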

  7. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  8. Take a Leaf from Our Books.

    ERIC Educational Resources Information Center

    Fowler, Betty

    1997-01-01

    Describes a unit for the first grade that takes advantage of the fall seasonal changes to explore, be creative, and learn. Ties all the subject areas together through a look at leaves and trees. A list of resources is included. (JRH)

  9. Teachable Moment: Google Earth Takes Us There

    ERIC Educational Resources Information Center

    Williams, Ann; Davinroy, Thomas C.

    2015-01-01

    In the current educational climate, where clearly articulated learning objectives are required, it is clear that the spontaneous teachable moment still has its place. Authors Ann Williams and Thomas Davinroy think that instructors from almost any discipline can employ Google Earth as a tool to take advantage of teachable moments through the…

  10. The Down Syndrome Advantage: Fact or Fiction?

    ERIC Educational Resources Information Center

    Corrice, April M.; Glidden, Laraine Masters

    2009-01-01

    The "Down syndrome advantage" is the popular conception that children with Down syndrome are easier to rear than children with other developmental disabilities. We assessed whether mothers of children with developmental disabilities would demonstrate a consistent Down syndrome advantage as their children aged from 12 to 18 years. Results did not…

  11. The Persistence of Wives' Income Advantage

    ERIC Educational Resources Information Center

    Winslow-Bowe, Sarah

    2006-01-01

    Recent reports using cross-sectional data indicate an increase in the percentage of wives who outearn their husbands, yet we know little about the persistence of wives' income advantage. The present analyses utilize the 1990-1994 waves of the National Longitudinal Survey of Youth 1979 (N = 3,481) to examine wives' long-term earnings advantage.…

  12. How to obtain efficient GPU kernels: An illustration using FMM & FGT algorithms

    NASA Astrophysics Data System (ADS)

    Cruz, Felipe A.; Layton, Simon K.; Barba, L. A.

    2011-10-01

    Computing on graphics processors is perhaps one of the most important developments in computational science to happen in decades. Not since the arrival of the Beowulf cluster, which combined open source software with commodity hardware to truly democratize high-performance computing, has the community been so electrified. Like then, the opportunity comes with challenges. The formulation of scientific algorithms to take advantage of the performance offered by the new architecture requires rethinking core methods. Here, we have tackled fast summation algorithms (the fast multipole method and the fast Gauss transform), and applied algorithmic redesign for attaining performance on GPUs. The progression of performance improvements attained illustrates the exercise of formulating algorithms for the massively parallel architecture of the GPU. The end result has been GPU kernels that run at over 500 Gop/s on one NVIDIA Tesla C1060 card, thereby reaching close to practical peak.

  13. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of Infrared Focal Plane Array (IRFPA) detectors, high-quality real-time image processing becomes more important in infrared imaging systems. Facing the demand for better visual effect and good performance, we find the FPGA to be an ideal hardware choice for realizing image processing algorithms, taking full advantage of its high speed, high reliability, and ability to process large amounts of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It works at higher speed than serial processing devices such as CPUs and DSPs. Experiments show that this hardware implementation of the dynamic linear extension algorithm effectively enhances the visual quality of infrared images.
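
    One plausible reading of "dynamic linear extension" (an assumption on our part) is a linear contrast stretch whose input range is found automatically from the frame statistics; in hardware this maps to a histogram pass plus a per-pixel multiply-add.

```python
import numpy as np

def dynamic_linear_stretch(frame, low_pct=1.0, high_pct=99.0, out_max=255):
    """Linear contrast stretch with an automatically determined input range
    (percentiles of the frame histogram here; thresholds are illustrative)."""
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    stretched = (frame.astype(np.float32) - lo) * (out_max / max(hi - lo, 1e-6))
    return np.clip(stretched, 0, out_max).astype(np.uint8)
```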

  14. The advantage of first mention in Spanish

    PubMed Central

    CARREIRAS, MANUEL; GERNSBACHER, MORTON ANN; VILLA, VICTOR

    2015-01-01

    An advantage of first mention—that is, faster access to participants mentioned first in a sentence—has previously been demonstrated only in English. We report three experiments demonstrating that the advantage of first mention occurs also in Spanish sentences, regardless of whether the first-mentioned participants are syntactic subjects, and regardless, too, of whether they are proper names or inanimate objects. Because greater word-order flexibility is allowed in Spanish than in English (e.g., nonpassive object-verb-subject constructions exist in Spanish), these findings provide additional evidence that the advantage of first mention is a general cognitive phenomenon. PMID:24203596

  15. Self-Advantage in the Online World

    PubMed Central

    Yang, Hongsheng; Wang, Fang; Gu, Nianjun; Zhang, Ying

    2015-01-01

    In the current research, screen names were employed to explore a possible cognitive advantage for self-related online material. The results showed that one's own screen name and real name were detected faster than famous names in both visual search and discrimination tasks. In comparison, there was no difference in visual search speed between the two kinds of self-related names. These findings extend the self-advantage from the physical world to the virtual online environment and confirm its robustness. In addition, the present findings suggest that familiarity might not be the determining factor for the self-advantage. PMID:26461490

  16. Taking the Long View

    ERIC Educational Resources Information Center

    Bennett, Robert B., Jr.

    2010-01-01

    Legal studies faculty need to take the long view in their academic and professional lives. Taking the long view would seem to be a cliched piece of advice, but too frequently legal studies faculty, like their students, get focused on meeting the next short-term hurdle--getting through the next class, grading the next stack of papers, making it…

  17. Paedomorphic facial expressions give dogs a selective advantage.

    PubMed

    Waller, Bridget M; Peirce, Kate; Caeiro, Cátia C; Scheider, Linda; Burrows, Anne M; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process.

  18. Paedomorphic Facial Expressions Give Dogs a Selective Advantage

    PubMed Central

    Waller, Bridget M.; Peirce, Kate; Caeiro, Cátia C.; Scheider, Linda; Burrows, Anne M.; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process. PMID:24386109

  19. THE HOME ADVANTAGE IN MAJOR LEAGUE BASEBALL.

    PubMed

    Jones, Marshall B

    2015-12-01

    Home advantage is smaller in baseball than in other major professional sports for men, specifically football, basketball, or soccer. This paper advances an explanation. It begins by reviewing the main observations to support the view that there is little or no home advantage in individual sports. It then presents the case that home advantage originates in impaired teamwork among the away players. The need for teamwork and the extent of it vary from sport to sport. To the extent that a sport requires little teamwork it is more like an individual sport, and the home team would be expected to enjoy only a small advantage. Interactions among players on the same side (teamwork) are much less common in baseball than in the other sports considered.

  20. The data sharing advantage in astrophysics

    NASA Astrophysics Data System (ADS)

    Dorch, Bertil F.; Drachen, Thea M.; Ellegaard, Ole

    2016-10-01

    We present evidence for the existence of a citation advantage within astrophysics for papers that link to data. Using simple measures based on publication data from the NASA Astrophysics Data System, we find a citation advantage: papers with links to data receive, on average, significantly more citations per paper than papers without links to data. Furthermore, using the INSPEC and Web of Science databases, we investigate whether papers of an experimental or theoretical nature display different citation behavior.

  1. The Oilheat Manufacturers Associations Oilheat Advantages Project

    SciTech Connect

    Hedden, R.; Bately, J.E.

    1995-04-01

    The Oilheat Advantages Project is the Oilheat Manufacturers Association's first project. It involves the creation and dissemination of a unified, well-documented, compellingly packaged oilheat story. The project involves three steps: the first step is to pull together all the existing data on the advantages of oilheat into a single, well-documented engineering report. The second step will be to rewrite and package the technical document into a consumer piece and a scripted presentation supported with overheads, and to disseminate the information throughout the industry. The third step will be to fund new research to update existing information and discover new advantages of oilheat. This step will begin next year. The information will be packaged in the following formats: the Engineering Document, which will include all the technical information, including the credible third-party sources for all the findings on the many advantages of oilheat; the Consumer Booklet, which summarizes all the findings in the Engineering Document in simple language with easy-to-understand illustrations and graphs; a series of single-topic Statement Stuffers on each of the advantages; an Overhead Transparency-Supported Scripted Show that can be used by industry representatives for presentations to the general public, schools, civic groups, and service clubs; and the periodic publication of updates to the Oilheat Advantages Study.

  2. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  3. Discovering sequence similarity by the algorithmic significance method

    SciTech Connect

    Milosavljevic, A.

    1993-02-01

    The minimal-length encoding approach is applied to define the concept of sequence similarity. A sequence is defined to be similar to another sequence or to a set of keywords if it can be encoded in a small number of bits by taking advantage of common subwords. Minimal-length encoding of a sequence is computed in linear time, using a data compression algorithm that is based on a dynamic programming strategy and the directed acyclic word graph data structure. No assumptions about common word ("k-tuple") length are made in advance, and common words of any length are considered. The newly proposed algorithmic significance method provides an exact upper bound on the probability that sequence similarity has occurred by chance, thus eliminating the need for any arbitrary choice of similarity thresholds. Preliminary experiments indicate that a small number of keywords can positively identify a DNA sequence, which is extremely relevant in the context of partial sequencing by hybridization.
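
    A rough software sketch of the significance idea, using zlib as a stand-in for the paper's linear-time DAWG-based encoder: if a reference helps compress the target by d bits, the probability of chance similarity is bounded by 2^-d.

```python
import zlib

def bits(s: bytes) -> int:
    return 8 * len(zlib.compress(s, 9))

def algorithmic_significance(target: bytes, reference: bytes) -> float:
    """Upper bound on the probability that the similarity is chance.

    Sketch of the idea only: the paper encodes the target with a
    linear-time DAWG-based encoder; zlib is used here as a stand-in
    compressor. If encoding the target with the reference available
    saves d bits over encoding it alone, the probability that this
    happened by chance is at most 2**-d.
    """
    d = bits(target) - (bits(reference + target) - bits(reference))
    return 2.0 ** (-max(d, 0))

print(algorithmic_significance(b"ACGTACGTACGT" * 8, b"ACGTACGT" * 16))
```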

  5. A Flexible Reservation Algorithm for Advance Network Provisioning

    SciTech Connect

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-04-12

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits with guaranteed bandwidth for a requested rate and length of time. However, users currently cannot inquire about bandwidth availability, nor receive alternative suggestions when reservation requests fail. In general, the number of reservation options is exponential in the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided total-volume and time constraints and produces options for earliest completion and shortest duration. The theoretical complexity is only O(n^2 r^2) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
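
    A toy version of the window search, reduced to a single fixed path with time discretized into r slots (the real algorithm also searches over graph paths); it scans O(r^2) windows for the earliest completion, echoing the r^2 factor in the stated bound.

```python
def earliest_completion(residual, slot_len, volume):
    """Earliest-completion reservation on one path (illustrative only).

    `residual[t]` is the bandwidth still free on the path's bottleneck
    link during time slot t. A reservation holds a constant bandwidth
    b over slots [s, e]; b is limited by the minimum residual over the
    window, and the window must carry the requested volume. The real
    algorithm searches over graph paths as well; this sketch fixes the
    path and scans O(r^2) windows.
    """
    n = len(residual)
    best = None  # (end_slot, start_slot, bandwidth)
    for e in range(n):
        b = float("inf")
        for s in range(e, -1, -1):
            b = min(b, residual[s])            # bottleneck over [s, e]
            if b * (e - s + 1) * slot_len >= volume:
                if best is None or e < best[0]:
                    best = (e, s, b)
    return best

print(earliest_completion([10, 2, 8, 8, 8], slot_len=60, volume=1200))
```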

  6. Take the IBS Test

    MedlinePlus

    Do you have recurrent abdominal pain or ... have a real and treatable medical condition called irritable bowel syndrome (IBS). Your doctor now has new information and ...

  7. Take Meds Faithfully

    MedlinePlus

    ... was a good idea.) Wider use of electronic prescription pill boxes and reminder devices that can ... to help you take your medicines are proliferating. Electronic pill reminder devices are available at most large ...

  8. The Democratic Take

    ERIC Educational Resources Information Center

    Lehane, Christopher S.

    2008-01-01

    The 2008 presidential election stands as a "change" election. The public's anxiety over the challenges globalization poses to the future of the American Dream is driving a desire for the country to change direction. The American people understand that what will give the nation a competitive advantage in a global marketplace are the…

  9. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong; et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
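
    The standard Boris push is compact enough to sketch; this is the textbook form of the scheme the letter analyzes, not code from the letter itself.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One step of the Boris scheme (standard form).

    Half electric kick, pure magnetic rotation, half electric kick.
    The rotation is volume-preserving, which is the property the paper
    identifies as the source of the algorithm's long-term accuracy.
    """
    v_minus = v + 0.5 * q_over_m * E * dt          # first half kick
    t = 0.5 * q_over_m * B * dt                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)       # rotate...
    v_plus = v_minus + np.cross(v_prime, s)        # ...exactly about B
    v_new = v_plus + 0.5 * q_over_m * E * dt       # second half kick
    return x + v_new * dt, v_new

# Particle gyrating in a uniform magnetic field, no electric field.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                      q_over_m=1.0, dt=0.1)
print(np.linalg.norm(v))  # speed stays 1.0: the rotation conserves energy
```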

  10. Is There an Islamist Political Advantage?

    PubMed Central

    Cammett, Melani; Luong, Pauline Jones

    2014-01-01

    There is a widespread presumption that Islamists have an advantage over their opponents when it comes to generating mass appeal and winning elections. The question remains, however, as to whether these advantages—or, what we refer to collectively as an Islamist political advantage—actually exist. We argue that—to the extent that Islamists have a political advantage—the primary source of this advantage is reputation rather than the provision of social services, organizational capacity, or ideological hegemony. Our purpose is not to dismiss the main sources of the Islamist governance advantage identified in scholarly literature and media accounts, but to suggest a different causal path whereby each of these factors individually and sometimes jointly promotes a reputation for Islamists as competent, trustworthy, and pure. It is this reputation for good governance that enables Islamists to distinguish themselves in the streets and at the ballot box. PMID:25767370

  11. Advantages of Studying Processes in Educational Research

    ERIC Educational Resources Information Center

    Schmitz, Bernhard

    2006-01-01

    It is argued that learning and instruction could be conceptualized from a process-analytic perspective. Important questions from the field of learning and instruction are presented which can be answered using our approach of process analyses. A classification system of process concepts and methods is given. One main advantage of this kind of…

  12. Advantages and Problems with Merging Data Bases.

    ERIC Educational Resources Information Center

    Cnaan, Ram A.

    1985-01-01

    Presents the Israeli experience with merging different computerized files using a unique identifier. The advantages and disadvantages of this identifier are examined. Four types of problems are identified and some examples of use of an I.D. number as identifier are given. The desirability of merging files and confidentiality issues are also…

  13. Achieving a competitive advantage in managed care.

    PubMed

    Stahl, D A

    1998-02-01

    When building a competitive advantage to thrive in the managed care arena, subacute care providers are urged to be revolutionary rather than reactionary, proactive rather than passive, optimistic rather than pessimistic and growth-oriented rather than cost-reduction oriented. Weaknesses must be addressed aggressively. To achieve a competitive edge, assess the facility's strengths, understand the marketplace and comprehend key payment methods.

  14. Assessing Binocular Advantage in Aided Vision

    DTIC Science & Technology

    2014-06-01

    SUPPLEMENTARY NOTES Report contains color. 88ABW Cleared 02/03/2014; 88ABW-2014-0320. 14. ABSTRACT Advances in microsensors, microprocessors and...HMD Abstract Advances in microsensors, microprocessors and microdisplays are creating new opportunities for improving vision in degraded...advantages of binocularity are lost. Discussion Recent advances in microsensors, microdisplays, and microprocessors are creating new technology

  15. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225
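
    As a concrete illustration of the simplest of these designs, a first-order directional microphone can be sketched as delay-and-subtract on two omnidirectional capsules; the spacing and interpolation choices below are illustrative, not a specific hearing-aid implementation.

```python
import numpy as np

def first_order_directional(front, rear, fs, spacing=0.012, c=343.0):
    """Delay-and-subtract first-order directional microphone (sketch).

    Two omnidirectional capsules `front` and `rear` are spaced
    `spacing` metres apart. Delaying the rear signal by the acoustic
    travel time between the ports and subtracting it cancels sound
    arriving from the rear, producing a cardioid-like pattern. The
    fractional delay is approximated here by linear interpolation;
    real devices use carefully matched filters instead.
    """
    delay = spacing / c * fs                 # delay in samples
    n = np.arange(len(rear))
    rear_delayed = np.interp(n - delay, n, rear, left=0.0)
    return front - rear_delayed
```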

  16. [Algorithm for assessment of exposure to asbestos].

    PubMed

    Martines, V; Fioravanti, M; Anselmi, A; Attili, F; Battaglia, D; Cerratti, D; Ciarrocca, M; D'Amelio, R; De Lorenzo, G; Ferrante, E; Gaudioso, F; Mascia, E; Rauccio, A; Siena, S; Palitti, T; Tucci, L; Vacca, D; Vigliano, R; Zelano, V; Tomei, F; Sancini, A

    2010-01-01

    There is no universally approved method in the scientific literature to identify subjects exposed to asbestos and divide them into classes according to intensity of exposure. The aim of our work is to develop an algorithm based on occupational history information provided by a large group of workers. The algorithm discriminates, in a probabilistic way, the risk of exposure by attributing a code to each worker (ELSA Code: work-estimated exposure to asbestos). The ELSA code has been obtained through a synthesis of the information that the international scientific literature identifies as most predictive for the onset of asbestos-related abnormalities. Four dimensions are analyzed and described: 1) present and/or past occupation; 2) type of materials and equipment used in performing the working activity; 3) environment where these activities are carried out; 4) period of time when the activities are performed. Although the information is gathered subjectively, the decision procedure is objective and is based on a systematic evaluation of asbestos exposure. From the combination of the four identified dimensions it is possible to obtain 108 ELSA codes, divided into three typological profiles of estimated exposure risk. The application of the algorithm offers some advantages compared to other methods used for identifying individuals exposed to asbestos: 1) it can be computed for both present and past exposure to asbestos; 2) the classification of workers exposed to asbestos using the ELSA code is more detailed than the one obtained with a Job Exposure Matrix (JEM), because the ELSA code takes into account other indicators of risk besides those considered in the JEM. This algorithm was developed for a project sponsored by the Italian Armed Forces and is also adaptable to other work settings in which it may be necessary to assess the risk of asbestos exposure.
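
    The published coding rules are not given in the abstract, but the 108 codes factor naturally as 3 x 3 x 3 x 4 across the four dimensions. The sketch below is a purely hypothetical reconstruction with invented level labels, shown only to make the combinatorics concrete.

```python
from itertools import product

# Hypothetical reconstruction, not the published coding rules: the
# abstract says four dimensions combine into 108 codes, which factors
# as 3 x 3 x 3 x 4. The level labels below are invented for
# illustration only.
OCCUPATION = ["low", "medium", "high"]          # job-related exposure risk
MATERIALS = ["none", "possible", "documented"]  # asbestos-containing materials
ENVIRONMENT = ["open", "mixed", "confined"]     # where the work took place
PERIOD = ["pre-1970", "1970-1985", "1986-1992", "post-ban"]

ELSA_CODES = list(product(OCCUPATION, MATERIALS, ENVIRONMENT, PERIOD))
assert len(ELSA_CODES) == 108  # matches the 108 codes reported in the paper
```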

  17. Teachers Taking Professional Abuse

    ERIC Educational Resources Information Center

    Normore, Anthony H.; Floyd, Andrea

    2005-01-01

    Preservice teachers get their first teaching position hoping to take the first step toward becoming professional educators and expecting support from experienced colleagues and administrators, who often serve as their mentors. In this article, the authors present the story of Kristine (a pseudonym), who works at a middle school in a large U.S.…

  18. Taking the long view

    NASA Astrophysics Data System (ADS)

    White, Patrick; Smith, Emma

    2016-10-01

    A new study of the long-term employment prospects of UK science and engineering students suggests that talk of a skills shortage is overblown, with most graduates in these disciplines taking jobs outside science. Researchers Patrick White and Emma Smith discuss their findings and what they mean for current physics students

  19. Simulating Price-Taking

    ERIC Educational Resources Information Center

    Engelhardt, Lucas M.

    2015-01-01

    In this article, the author presents a price-takers' market simulation geared toward principles-level students. This simulation demonstrates that price-taking behavior is a natural result of the conditions that create perfect competition. In trials, there is a significant degree of price convergence in just three or four rounds. Students find this…

  20. Take a Bow

    ERIC Educational Resources Information Center

    Spitzer, Greg; Ogurek, Douglas J.

    2009-01-01

    Performing-arts centers can provide benefits at the high school and collegiate levels, and administrators can take steps now to get the show started. When a new performing-arts center comes to town, local businesses profit. Events and performances draw visitors to the community. Ideally, a performing-arts center will play many roles: entertainment…

  1. It Takes a Township

    ERIC Educational Resources Information Center

    McNiff, J.

    2011-01-01

    In this article I argue for higher education practitioners to take focused action to contribute to transforming their societies into open and democratically negotiated forms of living, and why they should do so. The need is especially urgent in South Africa, whose earlier revolutionary spirit led to massive social change. The kind of social…

  2. Taking Library Leadership Personally

    ERIC Educational Resources Information Center

    Davis, Heather; Macauley, Peter

    2011-01-01

    This paper outlines the emerging trends for leadership in the knowledge era. It discusses these within the context of leading, creating and sustaining the performance development cultures that libraries require. The first step is to recognise that we all need to take leadership personally no matter whether we see ourselves as leaders or followers.…

  3. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ*, which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
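
    A minimal serial sketch of the approximate Λ-iteration being parallelized, with a dense toy Λ and a narrow-banded Λ* (illustrative operators, not PHOENIX/3D code): only the banded part is inverted each sweep, and the off-band remainder is lagged.

```python
import numpy as np

# Toy approximate Lambda iteration (ALI): solve S = eps*B + (1-eps)*(Lam S)
# by inverting only the narrow-banded part Lstar of Lam each sweep.
n, eps = 200, 1e-2
idx = np.arange(n)
W = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 5.0)
Lam = W / W.sum(axis=1, keepdims=True)            # toy Lambda operator
Lstar = np.where(np.abs(np.subtract.outer(idx, idx)) <= 2, Lam, 0.0)

B = np.ones(n)                                    # thermal source term
S = B.copy()
for sweep in range(100):
    J_fs = Lam @ S                                  # formal solution J = Lam S
    rhs = eps * B + (1 - eps) * (J_fs - Lstar @ S)  # lag the off-band part
    S = np.linalg.solve(np.eye(n) - (1 - eps) * Lstar, rhs)
print(S[:3])  # converged source function
```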

  4. The half-truth of first-mover advantage.

    PubMed

    Suarez, Fernando; Lanzolla, Gianvito

    2005-04-01

    Many executives take for granted that the first company in a new product category gets an unbeatable head start and reaps long-lasting benefits. But that doesn't always happen. The authors of this article discovered that much depends on the pace at which the category's technology is changing and the speed at which the market is evolving. By analyzing these two factors, companies can improve their odds of succeeding as first movers with the resources they possess. Gradual evolution in both the technology and the market provides a first mover with the best conditions for creating a dominant position that is long lasting (Hoover in the vacuum cleaner industry is a good example). In such calm waters, a company can defend its advantages even without exceptional skills or extensive financial resources. When the market is changing rapidly and the product isn't, a first entrant with extensive resources can obtain a long-lasting advantage (as Sony did with its Walkman personal stereo); a company with only limited resources probably must settle for a short-term benefit. When the market is static but the product is changing constantly, first-mover advantages of either kind, durable or short-lived, are unlikely. Only companies with very deep pockets can survive (think of Sony and the digital cameras it pioneered). Rapid churn in both the technology and the market creates the worst conditions. But if companies have an acute sense of when to exit (as Netscape demonstrated when it agreed to be acquired by AOL), a worthwhile short-term gain is possible. Before venturing into a newly forming market, you need to analyze the environment, assess your resources, then determine which type of first-mover advantage is most achievable. Once you've gone into the water, you have no choice but to swim.

  5. EVITA: a tool for the early EValuation of pharmaceutical Innovations with regard to Therapeutic Advantage

    PubMed Central

    2010-01-01

    Background New drugs are generally claimed to represent a therapeutic innovation. However, scientific evidence of a substantial clinical advantage is often lacking. This may be the result of using inadequate control groups or surrogate outcomes only in the clinical trials. In view of this, EVITA was developed as a user-friendly, transparent tool for the early evaluation of the additional therapeutic value of a new drug. Methods EVITA does not evaluate a new compound per se but in an approved indication in comparison with existing therapeutic strategies. Placebo as a comparator is accepted only in the absence of an established therapy or if employed in an add-on strategy on top of it. The evaluation attributes rating points to the drug in question, taking into consideration both therapeutic benefit and risk profile. The compound scores positive points for superiority in efficacy and/or adverse effects as demonstrated in randomized controlled trials (RCTs), whilst negative points are awarded for inferiority and/or an unfavorable risk profile. The evaluation follows an algorithm considering the clinical relevance of the outcomes, the strength of the therapeutic effect and the number of RCTs performed. Categories for therapeutic aim and disease severity, although essential parts of the EVITA assessment, are attributed but do not influence the EVITA score, which is presented as a color-coded bar graph. In cases where the available data were unsuitable for an EVITA calculation, a traffic-type yield sign is assigned instead to criticize such practice. The results are presented online at http://www.evita-report.de together with all RCTs considered as well as the reasons for excluding a given RCT from the evaluation. This allows for immediate revision in response to justified criticism and simplifies the inclusion of new data. Results As examples, four compounds which received approval within the last years were evaluated for one of their clinical indications: lenalidomide, pioglitazone

  6. Advantages of Parallel Processing and the Effects of Communications Time

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Allman, Mark

    2000-01-01

    Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
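
    The trade-off lends itself to a one-line model, sketched below with illustrative numbers only; the per-host communication cost is a hypothetical stand-in for the network delay the experiment measures.

```python
def distributed_time(t_serial, n_hosts, t_comm_per_host):
    """Illustrative model of the trade-off described above.

    Perfectly divisible work takes t_serial / n seconds, but every
    extra host adds coordination time; on a long-delay (e.g.,
    satellite) link the per-host cost is large and quickly dominates.
    """
    return t_serial / n_hosts + t_comm_per_host * n_hosts

# Hypothetical per-host costs: 1 s on a terrestrial LAN, 25 s over satellite.
for cost, label in [(1.0, "terrestrial"), (25.0, "satellite")]:
    best = min(range(1, 129), key=lambda n: distributed_time(3600, n, cost))
    print(f"{label}: best host count = {best}")  # 60 vs. 12 hosts
```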

  7. A quantum-inspired genetic algorithm based on probabilistic coding for multiple sequence alignment.

    PubMed

    Huo, Hong-Wei; Stojkovic, Vojislav; Xie, Qiao-Luan

    2010-02-01

    Quantum parallelism arises from the ability of a quantum memory register to exist in a superposition of base states. Since the number of possible base states is 2^n, where n is the number of qubits in the quantum memory register, one operation on a quantum computer can achieve what would take an exponential number of operations on a classical computer. The power of quantum algorithms comes from taking advantage of quantum parallelism, and some quantum algorithms are exponentially faster than the best known classical algorithms. Genetic optimization algorithms are stochastic search algorithms used to search large, nonlinear spaces where expert knowledge is lacking or difficult to encode. QGMALIGN, a probabilistic-coding-based quantum-inspired genetic algorithm for multiple sequence alignment, is presented. A quantum rotation gate is used as a mutation operator to guide the quantum state evolution. Six genetic operators are designed on the coding basis to improve the solution during the evolutionary process. The experimental results show that QGMALIGN can compete with popular methods, such as CLUSTALX and SAGA, and performs well on the presented biological data. Moreover, the addition of genetic operators to the quantum-inspired algorithm lowers the overall running time.
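
    A minimal sketch of the quantum-inspired machinery on a toy OneMax problem (not QGMALIGN's alignment coding or its six operators): qubit angles encode bit probabilities, observation collapses them to classical bits, and a rotation gate steers the population toward the best observed solution.

```python
import numpy as np

# Quantum-inspired GA sketch on OneMax (maximize the number of 1-bits).
rng = np.random.default_rng(0)
n_bits, pop, delta = 32, 20, 0.05 * np.pi
theta = np.full((pop, n_bits), np.pi / 4)      # equal superposition per qubit
best_x, best_f = None, -1

for gen in range(100):
    # 'Observe' each qubit: P(bit = 1) = sin^2(theta).
    x = (rng.random((pop, n_bits)) < np.sin(theta) ** 2).astype(int)
    f = x.sum(axis=1)                          # OneMax fitness
    if f.max() > best_f:
        best_f, best_x = f.max(), x[f.argmax()].copy()
    # Rotation gate: nudge each qubit toward the best solution's bit.
    theta += delta * np.where(best_x == 1, 1.0, -1.0)
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)
print(best_f)  # approaches 32
```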

  8. New Operational Algorithms for Particle Data from Low-Altitude Polar-Orbiting Satellites

    NASA Astrophysics Data System (ADS)

    Machol, J. L.; Green, J. C.; Rodriguez, J. V.; Onsager, T. G.; Denig, W. F.

    2010-12-01

    As part of the algorithm development effort started under the former National Polar-orbiting Operational Environmental Satellite System (NPOESS) program, the NOAA Space Weather Prediction Center (SWPC) is developing operational algorithms for the next generation of low-altitude polar-orbiting weather satellites. This presentation reviews the two new algorithms on which SWPC has focused: Energetic Ions (EI) and Auroral Energy Deposition (AED). Both algorithms take advantage of the improved performance of the Space Environment Monitor - Next (SEM-N) sensors over earlier SEM instruments flown on NOAA Polar Orbiting Environmental Satellites (POES). The EI algorithm iterates a piecewise power-law fit in order to derive a differential energy flux spectrum for protons with energies from 10-250 MeV. The algorithm provides the data in physical units (MeV/cm²-s-sr-keV) instead of just counts/s as was done in the past, making the data generally more useful and easier to integrate into higher-level products. The AED algorithm estimates the energy flux deposited into the atmosphere by precipitating low- and medium-energy charged particles. The AED calculations include particle pitch-angle distributions, information that was not available from POES. This presentation also describes methods that we are evaluating for creating higher-level products that would specify the global particle environment based on real-time measurements.
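
    The piecewise power-law idea is easy to sketch: a power law is a straight line in log-log space, so each segment reduces to ordinary least squares. The channel energies and break point below are placeholders, not SEM-N values.

```python
import numpy as np

def piecewise_power_law(energies, fluxes, breaks):
    """Fit j(E) = A * E**gamma on each energy segment (illustrative).

    A power law is linear in log-log space, so each segment is an
    ordinary least-squares fit; an operational algorithm iterates fits
    like these to build the full differential spectrum.
    """
    fits = []
    edges = [energies[0]] + list(breaks) + [energies[-1]]
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (energies >= lo) & (energies <= hi)
        gamma, logA = np.polyfit(np.log(energies[m]), np.log(fluxes[m]), 1)
        fits.append((lo, hi, np.exp(logA), gamma))
    return fits

E = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 250.0])   # placeholder channels
j = 1e4 * E ** -2.1                                     # synthetic spectrum
print(piecewise_power_law(E, j, breaks=[40.0]))         # recovers gamma = -2.1
```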

  9. Sustainable competitive advantage for accountable care organizations.

    PubMed

    Macfarlane, Michael Alex

    2014-01-01

    In the current period of health industry reform, accountable care organizations (ACOs) have emerged as a new model for the delivery of high-quality and cost-effective healthcare. However, few ACOs operate in direct competition with one another, and the accountable care business model has yet to present a means of continually developing new marginal value for patients and network partners. With value-based purchasing and patient consumerism strengthening as market forces, ACOs must build organizational sustainability and competitive advantage to meet the value demands set by customers and competitors. This essay proposes a strategy, adapted from the disciplines of agile software development and Lean product development, through which ACOs can engage internal and external customers in the development of new products that will provide sustainability and competitive advantage to the organization by decreasing waste in development, promoting specialized knowledge, and closely targeting customer value.

  10. New hydraulic downhole pump offers several advantages

    SciTech Connect

    Not Available

    1983-06-01

    A self-contained, hydraulically operated plunger pump is available to replace conventional equipment in troublesome producing situations. The Soderberg pump from EMI Pump Systems uses an oscillating hydraulic fluid column to energize the plunger, thus eliminating the need for sucker rods and pump jacks or submersible motors. An advantage of the pump's design is that it will stroke only when the pump chamber has been vented of gases and is filled with well liquids. This reduces energy consumption. Other advantages are discussed. The new pump consists of 4 basic sections including an upper subassembly that contains the pump's intelligence, a chamber to receive well fluids, a plunger and a pressurized nitrogen gas chamber that stores energy for the pump's return stroke.

  11. [Internet research methods: advantages and challenges].

    PubMed

    Liu, Yi; Tien, Yueh-Hsuan

    2009-12-01

    Compared to traditional research methods, using the Internet to conduct research offers a number of advantages to the researcher, which include increased access to sensitive issues and vulnerable / hidden populations; decreased data entry time requirements; and enhanced data accuracy. However, Internet research also presents certain challenges to the researcher. In this article, the advantages and challenges of Internet research methods are discussed in four principle issue areas: (a) recruitment, (b) data quality, (c) practicality, and (d) ethics. Nursing researchers can overcome problems related to sampling bias and data truthfulness using creative methods; resolve technical problems through collaboration with other disciplines; and protect participants' privacy, confidentiality and data security by maintaining a high level of vigilance. Once such issues have been satisfactorily addressed, the Internet should open a new window for Taiwan nursing research.

  12. Were there evolutionary advantages to premenstrual syndrome?

    PubMed Central

    Gillings, Michael R

    2014-01-01

    Premenstrual syndrome (PMS) affects up to 80% of women, often leading to significant personal, social and economic costs. When apparently maladaptive states are widespread, they sometimes confer a hidden advantage, or did so in our evolutionary past. We suggest that PMS had a selective advantage because it increased the chance that infertile pair bonds would dissolve, thus improving the reproductive outcomes of women in such partnerships. We confirm predictions arising from the hypothesis: PMS has high heritability; gene variants associated with PMS can be identified; animosity exhibited during PMS is preferentially directed at current partners; and behaviours exhibited during PMS may increase the chance of finding a new partner. Under this view, the prevalence of PMS might result from genes and behaviours that are adaptive in some societies, but are potentially less appropriate in modern cultures. Understanding this evolutionary mismatch might help depathologize PMS, and suggests solutions, including the choice to use cycle-stopping contraception. PMID:25469168

  13. Will the Latino Mortality Advantage Endure?

    PubMed Central

    Goldman, Noreen

    2016-01-01

    Persons of Mexican origin and some other Latino groups in the US have experienced a survival advantage compared with their non-Latino white counterparts, a pattern known as the Latino, Hispanic or epidemiological paradox. However, high rates of obesity and diabetes among Latinos relative to whites and continued increases in the prevalence of these conditions suggest that this advantage may soon disappear. Other phenomena, including high rates of disability in the older Latino population compared with whites, new evidence of health declines shortly after migration to the US, increasing environmental stressors for immigrants, and high risk values of inflammatory markers among Latinos compared with whites support this prediction. One powerful counterargument, however, is substantially lower smoking-attributable mortality among Latinos. Still, it is questionable as to whether smoking behavior can counteract the many forces at play that may impede Latinos from experiencing future improvements in longevity on a par with whites. PMID:26966251

  14. Fringe pattern demodulation with a two-dimensional digital phase-locked loop algorithm.

    PubMed

    Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-09-10

    A novel technique called a two-dimensional digital phase-locked loop (DPLL) for fringe pattern demodulation is presented. This algorithm is more suitable for demodulation of fringe patterns with varying phase in two directions than the existing DPLL techniques that assume that the phase of the fringe patterns varies only in one direction. The two-dimensional DPLL technique assumes that the phase of a fringe pattern is continuous in both directions and takes advantage of the phase continuity; consequently, the algorithm has better noise performance than the existing DPLL schemes. The two-dimensional DPLL algorithm is also suitable for demodulation of fringe patterns with low sampling rates, and it outperforms the Fourier fringe analysis technique in this aspect.
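
    A minimal first-order loop conveys the idea; the published algorithm couples the two directions more carefully, so treat the row seeding below as an illustration of the phase-continuity assumption rather than the paper's scheme.

```python
import numpy as np

def dpll_demodulate(fringe, w0=0.5, gain=0.3):
    """First-order DPLL fringe demodulation, row by row (sketch).

    The loop tracks the fringe phase along each row: the product of the
    normalized fringe with the sine of the estimated phase acts as a
    phase detector, and the error nudges the running phase estimate.
    Seeding each row's loop state from the row above is a simple way
    to exploit phase continuity in the second direction.
    """
    rows, cols = fringe.shape
    f = fringe - fringe.mean()
    f /= np.abs(f).max()
    phase = np.zeros_like(f)
    for r in range(rows):
        phi = phase[r - 1, 0] if r else 0.0     # continuity with row above
        for c in range(cols):
            err = f[r, c] * np.sin(phi)         # phase detector
            phi += w0 - gain * err              # loop update
            phase[r, c] = phi
    return phase
```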

  15. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    PubMed Central

    Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language”, such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure. PMID:24982952

  16. A grammar-based semantic similarity algorithm for natural language sentences.

    PubMed

    Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to "artificial language", such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure.

  17. An 'adding' algorithm for the Markov chain formalism for radiation transfer

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.

    1979-01-01

    An adding algorithm is presented, that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.
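
    The classic scalar adding relations the method builds on fit in a few lines; this sketch shows plain two-stream adding/doubling for homogeneous layers, with the Markov-chain bookkeeping of the paper omitted.

```python
def add_layers(r1, t1, r2, t2):
    """Classic scalar adding relations for two homogeneous layers.

    r, t are layer reflectances/transmittances. The geometric-series
    factor 1/(1 - r1*r2) accounts for all orders of inter-reflection
    between the layers; the Markov-chain formulation generalizes this
    idea by treating a previously computed layer as a single state of
    a new chain.
    """
    m = 1.0 / (1.0 - r1 * r2)          # sum of the inter-reflection series
    r = r1 + t1 * r2 * t1 * m          # reflect off layer 2, escape back up
    t = t1 * t2 * m                    # transmit through both layers
    return r, t

# Doubling: build an optically thick slab from a thin one.
r, t = 0.05, 0.90
for _ in range(6):                     # optical thickness grows by 2**6
    r, t = add_layers(r, t, r, t)
print(round(r, 4), round(t, 4))
```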

  18. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.

    PubMed

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei

    2016-09-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches.

  19. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  20. Binary 3D image interpolation algorithm based global information and adaptive curves fitting

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-yi; Zhang, Jin-hao; Guan, Xiang-chen; Li, Qiu-ping; He, Meng

    2013-08-01

    Interpolation is a necessary processing step in 3-D reconstruction because of the non-uniform resolution. Conventional interpolation methods simply use two adjacent slices to obtain the missing slices between them. When a key slice is missing, those methods may fail to recover it using only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation can hardly yield a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm. The proposed algorithm takes advantage of global information. It chooses the best curve adaptively from many candidate curves based on the complexity of the 3D object's surface. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.
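
    For contrast, the conventional two-slice baseline the paper improves on can be sketched with signed distance maps; this is standard shape-based interpolation, not the authors' global curve-fitting method.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def shape_based_interp(slice_a, slice_b):
    """Conventional two-slice (shape-based) interpolation baseline.

    Each binary slice is converted to a signed distance map (positive
    inside the object, negative outside), the maps are averaged, and
    the midway slice is recovered by thresholding. It uses no global
    information, so a missing key slice with complex surface changes
    is poorly recovered, which is the weakness the paper targets.
    """
    def signed_distance(mask):
        return edt(mask) - edt(~mask)
    mid = 0.5 * (signed_distance(slice_a) + signed_distance(slice_b))
    return mid > 0

a = np.zeros((64, 64), bool); a[16:32, 16:32] = True
b = np.zeros((64, 64), bool); b[24:48, 24:48] = True
print(shape_based_interp(a, b).sum())  # area between the two squares
```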

  1. Indian Defense Procurements: Advantage Russia or USA?

    DTIC Science & Technology

    2015-02-17

    In the 21st century, India has emerged as the biggest importer of defense equipment in the international market. The US, on the other hand, is the...undisputed ruler, and pure logic would entail that it should be the one enjoying a lion's share of the Indian market. However, India's strong strategic...the Russian advantage in the Indian defense market is just a myth. The author synthesizes the problem by contextualizing the reasons for India's

  2. Advantageous effect of theanine intake on cognition.

    PubMed

    Tamano, Haruna; Fukura, Kotaro; Suzuki, Miki; Sakamoto, Kazuhiro; Yokogoshi, Hidehiko; Takeda, Atsushi

    2014-11-01

    Theanine, γ-glutamylethylamide, is one of the major amino acid components in green tea. On the basis of the preventive effect of theanine intake after weaning on stress-induced impairment of recognition memory, the advantageous effect of theanine intake on recognition memory was examined in young rats, which were fed water containing 0.3% theanine for 3 weeks after weaning. The rats were subjected to an object recognition test. Object recognition memory was maintained in theanine-administered rats 48 hours after the training, but not in the control rats. In vivo dentate gyrus long-term potentiation (LTP) was induced to a greater extent in theanine-administered rats than in the control rats. The levels of brain-derived neurotrophic factor and nerve growth factor in the hippocampus were significantly higher in theanine-administered rats than in the control rats. The present study indicates the advantageous effect of theanine intake after weaning on recognition memory. It is likely that theanine intake benefits the development of hippocampal function after weaning.

  3. Explaining Asian Americans' academic advantage over whites.

    PubMed

    Hsin, Amy; Xie, Yu

    2014-06-10

    The superior academic achievement of Asian Americans is a well-documented phenomenon that lacks a widely accepted explanation. Asian Americans' advantage in this respect has been attributed to three groups of factors: (i) socio-demographic characteristics, (ii) cognitive ability, and (iii) academic effort as measured by characteristics such as attentiveness and work ethic. We combine data from two nationally representative cohort longitudinal surveys to compare Asian-American and white students in their educational trajectories from kindergarten through high school. We find that the Asian-American educational advantage is attributable mainly to Asian students exerting greater academic effort and not to advantages in tested cognitive abilities or socio-demographics. We test explanations for the Asian-white gap in academic effort and find that the gap can be further attributed to (i) cultural differences in beliefs regarding the connection between effort and achievement and (ii) immigration status. Finally, we highlight the potential psychological and social costs associated with Asian-American achievement success.

  4. Assessing the binocular advantage in aided vision.

    PubMed

    Harrington, Lawrence K; McIntire, John P; Hopper, Darrel G

    2014-09-01

    Advances in microsensors, microprocessors, and microdisplays are creating new opportunities for improving vision in degraded environments through the use of head-mounted displays. Initially, the cutting-edge technology used in these new displays will be expensive. Inevitably, the cost of providing the additional sensor and processing required to support binocularity brings the value of binocularity into question. Several assessments comparing binocular, biocular, and monocular head-mounted displays for aided vision have concluded that the additional performance, if any, provided by binocular head-mounted displays does not justify the cost. The selection of a biocular [corrected] display for use in the F-35 is a current example of this recurring decision process. It is possible that the human binocularity advantage does not carry over to the aided vision application, but more likely the experimental approaches used in the past have been too coarse to measure its subtle but important benefits. Evaluating the value of binocularity in aided vision applications requires an understanding of the characteristics of both human vision and head-mounted displays. With this understanding, the value of binocularity in aided vision can be estimated and experimental evidence can be collected to confirm or reject the presumed binocular advantage, enabling improved decisions in aided vision system design. This paper describes four computational approaches (geometry of stereopsis, modulation transfer function area for stereopsis, probability summation, and binocular summation) that may be useful in quantifying the advantage of binocularity in aided vision.

  5. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
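
    The first type of subalgorithm amounts to a brute-force search over shift/mask pairs; the sketch below is an illustrative reconstruction of that idea, not the original code.

```python
def synthesize_hash(keys, max_shift=32, max_width=16):
    """Search for a collision-free (shift, mask) pair (illustrative).

    Mirrors the first subalgorithm described above: try shifting each
    key right by a constant and masking a field of bits until every
    key maps to a distinct value. The returned closure runs in
    constant time with no collision handling, which is the property
    the synthesis approach guarantees.
    """
    for width in range(1, max_width + 1):
        mask = (1 << width) - 1
        for shift in range(max_shift):
            image = {(k >> shift) & mask for k in keys}
            if len(image) == len(keys):          # injective: no collisions
                return lambda k, s=shift, m=mask: (k >> s) & m
    return None

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
h = synthesize_hash(keys)
print([h(k) for k in keys])  # four distinct values
```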

  6. Auditory perspective taking.

    PubMed

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners.

  7. Take the "C" Train

    ERIC Educational Resources Information Center

    Lawton, Rebecca

    2008-01-01

    In this essay, the author recalls several of her experiences in which she successfully pulled her boats out of river holes by throwing herself to the water as a sea-anchor. She learned this trick from her senior guides at a spring training. Her guides told her, "When you're stuck in a hole, take the 'C' train." "Meaning?" the author asked her…

  8. It Takes an Ecosystem

    DTIC Science & Technology

    2012-04-25

  9. "Greenbook Algorithms and Hardware Needs Analysis"

    SciTech Connect

    De Jong, Wibe A.; Oehmen, Chris S.; Baxter, Douglas J.

    2007-01-09

    "This document describes the algorithms, and hardware balance requirements needed to enable the solution of real scientific problems in the DOE core mission areas of environmental and subsurface chemistry, computational and systems biology, and climate science. The MSCF scientific drivers have been outlined in the Greenbook, which is available online at http://mscf.emsl.pnl.gov/docs/greenbook_for_web.pdf . Historically, the primary science driver has been the chemical and the molecular dynamics of the biological science area, whereas the remaining applications in the biological and environmental systems science areas have been occupying a smaller segment of the available hardware resources. To go from science drivers to hardware balance requirements, the major applications were identified. Major applications on the MSCF resources are low- to high-accuracy electronic structure methods, molecular dynamics, regional climate modeling, subsurface transport, and computational biology. The algorithms of these applications were analyzed to identify the computational kernels in both sequential and parallel execution. This analysis shows that a balanced architecture is needed with respect to processor speed, peak flop rate, peak integer operation rate, and memory hierarchy, interprocessor communication, and disk access and storage. A single architecture can satisfy the needs of all of the science areas, although some areas may take greater advantage of certain aspects of the architecture. "

  10. GPU acceleration of simplex volume algorithm for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Zhang, Junping; Lin, Zhouhan; Chen, Hao; Huang, Bormin

    2012-10-01

    The simplex volume algorithm (SVA) is an endmember extraction algorithm based on the geometrical properties of a simplex in the feature space of a hyperspectral image. By utilizing the relation between a simplex volume and its corresponding parallelohedron volume in the high-dimensional space, the algorithm extracts endmembers from the initial hyperspectral image directly, without the need for dimension reduction. It thus avoids the drawback of the N-FINDR algorithm, which requires the dimension of the data to be reduced to one less than the number of endmembers. In this paper, we take advantage of the large-scale parallelism of CUDA (Compute Unified Device Architecture) to accelerate the computation of SVA on the NVIDIA GeForce 560 GPU. The time for computing a simplex volume increases with the number of endmembers. Experimental results show that the proposed GPU-based SVA achieves a significant 112.56x speedup for extracting 16 endmembers, as compared to its CPU-based single-threaded counterpart.
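
    The per-candidate computation being accelerated is the simplex volume via its parallelohedron (Gram determinant) relation, which works in the full feature-space dimension; a plain NumPy sketch, not the CUDA kernel:

```python
import math
import numpy as np

def simplex_volume(vertices):
    """Volume of the k-simplex spanned by k+1 vertices in R^d (d >= k).

    The edge vectors E from the first vertex span a parallelohedron
    whose volume is sqrt(det(E E^T)); dividing by k! gives the simplex
    volume. Because the Gram determinant works for d > k, no dimension
    reduction of the hyperspectral data is required.
    """
    E = vertices[1:] - vertices[0]
    k = len(E)
    return math.sqrt(max(np.linalg.det(E @ E.T), 0.0)) / math.factorial(k)

# Unit right triangle embedded in 3-D feature space: area 0.5.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(simplex_volume(tri))
```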

  11. Towards General Algorithms for Grammatical Inference

    NASA Astrophysics Data System (ADS)

    Clark, Alexander

    Many algorithms for grammatical inference can be viewed as instances of a more general algorithm which maintains a set of primitive elements, which distributionally define sets of strings, and a set of features or tests that constrain various inference rules. Using this general framework, which we cast as a process of logical inference, we re-analyse Angluin's famous L* algorithm and several recent algorithms for the inference of context-free grammars and multiple context-free grammars. Finally, to illustrate the advantages of this approach, we extend it to the inference of functional transductions from positive data only, and we present a new algorithm for the inference of finite state transducers.

  12. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained packing problem is introduced as an idealized model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
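
    A dispatch scheduler of the kind compared here is only a few lines; the job format and the earliest-deadline-first rule below are assumptions for illustration, not the paper's exact formulation.

```python
def dispatch_schedule(jobs):
    """Greedy dispatch for window-constrained packing (sketch).

    Each job is (release, deadline, duration) on a single resource.
    Jobs are dispatched earliest-deadline-first at the machine's
    current free time; a job that cannot finish inside its window is
    dropped. Fast and fairly accurate, as the comparison above notes,
    but it never backtracks the way look-ahead or genetic methods do.
    """
    t, placed = 0, []
    for release, deadline, dur in sorted(jobs, key=lambda j: j[1]):
        start = max(t, release)
        if start + dur <= deadline:
            placed.append((start, start + dur))
            t = start + dur
    return placed

print(dispatch_schedule([(0, 10, 4), (2, 6, 3), (0, 5, 2)]))
# [(0, 2), (2, 5), (5, 9)]
```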

  13. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
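
    A generic simulated-annealing skeleton of the kind the fourth algorithm applies is sketched below; the state representation, neighbor move, and energy function here are placeholders we assume for illustration, not the paper's pedigree likelihood.

        # Hedged sketch: generic simulated annealing for combinatorial optimization.
        import math, random

        def anneal(state, energy, neighbor, t0=1.0, cooling=0.995, steps=10_000, seed=0):
            rnd = random.Random(seed)
            best = cur = state
            e_cur = e_best = energy(cur)
            t = t0
            for _ in range(steps):
                cand = neighbor(cur, rnd)
                e_cand = energy(cand)
                # Accept downhill moves always, uphill moves with Boltzmann probability.
                if e_cand <= e_cur or rnd.random() < math.exp((e_cur - e_cand) / t):
                    cur, e_cur = cand, e_cand
                    if e_cur < e_best:
                        best, e_best = cur, e_cur
                t *= cooling                      # cool the temperature each step
            return best, e_best

        # Toy usage: minimize the number of 1-bits in a binary vector.
        def flip(s, rnd):
            i = rnd.randrange(len(s))
            return s[:i] + [1 - s[i]] + s[i+1:]

        print(anneal([1] * 8, energy=sum, neighbor=flip))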

  14. Enforced Clonality Confers a Fitness Advantage

    PubMed Central

    Martínková, Jana; Klimešová, Jitka

    2016-01-01

    In largely clonal plants, splitting of a maternal plant into potentially independent plants (ramets) is usually spontaneous; however, such fragmentation also occurs in otherwise non-clonal species due to the application of external force. This process might play an important yet largely overlooked role for otherwise non-clonal plants by providing a mechanism to regenerate after disturbance. Here, in a 5-year garden experiment on two short-lived, otherwise non-clonal species, Barbarea vulgaris and Barbarea stricta, we compared the fitness of plants fragmented by simulated disturbance (“enforced ramets”) both with plants that contemporaneously originated from seed and with individuals unscathed by the disturbance event. Because the ability to regrow from fragments is related to plant age and stored reserves, we compared the effects of disturbance applied during three different ontogenetic stages of the plants. In B. vulgaris, enforced ramet fitness was higher than the measured fitness values of both uninjured plants and plants established from seed after the disturbance. This advantage decreased with increasing plant age at the time of fragmentation. In B. stricta, enforced ramet fitness was lower than or similar to the fitness of uninjured plants and plants grown from seed. Our results likely reflect the habitat preferences of the study species, as B. vulgaris occurs in anthropogenic, disturbed habitats where body fragmentation is more probable and enforced clonality thus more advantageous than in the more natural habitats preferred by B. stricta. Generalizing from our results, we see that increased fitness yielded by enforced clonality would confer an evolutionary advantage in the face of disturbance, especially in habitats where a seed bank has not been formed, e.g., during invasion or colonization. Our results thus imply that enforced clonality should be taken into account when studying population dynamics and life strategies of otherwise non-clonal species in disturbed habitats.

  15. Does Medicare Advantage Cost Less Than Traditional Medicare?

    PubMed

    Biles, Brian; Casillas, Giselle; Guterman, Stuart

    2016-01-01

    The costs of providing benefits to enrollees in private Medicare Advantage (MA) plans are slightly less, on average, than what traditional Medicare spends per beneficiary in the same county. However, MA plans that are able to keep their costs comparatively low are concentrated in a fairly small number of U.S. counties. In the 25 counties where the cost differences between MA plans and traditional Medicare are largest, MA plans spent a total of $5.2 billion less than what traditional Medicare would have been expected to spend on the same beneficiaries, with health maintenance organizations (HMOs) accounting for all of that difference. In the rest of the country, MA plans spent $4.8 billion above the expected costs under traditional Medicare. Broad determinations about the relative efficiency of MA plans and traditional Medicare can therefore be misleading, as they fail to take into account local conditions and individual plans' performance.

  16. Establishing a competitive advantage through quality management.

    PubMed

    George, R J

    1996-06-01

    The successful dentist of the future will establish a sustainable competitive advantage in the marketplace by recognising that patients undergoing dental treatment cannot see the result before purchase, and that they therefore look for signs of service quality to reduce uncertainty. Thus the successful dentist will implement a quality programme that recognises not only that quality is defined by meeting patients' needs and expectations, but also that quality service is fundamental to successful business strategy. Finally, the successful dentist of the future will realise that the pursuit of quality is a never-ending process which requires leadership by example.

  17. Stochastic proximity embedding on graphics processing units: taking multidimensional scaling to a new scale.

    PubMed

    Yang, Eric; Liu, Pu; Rassokhin, Dimitrii N; Agrafiotis, Dimitris K

    2011-11-28

    Stochastic proximity embedding (SPE) was developed as a method for efficiently calculating lower dimensional embeddings of high-dimensional data sets. Rather than using a global minimization scheme, SPE relies upon updating the distances of randomly selected points in an iterative fashion. This was found to generate embeddings of comparable quality to those obtained using classical multidimensional scaling algorithms. However, SPE is able to obtain these results in O(n) rather than O(n²) time and thus is much better suited to large data sets. In an effort both to speed up SPE and utilize it for even larger problems, we have created a multithreaded implementation which takes advantage of the growing general computing power of graphics processing units (GPUs). The use of GPUs allows the embedding of data sets containing millions of data points in interactive time scales.
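
    The core SPE update is simple enough to sketch; the following minimal single-threaded Python version (ours, not the authors' GPU implementation) shows the pairwise refinement the abstract describes: pick a random pair, compare its embedded distance to its original distance, and nudge both points, with the learning rate annealed toward zero.

        # Hedged sketch of the stochastic proximity embedding update rule.
        import numpy as np

        def spe(data, dim=2, n_steps=200_000, lam=1.0, eps=1e-8, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.random((len(data), dim))                 # random initial embedding
            decay = lam / n_steps
            for _ in range(n_steps):
                i, j = rng.integers(0, len(data), size=2)
                if i == j:
                    continue
                r = np.linalg.norm(data[i] - data[j])        # distance in the original space
                d = np.linalg.norm(x[i] - x[j]) + eps        # distance in the embedding
                g = lam * 0.5 * (r - d) / d * (x[i] - x[j])  # move along the connecting line
                x[i] += g
                x[j] -= g
                lam -= decay                                 # anneal the learning rate
            return x

        # Each step touches one pair, so the cost per pass is O(n), as the abstract notes.
        emb = spe(np.random.default_rng(1).random((100, 10)))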

  18. Categorizing Variations of Student-Implemented Sorting Algorithms

    ERIC Educational Resources Information Center

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

    In this study, we examined freshman students' sorting algorithm implementations in a data structures and algorithms course in two phases: at the beginning of the course, before the students received any instruction on sorting algorithms, and after a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  19. Complexity, Competitive Intelligence and the "First Mover" Advantage

    NASA Astrophysics Data System (ADS)

    Fellman, Philip Vos; Post, Jonathan Vos

    In the following paper we explore some of the ways in which competitive intelligence and game theory can be employed to assist firms in deciding whether or not to undertake international market diversification and whether or not there is an advantage to being a market leader or a market follower overseas. In attempting to answer these questions, we take a somewhat unconventional approach. We first examine how some of the most recent advances in the physical and biological sciences can contribute to the ways in which we understand how firms behave. Subsequently, we propose a formal methodology for competitive intelligence. While space considerations here do not allow for a complete game-theoretic treatment of competitive intelligence and its use with respect to understanding first and second mover advantage in firm internationalization, that treatment can be found in its entirety in the on-line proceedings of the 6th International Conference on Complex Systems at http://knowledgetoday.org/wiki/indec.php/ICCS06/89

  1. Rural Medicare Advantage Plan Payment in 2015.

    PubMed

    Kemper, Leah; Barker, Abigail R; McBride, Timothy D; Mueller, Keith

    2015-12-01

    Payment to Medicare Advantage (MA) plans was fundamentally altered in the Patient Protection and Affordable Care Act of 2010 (ACA). MA plans now operate under a new formula for county-level payment-area benchmarks, and in 2012 began receiving quality-based bonus payments. The Medicare Advantage Quality Bonus Payment Demonstration expanded the bonus payments to most MA plans through 2014; however, with the end of the demonstration, bonus payments have been reduced for intermediate-quality MA plans. This brief examines the impact that these changes in MA baseline payment are having on MA plans and beneficiaries in rural and urban areas. Key Data Findings. (1) Payments to plans in rural areas were 3.9 percent smaller under ACA payment policies in 2015 than they would have been in the absence of the ACA. For plans in urban areas, the payments were 8.8 percent smaller than they would have been. These figures were determined using hypothetical pre-ACA and actual ACA-mandated benchmarks for 2015. (2) MA plans in rural areas received an average annual bonus payment of $326.77 per enrollee in 2014, but only $63.76 per enrollee in 2015, with the conclusion of the demonstration. (3) In 2014, 92 percent of rural MA beneficiaries were in a plan that received quality-based bonus payments under the demonstration, while in March 2015, 56 percent of rural MA beneficiaries were in a plan that was eligible for quality-based bonus payments.

  2. An evolutionary advantage for extravagant honesty.

    PubMed

    Bullock, Seth

    2012-01-07

    A game-theoretic model of handicap signalling over a pair of signalling channels is introduced in order to determine when one channel has an evolutionary advantage over the other. The stability conditions for honest handicap signalling are presented for a single channel and are shown to conform with the results of prior handicap signalling models. Evolutionary simulations are then used to show that, for a two-channel system in which honest signalling is possible on both channels, the channel featuring larger advertisements at equilibrium is favoured by evolution. This result helps to address a significant tension in the handicap principle literature. While the original theory was motivated by the prevalence of extravagant natural signalling, contemporary models have demonstrated that it is the cost associated with deception that stabilises honesty, and that the honest signals exhibited at equilibrium need not be extravagant at all. The current model suggests that while extravagant and wasteful signals are not required to ensure a signalling system's evolutionary stability, extravagant signalling systems may enjoy an advantage in terms of evolutionary attainability.

  3. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. One improvement in image quality over the original algorithm is to ignore the contributions from dissimilar windows. Even though each of their weights is very small, the estimated pixel value can be severely biased by the many small contributions. This adverse influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighbourhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual quality in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing many repetitive structures, such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied to other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or the detection of digital image forgery.
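
    A heavily simplified sketch of the pre-classification idea follows; the window sizes, the filtering parameter h, and the first-moment threshold are illustrative assumptions of ours, and only the mean (first-moment) test is shown rather than all three moments.

        # Hedged sketch: non-local means for one pixel, skipping windows whose mean
        # differs too much from the reference window (weight implicitly set to zero).
        import numpy as np

        def nlm_pixel(img, r, c, patch=3, search=7, h=10.0, mean_thresh=10.0):
            half, s = patch // 2, search // 2
            pad = np.pad(img.astype(float), half + s, mode="reflect")
            r, c = r + half + s, c + half + s
            ref = pad[r-half:r+half+1, c-half:c+half+1]
            num = den = 0.0
            for dr in range(-s, s + 1):
                for dc in range(-s, s + 1):
                    win = pad[r+dr-half:r+dr+half+1, c+dc-half:c+dc+half+1]
                    if abs(win.mean() - ref.mean()) > mean_thresh:
                        continue                  # pre-classification: skip dissimilar window
                    w = np.exp(-np.sum((win - ref) ** 2) / (h * h))
                    num += w * pad[r + dr, c + dc]
                    den += w
            return num / den                      # dr = dc = 0 guarantees den > 0

        img = np.random.default_rng(2).integers(0, 256, (32, 32))
        print(nlm_pixel(img, 16, 16))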

  4. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
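
    The RNG itself is easy to state: an edge (p, q) survives only if no third node u is simultaneously closer to both p and q than they are to each other. A brute-force sketch (ours, written for clarity rather than efficiency) follows.

        # Hedged sketch: O(n^3) construction of the relative-neighborhood graph.
        import itertools, math

        def rng_edges(nodes):
            edges = []
            for p, q in itertools.combinations(range(len(nodes)), 2):
                dpq = math.dist(nodes[p], nodes[q])
                # Keep (p, q) unless some u lies in the "lune" between p and q.
                if not any(max(math.dist(nodes[p], nodes[u]),
                               math.dist(nodes[q], nodes[u])) < dpq
                           for u in range(len(nodes)) if u not in (p, q)):
                    edges.append((p, q))
            return edges

        print(rng_edges([(0, 0), (1, 0), (0.5, 0.9), (3, 3)]))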

  5. [Biomarkers for liver fibrosis: advances, advantages and disadvantages].

    PubMed

    Cequera, A; García de León Méndez, M C

    2014-01-01

    Liver cirrhosis in Mexico is one of the most important causes of death in persons between the ages of 25 and 50 years. One of the reasons for therapeutic failure is the lack of knowledge about the molecular mechanisms that cause liver disorder and make it irreversible. One of its prevalent anatomical characteristics is an excessive deposition of fibrous tissue that takes different forms depending on etiology and disease stage. Liver biopsy, traditionally regarded as the gold standard of fibrosis staging, has been brought into question over the past decade, resulting in the proposal for developing non-invasive technologies based on different, but complementary, approaches: a biological one that takes the serum levels of products arising from the fibrosis into account, and a more physical one that evaluates scarring of the liver by methods such as ultrasound and magnetic resonance elastography; some of the methods were originally studied and validated in patients with hepatitis C. There is great interest in determining non-invasive markers for the diagnosis of liver fibrosis, since at present there is no panel or parameter efficient and reliable enough for diagnostic use. In this paper, we describe the biomarkers that are currently being used for studying liver fibrosis in humans, their advantages and disadvantages, as well as the implementation of new-generation technologies and the evaluation of their possible use in the diagnosis of fibrosis.

  6. Taking action against violence.

    PubMed

    Kunz, K

    1996-05-01

    A significant increase in violent crimes in recent years has forced Icelandic men to take action against violence. Television was seen as a major contributory factor in increasing violence. Surveys indicate that 10-15 years after television broadcasting commences in a particular society, the incidence of crime can be expected to double. While the majority of individuals arrested for violent crimes are men, being male does not necessarily mean being violent. The Men's Committee of the Icelandic Equal Rights Council initiated a week-long information and education campaign under the theme "Men Against Violence". This campaign involved several events, including an art exhibit, speeches on violence in families and on treatment sought by those who are likely to resort to violence, booklet distribution among students in secondary schools, and a mass media campaign to raise public awareness of this pressing problem.

  7. Algorithms for distributed and mobile sensing

    NASA Astrophysics Data System (ADS)

    Isler, Ibrahim Volkan

    Sensing remote, complex and large environments is an important task that arises in diverse applications including planetary exploration, monitoring forest fires and the surveillance of large factories. Currently, automation of such sensing tasks in complex environments is achieved either by deploying many stationary sensors to the environment, or by mounting a sensor on a mobile device and using the device to sense the environment. The Eighties and Nineties witnessed tremendous advances in both distributed and mobile sensing technologies. To take advantage of these technologies, it is crucial to design algorithms to perform sensing tasks in an autonomous fashion. In this dissertation, we study four fundamental sensing problems that arise in sensing complex environments with distributed and mobile systems. For mobile sensing systems we study exploration and pursuit-evasion problems. In the exploration problem, the goal is to design a strategy for a mobile robot so that the robot sees every point in an unknown environment as quickly as possible. In the pursuit-evasion problem, the goal is to design a strategy for a pursuer to capture an adversarial evader. For distributed sensing systems we study placement and assignment problems. In the placement problem, the goal is to place sensors to an environment so that every point in the environment is in the range of at least one sensor. The assignment problem deals with the issue of assigning targets to sensors in a network, so that overall error in estimating the position of the targets is minimized. We present algorithms to perform these sensing tasks in an efficient fashion. Performance guarantees of the algorithms are mathematically proven and evaluated by simulations.

  8. Advantages and Challenges of Superconducting Accelerators

    NASA Astrophysics Data System (ADS)

    Krischel, Detlef

    After a short review of the history of high-energy superconducting (SC) accelerators for ion beam therapy (IBT), an overview is given of material properties and technical developments enabling the use of SC components in a medical accelerator for full-body cancer treatment. The design concept and the assembly of a commercially available SC cyclotron for proton therapy (PT) are described, and the potential advantages of applying superconductivity are assessed. The discussion includes the first years of operational experience with regard to cryogenic and magnetic performance, automated beam control, and maintenance aspects. An outlook is given on alternative machine concepts for protons only or for heavier ions. Finally, it is discussed whether the application of superconductivity might be expanded in the future to a broader range of subsystems of clinical IBT accelerators, such as SC magnets for transfer beam lines or gantries.

  9. Optical advantages in retinal scanning displays

    NASA Astrophysics Data System (ADS)

    Urey, Hakan

    2000-06-01

    Virtual Retinal Display™ technology is a retinal scanning display (RSD) technology being developed at Microvision, Inc., for a variety of applications including microdisplays. An RSD scans a modulated light beam onto a viewer's retina to produce a perceived image. Red, green and blue light sources, such as lasers, laser diodes or LEDs, combine with Microvision's proprietary miniaturized scanner designs to make the RSD very well suited for head-worn and helmet-mounted displays (HMDs). This paper compares the features of RSD technology to other display technologies, such as cathode ray tubes or matrix-based displays, for HMD and other wearable display applications, and notes important performance advantages due to the number of pixel-generating elements. Also discussed are some fundamental optical limitations of virtual displays used in HMD applications.

  10. The mechanical defence advantage of small seeds.

    PubMed

    Fricke, Evan C; Wright, S Joseph

    2016-08-01

    Seed size and toughness affect seed predators, and size-dependent investment in mechanical defence could affect relationships between seed size and predation. We tested how seed toughness and mechanical defence traits (tissue density and protective tissue content) are related to seed size among tropical forest species. Absolute toughness increased with seed size. However, smaller seeds had higher specific toughness both within and among species, with the smallest seeds requiring over 2000 times more energy per gram to break than the largest seeds. Investment in mechanical defence traits varied widely but independently of the toughness-mass allometry. Instead, a physical scaling relationship confers a toughness advantage on small seeds independent of selection on defence traits and without a direct cost. This scaling relationship may contribute to seed size diversity by decreasing fitness differences among large and small seeds. Allometric scaling of toughness reconciles predictions and conflicting empirical relationships between seed size and predation.

  11. Accounting for the Down syndrome advantage?

    PubMed

    Esbensen, Anna J; Seltzer, Marsha Mailick

    2011-01-01

    The authors examined factors that could explain the higher levels of psychosocial well-being observed in past research in mothers of individuals with Down syndrome compared with mothers of individuals with other types of intellectual disabilities. The authors studied 155 mothers of adults with Down syndrome, contrasting factors that might validly account for the "Down syndrome advantage" (behavioral phenotype) with those that have been portrayed in past research as artifactual (maternal age, social supports). The behavioral phenotype predicted less pessimism, more life satisfaction, and a better quality of the mother-child relationship. However, younger maternal age and fewer social supports, as well as the behavioral phenotype, predicted higher levels of caregiving burden. Implications for future research on families of individuals with Down syndrome are discussed.

  12. The competitive advantage of sanctioning institutions.

    PubMed

    Gürerk, Ozgür; Irlenbusch, Bernd; Rockenbach, Bettina

    2006-04-07

    Understanding the fundamental patterns and determinants of human cooperation and the maintenance of social order in human societies is a challenge across disciplines. The existing empirical evidence for the higher levels of cooperation when altruistic punishment is present versus when it is absent systematically ignores the institutional competition inherent in human societies. Whether punishment would be deliberately adopted and would similarly enhance cooperation when directly competing with nonpunishment institutions is highly controversial in light of recent findings on the detrimental effects of punishment. We show experimentally that a sanctioning institution is the undisputed winner in a competition with a sanction-free institution. Despite initial aversion, the entire population migrates successively to the sanctioning institution and strongly cooperates, whereas the sanction-free society becomes fully depopulated. The findings demonstrate the competitive advantage of sanctioning institutions and exemplify the emergence and manifestation of social order driven by institutional selection.

  13. The kinematic advantage of electric cars

    NASA Astrophysics Data System (ADS)

    Meyn, Jan-Peter

    2015-11-01

    Acceleration of a common car with a turbocharged diesel engine is compared to the same car type with an electric motor in terms of kinematics. Starting from a state of rest, the electric car reaches a distant spot earlier than the diesel car, even though the latter has a better specification for engine power and for average acceleration from 0 to 100 km/h. A three-phase model of acceleration as a function of time fits the data of the electric car accurately. The first phase is a quadratic growth of acceleration in time. It is shown that the tenfold higher coefficient for the first phase accounts for most of the kinematic advantage of the electric car.
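
    Integrating the first-phase model a(t) = k t^2 twice gives a closed-form distance s(t) = k t^4 / 12. The hedged numerical sketch below uses invented coefficients, with the electric car's coefficient taken ten times larger as the abstract reports.

        # Hedged illustration of the first-phase kinematics; k values are hypothetical.
        def distance(k, t):
            return k * t ** 4 / 12          # from a(t) = k t^2, integrated twice

        k_electric, k_diesel = 10.0, 1.0    # assumed coefficients with the 10x ratio
        for t in (0.5, 1.0, 2.0):
            print(f"t={t:3}s  electric: {distance(k_electric, t):7.3f} m  "
                  f"diesel: {distance(k_diesel, t):7.3f} m")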

  14. Longitudinal research strategies: advantages, problems, and prospects.

    PubMed

    Farrington, D P

    1991-05-01

    The single-cohort, long-term longitudinal survey has many advantages in comparison with a cross-sectional survey in advancing knowledge about offending and other types of psychopathology, notably in providing information about onset and desistance, about continuity and prediction, and about within-individual change. However, the longitudinal survey also has significant problems, notably in confounding aging and period effects, delayed results, achieving continuity in funding and research direction, and cumulative attrition. This paper suggests the use of a multiple-cohort sequential strategy (the "accelerated longitudinal design") as a way of achieving the benefits of the longitudinal method while minimizing the problems in advancing knowledge about the natural history, causes, prevention, and treatment of psychopathological disorders.

  15. Improvements to previous algorithms to predict gene structure and isoform concentrations using Affymetrix Exon arrays

    PubMed Central

    2010-01-01

    Background Exon arrays provide a way to measure the expression of different isoforms of genes in an organism. Most of the procedures to deal with these arrays are focused on gene expression or on exon expression. Although the only biological analytes that can be properly assigned a concentration are transcripts, there are very few algorithms that focus on them. The reason is that previously developed summarization methods do not work well if applied to transcripts. In addition, gene structure prediction, i.e., the correspondence between probes and novel isoforms, is a field which is still unexplored. Results We have modified and adapted a previous algorithm to take advantage of the special characteristics of the Affymetrix exon arrays. The structure and concentration of transcripts (some of them possibly unknown) in microarray experiments were predicted using this algorithm. Simulations showed that the suggested modifications improved both the specificity (SP) and sensitivity (ST) of the predictions. The algorithm was also applied to different real datasets, showing its effectiveness and concordance with PCR-validated results. Conclusions The proposed algorithm shows a substantial improvement in performance over the previous version. This improvement is mainly due to the exploitation of the redundancy of the Affymetrix exon arrays. An R package of SPACE with the updated algorithms has been developed and is freely available. PMID:21110835

  16. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form, to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm, which includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
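
    The three quoted evaluation figures can be reproduced from per-scene masks as in the following hedged sketch; the random toy masks stand in for the Landsat and manual masks and are our own illustration.

        # Hedged sketch: scene cloud-fraction bias, its RMSE, and pixel agreement.
        import numpy as np

        def evaluate(auto_masks, manual_masks):
            """Each argument: list of boolean cloud masks, one pair per scene."""
            cf_err = [a.mean() - m.mean() for a, m in zip(auto_masks, manual_masks)]
            agree = [np.mean(a == m) for a, m in zip(auto_masks, manual_masks)]
            return (np.mean(cf_err),                      # mean cloud-fraction error
                    np.sqrt(np.mean(np.square(cf_err))),  # RMSE of cloud fraction
                    np.mean(agree))                       # fraction of agreeing pixels

        rng = np.random.default_rng(3)
        auto = [rng.random((64, 64)) > 0.5 for _ in range(5)]
        manual = [rng.random((64, 64)) > 0.5 for _ in range(5)]
        print(evaluate(auto, manual))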

  17. Flap reconstruction of the knee: A review of current concepts and a proposed algorithm

    PubMed Central

    Gravvanis, Andreas; Kyriakopoulos, Antonios; Kateros, Konstantinos; Tsoutsos, Dimosthenis

    2014-01-01

    A literature search focusing on flap knee reconstruction revealed much controversy regarding the optimal management of defects around the knee. Muscle flaps are the preferred option, mainly in infected wounds. Perforator flaps have recently been introduced in knee coverage, with significant advantages due to low donor morbidity and long pedicles with a wide arc of rotation. In the case of a free flap, the choice of recipient vessels is the key point of the reconstruction. Taking the published experience into account, a reconstructive algorithm is proposed according to the size and location of the wound, the presence of infection and/or a 3-dimensional defect. PMID:25405089

  18. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    SciTech Connect

    Huang, Peng; Andersson, Sean B.

    2014-06-15

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides.

  19. Reconstruction algorithm for limited-angle diffraction tomography for microwave NDE

    SciTech Connect

    Paladhi, P. Roy; Klaser, J.; Tayebi, A.; Udpa, L.; Udpa, S.

    2014-02-18

    Microwave tomography is becoming a popular imaging modality in nondestructive evaluation and medicine. A commonly encountered challenge in tomography in general is that in many practical situations full 360° angular access is not possible, and with limited access the quality of the reconstructed image is compromised. This paper presents an approach for reconstruction with limited angular access in diffraction tomography. The algorithm takes advantage of redundancies in the image Fourier-space data obtained from diffracted field measurements and couples them to an error minimization technique using constrained total variation (CTV) minimization. Initial results from simulated data are presented here to validate the approach.
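
    To give a flavour of the total-variation ingredient, the toy sketch below runs gradient descent on a data-fidelity term plus a smoothed TV penalty for a 2-D image. It is a generic illustration under our own assumptions, not the paper's diffraction-tomography solver, and the periodic boundary handling is deliberately crude.

        # Hedged sketch: TV-regularized denoising by gradient descent.
        import numpy as np

        def tv_denoise(y, lam=0.1, step=0.2, iters=200, eps=1e-6):
            x = y.copy()
            for _ in range(iters):
                dx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences
                dy = np.diff(x, axis=1, append=x[:, -1:])
                mag = np.sqrt(dx**2 + dy**2 + eps)          # smoothed gradient magnitude
                # Divergence of the normalized gradient field = (negative) TV subgradient.
                div = (dx / mag - np.roll(dx / mag, 1, axis=0)
                       + dy / mag - np.roll(dy / mag, 1, axis=1))
                x -= step * ((x - y) - lam * div)           # fidelity + TV descent step
            return x

        noisy = np.random.default_rng(4).normal(size=(32, 32))
        clean = tv_denoise(noisy)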

  20. Spectral Classification of Similar Materials using the Tetracorder Algorithm: The Calcite-Epidote-Chlorite Problem

    NASA Technical Reports Server (NTRS)

    Dalton, J. Brad; Bove, Dana; Mladinich, Carol; Clark, Roger; Rockwell, Barnaby; Swayze, Gregg; King, Trude; Church, Stanley

    2001-01-01

    Recent work on automated spectral classification algorithms has sought to distinguish ever-more-similar materials. From modest beginnings separating shade, soil, rock and vegetation to ambitious attempts to discriminate mineral types and specific plant species, the trend has been toward using increasingly subtle spectral differences to perform the classification. Rule-based expert systems exploiting the underlying physics of spectroscopy, such as the US Geological Survey Tetracorder system, are now taking advantage of the high spectral resolution and dimensionality of current imaging spectrometer designs to discriminate spectrally similar materials. The current paper details recent efforts to discriminate three minerals having absorptions centered at the same wavelength, with encouraging results.

  1. Developments of aerosol retrieval algorithm for Geostationary Environmental Monitoring Spectrometer (GEMS) and the retrieval accuracy test

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, J.; Jeong, U.; Ahn, C.; Bhartia, P. K.; Torres, O.

    2013-12-01

    A scanning UV-Visible spectrometer, the GEMS (Geostationary Environment Monitoring Spectrometer) onboard the GEO-KOMPSAT2B (Geostationary Korea Multi-Purpose Satellite), is planned to be launched into geostationary orbit in 2018. The GEMS employs hyper-spectral imaging with 0.6 nm resolution to observe solar backscatter radiation in the UV and visible range. In the UV range, the low surface contribution to the backscattered radiation and the strong interaction between aerosol absorption and molecular scattering are advantageous for retrieving aerosol optical properties such as aerosol optical depth (AOD) and single scattering albedo (SSA). By taking advantage of this, the OMI UV aerosol algorithm has provided information on absorbing aerosols (Torres et al., 2007; Ahn et al., 2008). This study presents a UV-VIS algorithm to retrieve AOD and SSA from GEMS. The algorithm is based on a general inversion method, which uses a pre-calculated look-up table with assumed aerosol properties and measurement conditions. To assess the retrieval accuracy, the error of the look-up-table method caused by the interpolation of pre-calculated radiances is estimated using a reference dataset, and the uncertainties associated with aerosol type and height are evaluated. Also, the GEMS aerosol algorithm is tested with measured normalized radiances from OMI, a provisional dataset for GEMS measurements, and the results are compared with values from AERONET measurements over Asia. Additionally, a method for the simultaneous retrieval of AOD and aerosol height is discussed.

  2. A Note on Evolutionary Algorithms and Its Applications

    ERIC Educational Resources Information Center

    Bhargava, Shifali

    2013-01-01

    This paper introduces evolutionary algorithms and their applications in multi-objective optimization. Elitist and non-elitist multiobjective evolutionary algorithms are discussed, with their advantages and disadvantages. We also discuss constrained multiobjective evolutionary algorithms and their applications in various areas.

  3. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and of the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
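
    The preconditioned conjugate gradient loop applied to the Schur-complement system has a standard form, sketched below with a simple Jacobi (diagonal) preconditioner as a stand-in assumption; the Schur complement itself is not constructed here.

        # Hedged sketch: preconditioned conjugate gradient for a symmetric
        # positive-definite system A x = b, with a Jacobi preconditioner.
        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:     # converged
                    break
                z = M_inv * r                   # apply the preconditioner
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])  # small SPD test matrix
        print(pcg(A, np.array([1.0, 2.0])))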

  4. Prochlorococcus: advantages and limits of minimalism.

    PubMed

    Partensky, Frédéric; Garczarek, Laurence

    2010-01-01

    Prochlorococcus is the key phytoplanktonic organism of tropical gyres, large ocean regions that are depleted of the essential macronutrients needed for photosynthesis and cell growth. This cyanobacterium has adapted itself to oligotrophy by minimizing the resources necessary for life through a drastic reduction of cell and genome sizes. This rarely observed strategy in free-living organisms has conferred on Prochlorococcus a considerable advantage over other phototrophs, including its closest relative Synechococcus, for life in this vast yet little variable ecosystem. However, this strategy seems to reach its limits in the upper layer of the S Pacific gyre, the most oligotrophic region of the world ocean. By losing some important genes and/or functions during evolution, Prochlorococcus has seemingly become dependent on co-occurring microorganisms. In this review, we present some of the recent advances in the ecology, biology, and evolution of Prochlorococcus, which because of its ecological importance and tiny genome is rapidly imposing itself as a model organism in environmental microbiology.

  5. Competitive advantages of Caedibacter-infected Paramecia.

    PubMed

    Kusch, Jürgen; Czubatinski, Lars; Wegmann, Silke; Hubner, Markus; Alter, Margret; Albrecht, Petra

    2002-03-01

    Intracellular bacteria of the genus Caedibacter limit the reproduction of their host, the freshwater ciliate Paramecium. Reproduction rates of infected strains of paramecia were significantly lower than those of genetically identical strains that had lost their parasites after treatment with an antibiotic. Interference competition occurs when infected paramecia release a toxic form of the parasitic bacterium that kills uninfected paramecia. In mixed cultures of infected and uninfected strains of either P. tetraurelia or P. novaurelia, the infected strains outcompeted the uninfected strains. Infection of new host paramecia seems to be rare. Infection of new hosts was not observed either in mixtures of infected with uninfected strains, or after incubation of paramecia with isolated parasites. The competitive advantages of the host paramecia, in combination with their vegetative reproduction, make infection of new hosts by the bacterial parasites unnecessary, and could be responsible for the continued existence of "killer paramecia" in nature. Caedibacter parasites are not a defensive adaptation. Feeding rates and reproduction of the predators Didinium nasutum (Ciliophora) and Amoeba proteus (Amoebozoa, Gymnamoebia) were not influenced by whether or not their paramecia prey were infected. Infection of the predators frequently occurred when they preyed on infected paramecia. Caedibacter-infected predators may influence competition between Paramecium strains by releasing toxic parasites into the environment that are harmful to uninfected strains.

  6. Vegetarian diets: what are the advantages?

    PubMed

    Leitzmann, Claus

    2005-01-01

    A growing body of scientific evidence indicates that wholesome vegetarian diets offer distinct advantages compared to diets containing meat and other foods of animal origin. The benefits arise from lower intakes of saturated fat, cholesterol and animal protein as well as higher intakes of complex carbohydrates, dietary fiber, magnesium, folic acid, vitamin C and E, carotenoids and other phytochemicals. Since vegetarians consume widely divergent diets, a differentiation between various types of vegetarian diets is necessary. Indeed, many contradictions and misunderstandings concerning vegetarianism are due to scientific data from studies without this differentiation. In the past, vegetarian diets have been described as being deficient in several nutrients including protein, iron, zinc, calcium, vitamin B12 and A, n-3 fatty acids and iodine. Numerous studies have demonstrated that the observed deficiencies are usually due to poor meal planning. Well-balanced vegetarian diets are appropriate for all stages of the life cycle, including children, adolescents, pregnant and lactating women, the elderly and competitive athletes. In most cases, vegetarian diets are beneficial in the prevention and treatment of certain diseases, such as cardiovascular disease, hypertension, diabetes, cancer, osteoporosis, renal disease and dementia, as well as diverticular disease, gallstones and rheumatoid arthritis. The reasons for choosing a vegetarian diet often go beyond health and well-being and include among others economical, ecological and social concerns. The influences of these aspects of vegetarian diets are the subject of the new field of nutritional ecology that is concerned with sustainable life styles and human development.

  7. Advantageous grain boundaries in iron pnictide superconductors

    PubMed Central

    Katase, Takayoshi; Ishimaru, Yoshihiro; Tsukamoto, Akira; Hiramatsu, Hidenori; Kamiya, Toshio; Tanabe, Keiichi; Hosono, Hideo

    2011-01-01

    High critical temperature superconductors have zero power consumption and could be used to produce ideal electric power lines. The principal obstacle in fabricating superconducting wires and tapes is grain boundaries—the misalignment of crystalline orientations at grain boundaries, which is unavoidable for polycrystals, largely deteriorates critical current density. Here we report that high critical temperature iron pnictide superconductors have advantages over cuprates with respect to these grain boundary issues. The transport properties through well-defined bicrystal grain boundary junctions with various misorientation angles (θGB) were systematically investigated for cobalt-doped BaFe2As2 (BaFe2As2:Co) epitaxial films fabricated on bicrystal substrates. The critical current density through bicrystal grain boundary (JcBGB) remained high (>1 MA cm−2) and nearly constant up to a critical angle θc of ∼9°, which is substantially larger than the θc of ∼5° for YBa2Cu3O7–δ. Even at θGB>θc, the decay of JcBGB was much slower than that of YBa2Cu3O7–δ. PMID:21811238

  8. Clinical advantages of carbon-ion radiotherapy

    NASA Astrophysics Data System (ADS)

    Tsujii, Hirohiko; Kamada, Tadashi; Baba, Masayuki; Tsuji, Hiroshi; Kato, Hirotoshi; Kato, Shingo; Yamada, Shigeru; Yasuda, Shigeo; Yanagi, Takeshi; Kato, Hiroyuki; Hara, Ryusuke; Yamamoto, Naotaka; Mizoe, Junetsu

    2008-07-01

    Carbon-ion radiotherapy (C-ion RT) possesses physical and biological advantages. It was started at NIRS in 1994 using the Heavy Ion Medical Accelerator in Chiba (HIMAC); since then more than 50 protocol studies have been conducted on almost 4000 patients with a variety of tumors. Clinical experiences have demonstrated that C-ion RT is effective in such regions as the head and neck, skull base, lung, liver, prostate, bone and soft tissues, and pelvic recurrence of rectal cancer, as well as for histological types including adenocarcinoma, adenoid cystic carcinoma, malignant melanoma and various types of sarcomas, against which photon therapy could be less effective. Furthermore, when compared with photon and proton RT, a significant reduction of overall treatment time and fractions has been accomplished without enhancing toxicities. Currently, the number of irradiation sessions per patient averages 13 fractions spread over approximately three weeks. This means that in a carbon therapy facility a larger number of patients than is possible with other modalities can be treated over the same period of time.

  10. Childhood eczema: disease of the advantaged?

    PubMed Central

    Williams, H. C.; Strachan, D. P.; Hay, R. J.

    1994-01-01

    OBJECTIVE--To determine whether the increased prevalence of childhood eczema in advantaged socioeconomic groups is due to increased parental reporting. DESIGN--Comparison of parental reports of eczema with visible eczema recorded by medical officers during a detailed physical examination. SETTING--National birth cohort study. SUBJECTS--8279 children from England, Wales, and Scotland born during 3-9 March 1958 and followed up at the ages of 7, 11, and 16. MAIN OUTCOME MEASURES--Prevalence of eczema according to parental report compared with medical officer's examination at the ages of 7, 11, and 16. RESULTS--Prevalence of both reported and examined eczema increased with rising social class at the ages of 7, 11, and 16 years. The point prevalence of examined eczema at age 7 was 4.8%, 3.6%, 3.6%, 2.4%, 2.2%, and 2.4% in social classes I, II, III non-manual, III manual, IV, and V respectively (χ² value for linear trend 12.6, P < 0.001). This trend persisted after adjustment for potential confounders such as region and family size and was not present for examined psoriasis or acne. CONCLUSIONS--Eczema is more prevalent among British schoolchildren in social classes I and II than those in lower classes. Exposures associated with social class are probably at least as important as genetic factors in the expression of childhood eczema. PMID:8173454

  11. The advantages and disadvantages of pacifier use.

    PubMed

    Cinar, Dede Nursan

    2004-01-01

    A powerful reflex of the infant in the weeks following birth is sucking. Breastfed babies benefit from both the nutrition in mother's milk and the satisfaction of their sucking instinct. Babies that cannot be breastfed for various reasons may satisfy their sucking instinct by using pacifiers. Pacifier use and digit sucking are believed to be harmless habits. In many places of the world, and especially in developing countries, pacifier use in early childhood is very common. It is said that pacifier use eases the baby and satisfies its sucking instinct. It has been reported in several studies that pacifier use reduces the risk of Sudden Infant Death Syndrome (SIDS). The most important risks of this non-nutritive sucking habit are failure of breastfeeding, dental deformities, recurrent acute otitis media, and the possibility of accidents. The development of latex allergy, tooth decay, oral ulcers and sleep disorders are other problems encountered with pacifier use. Parents may hesitate to use pacifiers for their babies and consult nurses or midwives on this issue. In this article, the advantages and disadvantages of pacifier use are discussed with the aim of providing guidance to nurses and midwives working in the field of pediatrics and infant health.

  12. Advantages of a leveled commitment contracting protocol

    SciTech Connect

    Sandholm, T.W.; Lesser, V.R.

    1996-12-31

    In automated negotiation systems consisting of self-interested agents, contracts have traditionally been binding. Such contracts do not allow agents to efficiently accommodate future events. Game theory has proposed contingency contracts to solve this problem. Among computational agents, contingency contracts are often impractical due to large numbers of interdependent and unanticipated future events to be conditioned on, and because some events are not mutually observable. This paper proposes a leveled commitment contracting protocol that allows self-interested agents to efficiently accommodate future events by having the possibility of unilaterally decommitting from a contract based on local reasoning. A decommitment penalty is assigned to both agents in a contract: to be freed from the contract, an agent only pays this penalty to the other party. It is shown through formal analysis of several contracting settings that this leveled commitment feature in a contracting protocol increases Pareto efficiency of deals and can make contracts individually rational when no full commitment contract can. This advantage holds even if the agents decommit manipulatively.

  13. Parallelization of the Red-Black Algorithm on Solving the Second-Order PN Transport Equation with the Hybrid Finite Element Method

    SciTech Connect

    Yaqi Wang; Cristian Rabiti; Giuseppe Palmiotti

    2011-06-01

    The Red-Black algorithm has been successfully applied to solving the second-order parity transport equation with the PN approximation in angle and the Hybrid Finite Element Method (HFEM) in space, i.e., the Variational Nodal Method (VNM) [1,2,3,4,5]. Any transport solution technique, including the Red-Black algorithm, needs to be parallelized in order to take advantage of the development of supercomputers with multiple processors for advanced modeling and simulation. To our knowledge, one attempt [6] was made to parallelize it, but it was devoted only to the z-axis planes in three-dimensional calculations. General parallelization of the Red-Black algorithm with spatial domain decomposition has not been reported in the literature. In this summary, we present our implementation of the parallelization of the Red-Black algorithm and its efficiency results.
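
    The red-black idea itself can be illustrated on a generic 5-point Laplace problem (our own stand-in, not the VNM transport solver): nodes are coloured like a checkerboard so that all nodes of one colour depend only on the other colour, which is what makes same-colour updates trivially concurrent and domain decomposition straightforward.

        # Hedged sketch: red-black sweeps on a 2-D grid; every same-colour update
        # is independent, so each colour pass could be executed in parallel.
        import numpy as np

        def red_black_sweeps(u, sweeps=100):
            for _ in range(sweeps):
                for color in (0, 1):                       # red pass, then black pass
                    for i in range(1, u.shape[0] - 1):
                        for j in range(1, u.shape[1] - 1):
                            if (i + j) % 2 == color:       # checkerboard colouring
                                u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                                  + u[i, j-1] + u[i, j+1])
            return u

        u = np.zeros((16, 16)); u[0, :] = 1.0              # fixed hot top boundary
        red_black_sweeps(u)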

  14. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  15. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  16. Is Concentrated Advantage the Cause? The Relative Contributions of Neighborhood Advantage and Disadvantage to Educational Inequality

    ERIC Educational Resources Information Center

    Johnson, Odis, Jr.

    2013-01-01

    Supported by persistent educational inequality and growth of the field of neighborhood effects research, this meta-analysis investigates the relative association of neighborhood advantage and disadvantage to educational outcomes; the consistency of associations across different educational indicators; and the moderating influence of model…

  17. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization formation scheme is proposed. It restrains bad performers effectively, based on consideration of the global experience of the evaluator, and evaluates the direct trust relation between two grid nodes accurately by consulting previous trust values rationally. It also improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake-transaction attacks and bad-mouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.
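
    The weighted combination of direct, recommended, and inter-organizational trust can be sketched in a few lines; the weights and the [0, 1] scaling below are illustrative assumptions of ours, not the paper's exact formula.

        # Hedged sketch: weighted aggregation of trust components.
        def vo_trust(direct, recommended, inter_org, w_direct=0.5, w_rec=0.3, w_org=0.2):
            """Each trust value lies in [0, 1]; the weights are assumed to sum to 1."""
            return w_direct * direct + w_rec * recommended + w_org * inter_org

        print(vo_trust(direct=0.9, recommended=0.6, inter_org=0.8))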

  18. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile ad hoc networks play an increasingly important part in disaster relief, on military battlefields and in scientific exploration. However, routing in such networks is difficult because of their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It details the optimal-route calculation method used by the protocol and the transmission mechanism for communication packets. By adding QoS constraints to the route calculation performed by the cuckoo search algorithm, the routes found can conform to specified bandwidth and time-delay requirements, and a balance can be obtained among computation cost, bandwidth and time delay. The NS2 simulation software is used to test the protocol's performance in three scenarios and to validate the feasibility and validity of the CSAODV protocol. The results show that the CSAODV routing protocol adapts better to changes in network topology than AODV, effectively improving the packet delivery fraction, reducing network transmission delay, reducing the extra load that control information places on the network, and improving routing efficiency.

  19. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties of a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
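
    For contrast with RMLD, a minimal sketch of the conventional Viterbi algorithm it improves upon (generic illustration; the trellis and metric here are toy choices, not the paper's construction):

      # Generic Viterbi decoder over an explicit trellis. transitions[t]
      # maps (state, symbol) -> next_state; the branch metric is the
      # squared Euclidean distance to the received value. Start state 0.
      def viterbi(received, states, transitions):
          INF = float("inf")
          metric = {s: (0.0 if s == 0 else INF) for s in states}
          path = {s: [] for s in states}
          for r, trans in zip(received, transitions):
              new_metric = {s: INF for s in states}
              new_path = {}
              for (s, sym), s_next in trans.items():
                  m = metric[s] + (r - sym) ** 2
                  if m < new_metric[s_next]:   # keep the survivor per state
                      new_metric[s_next] = m
                      new_path[s_next] = path[s] + [sym]
              metric = new_metric
              path = {s: new_path.get(s, []) for s in states}
          best = min(metric, key=metric.get)
          return path[best], metric[best]

      # Two-state toy trellis, repeated for 3 sections:
      trans = {(0, -1.0): 0, (0, 1.0): 1, (1, -1.0): 1, (1, 1.0): 0}
      print(viterbi([0.9, -1.1, 0.8], [0, 1], [trans] * 3))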

  20. [The precautionary principle: advantages and risks].

    PubMed

    Tubiana, M

    2001-04-01

    The extension of the precautionary principle to the field of healthcare is the social response to two demands of the population: improved health safety and the inclusion of an informed public in the decision-making process. The necessary balance between cost (treatment-induced risk) and benefit (therapeutic effect) underlies all healthcare decisions. An underestimation or an overestimation of cost, i.e. risk, is equally harmful in public healthcare. A vaccination should be prescribed when its beneficial effect outweighs its inevitable risk. Mandatory vaccination, such as in the case of the Hepatitis B virus, is a health policy requiring some courage because those who benefit will never be aware of its positive effect while those who are victims of the risk could resort to litigation. Defense against such accusations requires an accurate assessment of risk and benefit, which underlines the importance of expertise. Even within the framework of the precautionary principle, it is impossible to act without knowledge, or at least a plausible estimation, of expected effects. Recent affairs (blood contamination, transmissible spongiform encephalopathy transmitted by growth hormone, and the new variant of Creutzfeldt-Jakob disease) illustrate that in such cases the precautionary principle would have had limited impact and it is only when enough knowledge was available that effective action could be taken. Likewise, in current debates concerning the possible risks of electromagnetic fields, cellular phones and radon, research efforts must be given priority. The general public understands intuitively the concept of cost and benefit. For example, the possible health risks of oral contraceptives and hormone replacement therapy were not ignored, but the public has judged that their advantages justify the risk. Estimating risk and benefit and finding a balance between risk and preventive measures could help avoid the main drawbacks of the precautionary principle, i.e. inaction and refusal of

  1. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    SciTech Connect

    Chao, R.M.; Ko, S.H.; Lin, I.H.; Pai, F.S.; Chang, C.C.

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second-order polynomial formula, which converges faster than existing MPPT algorithms. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
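
    The quadratic step described above admits a compact sketch (illustrative only, not the authors' implementation): fit a parabola through three (duty cycle, power) samples and take its vertex as the next duty-cycle estimate.

      # Quadratic MPPT step: the vertex of the parabola through three
      # (duty, power) samples estimates the maximum power point.
      def quadratic_mppt_step(d, p):
          (d1, d2, d3), (p1, p2, p3) = d, p
          num = p1 * (d2**2 - d3**2) + p2 * (d3**2 - d1**2) + p3 * (d1**2 - d2**2)
          den = 2.0 * (p1 * (d2 - d3) + p2 * (d3 - d1) + p3 * (d1 - d2))
          if den == 0:           # samples are collinear; keep the best duty cycle
              return max(zip(p, d))[1]
          return num / den       # vertex of the fitted parabola

      # Example with a toy PV curve peaking at duty = 0.5:
      power = lambda duty: 1.0 - (duty - 0.5) ** 2
      d = [0.2, 0.4, 0.8]
      print(quadratic_mppt_step(d, [power(x) for x in d]))  # -> 0.5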

  2. Competitive advantage of PET/MRI.

    PubMed

    Jadvar, Hossein; Colletti, Patrick M

    2014-01-01

    Multimodality imaging has made great strides in the imaging evaluation of patients with a variety of diseases. Positron emission tomography/computed tomography (PET/CT) is now established as the imaging modality of choice in many clinical conditions, particularly in oncology. While the initial development of combined PET/magnetic resonance imaging (PET/MRI) was in the preclinical arena, hybrid PET/MR scanners are now available for clinical use. PET/MRI combines the unique features of MRI, including excellent soft tissue contrast, diffusion-weighted imaging, dynamic contrast-enhanced imaging, fMRI and other specialized sequences as well as MR spectroscopy, with the quantitative physiologic information that is provided by PET. Most evidence for the potential clinical utility of PET/MRI is based on studies performed with side-by-side comparison or software-fused MRI and PET images. Data on the distinctive utility of hybrid PET/MRI are rapidly emerging. There are potential competitive advantages of PET/MRI over PET/CT. In general, PET/MRI may be preferred over PET/CT where the unique features of MRI provide more robust imaging evaluation in certain clinical settings. The exact role and potential utility of simultaneous data acquisition in specific research and clinical settings will need to be defined. It may be that simultaneous PET/MRI will be best suited for clinical situations that are disease-specific, organ-specific, related to diseases of children, or for patients undergoing repeated imaging for whom the cumulative radiation dose must be kept as low as reasonably achievable. PET/MRI also offers interesting opportunities for the use of dual-modality probes. Upon clear definition of clinical utility, other important and practical issues related to the business operational model, clinical workflow and reimbursement will also be resolved.

  3. Searching for the Advantages of Virus Sex

    NASA Astrophysics Data System (ADS)

    Turner, Paul E.

    2003-02-01

    Sex (genetic exchange) is a nearly universal phenomenon in biological populations. But this is surprising given the costs associated with sex. For example, sex tends to break apart co-adapted genes, and sex causes a female to inefficiently contribute only half the genes to her offspring. Why then did sex evolve? One famous model poses that sex evolved to combat Muller's ratchet, the mutational load that accrues when harmful mutations drift to high frequencies in populations of small size. In contrast, the Fisher-Muller Hypothesis predicts that sex evolved to promote genetic variation that speeds adaptation in novel environments. Sexual mechanisms occur in viruses, which feature high rates of deleterious mutation and frequent exposure to novel or changing environments. Thus, confirmation of one or both hypotheses would shed light on the selective advantages of virus sex. Experimental evolution has been used to test these classic models in the RNA bacteriophage φ6, a virus that experiences sex via reassortment of its chromosomal segments. Empirical data suggest that sex might have originated in φ6 to assist in purging deleterious mutations from the genome. However, results do not support the idea that sex evolved because it provides beneficial variation in novel environments. Rather, experiments show that too much sex can be bad for φ6: promiscuity allows selfish viruses to evolve and spread their inferior genes to subsequent generations. Here I discuss various explanations for the evolution of segmentation in RNA viruses, and the added cost of sex when large numbers of viruses co-infect the same cell.

  4. 2014: Rural Medicare Advantage Enrollment Update.

    PubMed

    Kemper, Leah; Barker, Abigail; McBride, Timothy; Mueller, Keith

    2015-01-01

    Key Data Findings. (1) Reclassification of rural and urban county designations (due to the switch from 2000 census data to 2010 census data) resulted in a 10 percent decline in the number of Medicare eligible Americans living in rural counties in 2014 (from roughly 10.7 million to 9.6 million). These changes also resulted in a decline in the number of MA enrollees considered to be living in a rural area, from 2.19 million to 1.95 million. However, the percentage of Medicare beneficiaries enrolled in MA and prepaid plans in rural areas declined only slightly from 20.6 percent to 20.3 percent. (2) Rural Medicare Advantage (MA) and other prepaid plan enrollment in March 2014 was nearly 1.95 million, or 20.3 percent of all rural Medicare beneficiaries, an increase of more than 216,000 from March 2013. Enrollment increased to 1.99 million (20.4 percent) in October 2014. (3) In March 2014, 56 percent of rural MA enrollees were enrolled in Preferred Provider Organization (PPO) plans, 29 percent were enrolled in Health Maintenance Organization (HMO) or Point-of-Service (POS) plans, 7 percent were enrolled in Private Fee-for-Service (PFFS) plans, and 8 percent were enrolled in other prepaid plans, including Cost plans and Program of All-Inclusive Care for the Elderly (PACE) plans. (4) States with the highest percentage of rural Medicare beneficiaries enrolled in MA and other prepaid plans include Minnesota (49.1 percent), Hawaii (41.1 percent), Pennsylvania (35.4 percent), Wisconsin (34.3 percent), New York (30.4 percent), and Ohio (30.1 percent).

  5. Copper-phosphorus alloys offer advantages in brazing copper

    SciTech Connect

    Rupert, W.D.

    1996-05-01

    Copper-phosphorus brazing alloys are used extensively for joining copper, especially refrigeration and air-conditioning copper tubing and electrical conductors. What is the effect of phosphorus when alloyed with copper? The following are some of the major effects: (1) It lowers the melt temperature of copper (a temperature depressant). (2) It increases the fluidity of the copper when in the liquid state. (3) It acts as a deoxidant or a fluxing agent with copper. (4) It lowers the ductility of copper (embrittles). There is a misconception that silver improves the ductility of the copper-phosphorus alloys. In reality, silver added to copper acts in a similar manner as phosphorus. The addition of silver to copper lowers the melt temperature (temperature depressant) and decreases the ductility. Fortunately, the rate and amount at which silver lowers copper ductility is significantly less than that of phosphorus. Therefore, taking advantage of the temperature depressant property of silver, a Ag-Cu-P alloy can be selected at approximately the same melt temperature as a Cu-P alloy, but at a lower phosphorus content. The lowering of the phosphorus content actually makes the alloy more ductile, not the silver addition. A major advantage of the copper-phosphorus alloys is the self-fluxing characteristic when joining copper to copper. They may also be used with the addition of a paste flux on brass, bronze, and specialized applications on silver, tungsten and molybdenum. Whether it is selection of the proper BCuP alloy or troubleshooting an existing problem, the suggested approach is a review of the desired phosphorus content in the liquid metal and how it is being altered during application. In torch brazing, a slight change in the oxygen-fuel ratio can affect the joint quality or leak tightness.

  6. Virtual online consultations: advantages and limitations (VOCAL) study

    PubMed Central

    Greenhalgh, Trisha; Vijayaraghavan, Shanti; Wherton, Joe; Shaw, Sara; Byrne, Emma; Campbell-Richards, Desirée; Bhattacharya, Satya; Hanson, Philippa; Ramoutar, Seendy; Gutteridge, Charles; Hodkinson, Isabel; Collard, Anna; Morris, Joanne

    2016-01-01

    Introduction Remote video consultations between clinician and patient are technically possible and increasingly acceptable. They are being introduced in some settings alongside (and occasionally replacing) face-to-face or telephone consultations. Methods To explore the advantages and limitations of video consultations, we will conduct in-depth qualitative studies of real consultations (microlevel) embedded in an organisational case study (mesolevel), taking account of national context (macrolevel). The study is based in 2 contrasting clinical settings (diabetes and cancer) in a National Health Service (NHS) acute trust in London, UK. Main data sources are: microlevel—audio, video and screen capture to produce rich multimodal data on 45 remote consultations; mesolevel—interviews, ethnographic observations and analysis of documents within the trust; macrolevel—key informant interviews of national-level stakeholders and document analysis. Data will be analysed and synthesised using a sociotechnical framework developed from structuration theory. Ethics approval City Road and Hampstead NHS Research Ethics Committee, 9 December 2014, reference 14/LO/1883. Planned outputs We plan outputs for 5 main audiences: (1) academics: research publications and conference presentations; (2) service providers: standard operating procedures, provisional operational guidance and key safety issues; (3) professional bodies and defence societies: summary of relevant findings to inform guidance to members; (4) policymakers: summary of key findings; (5) patients and carers: ‘what to expect in your virtual consultation’. Discussion The research literature on video consultations is sparse. Such consultations offer potential advantages to patients (who are spared the cost and inconvenience of travel) and the healthcare system (eg, they may be more cost-effective), but fears have been expressed that they may be clinically risky and/or less acceptable to patients or staff, and they

  7. Taking centre stage...

    NASA Astrophysics Data System (ADS)

    1998-11-01

    HAMLET (Highly Automated Multimedia Light Enhanced Theatre) was the star performance at the recent finals of the `Young Engineer for Britain' competition, held at the Commonwealth Institute in London. This state-of-the-art computer-controlled theatre lighting system won the title `Young Engineers for Britain 1998' for David Kelnar, Jonathan Scott, Ramsay Waller and John Wyllie (all aged 16) from Merchiston Castle School, Edinburgh. HAMLET replaces conventional manually-operated controls with a special computer program, and should find use in the thousands of small theatres, schools and amateur drama productions that operate with limited resources and without specialist expertise. The four students received a £2500 prize between them, along with £2500 for their school, and in addition they were invited to spend a special day with the Royal Engineers. A project designed to improve car locking systems enabled Ian Robinson of Durham University to take the `Working in industry award' worth £1000. He was also given the opportunity of a day at sea with the Royal Navy. Other prizewinners with their projects included: Jun Baba of Bloxham School, Banbury (a cardboard armchair which converts into a desk and chair); Kobika Sritharan and Gemma Hancock, Bancroft's School, Essex (a rain warning system for a washing line); and Alistair Clarke, Sam James and Ruth Jenkins, Bishop of Llandaff High School, Cardiff (a mechanism to open and close the retractable roof of the Millennium Stadium in Cardiff). The two principal national sponsors of the competition, which is organized by the Engineering Council, are Lloyd's Register and GEC. Industrial companies, professional engineering institutions and educational bodies also provided national and regional prizes and support. During this year's finals, various additional activities took place, allowing the students to surf the Internet and navigate individual engineering websites on a network of computers. They also visited the

  8. Computational Discovery of Materials Using the Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Avendaño-Franco, Guillermo; Romero, Aldo

    Our current ability to model physical phenomena accurately, increased computational power, and better algorithms are the driving forces behind the computational discovery and design of novel materials, allowing for virtual characterization before their realization in the laboratory. We present the implementation of a novel firefly algorithm, a population-based algorithm for global optimization, for searching the structure/composition space. This computation-intensive approach naturally takes advantage of concurrency and targeted exploration while still keeping enough diversity. We apply the new method to both periodic and non-periodic structures, and we present the implementation challenges and the solutions used to improve efficiency. The implementation makes use of computational materials databases and network analysis to optimize the search and to gain insight into the geometric structure of local minima on the energy landscape. The method has been implemented in our software PyChemia, an open-source package for materials discovery. We acknowledge the support of DMREF-NSF 1434897 and the Donors of the American Chemical Society Petroleum Research Fund for partial support of this research under Contract 54075-ND10.
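
    A minimal sketch of the core firefly update rule (the generic algorithm, not the PyChemia implementation; all parameter values are illustrative):

      import numpy as np

      # Each firefly moves toward every brighter one (lower cost), with
      # attractiveness decaying with squared distance, plus a random kick.
      def firefly_minimize(f, dim, n=20, iters=100, beta0=1.0, gamma=1.0,
                           alpha=0.2, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-1, 1, (n, dim))
          val = np.array([f(p) for p in x])
          for _ in range(iters):
              for i in range(n):
                  for j in range(n):
                      if val[j] < val[i]:                 # j is brighter
                          r2 = np.sum((x[i] - x[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(0, 0.1, dim)
                          val[i] = f(x[i])
              alpha *= 0.97                               # cool the random walk
          best = np.argmin(val)
          return x[best], val[best]

      print(firefly_minimize(lambda p: np.sum(p ** 2), dim=3))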

  9. Simplified calculation of distance measure in DP algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Tao; Ren, Xian-yi; Lu, Yu-ming

    2014-01-01

    The distance measure of a point to a segment is one of the determinants that affect the efficiency of the DP (Douglas-Peucker) polyline simplification algorithm. A zone-divided distance measure, instead of only the perpendicular distance, was proposed by Dan Sunday [1] to improve on the original DP algorithm. A new, efficient zone-divided distance measure method is proposed in this paper. Firstly, a rotated coordinate system is established based on the two endpoints of the curve. Secondly, the new coordinate value in the rotated system is computed for each point. Finally, the new coordinate values are used to divide points into three zones and to calculate the distance: Manhattan distance is adopted in zones I and III, perpendicular distance in zone II. Compared with Dan Sunday's method, the proposed method takes full advantage of the computation results for the previous point. The amount of calculation stays essentially the same for points in zones I and III, and is reduced significantly for points in zone II, which account for the highest proportion. Experimental results show that the proposed distance measure method improves the efficiency of the original DP algorithm.
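
    A sketch of the zone-divided measure as described above (assuming a non-degenerate segment; illustrative, not the authors' code):

      import math

      # Rotate coordinates so the segment (a, b) lies on the x-axis, then
      # classify each point by its rotated x: zone I (before a), zone II
      # (between), zone III (beyond b). Manhattan distance in zones I and
      # III, perpendicular distance (|rotated y|) in zone II.
      def zone_distance(p, a, b):
          dx, dy = b[0] - a[0], b[1] - a[1]
          length = math.hypot(dx, dy)          # assumes a != b
          c, s = dx / length, dy / length      # rotation to the segment frame
          rx = c * (p[0] - a[0]) + s * (p[1] - a[1])
          ry = -s * (p[0] - a[0]) + c * (p[1] - a[1])
          if rx < 0:                           # zone I: Manhattan to endpoint a
              return abs(rx) + abs(ry)
          if rx > length:                      # zone III: Manhattan to endpoint b
              return (rx - length) + abs(ry)
          return abs(ry)                       # zone II: perpendicular distance

      print(zone_distance((2.0, 3.0), (0.0, 0.0), (10.0, 0.0)))  # zone II -> 3.0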

  10. An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies

    PubMed Central

    Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia

    2015-01-01

    The differential evolution algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE suffers from premature convergence, especially DE/best/1/bin. In order to take advantage of the direction guidance information of the best individual of DE/best/1/bin while avoiding local traps, an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE, is proposed in this paper. The EDE algorithm integrates an initialization technique, opposition-based learning initialization, for improving the initial solution quality; a new combined mutation strategy composed of DE/current/1/bin together with DE/pbest/bin/1, for the sake of accelerating standard DE and preventing DE from clustering around the global best individual; and a perturbation scheme for further avoiding premature convergence. In addition, we also introduce two linear time-varying functions, which are used to decide which solution search equation is chosen at the phases of mutation and perturbation, respectively. Experimental results on twenty-five benchmark functions show that EDE is far better than standard DE. In further comparisons with five other state-of-the-art approaches, EDE is still superior to, or at least equal to, these methods on most of the benchmark functions. PMID:26609304
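
    For reference, a sketch of the standard DE/best/1/bin operators that EDE builds on (generic differential evolution, not the EDE variant itself):

      import numpy as np

      # One generation of DE/best/1 mutation with binomial crossover:
      # mutant = best + F * (x_r1 - x_r2); each trial gene comes from the
      # mutant with probability CR (one gene forced from the mutant).
      def de_best_1_bin(pop, fitness, F=0.5, CR=0.9, rng=None):
          rng = rng or np.random.default_rng()
          n, dim = pop.shape
          best = pop[np.argmin(fitness)]
          trial = pop.copy()
          for i in range(n):
              r1, r2 = rng.choice([k for k in range(n) if k != i], 2, replace=False)
              mutant = best + F * (pop[r1] - pop[r2])
              mask = rng.random(dim) < CR
              mask[rng.integers(dim)] = True    # force at least one mutant gene
              trial[i] = np.where(mask, mutant, pop[i])
          return trial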

  11. Advantage of support vector machine for neural spike train decoding under spike sorting errors.

    PubMed

    Hwan Kim, Kyung; Shin Kim, Sung; June Kim, Sung

    2005-01-01

    Decoding of kinematic variables from neuronal spike trains is important for neuroprosthetic devices. The spike trains from single units must be extracted from extracellular neural signals, so a spike detection and sorting procedure is essential. Since the spike detection and sorting procedure may yield considerable errors, the decoding algorithm should be robust against spike train errors. Here we showed that spike train decoding algorithms employing a nonlinear mapping, especially the support vector machine (SVM), may be more advantageous, contrary to the conventional belief that a linear filter is sufficient. The advantage became more conspicuous with erroneous spike trains. Using the SVM, satisfactory performance could be obtained much more easily than with the multilayer perceptron employed in previous studies. The results suggest the possibility of a neuroprosthetic device with a low-quality spike sorting preprocessor.

  12. Is a Complex-Valued Stepsize Advantageous in Complex-Valued Gradient Learning Algorithms?

    PubMed

    Zhang, Huisheng; Mandic, Danilo P

    2016-12-01

    Complex gradient methods have been widely used in learning theory, and typically aim to optimize real-valued functions of complex variables. The stepsize of complex gradient learning methods (CGLMs) is a positive number, and little is known about how a complex stepsize would affect the learning process. To this end, we undertake a comprehensive analysis of CGLMs with a complex stepsize, including the search space, convergence properties, and the dynamics near critical points. Furthermore, several adaptive stepsizes are derived by extending the Barzilai-Borwein method to the complex domain, in order to show that the complex stepsize is superior to the corresponding real one in approximating the information in the Hessian. A numerical example is presented to support the analysis.
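
    A sketch of the idea of a complex Barzilai-Borwein stepsize (a hypothetical illustration based on one of the two classical BB formulas; the paper's derivation may differ in detail):

      import numpy as np

      # With s = w_k - w_{k-1} and y = g_k - g_{k-1}, the BB stepsize
      # mu = <y, s> / <y, y> becomes a complex number when the iterates
      # and gradients are complex (illustrative only).
      def complex_bb_stepsize(w_prev, w_curr, g_prev, g_curr):
          s = w_curr - w_prev
          y = g_curr - g_prev
          return np.vdot(y, s) / np.vdot(y, y)   # complex in general

      w0 = np.array([1 + 1j, 2 - 1j]); w1 = w0 - 0.1 * np.array([0.5j, 1.0])
      g0 = np.array([0.5j, 1.0]);      g1 = np.array([0.4j, 0.9 + 0.1j])
      print(complex_bb_stepsize(w0, w1, g0, g1))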

  13. Tightly Coupled Multiphysics Algorithm for Pebble Bed Reactors

    SciTech Connect

    HyeongKae Park; Dana Knoll; Derek Gaston; Richard Martineau

    2010-10-01

    We have developed a tightly coupled multiphysics simulation tool for the pebble-bed reactor (PBR) concept, a type of Very High-Temperature gas-cooled Reactor (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation Environment (MOOSE) library and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly with a Newton-based approach. Expensive Jacobian matrix formation is alleviated via the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to minimize Krylov iterations. Motivation for the work is provided via analysis and numerical experiments on simpler multiphysics reactor models. We then provide details of the physical models and numerical methods in PRONGHORN. Finally, PRONGHORN's algorithmic capability is demonstrated on a number of PBR test cases.
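
    The key trick of the Jacobian-free Newton-Krylov method mentioned above is that the Krylov solver needs only Jacobian-vector products, which can be approximated by differencing the residual; a generic sketch (not PRONGHORN code):

      import numpy as np

      # J(u) @ v is approximated as (F(u + h v) - F(u)) / h for a small h;
      # no Jacobian matrix is ever formed.
      def jacobian_vector_product(F, u, v, eps=1e-7):
          norm_v = np.linalg.norm(v)
          if norm_v == 0.0:
              return np.zeros_like(v)
          h = eps * max(1.0, np.linalg.norm(u)) / norm_v
          return (F(u + h * v) - F(u)) / h

      # Example on F(u) = [u0^2 - 2, u0*u1 - 3] at u = (1, 1), v = (1, 0):
      F = lambda u: np.array([u[0] ** 2 - 2.0, u[0] * u[1] - 3.0])
      print(jacobian_vector_product(F, np.array([1.0, 1.0]), np.array([1.0, 0.0])))
      # approx [2, 1], i.e. J(u) @ v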

  14. Cropping and noise resilient steganography algorithm using secret image sharing

    NASA Astrophysics Data System (ADS)

    Juarez-Sandoval, Oswaldo; Fierro-Radilla, Atoany; Espejel-Trujillo, Angelina; Nakano-Miyatake, Mariko; Perez-Meana, Hector

    2015-03-01

    This paper proposes an image steganography scheme in which a secret image is hidden in a cover image using a secret image sharing (SIS) scheme. Taking advantage of the fault-tolerant property of the (k,n)-threshold SIS, in which any k of the n shares (k≤n) recover the secret data without any ambiguity, the proposed steganography algorithm becomes resilient to cropping and impulsive noise contamination. Among the many SIS schemes proposed until now, Lin and Chan's scheme is selected as the SIS, due to its lossless recovery capability for a large amount of secret data. The proposed scheme is evaluated from several points of view, such as imperceptibility of the stegoimage with respect to its original cover image and robustness of the hidden data to cropping and impulsive noise contamination. The evaluation results show a high quality of the extracted secret image from the stegoimage even when it has suffered more than 20% cropping or high-density noise contamination.
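
    A sketch of a (k,n)-threshold sharing step in the spirit of the SIS used above (Shamir-style over a prime field; Lin and Chan's scheme differs in detail, e.g. in how pixel data are packed):

      import random

      P = 257  # small prime field, enough for one byte of secret data

      # Shares are points on a random degree-(k-1) polynomial whose
      # constant term is the secret; any k shares reconstruct it.
      def make_shares(secret, k, n):
          coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
          def poly(x):
              return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
          return [(x, poly(x)) for x in range(1, n + 1)]

      def recover(shares):
          # Lagrange interpolation at x = 0
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num = den = 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P
          return secret

      shares = make_shares(123, k=3, n=5)
      print(recover(shares[:3]), recover(shares[2:5]))  # 123 123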

  15. Calculating Home Advantage in the First Decade of the 21th Century UEFA Soccer Leagues

    PubMed Central

    García, Miguel Saavedra; Aguilar, Óscar Gutiérrez; Marques, Paulo Sa; Tobío, Gabriel Torres; Fernández Romero, Juan J.

    2013-01-01

    Home advantage has been studied in different sports, establishing its existence and its possible causes. This article analyzes the home advantage in soccer leagues of UEFA countries in the first part of the 21st century. The sample of 52 countries monitored during a period of 10 years allows us to study 520 leagues and 111,030 matches of the highest level in each country associated with UEFA. Home advantage exists and is significant in 32 of the 52 UEFA countries, where it equals 55.6%. A decrease can be observed in the tendency towards home advantage between the years 2000 and 2010. Values between 55 and 56 were observed for home advantage in the top ten leagues in Europe. It has also been observed that home advantage depends on the level of the league evaluated using UEFA’s 2010/11 Country coefficients. The home advantage is calculated taking into account the teams’ position and the points obtained in each of the leagues. A direct relationship was observed with the number of points gained and an inverse relationship was observed with the team position. PMID:24235990

  17. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per
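
    A minimal sketch of the data transform discussed above, recasting a complex EM datum as log-amplitude and phase with first-order error propagation (illustrative; the paper's error treatment is more careful):

      import numpy as np

      # For a complex datum Z with amplitude uncertainty sigma_Z:
      # d(ln|Z|) = d|Z| / |Z|, and the phase error in radians is of the
      # same relative size to first order.
      def to_log_amp_phase(Z, sigma_Z):
          amp = np.abs(Z)
          return np.log(amp), np.angle(Z), sigma_Z / amp, sigma_Z / amp

      print(to_log_amp_phase(3.0 + 4.0j, 0.5))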

  18. The advantage of knowing the talker

    PubMed Central

    Souza, Pamela; Gehani, Namita; Wright, Richard; McCloy, Daniel

    2013-01-01

    Background Many audiologists have observed a situation where a patient appears to understand something spoken by his/her spouse or a close friend but not the same information spoken by a stranger. However, it is not clear whether this observation reflects choice of communication strategy or a true benefit derived from the talker’s voice. Purpose The current study measured the benefits of long-term talker familiarity for older individuals with hearing impairment in a variety of listening situations. Research Design In Experiment 1, we measured speech recognition with familiar and unfamiliar voices when the difficulty level was manipulated by varying levels of a speech-shaped background noise. In Experiment 2, we measured the benefit of a familiar voice when the background noise was other speech (informational masking). Study Sample A group of 31 older listeners with high-frequency sensorineural hearing loss participated in the study. Fifteen of the participants served as talkers, and sixteen as listeners. In each case, the talker-listener pair for the familiar condition represented a close, long-term relationship (spouse or close friend). Data Collection and Analysis Speech-recognition scores were compared using controlled stimuli (low-context sentences) recorded by the study talkers. The sentences were presented in quiet and in two levels of speech-spectrum noise (Experiment 1) as well as in multitalker babble (Experiment 2). Repeated-measures analysis of variance was used to compare performance between the familiar and unfamiliar talkers, within and across conditions. Results Listeners performed better when speech was produced by a talker familiar to them, whether that talker was in a quiet or noisy environment. The advantage of the familiar talker was greater in a more adverse listening situation (i.e., in the highest level of background noise), but was similar for speech-spectrum noise and multi-talker babble. Conclusions The present data support a frequent

  19. The competitive advantage of corporate philanthropy.

    PubMed

    Porter, Michael E; Kramer, Mark R

    2002-12-01

    When it comes to philanthropy, executives increasingly see themselves as caught between critics demanding ever higher levels of "corporate social responsibility" and investors applying pressure to maximize short-term profits. In response, many companies have sought to make their giving more strategic, but what passes for strategic philanthropy is almost never truly strategic, and often isn't particularly effective as philanthropy. Increasingly, philanthropy is used as a form of public relations or advertising, promoting a company's image through high-profile sponsorships. But there is a more truly strategic way to think about philanthropy. Corporations can use their charitable efforts to improve their competitive context--the quality of the business environment in the locations where they operate. Using philanthropy to enhance competitive context aligns social and economic goals and improves a company's long-term business prospects. Addressing context enables a company to not only give money but also leverage its capabilities and relationships in support of charitable causes. This produces social benefits far exceeding those provided by individual donors, foundations, or even governments. Taking this new direction requires fundamental changes in the way companies approach their contribution programs. For example, philanthropic investments can improve education and local quality of life in ways that will benefit the company. Such investments can also improve the company's competitiveness by contributing to expanding the local market and helping to reduce corruption in the local business environment. Adopting a context-focused approach goes against the grain of current philanthropic practice, and it requires a far more disciplined approach than is prevalent today. But it can make a company's philanthropic activities far more effective.

  20. Flutter signal extracting technique based on FOG and self-adaptive sparse representation algorithm

    NASA Astrophysics Data System (ADS)

    Lei, Jian; Meng, Xiangtao; Xiang, Zheng

    2016-10-01

    Due to various moving parts inside, when a spacecraft runs in orbit its structure can undergo minor angular vibration, which results in blurred images from the space camera. Thus, image compensation techniques are required to eliminate or alleviate the effect of the movement on image formation, and precise measurement of the flutter angle is necessary. Owing to advantages such as high sensitivity, broad bandwidth, simple structure and the absence of internal mechanical moving parts, a FOG (fiber optic gyro) is adopted in this study to measure minor angular vibration; the movement that degrades the image is then obtained by calculation. The idea of the movement-information extraction algorithm based on self-adaptive sparse representation is to use an arctangent function approximating the L0 norm to construct an unconstrained sparse reconstruction model for the noisy signal, and then to solve the model with a method based on the steepest descent and BFGS algorithms to estimate the sparse signal. Then, taking advantage of the principle that random noise cannot be represented by a linear combination of dictionary elements, the useful signal and the random noise are separated effectively. Because the main interference of minor angular vibration with the image formation of a space camera is random noise, the sparse representation algorithm can extract the useful information to a large extent and serves as a fitting pre-processing method for image restoration. The self-adaptive sparse representation algorithm presented in this paper is used to process the measured minor-angular-vibration signal of the FOG used by a certain spacecraft. Component analysis of the processing results shows that the algorithm extracts the micro angular vibration signal of the FOG precisely and effectively, achieving a precision of 0.1".
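
    A sketch of the arctangent smoothing of the L0 norm mentioned above, solved here with plain gradient descent rather than the paper's steepest descent/BFGS combination (all parameter values are illustrative):

      import numpy as np

      # Minimize ||y - A x||^2 + lam * sum(atan(x^2 / sigma)): the atan
      # term is a smooth surrogate for counting nonzeros (the L0 norm).
      def smoothed_l0_recover(A, y, lam=0.1, sigma=0.01, lr=0.01, iters=2000):
          x = A.T @ y                      # least-squares-flavoured start
          for _ in range(iters):
              grad_fit = 2 * A.T @ (A @ x - y)
              grad_pen = lam * (2 * x / sigma) / (1 + (x ** 2 / sigma) ** 2)
              x -= lr * (grad_fit + grad_pen)
          return x

      rng = np.random.default_rng(0)
      A = rng.normal(size=(20, 50)) / np.sqrt(20)
      x_true = np.zeros(50); x_true[[3, 17, 31]] = [1.0, -0.8, 0.5]
      y = A @ x_true
      print(np.round(smoothed_l0_recover(A, y), 2)[[3, 17, 31]])  # recovered support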

  1. Halftoning and Image Processing Algorithms

    DTIC Science & Technology

    1999-02-01

    screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ...image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to...image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our

  2. SR-71 Taking Off

    NASA Technical Reports Server (NTRS)

    1990-01-01

    One of three U.S. Air Force SR-71 reconnaissance aircraft originally retired from operational service and loaned to NASA for a high-speed research program retracts its landing gear after taking off from NASA's Ames-Dryden Flight Research Facility (later Dryden Flight Research Center), Edwards, California, on a 1990 research flight. One of the SR-71As was later returned to the Air Force for active duty in 1995. Data from the SR-71 high-speed research program will be used to aid designers of future supersonic/hypersonic aircraft and propulsion systems. Two SR-71 aircraft have been used by NASA as testbeds for high-speed and high-altitude aeronautical research. The aircraft, an SR-71A and an SR-71B pilot trainer aircraft, have been based here at NASA's Dryden Flight Research Center, Edwards, California. They were transferred to NASA after the U.S. Air Force program was cancelled. As research platforms, the aircraft can cruise at Mach 3 for more than one hour. For thermal experiments, this can produce heat soak temperatures of over 600 degrees Fahrenheit (F). This operating environment makes these aircraft excellent platforms to carry out research and experiments in a variety of areas -- aerodynamics, propulsion, structures, thermal protection materials, high-speed and high-temperature instrumentation, atmospheric studies, and sonic boom characterization. The SR-71 was used in a program to study ways of reducing sonic booms, or overpressures, that are heard on the ground, much like sharp thunderclaps, when an aircraft exceeds the speed of sound. Data from this Sonic Boom Mitigation Study could eventually lead to aircraft designs that would reduce the 'peak' overpressures of sonic booms and minimize the startling effect they produce on the ground. One of the first major experiments to be flown in the NASA SR-71 program was a laser air data collection system. It used laser light instead of air pressure to produce airspeed and attitude reference data, such as angle of

  3. Modified hyperspheres algorithm to trace homotopy curves of nonlinear circuits composed by piecewise linear modelled devices.

    PubMed

    Vazquez-Leal, H; Jimenez-Fernandez, V M; Benhammouda, B; Filobello-Nino, U; Sarmiento-Reyes, A; Ramirez-Pinero, A; Marin-Hernandez, A; Huerta-Chua, J

    2014-01-01

    We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation.

  4. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
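
    A generic sketch of the kind of pivot modification such codes employ (illustrative; actual interior-point implementations work on sparse matrices and differ in detail): tiny or negative pivots, which appear when the matrix is ill conditioned, are replaced by a very large value so the factorization completes and the offending directions are suppressed.

      import numpy as np

      # Dense Cholesky-Banachiewicz factorization with a pivot safeguard.
      def modified_cholesky(A, tiny=1e-30, big=1e128):
          A = np.asarray(A, dtype=float)
          n = A.shape[0]
          L = np.zeros_like(A)
          for j in range(n):
              d = A[j, j] - np.dot(L[j, :j], L[j, :j])
              if d <= tiny:
                  d = big          # replace the bad pivot with a huge value
              L[j, j] = np.sqrt(d)
              for i in range(j + 1, n):
                  L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
          return L

      A = np.array([[4.0, 2.0], [2.0, 1e-16]])   # nearly singular matrix
      print(modified_cholesky(A))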

  5. Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sémon, P.; Yee, Chuck-Hou; Haule, Kristjan; Tremblay, A.-M. S.

    2014-08-01

    The solution of a generalized impurity model lies at the heart of electronic structure calculations with dynamical mean field theory. In the strongly correlated regime, the method of choice for solving the impurity model is the hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB). Enhancements to the CT-HYB algorithm are critical for bringing new physical regimes within reach of current computational power. Taking advantage of the fact that the bottleneck in the algorithm is a product of hundreds of matrices, we present optimizations based on the introduction and combination of two concepts of more general applicability: (a) skip lists and (b) fast rejection of proposed configurations based on matrix bounds. Considering two very different test cases with d electrons, we find speedups of ~25 up to ~500 compared to the direct evaluation of the matrix product. Even larger speedups are likely with f electron systems and with clusters of correlated atoms.
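
    Concept (b) can be sketched in a few lines (a simplified illustration; the paper combines much sharper bounds with skip lists over cached partial products):

      import numpy as np

      # Before paying for the full product of many matrices, bound the
      # weight via |trace(M1...Mk)| <= n * prod ||Mi||_2 and reject the
      # Monte Carlo proposal early when even that bound cannot win.
      def metropolis_accept(matrices, threshold, rng=None):
          rng = rng or np.random.default_rng()
          u = rng.random()
          n = matrices[0].shape[0]
          bound = n * np.prod([np.linalg.norm(M, 2) for M in matrices])
          if bound < u * threshold:            # cheap bound already loses
              return False
          weight = abs(np.trace(np.linalg.multi_dot(matrices)))  # expensive
          return weight >= u * threshold

      mats = [0.5 * np.eye(2) for _ in range(10)]
      print(metropolis_accept(mats, threshold=1.0))  # almost surely rejected cheaply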

  6. Reptation quantum Monte Carlo algorithm for lattice Hamiltonians with a directed-update scheme.

    PubMed

    Carleo, Giuseppe; Becca, Federico; Moroni, Saverio; Baroni, Stefano

    2010-10-01

    We provide an extension to lattice systems of the reptation quantum Monte Carlo algorithm, originally devised for continuous Hamiltonians. For systems affected by the sign problem, a method to systematically improve upon the so-called fixed-node approximation is also proposed. The generality of the method, which also takes advantage of a canonical worm algorithm scheme to measure off-diagonal observables, makes it applicable to a vast variety of quantum systems and eases the study of their ground-state and excited-state properties. As a case study, we investigate the quantum dynamics of the one-dimensional Heisenberg model and we provide accurate estimates of the ground-state energy of the two-dimensional fermionic Hubbard model.

  7. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    PubMed Central

    Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.

    2014-01-01

    We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves meant to replace the multidimensional interpolation and fine tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight line equation. PMID:25184157

  8. Distributed algorithms for small vehicle detection, classification, and velocity estimation using unattended ground sensors

    NASA Astrophysics Data System (ADS)

    Doser, Adele B.; Yee, Mark L.; O'Rourke, William T.; Slinkard, Megan E.; Craft, David C.; Nguyen, Hung D.

    2005-05-01

    This study developed a distributed vehicle target detection and estimation capability using two algorithmic approaches designed to take advantage of the capabilities of networked sensor systems. The primary interest was in small, quiet vehicles, such as personally owned SUVs and light trucks. The first algorithmic approach utilized arrayed-sensor beamforming techniques; in addition, it demonstrated a capability to find the locations of unknown roads by extending code developed by the Army Acoustic Center for Excellence at Picatinny Arsenal. The second approach utilized single (non-array) sensors and employed generalized correlation techniques. Modifications to both techniques were suggested that, if implemented, could yield robust methods for target classification and tracking using two different types of networked sensor systems.
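
    As an illustration of the generalized correlation techniques mentioned for the single-sensor approach, a GCC-PHAT time-delay estimate between two signals can be sketched as follows (generic method, not the fielded code):

      import numpy as np

      # Generalized cross-correlation with PHAT weighting: whiten the
      # cross spectrum so only phase (i.e. delay) information remains,
      # then pick the lag of the correlation peak.
      def gcc_phat_delay(sig, ref, fs):
          n = len(sig) + len(ref)
          R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
          R /= np.maximum(np.abs(R), 1e-12)          # PHAT weighting
          cc = np.fft.irfft(R, n=n)
          shift = np.argmax(np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1])))
          return (shift - n // 2) / fs               # delay in seconds

      fs = 1000
      t = np.arange(0, 1, 1 / fs)
      ref = np.sin(2 * np.pi * 37 * t) * np.exp(-t)
      sig = np.roll(ref, 25)                          # 25-sample (25 ms) delay
      print(gcc_phat_delay(sig, ref, fs))             # approx 0.025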

  9. An architecture for the efficient implementation of compressive sampling reconstruction algorithms in reconfigurable hardware

    NASA Astrophysics Data System (ADS)

    Ortiz, Fernando E.; Kelmelis, Eric J.; Arce, Gonzalo R.

    2007-04-01

    According to the Shannon-Nyquist theory, the number of samples required to reconstruct a signal is proportional to its bandwidth. Recently, it has been shown that acceptable reconstructions are possible from a reduced number of random samples, a process known as compressive sampling. Taking advantage of this realization has radical impact on power consumption and communication bandwidth, crucial in applications based on small/mobile/unattended platforms such as UAVs and distributed sensor networks. Although the benefits of these compression techniques are self-evident, the reconstruction process requires the solution of nonlinear signal processing problems, which limits applicability in portable and real-time systems. In particular, (1) the power consumption associated with the difficult computations offsets the power savings afforded by compressive sampling, and (2) limited computational power prevents these algorithms from keeping pace with the data-capturing sensors, resulting in undesirable data loss. FPGA based computers offer low power consumption and high computational capacity, providing a solution to both problems simultaneously. In this paper, we present an architecture that implements the algorithms central to compressive sampling in an FPGA environment. We start by studying the computational profile of the convex optimization algorithms used in compressive sampling. Then we present the design of a pixel pipeline suitable for FPGA implementation, capable of executing these algorithms.
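
    One standard reconstruction algorithm of the convex-optimization family profiled above is iterative shrinkage-thresholding (ISTA); a minimal sketch (generic, not the FPGA pipeline itself):

      import numpy as np

      # ISTA for min ||y - A x||^2 / 2 + lam ||x||_1: a gradient step on
      # the data term followed by soft thresholding.
      def ista(A, y, lam=0.05, iters=500):
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              g = A.T @ (A @ x - y)
              z = x - g / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
          return x

      rng = np.random.default_rng(1)
      A = rng.normal(size=(30, 100)) / np.sqrt(30)   # random sampling matrix
      x_true = np.zeros(100); x_true[[5, 42, 77]] = [1.2, -0.7, 0.9]
      y = A @ x_true
      print(np.round(ista(A, y), 2)[[5, 42, 77]])    # recovered sparse entries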

  10. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  11. The Exposure Advantage: Early Exposure to a Multilingual Environment Promotes Effective Communication.

    PubMed

    Fan, Samantha P; Liberman, Zoe; Keysar, Boaz; Kinzler, Katherine D

    2015-07-01

    Early language exposure is essential to developing a formal language system, but may not be sufficient for communicating effectively. To understand a speaker's intention, one must take the speaker's perspective. Multilingual exposure may promote effective communication by enhancing perspective taking. We tested children on a task that required perspective taking to interpret a speaker's intended meaning. Monolingual children failed to interpret the speaker's meaning dramatically more often than both bilingual children and children who were exposed to a multilingual environment but were not bilingual themselves. Children who were merely exposed to a second language performed as well as bilingual children, despite having lower executive-function scores. Thus, the communicative advantages demonstrated by the bilinguals may be social in origin, and not due to enhanced executive control. For millennia, multilingual exposure has been the norm. Our study shows that such an environment may facilitate the development of perspective-taking tools that are critical for effective communication.

  12. Taking Advantage of the "Big Mo"--Momentum in Everyday English and Swedish and in Physics Teaching

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Ahrenberg, Lars

    2015-01-01

    Science education research suggests that our everyday intuitions of motion and interaction of physical objects fit well with how physicists use the term "momentum". Corpus linguistics provides an easily accessible approach to study language in different domains, including everyday language. Analysis of language samples from English text…

  13. LUPA: a European initiative taking advantage of the canine genome architecture for unravelling complex disorders in both human and dogs.

    PubMed

    Lequarré, Anne-Sophie; Andersson, Leif; André, Catherine; Fredholm, Merete; Hitte, Christophe; Leeb, Tosso; Lohi, Hannes; Lindblad-Toh, Kerstin; Georges, Michel

    2011-08-01

    The domestic dog offers a unique opportunity to explore the genetic basis of disease, morphology and behaviour. Humans share many diseases with our canine companions, making dogs an ideal model organism for comparative disease genetics. Using newly developed resources, genome-wide association studies in dog breeds are proving to be exceptionally powerful. Towards this aim, veterinarians and geneticists from 12 European countries are collaborating to collect and analyse the DNA from large cohorts of dogs suffering from a range of carefully defined diseases of relevance to human health. This project, named LUPA, has already delivered considerable results. The consortium has collaborated to develop a new high density single nucleotide polymorphism (SNP) array. Mutations for four monogenic diseases have been identified and the information has been utilised to find mutations in human patients. Several complex diseases have been mapped and fine mapping is underway. These findings should ultimately lead to a better understanding of the molecular mechanisms underlying complex diseases in both humans and their best friend.

  14. A Content Analysis of Kindergarten-12th Grade School-Based Nutrition Interventions: Taking Advantage of Past Learning

    ERIC Educational Resources Information Center

    Roseman, Mary G.; Riddell, Martha C.; Haynes, Jessica N.

    2011-01-01

    Objective: To review the literature, identifying proposed recommendations for school-based nutrition interventions, and evaluate kindergarten through 12th grade school-based nutrition interventions conducted from 2000-2008. Design: Proposed recommendations from school-based intervention reviews were developed and used in conducting a content…

  15. High-phasing-power lanthanide derivatives: taking advantage of ytterbium and lutetium for optimized anomalous diffraction experiments using synchrotron radiation.

    PubMed

    Girard, E; Anelli, P L; Vicat, J; Kahn, R

    2003-10-01

    Ytterbium and lutetium are well suited for optimized anomalous diffraction experiments using synchrotron radiation. Therefore, two lanthanide complexes, Yb-HPDO3A and Lu-HPDO3A, have been produced that are similar to the Gd-HPDO3A complex already known to give good derivative crystals. Derivative crystals of hen egg-white lysozyme were obtained by co-crystallization using 100 mM solutions of each lanthanide complex. De novo phasing has been carried out using single-wavelength anomalous diffraction on data sets collected on each derivative crystal at the L(III) absorption edge of the corresponding lanthanide (f'' = 28 e-). A third data set was collected on a Lu-HPDO3A derivative crystal at the Se K absorption edge, with f''(Lu) = 10 e-. The structures were refined and compared with the known structure of the Gd-HPDO3A lysozyme derivative. The quality of the experimental electron-density maps allows easy model building. With L(III) absorption edges at shorter wavelengths than the gadolinium absorption edge, lutetium and ytterbium, when chelated by a ligand such as HPDO3A, form lanthanide complexes that are especially interesting for synchrotron-radiation experiments in structural biology.

  16. Defying the activity-stability trade-off in enzymes: taking advantage of entropy to enhance activity and thermostability.

    PubMed

    Siddiqui, Khawar Sohail

    2017-05-01

    The biotechnological applications of enzymes are limited due to the activity-stability trade-off, which implies that an increase in activity is accompanied by a concomitant decrease in protein stability. This premise is based on thermally adapted homologous enzymes where cold-adapted enzymes show high intrinsic activity linked to enhanced thermolability. In contrast, thermophilic enzymes show low activity around ambient temperatures. Nevertheless, genetically and chemically modified enzymes are beginning to show that the activity-stability trade-off can be overcome. In this review, the origin of the activity-stability trade-off, the thermodynamic basis for enhanced activity and stability, and various approaches for escaping the activity-stability trade-off are discussed. The role of entropy in enhancing both the activity and the stability of enzymes is highlighted with a special emphasis placed on the involvement of solvent water molecules. This review is concluded with suggestions for further research, which underscores the implications of these findings in the context of productivity curves, the Daniel-Danson equilibrium model, catalytic antibodies, and life on cold planets.

  17. Mentoring the Next Generation of AACRAO Leaders: Taking Advantage of Routines, Exceptions, and Challenges for Developing Leadership Skills

    ERIC Educational Resources Information Center

    Cramer, Sharon F.

    2012-01-01

    As members of enrollment management units look ahead to the next few years, they anticipate many institution-wide challenges: (1) implementation of a new student information system; (2) major upgrade of an existing system; and (3) re-configuring an existing system to reflect changes in academic policies or to accommodate new federal or state…

  18. Taking ad-Vantage of lax advertising regulation in the USA and Canada: reassuring and distracting health-concerned smokers.

    PubMed

    Anderson, Stacey J; Pollay, Richard W; Ling, Pamela M

    2006-10-01

    We explored the evolution from cigarette product attributes to psychosocial needs in advertising campaigns for low-tar cigarettes. Analysis of previously secret tobacco industry documents and print advertising images indicated that low-tar brands targeted smokers who were concerned about their health with advertising images intended to distract them from the health hazards of smoking. Advertising first emphasized product characteristics (filtration, low tar) that implied health benefits. Over time, advertising emphasis shifted to salient psychosocial needs of the target markets. A case study of Vantage cigarettes in the USA and Canada showed that advertising presented images of intelligent, upward-striving people who had achieved personal success and intentionally excluded the act of smoking from the imagery, while minimal product information was provided. This illustrates one strategy to appeal to concerned smokers by not describing the product itself (which may remind smokers of the problems associated with smoking), but instead using evocative imagery to distract smokers from these problems. Current advertising for potential reduced-exposure products (PREPs) emphasizes product characteristics, but these products have not delivered on the promise of a healthier alternative cigarette. Our results suggest that the tobacco control community should be on the alert for a shift in advertising focus for PREPs to the image of the user rather than the cigarette. Global Framework Convention on Tobacco Control-style advertising bans that prohibit all user imagery in tobacco advertising could preempt a psychosocial needs-based advertising strategy for PREPs and maintain public attention on the health hazards of smoking.

  19. Taking Advantage of the "Big Mo"—Momentum in Everyday English and Swedish and in Physics Teaching

    NASA Astrophysics Data System (ADS)

    Haglund, Jesper; Jeppsson, Fredrik; Ahrenberg, Lars

    2015-06-01

    Science education research suggests that our everyday intuitions of motion and interaction of physical objects fit well with how physicists use the term "momentum". Corpus linguistics provides an easily accessible approach to studying language in different domains, including everyday language. Analysis of language samples from English text corpora reveals a trend of increasing metaphorical use of "momentum" in non-science domains, and through conceptual metaphor analysis, we show that the use of the word in everyday language, as opposed to, for instance, "force", is largely adequate from a physics point of view. In addition, "momentum" has recently been borrowed into Swedish as a metaphor in domains such as sports, politics and finance, with meanings similar to those in physics. As an implication for educational practice, we find support for the suggestion to introduce the term "momentum" to English-speaking pupils at an earlier age than is typical in today's educational system, thereby capitalising on their intuitions and experiences of everyday language. For Swedish-speaking pupils, and possibly also for other languages, the parallel between "momentum" and the corresponding physics term in the students' mother tongue could be made explicit.

  20. How Users Take Advantage of Different Forms of Interactivity on Online News Sites: Clicking, E-Mailing, and Commenting

    ERIC Educational Resources Information Center

    Boczkowski, Pablo J.; Mitchelstein, Eugenia

    2012-01-01

    This study examines the uptake of multiple interactive features on news sites. It looks at the thematic composition of the most clicked, most e-mailed, and most commented stories during periods of heightened and routine political activity. Results show that (a) during the former period, the most commented stories were more likely to be focused on…

  1. Optimizing Hydraulic Fracture Spacing and Frac Timing in Unconventionals - Taking Advantage of Time-Dependent Pressure Diffusion

    NASA Astrophysics Data System (ADS)

    Sheibani, F.

    2014-12-01

    Due to low natural gas prices, low production rates, and increased development costs, many operators have shifted operations from shale gas to liquid-rich shale plays. One means to make shale gas plays more attractive is to enhance well production through stimulation optimization. In numerous previous works, the authors have highlighted the geomechanical causes and important parameters for hydraulic fracture optimization in naturally fractured shale plays. The authors have, for example, emphasized the impact that stress shadows from multiple hydraulic fractures have on increasing the resistance of natural fractures and weakness planes to shear stimulation. The authors have also shown the critical role that in-situ pressure and pressure changes have on natural fracture shear stimulation. In this paper, we present the results of a discrete element model numerical study of both hydraulic fracture spacing and hydraulic fracture timing in a fully hydro-mechanically coupled fashion. The pressure changes in the natural fracture system of an unconventional play, due to hydraulic fracturing, often follow a diffusion-type process, which means the pressure changes are time dependent. As shown in previous works by the authors and others, the time-dependent changes in the in-situ pressure can have a marked impact on shear stimulation. The study quantitatively examined the impact of hydraulic fracture spacing as a function of in-situ pressure change and time for key parameters such as the in-situ stress ratio, natural fracture characteristics, and natural fracture mechanical properties. The results of the study help improve the understanding of in-situ pressure and hydraulic fracture timing on stimulation optimization and enhanced hydrocarbon production. The study also provides a means to optimize hydraulic fracture spacing and increase shear stimulation for unconventional wells.

  2. How can we take advantage of halophyte properties to cope with heavy metal toxicity in salt-affected areas?

    PubMed Central

    Lutts, Stanley; Lefèvre, Isabelle

    2015-01-01

    Background: Many areas throughout the world are simultaneously contaminated by high concentrations of soluble salts and by high concentrations of heavy metals that constitute a serious threat to human health. The use of plants to extract or stabilize pollutants is an interesting alternative to classical expensive decontamination procedures. However, suitable plant species still need to be identified for reclamation of substrates presenting a high electrical conductivity. Scope: Halophytic plant species are able to cope with several abiotic constraints occurring simultaneously in their natural environment. This review considers their putative interest for remediation of polluted soil in relation to their ability to sequester absorbed toxic ions in trichomes or vacuoles, to perform efficient osmotic adjustment and to limit the deleterious impact of oxidative stress. These physiological adaptations are considered in relation to the impact of salt on heavy metal bioavailability in two types of ecosystem: (1) salt marshes and mangroves, and (2) mine tailings in semi-arid areas. Conclusions: Numerous halophytes exhibit a high level of heavy metal accumulation, and external NaCl may directly influence heavy metal speciation and absorption rate. Maintenance of biomass production and plant water status makes some halophytes promising candidates for further management of heavy-metal-polluted areas in both saline and non-saline environments. PMID:25672360

  3. Host manipulation by an ichneumonid spider ectoparasitoid that takes advantage of preprogrammed web-building behaviour for its cocoon protection.

    PubMed

    Takasuka, Keizo; Yasui, Tomoki; Ishigami, Toru; Nakata, Kensuke; Matsumoto, Rikio; Ikeda, Kenichi; Maeto, Kaoru

    2015-08-01

    Host manipulation by parasites and parasitoids is a fascinating phenomenon within evolutionary ecology, representing an example of extended phenotypes. To elucidate the mechanism of host manipulation, revealing the origin and function of the invoked actions is essential. Our study focused on the ichneumonid spider ectoparasitoid Reclinervellus nielseni, which turns its host spider (Cyclosa argenteoalba) into a drugged navvy, to modify the web structure into a more persistent cocoon web so that the wasp can pupate safely on this web after the spider's death. We focused on whether the cocoon web originated from the resting web that an unparasitized spider builds before moulting, by comparing web structures, building behaviour and silk spectral/tensile properties. We found that both resting and cocoon webs have reduced numbers of radii decorated by numerous fibrous threads and specific decorating behaviour was identical, suggesting that the cocoon web in this system has roots in the innate resting web and ecdysteroid-related components may be responsible for the manipulation. We also show that these decorations reflect UV light, possibly to prevent damage by flying web-destroyers such as birds or large insects. Furthermore, the tensile test revealed that the spider is induced to repeat certain behavioural steps in addition to resting web construction so that many more threads are laid down for web reinforcement.

  4. Two-Year Community College Chemistry: Joules. Changing Trends in Student Interests and Goals: "Taking Advantage of New Student Interests."

    ERIC Educational Resources Information Center

    Bardole, Ellen; Bardole, Jay

    1979-01-01

    General chemistry courses and texts seemed to be geared to students who are going to be chemistry majors. This may relate to the trend of declining interest and knowledge of chemistry. A course is proposed which concentrates only on the most essential facts and fundamental laws and theories. (BB)

  5. Home advantage in retractable-roof baseball stadia.

    PubMed

    Romanowich, Paul

    2012-10-01

    This study examined whether the home advantage varies for open-air, domed, or retractable-roof baseball stadia, and whether having the roof open or closed affects the home advantage in retractable-roof stadia. Data from Major League Baseball (MLB) games played between 2001 and 2009 were analyzed for whether the home advantage depended on the type of home stadium used. The home advantage was robust for all three types of stadia. A significant effect of stadium type on home advantage was found, with a greater home advantage for teams playing home games in domed stadia relative to open-air stadia, replicating a previous study. There was also a greater home advantage for teams playing home games in domed stadia relative to retractable-roof stadia. No other differences in the home advantage were found; results are discussed in terms of familiarity with the facility.

  6. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.
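
    A minimal sketch of the idea (not the DSN implementation): a scalar-measurement Kalman filter estimating a quasi-static pointing offset from the first-harmonic power modulation seen over a conical scan. The linearized beam model, the gain k, and the noise levels are illustrative assumptions, not values from the article.

```python
import numpy as np

def conscan_kalman(thetas, powers, k=1.0, q=1e-6, r=1e-2):
    """Estimate a pointing offset (ex, ey) from conscan power samples."""
    x = np.zeros(2)              # state: estimated offset (ex, ey)
    P = np.eye(2)                # state covariance
    Q = q * np.eye(2)            # process noise: slow drift of the offset
    for theta, y in zip(thetas, powers):
        P = P + Q                                  # predict (offset ~ static)
        H = k * np.array([np.cos(theta), np.sin(theta)])  # linearized model
        S = H @ P @ H + r                          # innovation variance
        K = P @ H / S                              # Kalman gain
        x = x + K * (y - H @ x)                    # scalar-measurement update
        P = P - np.outer(K, H @ P)                 # covariance update
    return x, P

# Toy usage: true offset (0.3, -0.1), one revolution of 64 noisy samples.
rng = np.random.default_rng(0)
thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
powers = 0.3 * np.cos(thetas) - 0.1 * np.sin(thetas) \
         + 0.05 * rng.standard_normal(64)
est, _ = conscan_kalman(thetas, powers)
print(est)   # close to (0.3, -0.1)
```

    Unlike a batch FFT over a full scan, the filter updates its estimate at every sample, which is the "continuous update" advantage the abstract describes.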

  7. Genetic algorithm optimization for focusing through turbid media in noisy environments.

    PubMed

    Conkey, Donald B; Brown, Albert N; Caravaca-Aguirre, Antonio M; Piestun, Rafael

    2012-02-27

    We introduce genetic algorithms (GA) for wavefront control to focus light through highly scattering media. We theoretically and experimentally compare GAs to existing phase control algorithms and show that GAs are particularly advantageous in low signal-to-noise environments.
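
    The paper's experimental configuration is not reproduced here; the sketch below is a generic GA for phase-only wavefront optimization under toy assumptions (a static random medium, single-point focus fitness, and arbitrary population settings).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                              # SLM phase segments
medium = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # static random medium

def fitness(phases):
    # Intensity at the focus after the shaped fields sum coherently.
    return np.abs(np.sum(np.exp(1j * phases) * medium)) ** 2

def ga(pop_size=30, generations=200, mut_rate=0.1):
    pop = rng.uniform(0, 2 * np.pi, (pop_size, N))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        pop = pop[np.argsort(scores)[::-1]]          # rank by fitness
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(N) < 0.5               # uniform crossover
            child = np.where(mask, a, b)
            mut = rng.random(N) < mut_rate           # random phase mutation
            child[mut] = rng.uniform(0, 2 * np.pi, mut.sum())
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)], scores.max()

best, score = ga()
print(score / N**2)   # fraction of the ideal fully focused intensity
```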

  8. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655

  9. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered.
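
    Since both records single out Differential Evolution, here is a minimal sketch of the classic DE/rand/1/bin variant on a sphere benchmark; the population size, F, CR, and test function are illustrative choices, not those of the survey.

```python
import numpy as np

def de(func, dim=10, pop_size=40, F=0.5, CR=0.9, iters=500, bounds=(-5, 5)):
    rng = np.random.default_rng(2)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([func(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct vectors, all different from i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True               # force one gene over
            trial = np.where(cross, mutant, pop[i])       # binomial crossover
            f = func(trial)
            if f <= cost[i]:                              # greedy selection
                pop[i], cost[i] = trial, f
    best = np.argmin(cost)
    return pop[best], cost[best]

x, fx = de(lambda x: np.sum(x ** 2))   # sphere benchmark
print(fx)                               # near 0 after convergence
```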

  10. Taking Sides on "Takings": Rhetorical Resurgence of the Sagebrush Rebellion.

    ERIC Educational Resources Information Center

    Chiaviello, Tony

    The "Takings Clause" of the Fifth Amendment to the United States Constitution seems clear enough: when the government takes an individual's property, it must pay him or her for it. The "Sagebrush Rebellion" refers to the numerous incarnations of a movement to privatize public lands and contain environmental regulation. This…

  11. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    To offer mobile customers better service, mobile users must first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, dividing context into public and private classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile users with the algorithm, classifying them into Basic service, E-service, Plus service, and Total service user classes and deriving rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  12. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    To offer mobile customers better service, mobile users must first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, dividing context into public and private classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile users with the algorithm, classifying them into Basic service, E-service, Plus service, and Total service user classes and deriving rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity.

  13. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix]

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
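
    The paper's PPCG algorithm is not packaged in common libraries, but SciPy ships LOBPCG, the method it is compared against, and the call below illustrates the same block interface in which all eigenvector candidates are iterated as one tall matrix (toy diagonal operator assumed):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

n, k = 2000, 50
A = diags([np.arange(1, n + 1, dtype=float)], [0]).tocsr()  # toy Hermitian A
X = np.random.default_rng(3).standard_normal((n, k))        # initial block
# largest=False requests the algebraically smallest eigenpairs; the whole
# block X is updated together, which is what makes BLAS3 operations possible.
vals, vecs = lobpcg(A, X, largest=False, tol=1e-6, maxiter=200)
print(vals[:5])   # approximately [1, 2, 3, 4, 5] for this diagonal example
```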

  14. Did Babe Ruth Have a Comparative Advantage as a Pitcher?

    ERIC Educational Resources Information Center

    Scahill, Edward M.

    1990-01-01

    Advocates using baseball statistics to illustrate the advantages of specialization in production. Using Babe Ruth's record as an analogy, suggests a methodology for determining a player's comparative advantage as a teaching illustration. Includes the team's statistical profile in five tables to explain comparative advantage and profit maximizing.…

  15. Back to Basics: A Bilingual Advantage in Infant Visual Habituation

    ERIC Educational Resources Information Center

    Singh, Leher; Fu, Charlene S. L.; Rahman, Aishah A.; Hameed, Waseem B.; Sanmugam, Shamini; Agarwal, Pratibha; Jiang, Binyan; Chong, Yap Seng; Meaney, Michael J.; Rifkin-Graboi, Anne

    2015-01-01

    Comparisons of cognitive processing in monolinguals and bilinguals have revealed a bilingual advantage in inhibitory control. Recent studies have demonstrated advantages associated with exposure to two languages in infancy. However, the domain specificity and scope of the infant bilingual advantage in infancy remains unclear. In the present study,…

  16. Navigation Algorithms for Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Huxel, Paul J.; Bishop, Robert H.

    2004-01-01

    The objective of the investigations is to develop navigation algorithms to support formation flying missions. In particular, we examine the advantages and concerns associated with the use of combinations of inertial and relative measurements, as well as address observability issues. In our analysis we consider the interaction between measurement types, update frequencies, and trajectory geometry and their cumulative impact on observability. Furthermore, we investigate how relative measurements affect inertial navigation in terms of algorithm performance.

  17. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, while not completely unlabeled there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
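
    A hedged sketch of the underlying idea: EM for a two-component exponential survival mixture in which a subset of samples carries hard labels and the rest are soft-assigned. The specific EM-OCML/EM-PCML/EM-HCML/EM-CPCML missingness mechanisms are not reproduced; all data and initial values are illustrative.

```python
import numpy as np

def em_exp_mixture(t, labels, iters=100):
    # t: survival times; labels: 0/1 for known class, -1 for unlabeled.
    pi, lam = np.array([0.5, 0.5]), np.array([1.0, 2.0])  # initial guesses
    for _ in range(iters):
        # E-step: responsibilities; labeled samples get hard 0/1 weights.
        dens = pi * lam * np.exp(-np.outer(t, lam))        # (n, 2) densities
        r = dens / dens.sum(axis=1, keepdims=True)
        for k in (0, 1):
            r[labels == k] = np.eye(2)[k]
        # M-step: closed-form updates for mixing weights and rates.
        nk = r.sum(axis=0)
        pi = nk / len(t)
        lam = nk / (r * t[:, None]).sum(axis=0)
    return pi, lam

rng = np.random.default_rng(4)
t = np.concatenate([rng.exponential(1.0, 300), rng.exponential(0.25, 200)])
labels = np.full(500, -1)
labels[:50] = 0          # a small subset carries (partial) class labels
labels[300:330] = 1
print(em_exp_mixture(t, labels))  # pi ~ (0.6, 0.4), lam ~ (1.0, 4.0)
```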

  18. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  19. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published results of manual tracking performed by an expert.
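
    As a hedged illustration of the optimization half of the hybrid, the sketch below implements the standard global-best PSO update rule on a toy function; the snake-model energy terms and the solar-image specifics are beyond a short example.

```python
import numpy as np

def pso(func, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    rng = np.random.default_rng(5)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))       # particle positions
    v = np.zeros((n, dim))                  # particle velocities
    pbest, pcost = x.copy(), np.array([func(p) for p in x])
    g = pbest[np.argmin(pcost)]             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # Inertia + cognitive pull (own best) + social pull (swarm best).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([func(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[np.argmin(pcost)]
    return g, pcost.min()

print(pso(lambda p: np.sum(p ** 2)))  # converges near the origin
```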

  20. The Advantages of Fixed Facilities in Characterizing TRU Wastes

    SciTech Connect

    FRENCH, M.S.

    2000-02-08

    In May 1998 the Hanford Site started developing a program for characterization of transuranic (TRU) waste for shipment to the Waste Isolation Pilot Plant (WIPP) in New Mexico. After less than two years, Hanford will have a program certified by the Carlsbad Area Office (CAO). By picking a simple waste stream, taking advantage of lessons learned at the other sites, as well as communicating effectively with the CAO, Hanford was able to achieve certification in record time. This effort was further simplified by having a centralized program centered on the Waste Receiving and Processing (WRAP) Facility that contains most of the equipment required to characterize TRU waste. The use of fixed facilities for the characterization of TRU waste at sites with a long-term clean-up mission can be cost effective for several reasons. These include the ability to control the environment in which sensitive instrumentation is required to operate and ensuring that calibrations and maintenance activities are scheduled and performed as an operating routine. Other factors contributing to cost effectiveness include providing approved procedures and facilities for handling hazardous materials and anticipated contingencies and performing essential evolutions, and regulating and smoothing the work load and environmental conditions to provide maximal efficiency and productivity. Another advantage is the ability to efficiently provide characterization services to other sites in the Department of Energy (DOE) Complex that do not have the same capabilities. The Waste Receiving and Processing (WRAP) Facility is a state-of-the-art facility designed to consolidate the operations necessary to inspect, process and ship waste to facilitate verification of contents for certification to established waste acceptance criteria. The WRAP facility inspects, characterizes, treats, and certifies transuranic (TRU), low-level and mixed waste at the Hanford Site in Washington state. Fluor Hanford operates the $89

  1. Children's understanding of certainty and evidentiality: advantage of grammaticalized forms over lexical alternatives.

    PubMed

    Matsui, Tomoko; Miura, Yui

    2009-01-01

    In verbal communication, the hearer takes advantage of the linguistic expressions of certainty and evidentiality to assess how committed the speaker might be to the truth of the informational content of the utterance. Little is known, however, about the precise developmental mechanism of this ability. In this chapter, we approach the question by elucidating factors that are likely to constrain young children's understanding of linguistically encoded certainty and evidentiality, including the types of linguistic form of these expressions, namely, grammaticalized or lexical forms.

  2. Associated petroleum gas utilization in Tomsk Oblast: energy efficiency and tax advantages

    NASA Astrophysics Data System (ADS)

    Vazim, A.; Romanyuk, V.; Ahmadeev, K.; Matveenko, I.

    2015-11-01

    This article deals with oil production companies' activities in increasing the utilization volume of associated petroleum gas (APG) in Tomsk Oblast. A cost-effectiveness analysis of associated petroleum gas utilization was carried out using the example of the gas engine power station AGP-350 implemented at the Yuzhno-Cheremshanskoye field, Tomsk Oblast. The authors calculated the effectiveness taking into account the tax advantages of 2012. The implementation of this facility shows high profitability, the payback period being less than 2 years.

  3. Taking Over a Broken Program

    ERIC Educational Resources Information Center

    Grabowski, Carl

    2008-01-01

    Taking over a broken program can be one of the hardest tasks to take on. However, working towards a vision and a common goal--and eventually getting there--makes it all worth it in the end. In this article, the author shares the lessons she learned as the new director for the Bright Horizons Center in Ashburn, Virginia. She suggests that new…

  4. Taking Chances in Romantic Relationships

    ERIC Educational Resources Information Center

    Elliott, Lindsey; Knox, David

    2016-01-01

    A 64 item Internet questionnaire was completed by 381 undergraduates at a large southeastern university to assess taking chances in romantic relationships. Almost three fourths (72%) self-identified as being a "person willing to take chances in my love relationship." Engaging in unprotected sex, involvement in a "friends with…

  5. Survey cover pages: to take or not to take.

    PubMed

    Sansone, Randy A; Lam, Charlene; Wiederman, Michael W

    2010-01-01

    In survey research, the elements of informed consent, including contact information for the researchers and the Institutional Review Board, may be located on a cover page, which participants are advised they may take. To date, we are not aware of any studies examining the percentage of research participants who actually take these cover pages, which was the purpose of this study. Among a consecutive sample of 419 patients in an internal medicine setting, 16% removed the cover page. There were no demographic predictors of who took versus did not take the cover page.

  6. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
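
    A minimal sketch of the third-order polynomial gain idea only: coefficients are chosen so the output saturates smoothly at the platform limit (f(u_max) = y_max, f'(u_max) = 0). The actual NASA constraints and coefficient values are not reproduced here.

```python
import numpy as np

def cubic_gain(u, u_max, y_max):
    # Choose f(u) = a1*u + a3*u**3 with f(u_max) = y_max and f'(u_max) = 0:
    # solving the two conditions gives a soft saturation at the platform limit.
    a3 = -y_max / (2 * u_max ** 3)
    a1 = 3 * y_max / (2 * u_max)
    u = np.clip(u, -u_max, u_max)        # never command beyond the limit
    return a1 * u + a3 * u ** 3

u = np.linspace(-10, 10, 5)                  # aircraft command, e.g. m/s^2
print(cubic_gain(u, u_max=10, y_max=4))      # platform limited to +/- 4 m/s^2
```

    Small inputs see the steepest slope while large inputs flatten out, which is how the cue magnitude is maximized without exceeding the motion envelope.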

  7. Economic advantage of pharmacogenomics - clinical trials with genetic information.

    PubMed

    Ohashi, Wataru; Mizushima, Hiroshi; Tanaka, Hiroshi

    2008-01-01

    The purpose of this study is to clarify the benefits and losses for pharmaceutical companies when they introduce pharmacogenomics into their clinical trials (hereafter, clinical trials using pharmacogenomics are called "pgx clinical trials"), that is, when they use genetic information in their clinical trials. In particular, the benefit for pharmaceutical companies is analyzed in terms of the following two points: 1. the development cost of a new drug and the period of the clinical trial can be reduced because the trial needs fewer subjects; 2. the new drug can be placed on the market earlier because the development period can be shortened. A survey conducted by the Japan Pharmaceutical Manufacturers Association revealed that pharmaceutical companies in Japan are interested in pgx clinical trials. Specifically, 95% of the member companies (n=19) of the Association replied that the establishment of a guideline for pgx clinical trials by regulatory authorities is highly desirable. However, 65% of them (n=13) also replied that pgx clinical trials are difficult for the time being. It can be concluded that the pharmaceutical companies are positive about pgx clinical trials but cannot take a step towards them for several reasons: worries that sales will shrink because non-responders are excluded, poor understanding of pgx among the parties concerned, and an immature methodology for pgx clinical trials. This study shows that the advantages of pgx clinical trials outweigh their disadvantages. Sales may decrease because the drug is not used for non-responders; however, the number of subjects necessary for a clinical trial can be reduced, the study period can be shortened and the drug can be marketed earlier. Furthermore, adverse events (AE) and adverse drug reactions (ADR) during the clinical trial and post-marketing phase can be markedly reduced. This represents a great benefit for the patients, the pharmaceutical companies and society as a whole.

  8. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).

  9. Managing patients taking edoxaban in dentistry

    PubMed Central

    Curto, Daniel; Sanchez, Jorge

    2017-01-01

    Background: Anticoagulation therapy is used in several conditions to prevent or treat thromboembolism. A new group of oral anticoagulants with clear advantages over classic dicoumarin oral anticoagulants (warfarin and acenocoumarol) has been developed in recent years. The Food and Drug Administration has approved edoxaban, dabigatran, rivaroxaban and apixaban. Their advantages include predictable pharmacokinetics, limited drug and food interactions, rapid onset of action and short half-life. However, they lack a specific reversal agent. Material and Methods: This paper examines the available evidence regarding edoxaban and sets out proposals for clinical guidance of dental practitioners treating these patients in primary dental care. A literature search was conducted through July 2016 for publications in PubMed and the Cochrane Library over the last 10 years using the keywords "edoxaban", "dabigatran", "rivaroxaban", "apixaban", "new oral anticoagulants", "novel oral anticoagulants", "bleeding" and "dental treatment" with the "and" boolean operator. Results: The number of patients taking edoxaban is increasing. There is no need for regular coagulation monitoring of patients on edoxaban therapy. For patients requiring minor oral surgery procedures, interruption of edoxaban is not generally necessary. Management of patients on anticoagulation therapy requires that dentists can accurately assess the patient prior to dental treatments. Conclusions: The increased use of these drugs means that oral care clinicians should have a sound understanding of the mechanism of action, pharmacology, reversal strategies and management of bleeding in patients taking edoxaban. There is a need for further clinical studies in order to establish more evidence-based guidelines for dental patients requiring edoxaban. Key words: Edoxaban, dabigatran, rivaroxaban, apixaban, novel oral anticoagulants, bleeding. PMID:28210454

  10. Disconnects between popular discourse and home advantage research: what can fans and media tell us about the home advantage phenomenon?

    PubMed

    Smith, D Randall

    2005-04-01

    Many of the factors identified as influencing the home advantage have an underlying social basis, presumably through the influence exerted by the home crowd. Beliefs in the home advantage and the causes of that advantage also have a social basis: sports coverage and fan discourse focus on some aspects of the phenomenon at the expense of others. This paper compares home advantage research with the use of the concept in media narratives and fan Internet postings. While there are many similarities across sources, the findings suggest three major differences. Fans, and to a lesser extent the media, (1) focus almost exclusively on winning as the evidence for a home advantage, (2) see crowd noise as the main factor in the home advantage, and (3) treat the phenomenon as much more transient than is suggested by academic studies. I identify several features of the phenomenon that facilitate popular views of the home advantage and suggest how future research may benefit from incorporating those views.

  11. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand, taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Such algorithms are evaluated by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated here on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types, which makes it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty, with the aim of finding the algorithm best suited to subdividing each block type. The hypothesis is that, given the different approaches that block subdivision algorithms take, different algorithms are likely better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability that it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites; it also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites; it also produces more similar parcel shapes and patterns.
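
    As a hedged illustration of the oriented bounding box approach's first step, the snippet below fits a block's minimum rotated rectangle with Shapely; the recursive split into parcels is omitted and the block polygon is invented for the example.

```python
from shapely.geometry import Polygon

# Fit the block's minimum rotated rectangle; its axes then guide the
# recursive split into parcels (the recursion itself is not shown).
block = Polygon([(0, 0), (40, 5), (45, 30), (5, 28)])  # a warped toy block
obb = block.minimum_rotated_rectangle
print(list(obb.exterior.coords))   # closed corner ring of the OBB
print(block.area / obb.area)       # how tightly the OBB fits this block
```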

  12. Nurse! What's Taking So Long?

    MedlinePlus

    Study at a children's ... in a child's hospital room, anxious parents expect nurses to respond pronto. That rarely happens, however, and ... (https://medlineplus.gov/news/fullstory_164577.html)

  13. LRO Takes the Moon's Temperature

    NASA Video Gallery

    During the June 2011 lunar eclipse, scientists will be able to get a unique view of the moon. While the sun is blocked by the Earth, LRO's Diviner instrument will take the temperature on the lunar ...

  14. Take Care with Pet Reptiles

    MedlinePlus

    Take Care with Pet Reptiles and Amphibians ... helpful resources. Safe Handling Tips for Reptiles and Amphibians: Always wash your hands thoroughly after handling reptiles ...

  15. Taking Care of Your Heart

    MedlinePlus

    ... attack or stroke. Lifestyle changes, like making smart food choices and being physically active, and taking medicine can ... your risk by managing your “ABCs” with smart food choices, physical activity, and medicine. Losing weight and quitting ...

  16. Brazilian physicists take centre stage

    NASA Astrophysics Data System (ADS)

    Curtis, Susan

    2014-06-01

    With the FIFA World Cup taking place in Brazil this month, Susan Curtis travels to South America's richest nation to find out how its physicists are exploiting recent big increases in science funding.

  17. LRO Takes the Moon's Temperature

    NASA Video Gallery

    During the December 2011 lunar eclipse, LRO's Diviner instrument will take the temperature on the lunar surface. Since different rock sizes cool at different rates, scientists will be able to infer...

  18. Taking America To New Heights

    NASA Video Gallery

    NASA's Commercial Crew Program (CCP) is taking America to new heights with its Commercial Crew Development Round 2 (CCDev2) partners. In 2011, NASA entered into funded Space Act Agreements (SAAs) w...

  19. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  20. Plant circadian clocks increase photosynthesis, growth, survival, and competitive advantage.

    PubMed

    Dodd, Antony N; Salathia, Neeraj; Hall, Anthony; Kévei, Eva; Tóth, Réka; Nagy, Ferenc; Hibberd, Julian M; Millar, Andrew J; Webb, Alex A R

    2005-07-22

    Circadian clocks are believed to confer an advantage to plants, but the nature of that advantage has been unknown. We show that a substantial photosynthetic advantage is conferred by correct matching of the circadian clock period with that of the external light-dark cycle. In wild type and in long- and short-circadian period mutants of Arabidopsis thaliana, plants with a clock period matched to the environment contain more chlorophyll, fix more carbon, grow faster, and survive better than plants with circadian periods differing from their environment. This explains why plants gain advantage from circadian control.

  1. Feature Subset Selection, Class Separability, and Genetic Algorithms

    SciTech Connect

    Cantu-Paz, E

    2004-01-21

    The performance of classification algorithms in machine learning is affected by the features used to describe the labeled examples presented to the inducers. Therefore, the problem of feature subset selection has received considerable attention. Genetic approaches to this problem usually follow the wrapper approach: treat the inducer as a black box that is used to evaluate candidate feature subsets. These evaluations might take considerable time, making the traditional approach impractical for large data sets. This paper describes a hybrid of a simple genetic algorithm and a method based on class separability applied to the selection of feature subsets for classification problems. The proposed hybrid was compared against each of its components and two other widely used feature selection wrappers. The objective of this paper is to determine whether the proposed hybrid presents advantages over the other methods in terms of accuracy or speed in this problem. The experiments used a Naive Bayes classifier and public-domain and artificial data sets. The experiments suggest that the hybrid usually finds compact feature subsets that give the most accurate results, while beating the execution time of the other wrappers.
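
    A hedged sketch of the hybrid's central idea: a GA searching over feature bitmasks whose fitness is a cheap class-separability score (here a Fisher-like between/within scatter ratio with a small size penalty) rather than a full inducer evaluation. The dataset, GA settings, and penalty weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def separability(X, y, mask):
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    mu = Xs.mean(axis=0)
    num = den = 0.0
    for c in np.unique(y):
        Xc = Xs[y == c]
        num += len(Xc) * np.sum((Xc.mean(axis=0) - mu) ** 2)  # between-class
        den += np.sum((Xc - Xc.mean(axis=0)) ** 2)            # within-class
    return num / (den + 1e-12) - 0.01 * mask.sum()  # small subset-size penalty

def ga_select(X, y, pop=40, gens=60, p_mut=0.02):
    d = X.shape[1]
    P = rng.random((pop, d)) < 0.5                   # population of bitmasks
    for _ in range(gens):
        fit = np.array([separability(X, y, m) for m in P])
        P = P[np.argsort(fit)[::-1]][: pop // 2]     # truncation selection
        kids = P[rng.integers(len(P), size=(pop - len(P), 2))]
        cross = rng.random((len(kids), d)) < 0.5     # uniform crossover
        kids = np.where(cross, kids[:, 0], kids[:, 1])
        kids ^= rng.random(kids.shape) < p_mut       # bit-flip mutation
        P = np.vstack([P, kids])
    fit = np.array([separability(X, y, m) for m in P])
    return P[np.argmax(fit)]

# Toy data: only the first 3 of 10 features carry class signal.
y = rng.integers(0, 2, 400)
X = rng.standard_normal((400, 10))
X[:, :3] += y[:, None] * 2.0
print(ga_select(X, y))  # mask should select mostly the first three features
```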

  2. The Chorus Conflict and Loss of Separation Resolution Algorithms

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.

    2013-01-01

    The Chorus software is designed to investigate near-term, tactical conflict and loss of separation detection and resolution concepts for air traffic management. This software is currently being used in two different problem domains: en-route self-separation and sense and avoid for unmanned aircraft systems. This paper describes the core resolution algorithms that are part of Chorus. The combination of several features of the Chorus program distinguishes this software from other approaches to conflict and loss of separation resolution. First, the program stores a history of state information over time, which enables it to handle communication dropouts and take advantage of previous input data. Second, the underlying conflict algorithms find resolutions that solve the most urgent conflict, but also seek to prevent secondary conflicts with the other aircraft. Third, if the program is run on multiple aircraft, and the two aircraft maneuver at the same time, the result will be implicitly coordinated. This implicit coordination property is established by ensuring that a resolution produced by Chorus will comply with a mathematically-defined criterion whose correctness has been formally verified. Fourth, the program produces both instantaneous solutions and kinematic solutions, which are based on simple acceleration models. Finally, the program provides resolutions for recovery from loss of separation. Different versions of this software are implemented as Java and C++ programs, respectively.

  3. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has become an NP problem. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by introducing weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses the weight coefficients to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values and their stability. PMID:28127305

  4. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding.

    PubMed

    Li, Linguo; Sun, Lijuan; Guo, Jian; Qi, Jin; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has become an NP problem. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by introducing weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses the weight coefficients to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values and their stability.
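
    Both records define the objective MDGWO maximizes; the sketch below implements Kapur's entropy for a 256-bin histogram and, for two thresholds, the exhaustive search the papers use as their reference. The synthetic trimodal histogram is an illustrative assumption.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    p = hist / hist.sum()
    edges = [0, *sorted(thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf                 # empty region: invalid thresholds
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -np.sum(q * np.log(q))    # entropy of this gray-level region
    return total

def best_two_thresholds(hist):
    best, arg = -np.inf, None
    for t1 in range(1, 255):               # exhaustive reference search
        for t2 in range(t1 + 1, 256):
            s = kapur_entropy(hist, (t1, t2))
            if s > best:
                best, arg = s, (t1, t2)
    return arg, best

rng = np.random.default_rng(7)
pix = np.concatenate([rng.normal(60, 10, 5000),
                      rng.normal(128, 12, 5000),
                      rng.normal(200, 8, 5000)]).clip(0, 255)
hist = np.histogram(pix, bins=256, range=(0, 256))[0]
print(best_two_thresholds(hist))   # thresholds should separate the three modes
```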

  5. Lesion detection in magnetic resonance brain images by hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Wang, Lin; Li, Hsiao-Chi; Chen, Hsian Min; Chang, Chein-I.

    2016-05-01

    Magnetic Resonance (MR) images can be considered multispectral images, so MR imaging can be processed by multispectral imaging techniques such as maximum likelihood classification. Unfortunately, most multispectral imaging techniques are not particularly designed for target detection. On the other hand, hyperspectral imaging is primarily developed to address subpixel detection and mixed pixel classification, for which multispectral imaging is generally not effective. This paper takes advantage of hyperspectral imaging techniques to develop target detection algorithms that find lesions in MR brain images. Since MR images are collected from only three image sequences, T1, T2 and PD, a hyperspectral imaging technique used to process MR images suffers from the issue of insufficient dimensionality. To address this issue, two approaches to nonlinear dimensionality expansion are proposed: nonlinear correlation expansion and nonlinear band ratio expansion. Once dimensionality is expanded, hyperspectral imaging algorithms are readily applied. The hyperspectral detection algorithm investigated for lesion detection in MR brain images is the well-known subpixel target detection algorithm called Constrained Energy Minimization (CEM). In order to demonstrate the effectiveness of the proposed CEM in lesion detection, synthetic images provided by BrainWeb are used for experiments.
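
    A hedged sketch of the two ingredients the abstract names: a simple nonlinear band expansion (products and ratios of T1/T2/PD) followed by the closed-form CEM filter w = R^{-1}d / (d^T R^{-1}d). The expansion set, lesion signature, and background statistics are invented for the toy example.

```python
import numpy as np

def expand_bands(X):
    # X: (n_pixels, 3) "T1, T2, PD" values -> add products and ratios.
    t1, t2, pd = X.T
    eps = 1e-6
    return np.column_stack([t1, t2, pd,
                            t1 * t2, t1 * pd, t2 * pd,         # correlations
                            t1 / (t2 + eps), t2 / (pd + eps),  # band ratios
                            t1 / (pd + eps)])

def cem(X, d):
    # X: (n_pixels, n_bands); d: (n_bands,) desired target signature.
    R = X.T @ X / len(X)                 # sample autocorrelation matrix
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)            # CEM filter: w = R^-1 d / d^T R^-1 d
    return X @ w                         # detector output per pixel

rng = np.random.default_rng(8)
bg = rng.uniform(0.2, 1.0, (5000, 3))            # background "tissue" pixels
lesion = np.array([0.9, 0.3, 0.7])               # assumed lesion signature
X = np.vstack([bg, lesion + 0.01 * rng.standard_normal((20, 3))])
Xe = expand_bands(X)
score = cem(Xe, expand_bands(lesion[None, :])[0])
print(score[-20:].mean(), score[:-20].mean())    # lesion pixels score near 1
```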

  6. Through-Wall Multiple Targets Vital Signs Tracking Based on VMD Algorithm

    PubMed Central

    Yan, Jiaming; Hong, Hong; Zhao, Heng; Li, Yusheng; Gu, Chen; Zhu, Xiaohua

    2016-01-01

    Targets located at the same distance are easily neglected in most through-wall multiple-target detection applications that use a single-input single-output (SISO) ultra-wideband (UWB) radar system. In this paper, a novel multiple-target vital signs tracking algorithm for through-wall detection using SISO UWB radar is proposed. Taking advantage of the high-resolution decomposition of the Variational Mode Decomposition (VMD) based algorithm, the respiration signals of different targets can be decomposed into different sub-signals, and the time-varying respiration signals can then be tracked accurately even when human targets are located at the same distance. Intensive evaluation has been conducted to show the effectiveness of our scheme with a 0.15 m thick concrete brick wall. Constant, piecewise-constant and time-varying vital signs could be separated and tracked successfully with the proposed VMD based algorithm for two targets, and even up to three targets. For multiple-target vital signs tracking issues such as urban search and rescue missions, our algorithm has superior capability in most detection applications. PMID:27537880

  7. Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM

    SciTech Connect

    Lin, Jian; Hamidouche, Khaled; Zheng, Jie; Lu, Xiaoyi; Vishnu, Abhinav; Panda, Dhabaleswar

    2015-08-05

    Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and in practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in a large-scale environment with an InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation, based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand), shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varied numbers of cores show that our design can maintain good scalability.
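
    The paper's one-sided MPI+OpenSHMEM overlap is beyond a short example, but the sketch below shows the baseline pattern it accelerates: a pure-MPI distributed k-NN query in which each rank scores its local shard and the candidates are merged globally (mpi4py assumed; data and labels are random placeholders, so the prediction itself is meaningless).

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
rng = np.random.default_rng(rank)

k = 5
train = rng.standard_normal((1000, 8))        # this rank's training shard
labels = rng.integers(0, 3, 1000)             # placeholder class labels
query = np.zeros(8) if rank == 0 else None
query = comm.bcast(query, root=0)             # every rank gets the query

d = np.linalg.norm(train - query, axis=1)     # local distances to the query
idx = np.argsort(d)[:k]                       # local top-k candidates
cand = list(zip(d[idx], labels[idx]))
allc = comm.allgather(cand)                   # exchange candidate lists
merged = sorted(c for part in allc for c in part)[:k]  # global top-k
if rank == 0:
    votes = np.bincount([int(lab) for _, lab in merged])
    print("predicted class:", votes.argmax())  # majority vote among top-k
```

    Run with, e.g., `mpirun -n 4 python knn_mpi.py`; the hybrid designs in the paper replace the collective exchange with one-sided accesses to overlap it with the local distance computation.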

  8. Toward a practical ultrasound waveform tomography algorithm for improving breast imaging

    NASA Astrophysics Data System (ADS)

    Li, Cuiping; Sandhu, Gursharan S.; Roy, Olivier; Duric, Neb; Allada, Veerendra; Schmidt, Steven

    2014-03-01

    Ultrasound tomography is an emerging modality for breast imaging. However, most current ultrasonic tomography imaging algorithms, historically hindered by the limited memory and processor speed of computers, are based on ray theory and assume a homogeneous background, which is inaccurate for complex heterogeneous regions. Therefore, wave theory, which accounts for diffraction effects, must be used in ultrasonic imaging algorithms to properly handle the heterogeneous nature of breast tissue and accurately image small lesions. However, application of waveform tomography to medical imaging has been limited by extreme computational cost and convergence issues. By taking advantage of the computational architecture of Graphics Processing Units (GPUs), the intensive processing burden of waveform tomography can be greatly alleviated. In this study, using breast imaging methods, we implement a frequency-domain waveform tomography algorithm on GPUs with the goal of producing high-accuracy and high-resolution breast images on clinically relevant time scales. We present simulation results and assess the resolution and accuracy of our waveform tomography algorithms based on the simulation data.

  9. jClustering, an Open Framework for the Development of 4D Clustering Algorithms

    PubMed Central

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J.

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913

  10. jClustering, an open framework for the development of 4D clustering algorithms.

    PubMed

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary.

  11. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    PubMed Central

    Mora-Pascual, Jerónimo M.; García-García, Alberto; Martínez-González, Pablo

    2016-01-01

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the point that it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases have a high computational cost, which rules out certain applications. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a large positive impact on the performance of the algorithm. The proposal consists in using low-computational-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics, due to their simpler formulation. The experiments carried out have validated the performance, robustness, and quality of the proposal. Different experimental cases and configurations were set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results show that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results. PMID:27768714
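
    The abstract does not include the authors' code; a minimal Python/NumPy sketch of a brute-force matching phase under the three metrics illustrates why the Manhattan and Chebyshev variants are cheaper: neither needs products or square roots per coordinate pair.

      import numpy as np

      def match(source, target, metric="euclidean"):
          # Matching phase of ICP: for each source point, the index of the
          # nearest target point under the chosen metric.
          diff = source[:, None, :] - target[None, :, :]   # (n_src, n_tgt, dim)
          if metric == "euclidean":
              d = np.sqrt(np.sum(diff ** 2, axis=2))
          elif metric == "manhattan":
              d = np.sum(np.abs(diff), axis=2)             # sum of coordinate gaps
          else:  # chebyshev
              d = np.max(np.abs(diff), axis=2)             # largest coordinate gap
          return np.argmin(d, axis=1)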

  12. A procedure and program to calculate shuttle mask advantage

    NASA Astrophysics Data System (ADS)

    Balasinski, A.; Cetin, J.; Kahng, A.; Xu, X.

    2006-10-01

    A well-known recipe for reducing the mask cost component in product development is to place non-redundant elements of layout databases related to multiple products on one reticle plate [1,2]. Such reticles are known as multi-product, multi-layer, or, in general, multi-IP masks. The composition of the mask set should minimize not only the layout placement cost, but also the cost of the manufacturing process, design flow setup, and product design and introduction to market. An important factor is the quality check, which should be expeditious and enable thorough visual verification to avoid costly modifications once the data is transferred to the mask shop. In this work, in order to enable the layer placement and quality check procedure, we proposed an algorithm in which mask layers are first lined up according to price and field tone [3]. Then, depending on the product die size, expected fab throughput, and scribeline requirements, the subsequent product layers are placed on masks of different grades. The actual reduction of this concept to practice allowed us to understand the tradeoffs between the automation of layer placement and setup-related constraints. For example, the limited options for the number of layers per plate, dictated by the die size and other design feedback, made us consider layer pairing based not only on the final price of the mask set, but also on the cost of mask design and fab-friendliness. We showed that it may be advantageous to introduce manual layer pairing to ensure that, e.g., all interconnect layers are placed on the same plate, allowing for easy and simultaneous design fixes. Another enhancement was to allow some flexibility in the mixing and matching of layers, such that non-critical ones requiring a low mask grade would be placed in a less restrictive way, to reduce the count of orphan layers. In summary, we created a program to automatically propose and visualize shuttle mask architecture for design verification, with

  13. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms that are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt, and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  14. Study on Underwater Image Denoising Algorithm Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Jian, Sun; Wen, Wang

    2017-02-01

    This paper analyzes the application of MATLAB to underwater image processing. The transmission characteristics of the underwater laser light signal and the kinds of underwater noise are described, and common noise suppression algorithms (the Wiener filter, median filter, and average filter) are presented. The advantages and disadvantages of each algorithm with respect to image sharpness and edge preservation are then compared. A hybrid filter algorithm based on the wavelet transform is proposed, which can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are given, comparing their denoising performance.
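
    The hybrid filter's exact construction is not given in the abstract; the sketch below shows a generic wavelet shrinkage denoiser (assuming the PyWavelets package and a grayscale image), which is the wavelet-transform stage such a hybrid would build on.

      import numpy as np
      import pywt  # PyWavelets, assumed available

      def wavelet_denoise(gray, wavelet="db4", level=2):
          # Decompose, soft-threshold the detail bands, reconstruct.
          coeffs = pywt.wavedec2(gray, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate
          thr = sigma * np.sqrt(2.0 * np.log(gray.size))       # universal threshold
          den = [coeffs[0]] + [
              tuple(pywt.threshold(c, thr, mode="soft") for c in band)
              for band in coeffs[1:]
          ]
          return pywt.waverec2(den, wavelet)

    For a color image, the same shrinkage would be applied per channel (or on luminance only); a hybrid scheme would combine this with the spatial filters compared above.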

  15. Versatility of the CFR algorithm for limited angle reconstruction

    SciTech Connect

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-04-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited-angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, or radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.
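
    The CFR algorithm's details are not reproduced in the abstract; the following Gerchberg-Papoulis-style iteration is a minimal sketch in the same spirit: keep the measured Fourier samples, enforce image-domain constraints, and repeat.

      import numpy as np

      def constrained_fourier_recon(F_meas, mask, support, n_iter=100):
          # F_meas  : measured 2-D Fourier samples (zeros where unmeasured)
          # mask    : boolean, True where Fourier data was actually measured
          # support : boolean, True inside the known object support
          img = np.zeros(F_meas.shape)
          for _ in range(n_iter):
              F = np.fft.fft2(img)
              F[mask] = F_meas[mask]                   # restore measured frequencies
              img = np.real(np.fft.ifft2(F))
              img = np.clip(img * support, 0.0, None)  # image-domain constraints
          return img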

  16. A Comparative Study of Protein Sequence Clustering Algorithms

    NASA Astrophysics Data System (ADS)

    Eldin, A. Sharaf; Abdelgaber, S.; Soliman, T.; Kassim, S.; Abdo, A.

    In this paper, we survey four clustering techniques and discuss their advantages and drawbacks. A review of eight different protein sequence clustering algorithms is presented, along with a comparison of the algorithms on the basis of several factors.

  17. NASA Team 2 Sea Ice Concentration Algorithm Retrieval Uncertainty

    NASA Technical Reports Server (NTRS)

    Brucker, Ludovic; Cavalieri, Donald J.; Markus, Thorsten; Ivanoff, Alvaro

    2014-01-01

    Satellite microwave radiometers are widely used to estimate sea ice cover properties (concentration, extent, and area) through the use of sea ice concentration (IC) algorithms. Few algorithms provide associated IC uncertainty estimates. Algorithm uncertainty estimates are needed to accurately assess global and regional trends in IC (and thus extent and area), and to improve sea ice predictions on seasonal to interannual timescales using data assimilation approaches. This paper presents a method to provide relative IC uncertainty estimates using the enhanced NASA Team (NT2) IC algorithm. The proposed approach takes advantage of the NT2 calculations and relies solely on the brightness temperatures (TBs) used as input. NT2 IC and its associated relative uncertainty are obtained for both the Northern and Southern Hemispheres using Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) TBs. NT2 IC relative uncertainties estimated on a footprint-by-footprint, swath-by-swath basis were averaged daily over each 12.5-km grid cell of the polar stereographic grid. For both hemispheres and throughout the year, the NT2 relative uncertainty is less than 5%. In the Southern Hemisphere, it is low in the interior ice pack, and it increases in the marginal ice zone up to 5%. In the Northern Hemisphere, areas with high uncertainties are also found in the high-IC area of the Central Arctic. Retrieval uncertainties are greater in areas corresponding to NT2 ice types associated with deep snow and new ice. Seasonal variations in uncertainty show larger values in summer as a result of melt conditions and greater atmospheric contributions. Our analysis also includes an evaluation of the NT2 algorithm sensitivity to AMSR-E sensor noise. There is a 60% probability that the IC does not change (to within the computed retrieval precision of 1%) due to sensor noise, and the cumulated probability shows that there is a 90% chance that the IC varies by less than
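
    The sensitivity evaluation is described only at a high level; as a hedged illustration of propagating sensor noise through a TB-driven retrieval, a brute-force Monte Carlo version could look like the sketch below, where `retrieve_ic` is a hypothetical stand-in for the NT2 retrieval (not reproduced here).

      import numpy as np

      def ic_noise_spread(tb, retrieve_ic, noise_std=0.5, n_trials=500, seed=0):
          # Perturb the input brightness temperatures (kelvin) with Gaussian
          # sensor noise; report the mean and spread of the retrieved IC.
          rng = np.random.default_rng(seed)
          ics = np.array([retrieve_ic(tb + rng.normal(0.0, noise_std, tb.shape))
                          for _ in range(n_trials)])
          return ics.mean(axis=0), ics.std(axis=0)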

  18. Taking a History of Childhood Trauma in Psychotherapy

    PubMed Central

    SAPORTA, JOSÉ A.; GANS, JEROME S.

    1995-01-01

    The authors examine the process of taking an initial history of childhood abuse and trauma in psychodynamic psychotherapy. In exploring the advantages, complexities, and potential complications of this practice, they hope to heighten the sensitivities of clinicians taking trauma histories. Emphasis on the need to be active in eliciting important historical material is balanced with discussion of concepts that can help therapists avoid interpersonal dynamics that reenact and perpetuate the traumas the therapy seeks to treat. Ensuring optimal psychotherapeutic treatment for patients who have experienced childhood trauma requires attention to the following concepts: a safe holding environment, destabilization, compliance, the repetition compulsion, and projective identification. PMID:22700250

  19. Advantage of resonant power conversion in aerospace applications

    NASA Technical Reports Server (NTRS)

    Hansen, I. G.

    1983-01-01

    An ultrasonic, sinusoidal aerospace power distribution system is shown to have many advantages over other candidate power systems. These advantages include light weight, ease of fault clearing, versatility in handling many loads including motors, and the capability of production within the limits of present technology. References are cited that demonstrate the state of resonant converter technology and support these conclusions.

  20. Polysemy Advantage with Abstract but Not Concrete Words

    ERIC Educational Resources Information Center

    Jager, Bernadet; Cleland, Alexandra A.

    2016-01-01

    It is a robust finding that ambiguous words are recognized faster than unambiguous words. More recent studies (e.g., Rodd et al. in "J Mem Lang" 46:245-266, 2002) now indicate that this "ambiguity advantage" may in reality be a "polysemy advantage": caused by related senses (polysemy) rather than unrelated meanings…

  1. Information Technology, Core Competencies, and Sustained Competitive Advantage.

    ERIC Educational Resources Information Center

    Byrd, Terry Anthony

    2001-01-01

    Presents a model that depicts a possible connection between competitive advantage and information technology. Focuses on flexibility of the information technology infrastructure as an enabler of core competencies, especially mass customization and time-to-market, that have a relationship to sustained competitive advantage. (Contains 82…

  2. Referee bias contributes to home advantage in English Premiership football.

    PubMed

    Boyko, Ryan H; Boyko, Adam R; Boyko, Mark G

    2007-09-01

    Officiating bias is thought to contribute to home advantage. Recent research has shown that sports with subjective officiating tend to experience greater home advantage and that referees' decisions can be influenced by crowd noise, but little work has been done to examine whether individual referees vary in their home bias or whether biased decisions contribute to overall home advantage. We develop an ordinal regression model to determine whether various measures of home advantage are affected by the official for the match and by crowd size while controlling for team ability. We examine 5244 English Premier League (EPL) match results involving 50 referees and find that home bias differs between referees. Individual referees give significantly different levels of home advantage, measured as goal differential between the home and away teams, although the significance of this result depends on one referee with a particularly high home advantage (an outlier). Referees vary significantly and robustly in their yellow card and penalty differentials even excluding the outlier. These results confirm that referees are responsible for some of the observed home advantage in the EPL and suggest that home advantage is dependent on the subjective decisions of referees that vary between individuals. We hypothesize that individual referees respond differently to factors such as crowd noise and suggest further research looking at referees' psychological and behavioural responses to biased crowds.

  3. Comparative Advantage, Relative Wages, and the Accumulation of Human Capital.

    ERIC Educational Resources Information Center

    Teulings, Coen N.

    2005-01-01

    I apply Ricardo's principle of comparative advantage to a theory of factor substitutability in a model with a continuum of worker and job types. Highly skilled workers have a comparative advantage in complex jobs. The model satisfies the distance-dependent elasticity of substitution (DIDES) characteristic: substitutability between types declines…

  4. A Parallel EM Algorithm for Model-Based Clustering Applied to the Exploration of Large Spatio-Temporal Data

    SciTech Connect

    Chen, Wei-Chen; Ostrouchov, George; Pugmire, Dave; Prabhat; Wehner, Michael

    2013-01-01

    We develop a parallel EM algorithm for multivariate Gaussian mixture models and use it to perform model-based clustering of a large climate data set. Three variants of the EM algorithm are reformulated in parallel and a new variant that is faster is presented. All are implemented using the single program, multiple data (SPMD) programming model, which is able to take advantage of the combined collective memory of large distributed computer architectures to process larger data sets. Displays of the estimated mixture model rather than the data allow us to explore multivariate relationships in a way that scales to arbitrary size data. We study the performance of our methodology on simulated data and apply our methodology to a high resolution climate dataset produced by the community atmosphere model (CAM5). This article has supplementary material online.
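
    A minimal serial EM iteration for a Gaussian mixture (Python/NumPy, not the authors' SPMD code) shows the E-step and M-step sums involved; in an SPMD formulation each rank computes these sums over its own block of rows and the partial sums are combined with a reduction.

      import numpy as np

      def em_gmm(X, k, n_iter=100, seed=0):
          # Plain EM for a k-component Gaussian mixture on rows of X.
          rng = np.random.default_rng(seed)
          n, d = X.shape
          mu = X[rng.choice(n, k, replace=False)]            # initial means
          cov = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
          w = np.full(k, 1.0 / k)
          for _ in range(n_iter):
              # E-step: responsibilities r[i, j] ~ P(component j | x_i)
              r = np.empty((n, k))
              for j in range(k):
                  diff = X - mu[j]
                  quad = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov[j]), diff)
                  norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov[j]))
                  r[:, j] = w[j] * np.exp(-0.5 * quad) / norm
              r /= r.sum(axis=1, keepdims=True)
              # M-step: weighted sums over rows (these are what get reduced)
              nk = r.sum(axis=0)
              w = nk / n
              mu = (r.T @ X) / nk[:, None]
              for j in range(k):
                  diff = X - mu[j]
                  cov[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
          return w, mu, cov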

  5. Algorithms and applications of aberration correction and American standard-based digital evaluation in surface defects evaluating system

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-11-01

    The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning, and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting, and bilinear interpolation techniques, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to digitally evaluate surface defects against the American military standard MIL-PRF-13830B using the surface defect information obtained from the SDES, an American-standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which makes it well suited to high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.
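
    As a hedged sketch of the pieces the abstract names (polynomial fitting against a standard lattice pattern, then bilinear interpolation), the following Python/NumPy/SciPy fragment fits a quadratic coordinate mapping and resamples the image; it is illustrative, not the SDES implementation.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def fit_distortion(distorted_pts, ideal_pts):
          # Least-squares fit of a quadratic polynomial that maps ideal
          # lattice coordinates (x, y) to the observed distorted coordinates.
          x, y = ideal_pts[:, 0], ideal_pts[:, 1]
          A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
          cx, *_ = np.linalg.lstsq(A, distorted_pts[:, 0], rcond=None)
          cy, *_ = np.linalg.lstsq(A, distorted_pts[:, 1], rcond=None)
          return cx, cy

      def undistort(img, cx, cy):
          # Resample the distorted image onto the ideal grid; order=1 gives
          # bilinear interpolation.
          yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
          A = np.stack([np.ones_like(xx), xx, yy, xx * yy, xx ** 2, yy ** 2])
          src_x = np.tensordot(cx, A, axes=1)
          src_y = np.tensordot(cy, A, axes=1)
          return map_coordinates(img, [src_y, src_x], order=1)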

  6. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  7. Beyond the University of Racial Diversity: Some Remarks on Race, Diversity, (Dis)Advantage and Affirmative Action

    ERIC Educational Resources Information Center

    Waghid, Y.

    2010-01-01

    The compelling essays in this issue of the journal take on the often contentious and complex issue of racial affirmative action. I do not wish to repeat the arguments authors offer either in defence or against student admissions to a university on the grounds of race, (dis)advantage, class, gender, and so on. Rather, I wish to respond to a…

  8. The Optimization of Trained and Untrained Image Classification Algorithms for Use on Large Spatial Datasets

    NASA Technical Reports Server (NTRS)

    Kocurek, Michael J.

    2005-01-01

    The HARVIST project seeks to automatically provide an accurate, interactive interface to predict crop yield over the entire United States. In order to accomplish this goal, large images must be quickly and automatically classified by crop type. Current trained and untrained classification algorithms, while accurate, are highly inefficient when operating on large datasets. This project sought to develop new variants of two standard trained and untrained classification algorithms that are optimized to take advantage of the spatial nature of image data. The first algorithm, harvist-cluster, utilizes divide-and-conquer techniques to precluster an image in the hopes of increasing overall clustering speed. The second algorithm, harvistSVM, utilizes support vector machines (SVMs), a type of trained classifier. It seeks to increase classification speed by applying a "meta-SVM" to a quick (but inaccurate) SVM to approximate a slower, yet more accurate, SVM. Speedups were achieved by tuning the algorithm to quickly identify when the quick SVM was incorrect, and then reclassifying low-confidence pixels as necessary. Comparing the classification speeds of both algorithms to known baselines showed a slight speedup for large values of k (the number of clusters) for harvist-cluster, and a significant speedup for harvistSVM. Future work aims to automate the parameter tuning process required for harvistSVM, and further improve classification accuracy and speed. Additionally, this research will move documents created in Canvas into ArcGIS. The launch of the Mars Reconnaissance Orbiter (MRO) will provide a wealth of image data such as global maps of Martian weather and high resolution global images of Mars. The ability to store this new data in a georeferenced format will support future Mars missions by providing data for landing site selection and the search for water on Mars.
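
    The quick-then-accurate cascade behind harvistSVM can be sketched with scikit-learn (assumed available); `cascade_fit`, `cascade_predict`, and the margin threshold are illustrative names and choices, binary labels are assumed, and this is not the HARVIST code itself.

      import numpy as np
      from sklearn.svm import LinearSVC, SVC

      def cascade_fit(X, y):
          # A fast (linear) SVM plus a slower, more accurate (RBF) SVM.
          fast = LinearSVC().fit(X, y)
          slow = SVC(kernel="rbf").fit(X, y)
          return fast, slow

      def cascade_predict(fast, slow, X, margin=0.5):
          # Keep the fast SVM's label where its decision margin is confident;
          # reclassify only the low-confidence pixels with the slow SVM.
          pred = fast.predict(X)
          low_conf = np.abs(fast.decision_function(X)) < margin  # binary case
          if low_conf.any():
              pred[low_conf] = slow.predict(X[low_conf])
          return pred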

  9. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems.
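
    A toy version of the LP-guided fixing step (without the column generation, the releasing, or the tabu list) can be written with SciPy's linprog; the column and row structures here are illustrative, not the paper's model.

      import numpy as np
      from scipy.optimize import linprog

      def lp_guided_fixing(cost, cover, threshold=0.9):
          # Solve the LP relaxation of a set-covering model (columns = routes,
          # rows = customers, cover[i, j] = 1 if route j visits customer i)
          # and return columns whose LP value is near 1 as fixing candidates.
          n_rows, n_cols = cover.shape
          res = linprog(cost,
                        A_ub=-cover, b_ub=-np.ones(n_rows),  # cover each row >= 1
                        bounds=[(0, 1)] * n_cols, method="highs")
          return np.flatnonzero(res.x > threshold)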

  10. Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations.

    PubMed

    Chen, Fangyue; Chen, Guanrong Ron; He, Guolong; Xu, Xiubin; He, Qinbin

    2009-10-01

    Universal perceptron (UP), a generalization of Rosenblatt's perceptron, is considered in this paper; it is capable of implementing all Boolean functions (BFs). BFs fall into three classes: 1) the linearly separable Boolean function (LSBF) class, 2) the parity Boolean function (PBF) class, and 3) the non-LSBF and non-PBF class. To implement these functions, UP takes different kinds of simple topological structures, each containing at most one hidden layer with the smallest possible number of hidden neurons. Inspired by the concept of DNA sequences in biological systems, a novel learning algorithm named DNA-like learning is developed, which is able to quickly train a network with any prescribed BF. The focus is on performing LSBFs and PBFs with a single-layer perceptron (SLP) using the new algorithm. Two criteria for LSBFs and PBFs are proposed, respectively, and a new measure for a BF, named the nonlinearly separable degree (NLSD), is introduced. In the sense of this measure, the PBF is the most complex one. The new algorithm has many advantages including, in particular, fast running speed, good robustness, and no need to consider convergence. For example, the number of iterations and computations in implementing the basic 2-bit logic operations such as AND, OR, and XOR using the new algorithm is far smaller than that needed by existing algorithms such as the error-correction (EC) and backpropagation (BP) algorithms. Moreover, the synaptic weights and threshold values derived from UP can be used directly in designing the template of cellular neural networks (CNNs), which have been considered as a new spatial-temporal sensory computing paradigm.
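
    The DNA-like learning rule itself is not given in the abstract, so the sketch below uses the classic Rosenblatt update on an LSBF (2-bit AND) purely to make the setting concrete; it is not the paper's algorithm, and it would fail to converge on the PBF (XOR), which is what motivates the hidden-layer structures above.

      import numpy as np

      def train_perceptron(X, y, lr=1.0, epochs=100):
          # Classic Rosenblatt rule; the threshold is folded in as a bias input.
          Xb = np.hstack([X, np.ones((len(X), 1))])
          w = np.zeros(Xb.shape[1])
          for _ in range(epochs):
              mistakes = 0
              for xi, t in zip(Xb, y):
                  out = 1 if w @ xi > 0 else 0
                  if out != t:
                      w += lr * (t - out) * xi   # error-driven update
                      mistakes += 1
              if mistakes == 0:                  # separating weights found
                  break
          return w

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
      print(train_perceptron(X, np.array([0, 0, 0, 1])))  # 2-bit AND, an LSBF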

  11. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out only the ROI that contains the cornea and pupil. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea
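
    A much-simplified sketch of the row-slice idea: threshold each horizontal slice, take the first and last dark pixel as the chord endpoints of the pupil, and average the chord midpoints to estimate the centre. The fixed threshold and the omission of the corneal-glint handling are simplifications, not the flight algorithm.

      import numpy as np

      def pupil_from_slices(img, dark_thresh=50):
          # img: 2-D grayscale frame (or ROI); the pupil is assumed dark.
          mids, rows = [], []
          for r, row in enumerate(img):
              dark = np.flatnonzero(row < dark_thresh)
              if dark.size >= 2:
                  mids.append(0.5 * (dark[0] + dark[-1]))  # chord midpoint (x)
                  rows.append(r)                           # valid slice (y)
          if not mids:
              return None
          return float(np.mean(mids)), float(np.mean(rows))  # (x, y) centre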

  12. Who Takes the Second Chance? Implementing Educational Equality in Adult Basic Education in a Swedish Context.

    ERIC Educational Resources Information Center

    Fransson, Anders; Larsson, Staffan

    This paper presents an overview of adult basic education in Sweden, focusing on what types of adults take advantage of basic educational opportunities and when they do so. It includes two case studies, one of auxiliary nurses and one of factory workers, that describe what motivates these employees to take literacy education or further education…

  13. Algorithms for Disconnected Diagrams in Lattice QCD

    SciTech Connect

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Konstantinos; Yoon, Boram; Gupta, Rajan; Syritsyn, Sergey

    2016-11-01

    Computing disconnected diagrams in Lattice QCD (operator insertion in a quark loop) entails the computationally demanding problem of taking the trace of the all-to-all quark propagator. We first outline the basic algorithm used to compute a quark loop, as well as improvements to this method. Then, we motivate and introduce an algorithm based on the synergy between hierarchical probing and singular value deflation. We present results for the chiral condensate using a 2+1-flavor clover ensemble and compare estimates of the nucleon charges with the basic algorithm.
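
    The basic stochastic approach to such traces is noise-vector estimation; the sketch below shows a plain Hutchinson estimator (hierarchical probing and deflation are refinements of exactly this), with `apply_A` a black-box matrix-vector product, e.g. a solver applying the inverse Dirac operator.

      import numpy as np

      def hutchinson_trace(apply_A, n, n_samples=200, seed=0):
          # tr(A) ~ mean of z^T A z over random +/-1 (Rademacher) vectors z.
          rng = np.random.default_rng(seed)
          total = 0.0
          for _ in range(n_samples):
              z = rng.choice([-1.0, 1.0], size=n)
              total += z @ apply_A(z)
          return total / n_samples

      # toy usage: apply_A = lambda z: A @ z for an explicit matrix A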

  14. Algorithm for obtaining angular fluxes in a cell for the LUCKY and LUCKY_C multiprocessor programs

    SciTech Connect

    Moryakov, A. V.

    2012-12-15

    Basic formulas for solving the transport equation in a cell are presented. The algorithm has been implemented in the LUCKY and LUCKY_C programs. The advantages of the proposed algorithm are described.

  15. Algorithm for obtaining angular fluxes in a cell for the LUCKY and LUCKY_C multiprocessor programs

    NASA Astrophysics Data System (ADS)

    Moryakov, A. V.

    2012-12-01

    Basic formulas for solving the transport equation in a cell are presented. The algorithm has been implemented in the LUCKY and LUCKY_C programs. The advantages of the proposed algorithm are described.

  16. Professionalism: Teachers Taking the Reins

    ERIC Educational Resources Information Center

    Helterbran, Valeri R.

    2008-01-01

    It is essential that teachers take a proactive look at their profession and themselves to strengthen areas of professionalism over which they have control. In this article, the author suggests strategies that include collaborative planning, reflectivity, growth in the profession, and the examination of certain personal characteristics.

  17. Taking Stands for Social Justice

    ERIC Educational Resources Information Center

    Lindley, Lorinda; Rios, Francisco

    2004-01-01

    In this paper the authors describe efforts to help students take a stand for social justice in the College of Education at one predominantly White institution in the western Rocky Mountain region. The authors outline the theoretical frameworks that inform this work and the context of our work. The focus is on specific pedagogical strategies used…

  18. Four Takes on Tough Times

    ERIC Educational Resources Information Center

    Rebell, Michael A.; Odden, Allan; Rolle, Anthony; Guthrie, James W.

    2012-01-01

    Educational Leadership talks with four experts in the fields of education policy and finance about how schools can weather the current financial crisis. Michael A. Rebell focuses on the recession and students' rights; Allan Odden suggests five steps schools can take to improve in tough times; Anthony Rolle describes the tension between equity and…

  19. Experiencing discrimination increases risk taking.

    PubMed

    Jamieson, Jeremy P; Koslov, Katrina; Nock, Matthew K; Mendes, Wendy Berry

    2013-02-01

    Prior research has revealed racial disparities in health outcomes and health-compromising behaviors, such as smoking and drug abuse. It has been suggested that discrimination contributes to such disparities, but the mechanisms through which this might occur are not well understood. In the research reported here, we examined whether the experience of discrimination affects acute physiological stress responses and increases risk-taking behavior. Black and White participants each received rejecting feedback from partners who were either of their own race (in-group rejection) or of a different race (out-group rejection, which could be interpreted as discrimination). Physiological (cardiovascular and neuroendocrine) changes, cognition (memory and attentional bias), affect, and risk-taking behavior were assessed. Significant participant race × partner race interactions were observed. Cross-race rejection, compared with same-race rejection, was associated with lower levels of cortisol, increased cardiac output, decreased vascular resistance, greater anger, increased attentional bias, and more risk-taking behavior. These data suggest that perceived discrimination is associated with distinct profiles of physiological reactivity, affect, cognitive processing, and risk taking, implicating direct and indirect pathways to health disparities.

  20. College Presidents Take on 21

    ERIC Educational Resources Information Center

    Fain, Paul

    2008-01-01

    College presidents have long gotten flak for refusing to take controversial stands on national issues. A large group of presidents opened an emotionally charged national debate on the drinking age. In doing so, they triggered an avalanche of news-media coverage and a fierce backlash. While the criticism may sting, the prime-time fracas may help…

  1. Take Charge of Your Career

    ERIC Educational Resources Information Center

    Brown, Marshall A.

    2013-01-01

    Today's work world is full of uncertainty. Every day, people hear about another organization going out of business, downsizing, or rightsizing. To prepare for these uncertain times, one must take charge of their own career. This article presents some tips for surviving in today's world of work: (1) Be self-managing; (2) Know what you…

  2. Taking control of anorexia together.

    PubMed

    Cole, Elaine

    2015-02-27

    Many people with anorexia receive inadequate treatment for what is a debilitating, relentless and life-threatening illness. In Lincolnshire an innovative nurse-led day programme is helping people stay out of hospital and take back control from the illness. Peer support is crucial to the programme's success.

  3. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
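
    The conventional SA loop described above, as a minimal Python sketch (the recursive-branching variant is not reproduced here):

      import numpy as np

      def simulated_annealing(objective, x0, step=0.1, n_iter=20000,
                              t0=1.0, cooling=0.9995, seed=0):
          # Always accept improvements; accept worse moves with probability
          # exp(-delta/T); lower T each iteration (the annealing schedule).
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          fx, T = objective(x), t0
          best, fbest = x.copy(), fx
          for _ in range(n_iter):
              cand = x + rng.normal(0.0, step, x.shape)  # random nearby configuration
              fc = objective(cand)
              if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                  x, fx = cand, fc
                  if fx < fbest:
                      best, fbest = x.copy(), fx
              T *= cooling
          return best, fbest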

  4. The Relationship between Puberty and Risk Taking in the Real World and in the Laboratory.

    PubMed

    Collado-Rodriguez, A; MacPherson, L; Kurdziel, G; Rosenberg, L A; Lejuez, C W

    2014-10-01

    Adolescence is marked by the emergence and escalation of risk taking. Puberty has been long-implicated as constituting vulnerability for risk behavior during this developmental period. Sole reliance on self-reports of risk taking however poses limitations to understanding this complex relationship. There exist potential advantages of complementing self-reports by using the BART-Y laboratory task, a well-validated measure of adolescent risk taking. Toward this end, we examined the association between self-reported puberty and both self-reported and BART-Y risk taking in 231 adolescents. Results showed that pubertal status predicted risk taking using both methodologies above and beyond relevant demographic characteristics. Advantages of a multimodal assessment toward understanding the effects of puberty in adolescent risk taking are discussed and future research directions offered.

  5. The Relationship between Puberty and Risk Taking in the Real World and in the Laboratory

    PubMed Central

    Collado-Rodriguez, A.; MacPherson, L.; Kurdziel, G.; Rosenberg, L. A.; Lejuez, C.W.

    2014-01-01

    Adolescence is marked by the emergence and escalation of risk taking. Puberty has been long-implicated as constituting vulnerability for risk behavior during this developmental period. Sole reliance on self-reports of risk taking however poses limitations to understanding this complex relationship. There exist potential advantages of complementing self-reports by using the BART-Y laboratory task, a well-validated measure of adolescent risk taking. Toward this end, we examined the association between self-reported puberty and both self-reported and BART-Y risk taking in 231 adolescents. Results showed that pubertal status predicted risk taking using both methodologies above and beyond relevant demographic characteristics. Advantages of a multimodal assessment toward understanding the effects of puberty in adolescent risk taking are discussed and future research directions offered. PMID:24999291

  6. Detection Algorithms of the Seismic Alert System of Mexico (SASMEX)

    NASA Astrophysics Data System (ADS)

    Cuellar Martinez, A.; Espinosa Aranda, J.; Ramos Perez, S.; Ibarrola Alvarez, G.; Zavala Guerrero, M.; Sasmex

    2013-05-01

    Rapid and reliable detection of an earthquake increases the warning time available to the population. Detection algorithms at the sensing field stations (FS) of an earthquake early warning system must therefore have a high rate of correct detection; this condition allows the numerical processes that derive the parameters for alert activation to work from trustworthy inputs. During the evolution and continuous service of the Mexican Seismic Alert System (SASMEX) over more than 23 years of operation, various methodologies have been used in the detection process to obtain the largest warning time when an earthquake occurs and is alerted. In addition to the characteristics of the acceleration signal observed at the sensing field stations, site conditions that reduce urban noise are necessary. Such conditions may hold during the first years of operation, but subsequent urban growth near an FS introduces urban noise, which must be tolerated until the station can be relocated; the algorithm design should therefore be robust enough to reduce possible errors and false detections. This work presents results on detection algorithms used in Mexico for earthquake early warning, considering recent events and the different warning times obtained depending on whether the P or S phase of the earthquake is detected at the station. Some methodologies are reviewed and described in detail, along with the main features implemented in the Seismic Alert System of Mexico City (SAS), in continuous operation since 1991, and the Seismic Alert System of Oaxaca City (SASO); today both comprise SASMEX.
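
    SASMEX's detection criteria are not spelled out in the abstract; as a generic illustration of P-phase triggering on an acceleration trace, a classic STA/LTA detector looks like the sketch below (the window lengths and trigger ratio are illustrative choices, not SASMEX parameters).

      import numpy as np

      def sta_lta_trigger(accel, fs, sta_win=1.0, lta_win=30.0, ratio=4.0):
          # Trigger when the short-term average of the squared signal exceeds
          # `ratio` times the long-term average; returns the first triggering
          # sample index, or None.
          e = accel.astype(float) ** 2                 # signal energy
          ns, nl = int(sta_win * fs), int(lta_win * fs)
          csum = np.concatenate([[0.0], np.cumsum(e)])
          for i in range(nl, len(e) - ns):
              lta = (csum[i] - csum[i - nl]) / nl      # trailing long window
              sta = (csum[i + ns] - csum[i]) / ns      # leading short window
              if lta > 0 and sta / lta > ratio:
                  return i
          return None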

  7. Seeking the competitive advantage: it's more than cost reduction.

    PubMed

    South, S F

    1999-01-01

    Most organizations focus considerable time and energy on reducing operating costs as a way to attain marketplace advantage. This strategy was not inappropriate in the past. To be competitive in the future, however, focus must be placed on other issues, not just cost reduction. The near future will be dominated by service industries, knowledge management, and virtual partnerships, with production optimization and flexibility, innovation, and strong partnerships defining those organizations that attain competitive advantage. Competitive advantage will reside in clarifying the vision and strategic plan, reviewing and redesigning work processes to optimize resources and value-added work, and creating change-ready environments and empowered workforces.

  8. A cognitive architecture account of the visual local advantage phenomenon in autism spectrum disorders.

    PubMed

    van der Helm, Peter A

    2016-09-01

    Ideally, a cognitive architecture is a neurally plausible model that unifies mental representations and cognitive processes. Here, I apply such a model to re-evaluate the local advantage phenomenon in autism spectrum disorders (ASD), that is, the better than typical performance on visual tasks in which local stimulus features are to be discerned. The model takes (a) perceptual organization as a predominantly stimulus-driven process yielding hierarchical stimulus organizations, and (b) attention as predominantly scrutinizing the hierarchical structure of established percepts in a task-driven top-down fashion. This accounts for a dominance of wholes over parts and implies that perceived global structures mask incompatible local features. The model also substantiates that impairments in neuronal synchronization - as found in ASD - reduce the emergence of global structures and, thereby, their masking effect on incompatible features. I argue that this explains the local advantage phenomenon and I discuss implications and suggestions for future research.

  9. 78 FR 69878 - First Advantage Corporation, Including On-Site Leased Workers From Tapfin, Staffworks, Aerotek...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... Employment and Training Administration First Advantage Corporation, Including On-Site Leased Workers From... Staffing, St. Petersburg, Florida; First Advantage Corporation, Charlotte, North Carolina, First Advantage Corporation, Bolingbrook, Illinois; First Advantage Corporation, Dallas, Texas; First Advantage...

  10. Cognitive advantage in bilingualism: an example of publication bias?

    PubMed

    de Bruin, Angela; Treccani, Barbara; Della Sala, Sergio

    2015-01-01

    It is a widely held belief that bilinguals have an advantage over monolinguals in executive-control tasks, but is this what all studies actually demonstrate? The idea of a bilingual advantage may result from a publication bias favoring studies with positive results over studies with null or negative effects. To test this hypothesis, we looked at conference abstracts from 1999 to 2012 on the topic of bilingualism and executive control. We then determined which of the studies they reported were subsequently published. Studies with results fully supporting the bilingual-advantage theory were most likely to be published, followed by studies with mixed results. Studies challenging the bilingual advantage were published the least. This discrepancy was not due to differences in sample size, tests used, or statistical power. A test for funnel-plot asymmetry provided further evidence for the existence of a publication bias.

  11. Back to basics: a bilingual advantage in infant visual habituation.

    PubMed

    Singh, Leher; Fu, Charlene S L; Rahman, Aishah A; Hameed, Waseem B; Sanmugam, Shamini; Agarwal, Pratibha; Jiang, Binyan; Chong, Yap Seng; Meaney, Michael J; Rifkin-Graboi, Anne

    2015-01-01

    Comparisons of cognitive processing in monolinguals and bilinguals have revealed a bilingual advantage in inhibitory control. Recent studies have demonstrated advantages associated with exposure to two languages in infancy. However, the domain specificity and scope of the infant bilingual advantage in infancy remains unclear. In the present study, 114 monolingual and bilingual infants were compared in a very basic task of information processing-visual habituation-at 6 months of age. Bilingual infants demonstrated greater efficiency in stimulus encoding as well as in improved recognition memory for familiar stimuli as compared to monolinguals. Findings reveal a generalized cognitive advantage in bilingual infants that is broad in scope, early to emerge, and not specific to language.

  12. Review of ADHD Pharmacotherapies: Advantages, Disadvantages, and Clinical Pearls

    ERIC Educational Resources Information Center

    Daughton, Joan M.; Kratochvil, Christopher J.

    2009-01-01

    The advantages, disadvantages, as well as helpful hints on when to use several drug therapies against attention deficit hyperactivity disorder are discussed. The drugs discussed are methylphenidate, atomoxetine, clonidine, and bupropion.

  13. Advantages of thin silicon solar cells for use in space

    NASA Technical Reports Server (NTRS)

    Denman, O. S.

    1978-01-01

    A system definition study on the Solar Power Satellite System showed that a thin (50 micrometer) silicon solar cell has significant advantages. The advantages include significantly lower performance degradation in a radiation environment and a high power-to-mass ratio. The advantages of such cells for employment in space are investigated further. Basic questions concerning the operation of solar cells are considered, along with aspects of radiation-induced performance degradation. A question that arose in this connection is how thin a silicon solar cell must be to achieve resistance to radiation degradation while still offering good initial performance. It was found that single-crystal silicon solar cells could be as thin as 50 micrometers and still develop high conversion efficiencies. It is concluded that the use of 50 micrometer silicon solar cells in space-based photovoltaic power systems would be advantageous.

  14. A hard-threshold based sparse inverse imaging algorithm for optical scanning holography reconstruction

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Qu, Xiaochao; Zhang, Xing; Poon, Ting-Chung; Kim, Taegeun; Kim, You Seok; Liang, Jimin

    2012-03-01

    Optical imaging takes advantage of coherent optics and has advanced visualization in biological applications. Based on temporal coherence, optical coherence tomography can deliver three-dimensional optical images with superior resolution, but its axial and lateral scanning is a time-consuming process. Optical scanning holography (OSH) is a spatial coherence technique that integrates a three-dimensional object into a two-dimensional hologram through a two-dimensional optical scanning raster. The advantages of high lateral resolution and fast image acquisition give it great potential for three-dimensional optical imaging, but the prerequisite is an accurate and practical reconstruction algorithm. A conventional method was first adopted to reconstruct sectional images and obtained fine results, but some drawbacks restricted its practicality. An optimization method based on the l2 norm obtained more accurate results than the conventional methods, but the intrinsic smoothing of the l2 norm blurs the reconstruction results. In this paper, a hard-threshold-based sparse inverse imaging algorithm is proposed to improve sectional image reconstruction. The proposed method is characterized by hard-threshold-based iteration with a shrinkage-threshold strategy, which involves only lightweight vector operations and matrix-vector multiplication. The performance of the proposed method has been validated by a real experiment, which demonstrated a great improvement in reconstruction accuracy at appropriate computational cost.
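
    The paper's exact shrinkage strategy is not given in the abstract; generic iterative hard thresholding for a linear model y ≈ Ax captures the kind of update described (a gradient step followed by a hard threshold), using only matrix-vector products.

      import numpy as np

      def iterative_hard_threshold(A, y, k, step=None, n_iter=200):
          # Recover a k-sparse x from y ~ A @ x: gradient step on the
          # least-squares misfit, then keep only the k largest entries.
          if step is None:
              step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x + step * A.T @ (y - A @ x)           # gradient step
              keep = np.argpartition(np.abs(x), -k)[-k:]  # k largest magnitudes
              mask = np.zeros_like(x)
              mask[keep] = 1.0
              x *= mask                                   # hard threshold
          return x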

  15. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data: the explicit variable algorithm and the parametric variable algorithm. The former, which requires a smaller amount of calculation, would be used where large gradients are not encountered. The latter would be used if the data being smoothed were double-valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
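
    A minimal example of a least-squares cubic smoothing spline (using SciPy's UnivariateSpline as a stand-in, not the report's two algorithms) shows the derivative continuity that motivates the spline fit.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      # k=3 gives cubic pieces, so the first and second derivatives are
      # continuous; the smoothing factor s trades fidelity for smoothness
      # (s=0 would interpolate the data exactly).
      x = np.linspace(0.0, 10.0, 200)
      y = np.sin(x) + np.random.default_rng(1).normal(0.0, 0.2, x.size)
      spl = UnivariateSpline(x, y, k=3, s=x.size * 0.04)
      y_smooth = spl(x)
      slope, curvature = spl.derivative(1)(x), spl.derivative(2)(x)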

  16. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  17. HOW MUCH FAVORABLE SELECTION IS LEFT IN MEDICARE ADVANTAGE?

    PubMed

    Newhouse, Joseph P; Price, Mary; McWilliams, J Michael; Hsu, John; McGuire, Thomas G

    2015-01-01

    The health economics literature contains two models of selection, one with endogenous plan characteristics to attract good risks and one with fixed plan characteristics; neither model contains a regulator. Medicare Advantage, a principal example of selection in the literature, is, however, subject to anti-selection regulations. Because selection causes economic inefficiency and because the historically favorable selection into Medicare Advantage plans increased government cost, the effectiveness of the anti-selection regulations is an important policy question, especially since the Medicare Advantage program has grown to comprise 30 percent of Medicare beneficiaries. Moreover, similar anti-selection regulations are being used in health insurance exchanges for those under 65. Contrary to earlier work, we show that the strengthened anti-selection regulations that Medicare introduced starting in 2004 markedly reduced government overpayment attributable to favorable selection in Medicare Advantage. At least some of the remaining selection is plausibly related to fixed plan characteristics of Traditional Medicare versus Medicare Advantage rather than changed selection strategies by Medicare Advantage plans.

  18. HOW MUCH FAVORABLE SELECTION IS LEFT IN MEDICARE ADVANTAGE?

    PubMed Central

    PRICE, MARY; MCWILLIAMS, J. MICHAEL; HSU, JOHN; MCGUIRE, THOMAS G.

    2015-01-01

    The health economics literature contains two models of selection, one with endogenous plan characteristics to attract good risks and one with fixed plan characteristics; neither model contains a regulator. Medicare Advantage, a principal example of selection in the literature, is, however, subject to anti-selection regulations. Because selection causes economic inefficiency and because the historically favorable selection into Medicare Advantage plans increased government cost, the effectiveness of the anti-selection regulations is an important policy question, especially since the Medicare Advantage program has grown to comprise 30 percent of Medicare beneficiaries. Moreover, similar anti-selection regulations are being used in health insurance exchanges for those under 65. Contrary to earlier work, we show that the strengthened anti-selection regulations that Medicare introduced starting in 2004 markedly reduced government overpayment attributable to favorable selection in Medicare Advantage. At least some of the remaining selection is plausibly related to fixed plan characteristics of Traditional Medicare versus Medicare Advantage rather than changed selection strategies by Medicare Advantage plans. PMID:26389127

  19. Contrasting behavior between dispersive seismic velocity and attenuation: advantages in subsoil characterization.

    PubMed

    Zhubayev, Alimzhan; Ghose, Ranajit

    2012-02-01

    A careful look into the pertinent models of poroelasticity reveals that in water-saturated sediments or soils, the seismic (P and S wave) velocity dispersion and attenuation in the low field-seismic frequency band (20-200 Hz) have a contrasting behavior in the porosity-permeability domain. Taking advantage of this nearly orthogonal behavior, a new approach has been proposed, which leads to unique estimates of both porosity and permeability simultaneously. Through realistic numerical tests, the effect of maximum frequency content in data and the integration of P and S waves on the accuracy and robustness of the estimates are demonstrated.

  20. Sleep Deprivation and Advice Taking

    PubMed Central

    Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie

    2016-01-01

    Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by – more or less qualified – advisors. We asked if sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep deprived participants show increased advice taking. An interaction of condition and competency of advisor and further post-hoc analyses revealed that this effect was more pronounced for the medium competency advisor compared to the high competency advisor. Furthermore, sleep deprived participants benefited more from an advisor of high competency in terms of stronger improvement in judgmental accuracy than well-rested participants. PMID:27109507

  1. Improving the medical 'take sheet'.

    PubMed

    Reed, Oliver

    2014-01-01

    The GMC states that "Trainees in hospital posts must have well organised handover arrangements, ensuring continuity of patient care [1]". In the Belfast City Hospital there can be multiple new medical admissions throughout the day. These can be via the GP Unit, transfers for tertiary care, and transfers due to bed shortages in other hospitals. Over the course of 24 hours, up to four medical SHOs and three registrars may fill in the take sheet. Due to the variety of admission routes and the number of doctors looking after the medical take, information can be lost during handover between SHOs. In the current format there is little room to write key and relevant information on the medical take sheet about new and transferring patients. I felt that this handover sheet could be improved. An initial questionnaire demonstrated that 47% found the old proforma easy to use and 28.2% felt that it allowed them to identify sick patients. 100% of SHOs and Registrars surveyed felt that it could be improved from its current form. From feedback from my colleagues I created a new template and trialled it in the hospital. A repeat questionnaire demonstrated that 92.3% of responders felt the new format had improved medical handover and 92.6% felt that it allowed safe handover most of the time or always. The success of this new proforma resulted in it being implemented on a permanent basis for new medical admissions and transfers to the hospital.

  2. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
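
    To make the object of the parallelization concrete, below is a minimal serial Thomas algorithm (forward elimination followed by backward substitution) for a single tridiagonal line; the pipelined variants discussed above distribute many such lines, and the two sweeps, across processors. The function name and array layout are illustrative.

    ```python
    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system with the serial Thomas algorithm.

        a: sub-diagonal (length n, a[0] unused)
        b: main diagonal (length n)
        c: super-diagonal (length n, c[-1] unused)
        d: right-hand side (length n)
        """
        n = len(d)
        cp = np.empty(n)
        dp = np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        # Forward elimination sweep.
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        # Backward substitution sweep.
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x
    ```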

  3. A new algorithm for the detection of seismic quiescence: introduction of the RTM algorithm, a modified RTL algorithm

    NASA Astrophysics Data System (ADS)

    Nagao, Toshiyasu; Takeuchi, Akihiro; Nakamura, Kenji

    2011-03-01

    There are a number of reports of seismic quiescence phenomena before large earthquakes. The RTL algorithm is a weighted-coefficient statistical method that takes into account the magnitude, occurrence time, and location of earthquakes when investigating seismicity pattern changes before large earthquakes. However, we consider the original RTL algorithm to be overweighted on distance. In this paper, we introduce a modified RTL algorithm, called the RTM algorithm, and apply it to three large earthquakes in Japan as test cases: the Hyogo-ken Nanbu earthquake in 1995 (MJMA 7.3), the Noto Hanto earthquake in 2007 (MJMA 6.9), and the Iwate-Miyagi Nairiku earthquake in 2008 (MJMA 7.2). Because this algorithm uses several parameters to characterize the weighted coefficients, multiple parameter sets have to be prepared for the tests. The results show that the RTM algorithm is more sensitive than the RTL algorithm to seismic quiescence phenomena. This paper represents the first step in a series of future analyses of seismic quiescence phenomena using the RTM algorithm. At this point, all surveyed parameters are selected empirically. In future analyses we will have to consider the physical meaning of the "best fit" parameters, such as their relation to ACFS, among others.
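
    For orientation, the sketch below shows the general shape of an RTL-style statistic: three weighted sums over prior earthquakes, one per factor (distance R, time T, rupture size L), whose product drops when seismicity becomes quiet. The weighting functions, the rupture-length scaling and all parameter values here are illustrative assumptions rather than the paper's exact formulation; the RTM modification specifically reduces the weight placed on distance.

    ```python
    import numpy as np

    def rtl_statistic(r, t_ev, mag, t_eval, r0=50.0, t0=1.0, p=1.0):
        """Schematic RTL-style quiescence statistic at one evaluation point.

        r:     distances (km) of prior earthquakes to the point (all > 0)
        t_ev:  their occurrence times (years, all <= t_eval)
        mag:   their magnitudes
        """
        l = 10.0 ** (0.5 * mag - 1.8)                # rupture-length proxy (km)
        R = np.sum(np.exp(-r / r0))                  # distance factor
        T = np.sum(np.exp(-(t_eval - t_ev) / t0))    # time factor
        L = np.sum((l / r) ** p)                     # rupture-size factor
        return R * T * L                             # low values suggest quiescence
    ```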

  4. Take the monkey and run

    PubMed Central

    Phillips, Kimberley A.; Hambright, M. Karen; Hewes, Kelly; Schilder, Brian M.; Ross, Corinna N.; Tardif, Suzette D.

    2015-01-01

    Background The common marmoset (Callithrix jacchus) is a small, New World primate that is used extensively in biomedical and behavioral research. This short-lived primate, with its small body size, ease of handling, and docile temperament, has emerged as a valuable model for aging and neurodegenerative research. A growing body of research has indicated exercise, aerobic exercise especially, imparts beneficial effects to normal aging. Understanding the mechanisms underlying these positive effects of exercise, and the degree to which exercise has neurotherapeutic effects, is an important research focus. Thus, developing techniques to engage marmosets in aerobic exercise would have great advantages. New method Here we describe the marmoset exercise ball (MEB) paradigm: a safe (for both experimenter and subjects), novel and effective means to engage marmosets in aerobic exercise. We trained young adult male marmosets to run on treadmills for 30 min a day, 3 days a week. Results Our training procedures allowed us to engage male marmosets in this aerobic exercise within 4 weeks, and subjects maintained this frequency of exercise for 3 months. Comparison with existing methods To our knowledge, this is the first described method to engage marmosets in aerobic exercise. A major advantage of this exercise paradigm is that while it was technically forced exercise, it did not appear to induce stress in the marmosets. Conclusions These techniques should be useful to researchers wishing to address physiological responses of exercise in a marmoset model. PMID:25835199

  5. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users within various computer programmes. Nowadays there are many algorithms for solving the string searching problem, and the main criterion of their effectiveness is searching speed: the larger the shift of the pattern relative to the string in case of a character mismatch, the higher the algorithm's running speed. This article offers a combined algorithm developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms rest on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching and Boyer-Moore upon backward pattern matching. By uniting the two, the combined algorithm obtains the larger of the two shifts whenever pattern and string characters mismatch. The article provides an example illustrating the results of the Boyer-Moore, Knuth-Morris-Pratt and combined algorithms, and shows the advantage of the latter in solving the string searching problem.
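
    As an illustration of the combining idea only (a hedged sketch of the general approach, not the authors' exact algorithm), the code below scans the pattern forward over a window and, on a mismatch, advances by the larger of a KMP-style shift (realigning the longest border of the matched prefix) and a bad-character shift (realigning the rightmost occurrence of the offending text character); both shifts are individually safe, so their maximum is too.

    ```python
    def kmp_failure(pattern):
        """KMP prefix function: fail[j] is the length of the longest proper
        prefix of pattern[:j+1] that is also its suffix."""
        fail = [0] * len(pattern)
        k = 0
        for j in range(1, len(pattern)):
            while k and pattern[j] != pattern[k]:
                k = fail[k - 1]
            if pattern[j] == pattern[k]:
                k += 1
            fail[j] = k
        return fail

    def combined_search(text, pattern):
        """Window search taking, at each mismatch, the larger of a KMP-style
        shift and a bad-character shift."""
        n, m = len(text), len(pattern)
        if m == 0:
            return []
        fail = kmp_failure(pattern)
        matches, s = [], 0
        while s <= n - m:
            j = 0
            while j < m and text[s + j] == pattern[j]:
                j += 1
            if j == m:
                matches.append(s)
                s += m - fail[m - 1]           # period shift after a full match
                continue
            kmp_shift = j - fail[j - 1] if j > 0 else 1
            c = text[s + j]                    # mismatched text character
            k = pattern.rfind(c, 0, j)         # rightmost occurrence before j
            bc_shift = j - k if k >= 0 else j + 1
            s += max(kmp_shift, bc_shift)
        return matches

    print(combined_search("abcadabcab", "abcab"))  # -> [5]
    ```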

  6. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    NASA Astrophysics Data System (ADS)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    GIS applications advance the ability to analyze remote sensing imagery, and the classification and extraction of remote sensing images is a primary information source for GIS in LUCC (land use and cover change) applications. Increasing classification accuracy is therefore an important topic in remote sensing research, and adding features and developing new classification methods are the two main routes to it. The ant colony algorithm, a swarm intelligence method from the nature-inspired computation field, is a relatively new approach to remote sensing image classification, so studying its applicability with richer feature sets and exploring its advantages and performance are of considerable significance. The study area is the outskirts of Fuzhou, Fujian Province, an area with complicated land use. A multi-source database was built integrating spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, slope, aspect) and textural information (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, correlation). Classification rules based on the different characteristics were discovered from training samples with the ant colony algorithm, and classification tests were performed using these rules. The accuracies were compared against the traditional maximum likelihood method, the C4.5 algorithm and rough-set classification. The study showed that classification based on the ant colony algorithm achieved the highest accuracy. In addition, near-term land use and cover changes in Fuzhou were studied and mapped using remote sensing technology based on the ant colony algorithm.

  7. Underwater Sensor Network Redeployment Algorithm Based on Wolf Search

    PubMed Central

    Jiang, Peng; Feng, Yang; Wu, Feng

    2016-01-01

    This study addresses the optimization of node redeployment coverage in underwater wireless sensor networks. Given that nodes easily become invalid in a harsh environment and that underwater wireless sensor networks are large in scale, an underwater sensor network redeployment algorithm based on wolf search was developed. The study applies the wolf search algorithm, combined with crowding-degree control, to the deployment of underwater wireless sensor networks. The proposed algorithm uses nodes to ensure coverage of the events and avoids premature convergence of the nodes, achieving good coverage. In addition, because obstacles exist in the underwater environment, nodes are prevented from becoming invalid by imitating the predator-avoidance mechanism, and the energy consumption of the network is thus reduced. Comparative analysis shows that the algorithm is simple and effective for wireless sensor network deployment. Compared with the optimized artificial fish swarm algorithm, the proposed algorithm exhibits advantages in network coverage, energy conservation, and obstacle avoidance. PMID:27775659

  8. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by the evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers, or self-adaptive methods, should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly, so as to reliably achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspects from this perspective. This paper is a first step towards that objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  9. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.
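
    As a toy illustration of the optimization strategy only (not ALFA's actual code or line catalogue), the sketch below uses a small genetic algorithm to fit the amplitude, centre and width of a single synthetic Gaussian emission line; all numbers and operator choices are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic spectrum: one Gaussian emission line on a flat continuum.
    wave = np.linspace(6540.0, 6590.0, 400)
    spec = 1.0 + 8.0 * np.exp(-0.5 * ((wave - 6562.8) / 1.5) ** 2)
    spec += rng.normal(0.0, 0.3, wave.size)

    def model(p):
        amp, cen, sig = p
        return 1.0 + amp * np.exp(-0.5 * ((wave - cen) / sig) ** 2)

    def fitness(p):
        return -np.sum((spec - model(p)) ** 2)       # negative chi-square

    lo = np.array([0.1, 6550.0, 0.3])                # parameter bounds
    hi = np.array([20.0, 6575.0, 5.0])
    pop = rng.uniform(lo, hi, size=(100, 3))

    for generation in range(200):
        scores = np.array([fitness(p) for p in pop])
        new = [pop[scores.argmax()].copy()]          # elitism: keep the best
        while len(new) < len(pop):
            # Tournament selection, blend crossover, Gaussian mutation.
            parents = []
            for _ in range(2):
                i, j = rng.integers(0, len(pop), 2)
                parents.append(pop[i] if scores[i] > scores[j] else pop[j])
            w = rng.uniform(0.0, 1.0, 3)
            child = w * parents[0] + (1.0 - w) * parents[1]
            child += rng.normal(0.0, 0.02, 3) * (hi - lo)
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)

    best = max(pop, key=fitness)
    print("amplitude, centre, sigma:", np.round(best, 3))
    ```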

  10. Cluster Algorithm Special Purpose Processor

    NASA Astrophysics Data System (ADS)

    Talapov, A. L.; Shchur, L. N.; Andreichenko, V. B.; Dotsenko, Vl. S.

    We describe a Special Purpose Processor, realizing the Wolff algorithm in hardware, which is fast enough to study the critical behaviour of 2D Ising-like systems containing more than one million spins. The processor has been checked to produce correct results for the pure Ising model and for the Ising model with random bonds. Its data also agree with Nishimori's exact results for the spin glass. Only minor changes to the SPP design are necessary to increase the dimensionality and to take into account more complex systems such as Potts models.
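
    For reference, the single-cluster update that the processor implements in hardware is short when written in software; the sketch below runs it for the 2D Ising model with periodic boundaries (lattice size, temperature and seed are arbitrary choices for the demo).

    ```python
    import numpy as np

    def wolff_step(spins, beta, rng):
        """One Wolff single-cluster update for the 2D Ising model (J = 1):
        grow a cluster of aligned spins with bond probability
        1 - exp(-2*beta), then flip the whole cluster."""
        L = spins.shape[0]
        p_add = 1.0 - np.exp(-2.0 * beta)
        i, j = rng.integers(0, L, 2)
        seed = spins[i, j]
        cluster = {(i, j)}
        stack = [(i, j)]
        while stack:
            x, y = stack.pop()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = (x + dx) % L, (y + dy) % L   # periodic boundaries
                if (nx, ny) not in cluster and spins[nx, ny] == seed \
                        and rng.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in cluster:
            spins[x, y] = -seed

    rng = np.random.default_rng(0)
    L, beta = 64, 0.44                 # near the 2D critical coupling ~0.4407
    spins = rng.choice(np.array([-1, 1]), size=(L, L))
    for _ in range(500):
        wolff_step(spins, beta, rng)
    print("magnetization per spin:", spins.mean())
    ```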

  11. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the recorded pattern with respect to a static background pattern. This intensity variation is treated as a small perturbation of the mean intensity. Based on a perturbative method, we obtain an equation from which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can easily be compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm processes the speckle patterns directly, without additional post-processing (such as THSP and co-occurrence matrices), making it a viable real-time method. We also show how the algorithm compares with several others when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.

  12. Photoacoustic imaging taking into account thermodynamic attenuation

    NASA Astrophysics Data System (ADS)

    Acosta, Sebastián; Montalto, Carlos

    2016-11-01

    In this paper we consider a mathematical model for photoacoustic imaging which takes into account attenuation due to thermodynamic dissipation. The propagation of acoustic (compressional) waves is governed by a scalar wave equation coupled to the heat equation for the excess temperature. We seek to recover the initial acoustic profile from knowledge of acoustic measurements at the boundary. We recognize that this inverse problem is a special case of boundary observability for a thermoelastic system. This leads to the use of control/observability tools to prove the unique and stable recovery of the initial acoustic profile in the weak thermoelastic coupling regime. This approach is constructive, yielding a solvable equation for the unknown acoustic profile. Moreover, the solution to this reconstruction equation can be approximated numerically using the conjugate gradient method. If certain geometrical conditions for the wave speed are satisfied, this approach is well-suited for variable media and for measurements on a subset of the boundary. We also present a numerical implementation of the proposed reconstruction algorithm.

  13. "Don't take diabetes for granted."

    MedlinePlus

    Feature: Diabetes Stories, Fall 2009 Issue: "… regularly, and take your medicines on time. Don't take diabetes for granted!"

  14. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five uses of algorithms in clinical education and patient care are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis in terms of clinical usefulness. Three objections to clinical algorithms are answered, including the objection that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  15. Algorithm For Hypersonic Flow In Chemical Equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.

  16. Places and faces: Geographic environment influences the ingroup memory advantage.

    PubMed

    Rule, Nicholas O; Garrett, James V; Ambady, Nalini

    2010-03-01

    The preferential allocation of attention and memory to the ingroup (the ingroup memory advantage) is one of the most replicated effects in the psychological literature. But little is known about what factors may influence such effects. Here the authors investigated a potential influence: category salience as determined by the perceiver's geographic environment. They did so by studying the ingroup memory advantage in perceptually ambiguous groups for whom perceptual cues do not make group membership immediately salient. Individuals in an environment in which a particular group membership was salient (Mormon and non-Mormon men and women living in Salt Lake City, Utah) showed better memory for faces belonging to their ingroup in an incidental encoding paradigm. Majority group participants in an environment where this group membership was not salient (non-Mormon men and women in the northeastern United States), however, showed no ingroup memory advantage whereas minority group participants (Mormons) in the same environment did. But in the same environment, when differences in group membership were made accessible via an unobtrusive priming task, non-Mormons did show an ingroup memory advantage and Mormons' memory for ingroup members increased. Environmental context cues therefore influence the ingroup memory advantage for categories that are not intrinsically salient.

  17. Declining longevity advantage and low birthweight in Okinawa.

    PubMed

    Hokama, Tomiko; Binns, Colin

    2008-10-01

    The prefecture of Okinawa is known for the longevity of its population; for 30 years it had the longest life expectancy of all prefectures in Japan. However, this advantage was lost in 2000, and male longevity is now ranked 26th among the 47 prefectures of Japan. The aim of this study was to explore whether the recent decline in Okinawa's life expectancy advantage is due to the cohort effect of low birthweight infants becoming middle- and older-aged Okinawans. This is an observational study using existing demographic and health statistics. Data on life expectancy, mortality and low birthweight rates were obtained from the Okinawan Prefectural Department of Health and Welfare and the Japanese Ministry of Health, Labour and Welfare. In the year 2000 the longevity advantage of Okinawan males over the Japanese mainland was lost, and the relative life expectancy of females declined. The mortality ratio for heart disease has reversed, showing a cohort effect, with younger Okinawans having higher death rates than those living in the rest of Japan. The low birthweight rate for Okinawa is 20% greater than that of mainland Japan. As the post-war cohort of low birthweight infants reaches middle age, the longevity advantage of Okinawans has been lost. The loss of Okinawa's longevity advantage over the rest of Japan may be due to the increase in non-communicable disease in the post-war cohort, which experienced a higher low birthweight rate.

  18. Advantages of natural gas as a vehicular fuel

    SciTech Connect

    Remick, R.J.; Blazek, C.F.

    1992-01-01

    The advantages of natural gas vehicles can be broken down into four major categories: social/political, technical, economic, and environmental. The social/political advantages of natural gas as a vehicular fuel lie predominantly in its ability to substitute for petroleum fuels. This frees petroleum reserves for other uses or, in areas with dwindling reserves, it reduces the dependence on imported oil and oil products. The technical advantages of natural gas include its high octane rating, which permits higher compression ratios to be used with spark ignition engines. The economic advantages, although variable from one geographical region to another, are derived from the price differential between natural gas and refined oil products. In approximate terms, the average price of a megajoule (MJ) of natural gas is about 60% that of an MJ of refined petroleum products. Finally, there are significant environmental advantages associated with the use of natural gas as a vehicle fuel. Emissions from dedicated natural gas vehicles equipped with catalytic convertors have met the 1996 clean air standards set by the US EPA for both heavy-duty trucks and passenger cars. With further research, they also will be able to meet the 1997 ultra-low emission vehicle (ULEV) California standards set by the South Coast Air Quality Management District.

  20. Polyploidy in haloarchaea: advantages for growth and survival

    PubMed Central

    Zerulla, Karolin; Soppa, Jörg

    2014-01-01

    The investigated haloarchaeal species, Halobacterium salinarum, Haloferax mediterranei, and H. volcanii, have all been shown to be polyploid. They contain several replicons that have independent copy number regulation, and most have a higher copy number during exponential growth phase than in stationary phase. The possible evolutionary advantages of polyploidy for haloarchaea, most of which have experimental support for at least one species, are discussed. These advantages include a low mutation rate and high resistance toward X-ray irradiation and desiccation, which depend on homologous recombination. For H. volcanii, it has been shown that gene conversion operates in the absence of selection, which leads to the equalization of genome copies. On the other hand, selective forces might lead to heterozygous cells, which have been verified in the laboratory. Additional advantages of polyploidy are survival over geological times in halite deposits as well as at extreme conditions on earth and at simulated Mars conditions. Recently, it was found that H. volcanii uses genomic DNA as genetic material and as a storage polymer for phosphate. In the absence of phosphate, H. volcanii dramatically decreases its genome copy number, thereby enabling cell multiplication, but diminishing the genetic advantages of polyploidy. Stable storage of phosphate is proposed as an alternative driving force for the emergence of DNA in early evolution. Several additional potential advantages of polyploidy are discussed that have not been addressed experimentally for haloarchaea. An outlook summarizes selected current trends and possible future developments. PMID:24982654

  1. Competitive Advantage and its Sources in an Evolving Market

    NASA Astrophysics Data System (ADS)

    Zaridis, Apostolos D.

    2009-08-01

    In a continuously changing and evolving market, such as food manufacturing, the firm's principal and lasting objective, namely the maximization of its wealth and hence its continued profitability, appears achievable through the acquisition and maintenance of a long-term competitive advantage, one that renders the firm unique, or a leading force, in a relentless competition that extends ever further across a globalized market. Various definitions of, and perspectives on, competitive advantage have been developed, concerning the way a firm that acquires it can come to lead the market in which it operates. The result of a sustainable competitive advantage is above-average performance. The resource-based view literature proposes an abundance of resources and competences as sources of competitive advantage, and new ones are continually added on the basis of empirical studies. In any case, there appears to be a hierarchy among sources of competitive advantage with respect to their sustainability.

  2. A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT under a circular source trajectory

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Hagiwara, Akira; Nilsen, Roy A.; Thibault, Jean-Baptiste; Drapkin, Evgeny

    2005-08-01

    The proposed three-dimensional weighted CB-FBP algorithm can be implemented in either the native CB geometry or the so-called cone-parallel geometry. Taking the cone-parallel geometry as an example, the experimental evaluation shows that, up to a moderate cone angle corresponding to a detector dimension of 64 × 0.625 mm, the CB artefacts can be substantially suppressed by the proposed algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering and data manipulation efficiency, are maintained.

  3. Frequency domain simultaneous algebraic reconstruction techniques: algorithm and convergence

    NASA Astrophysics Data System (ADS)

    Wang, Jiong; Zheng, Yibin

    2005-03-01

    We propose a simultaneous algebraic reconstruction technique (SART) in the frequency domain for linear imaging problems. This algorithm has the advantage of efficiently incorporating pixel correlations in an a priori image model. First it is shown that the generalized SART algorithm converges to the weighted minimum norm solution of a weighted least square problem. Then an implementation in the frequency domain is described. The performance of the new algorithm is demonstrated with fan beam computed tomography (CT) examples. Compared to the traditional SART and its major alternative ART, the new algorithm offers superior image quality and potential application to other modalities.
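
    For context, the sketch below shows one standard spatial-domain SART iteration for a linear system Ax = b with a nonnegative system matrix, the scheme whose generalized form the paper moves into the frequency domain with an a priori image model; that frequency-domain part is not attempted here.

    ```python
    import numpy as np

    def sart(A, b, iterations=50, lam=1.0):
        """Classic SART: x <- x + lam * D_c^{-1} A^T D_r^{-1} (b - A x),
        where D_r and D_c hold the row and column sums of the nonnegative
        system matrix A (e.g. ray-pixel intersection lengths in CT)."""
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        row_sums = A.sum(axis=1)
        col_sums = A.sum(axis=0)
        row_sums[row_sums == 0] = 1.0      # guard against empty rows/columns
        col_sums[col_sums == 0] = 1.0
        x = np.zeros(n)
        for _ in range(iterations):
            residual = (b - A @ x) / row_sums
            x += lam * (A.T @ residual) / col_sums
        return x
    ```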

  4. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) is increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector that exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, using spectral and spatial information jointly, three directional background subspaces are created, along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is formed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection results.
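
    For orientation, the sketch below implements a plain, global, spectral-only OSP anomaly score: each pixel is scored by the energy remaining after projection onto the orthogonal complement of a background subspace. 3D-LOSP replaces this with local windows and three directional subspaces; the function and parameter names here are illustrative.

    ```python
    import numpy as np

    def osp_anomaly_scores(cube, n_bg=5):
        """Global OSP anomaly scores for a hyperspectral image.

        cube: array of shape (rows, cols, bands). The background subspace B
        is spanned by the n_bg leading eigenvectors of the sample
        correlation matrix; each pixel x is scored by x^T (I - B B^+) x.
        """
        h, w, d = cube.shape
        X = cube.reshape(-1, d).T.astype(float)    # bands x pixels
        R = X @ X.T / X.shape[1]                   # sample correlation matrix
        _, vecs = np.linalg.eigh(R)                # eigenvalues ascending
        B = vecs[:, -n_bg:]                        # background subspace basis
        P = np.eye(d) - B @ np.linalg.pinv(B)      # orthogonal projector
        scores = np.einsum('ij,ij->j', X, P @ X)   # x^T P x per pixel
        return scores.reshape(h, w)
    ```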

  5. A Prototype Algorithm for Land Surface Temperature Retrieval from Sentinel-3 Mission

    NASA Astrophysics Data System (ADS)

    Sobrino, Jose A.; Jimenez-Munoz, Juan C.; Soria, Guillem; Brockmann, Carsten; Ruescas, Ana; Danne, Olaf; North, Peter; Phillipe, Pierre; Berger, Michel; Merchant, Chris; Ghent, Darren; Remedios, John

    2015-12-01

    In this work we present a prototype algorithm to retrieve Land Surface Temperature (LST) from the OLCI and SLSTR instruments on board the Sentinel-3 platform, developed in the framework of the SEN4LST project. For this purpose, data acquired with the ENVISAT MERIS and AATSR instruments are used as a benchmark. The objective is to improve the LST standard product (level 2) currently derived from the single AATSR instrument by taking advantage of the improved characteristics of the future OLCI and SLSTR instruments. The high spectral resolution of the OLCI instrument and the dual view and thermal bands available on the SLSTR instrument have the potential to improve the characterization of the atmosphere, and therefore the atmospheric correction and cloud mask. Bands in the solar domain available in both instruments allow the retrieval of surface emissivity, a key input to the LST algorithm. Pairs of MERIS/AATSR scenes are processed over different sites and validated with in situ measurements using the LST processor included in the BEAM software. Results showed that the proposed LST algorithm improves on the LST retrievals of the standard level-2 product.

  6. ParAlign: a parallel sequence alignment algorithm for rapid and sensitive database searches.

    PubMed

    Rognes, T

    2001-04-01

    There is a need for faster and more sensitive algorithms for sequence similarity searching in view of the rapidly increasing amounts of genomic sequence data available. Parallel processing capabilities in the form of the single instruction, multiple data (SIMD) technology are now available in common microprocessors and enable a single microprocessor to perform many operations in parallel. The ParAlign algorithm has been specifically designed to take advantage of this technology. The new algorithm initially exploits parallelism to perform a very rapid computation of the exact optimal ungapped alignment score for all diagonals in the alignment matrix. Then, a novel heuristic is employed to compute an approximate score of a gapped alignment by combining the scores of several diagonals. This approximate score is used to select the most interesting database sequences for a subsequent Smith-Waterman alignment, which is also parallelised. The resulting method represents a substantial improvement compared to existing heuristics. The sensitivity and specificity of ParAlign was found to be as good as Smith-Waterman implementations when the same method for computing the statistical significance of the matches was used. In terms of speed, only the significantly less sensitive NCBI BLAST 2 program was found to outperform the new approach. Online searches are available at http://dna.uio.no/search/
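
    The first stage described above, the exact optimal ungapped score per diagonal, reduces to a Kadane-style maximum-segment scan along each diagonal of the score matrix. The scalar sketch below (with a fixed match/mismatch score rather than a substitution matrix) shows the quantity being computed; ParAlign's contribution is evaluating many diagonals at once in SIMD registers.

    ```python
    def best_ungapped_score(query, subject, match=2, mismatch=-3):
        """Best ungapped local alignment score over every diagonal of the
        alignment matrix, via a running (Kadane-style) maximum."""
        n, m = len(query), len(subject)
        best_overall = 0
        for d in range(-(n - 1), m):      # diagonal offset: j - i = d
            i, j = max(0, -d), max(0, d)
            run = best = 0
            while i < n and j < m:
                s = match if query[i] == subject[j] else mismatch
                run = max(0, run + s)     # best segment ending here
                best = max(best, run)
                i += 1
                j += 1
            best_overall = max(best_overall, best)
        return best_overall

    print(best_ungapped_score("ACGTACGT", "TTACGTAA"))  # -> 10
    ```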

  7. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence is introduced that works for any type of kernel and initial condition. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To increase the speed of the algorithm, it was parallelized with the OpenMP standard, along with an implementation that takes advantage of new accelerator technologies; simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.

  8. Public and expert collaborative evaluation model and algorithm for enterprise knowledge

    NASA Astrophysics Data System (ADS)

    Le, Chengyi; Gu, Xinjian; Pan, Kai; Dai, Feng; Qi, Guoning

    2013-08-01

    Knowledge is becoming the most important resource for more and more enterprises, and it grows exponentially, but there has been no effective method to evaluate it convincingly. Based on Web 2.0, this article first builds an enterprise knowledge sharing model. Combining the convenience and low cost of public evaluation with the specialty of peer review, a public and expert collaborative evaluation (PECE) model and algorithm for enterprise knowledge are put forward. By analyzing the interaction between users' domain weights and the scores of knowledge points, the PECE model and algorithm serve to recognize valuable knowledge and domain experts efficiently, and therefore improve the ordering and utilisation of knowledge. This article also studies malicious and casual evaluation by users, and a method is proposed to update users' domain weights. Finally, a knowledge sharing system for a manufacturing enterprise is developed and realised. Users' behaviour in publishing and evaluating knowledge is simulated and then analysed to verify the feasibility of the PECE algorithm based on the system.
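
    As a rough illustration of the collaborative-evaluation idea only (the paper's actual formulas are not reproduced here), the sketch below scores knowledge items by a weight-adjusted mean of user ratings and then down-weights users whose ratings deviate strongly from the consensus, one simple way to damp malicious or casual evaluation.

    ```python
    import numpy as np

    def evaluate_round(ratings, weights):
        """One round of weighted collaborative evaluation (illustrative).

        ratings: (n_users, n_items) array of scores, NaN where a user did
        not rate an item; weights: per-user domain weights.
        Returns item scores and updated, renormalized user weights.
        """
        rated = ~np.isnan(ratings)
        R = np.where(rated, ratings, 0.0)
        W = weights[:, None] * rated
        item_scores = (R * W).sum(axis=0) / W.sum(axis=0)
        # Users far from the weighted consensus lose domain weight.
        deviation = np.nanmean(np.abs(ratings - item_scores), axis=1)
        new_weights = weights / (1.0 + deviation)
        return item_scores, new_weights / new_weights.max()
    ```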

  9. The cost of privatization: extra payments to Medicare Advantage plans.

    PubMed

    Biles, Brian; Nicholas, Lauren Hersch; Cooper, Barbara S

    2004-05-01

    The recently enacted Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) includes a broad set of provisions intended to enlarge the role of private health plans (called Medicare Advantage plans) in Medicare. This issue brief examines the payments that private plans are receiving in 2004 relative to costs in traditional fee-for-service Medicare, using data from the 2004 Medicare Advantage Rate Calculation Data spreadsheet. The authors find that, for 2004, Medicare Advantage payments will average 8.4 percent more than costs in traditional fee-for-service Medicare: $552 for each of the 5 million Medicare enrollees in managed care, for a total of more than $2.75 billion. In some counties, extra payments by Medicare are more than double this amount. Although the stated objective of efforts to increase enrollment in private plans is to lower costs, the policies of MMA regarding private plans explicitly increase Medicare costs in 2004 and through 2013.

  10. Reasoning about other people's beliefs: bilinguals have an advantage.

    PubMed

    Rubio-Fernández, Paula; Glucksberg, Sam

    2012-01-01

    Bilingualism can have widespread cognitive effects. In this article we investigate whether bilingualism might have an effect on adults' abilities to reason about other people's beliefs. In particular, we tested whether bilingual adults might have an advantage over monolingual adults in false-belief reasoning analogous to the advantage that has been observed with bilingual children. Using a traditional false-belief task coupled with an eye-tracking technique, we found that adults in general suffer interference from their own perspective when reasoning about other people's beliefs. However, bilinguals are reliably less susceptible to this egocentric bias than are monolinguals. Moreover, performance on the false-belief task significantly correlated with performance on an executive control task. We argue that bilinguals' early sociolinguistic sensitivity and enhanced executive control may account for their advantage in false-belief reasoning.

  11. A Longitudinal Study of Memory Advantages in Bilinguals

    PubMed Central

    Ljungberg, Jessica K.; Hansson, Patrik; Andrés, Pilar; Josefsson, Maria; Nilsson, Lars-Göran

    2013-01-01

    Typically, studies of cognitive advantages in bilinguals have used executive and inhibitory tasks (e.g. the Simon task) and cross-sectional designs. This study longitudinally investigated bilingual advantages in episodic memory recall and in verbal letter and categorical fluency across the lifespan. Monolingual and bilingual participants (n = 178) aged between 35 and 70 years at baseline were drawn from the Betula Prospective Cohort Study of aging, memory, and health. Results showed that bilinguals outperformed monolinguals at the first testing session and across time, both in episodic memory recall and in letter fluency. No interaction with age was found, indicating that the rate of change across ages was similar for bilinguals and monolinguals. As predicted, and in line with studies applying cross-sectional designs, no advantages associated with bilingualism were found in the categorical fluency task. The results are discussed in the light of successful aging. PMID:24023803

  12. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  13. [An improved N-FINDR endmember extraction algorithm based on manifold learning and spatial information].

    PubMed

    Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo

    2013-09-01

    An improved N-FINDR endmember extraction algorithm combining manifold learning and spatial information is presented under nonlinear mixing assumptions. First, adaptive local tangent space alignment is adopted to seek potential intrinsic low-dimensional structures of the high-dimensional hyperspectral data and to reduce the original data to a low-dimensional space. Second, spatial preprocessing is applied by enhancing each pixel vector in spatially homogeneous areas, according to the continuity of the spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method can increase the precision of endmember extraction by addressing the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperforms geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and the spatial preprocessing N-FINDR method (SPPNFINDR).
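
    For reference, the final volume-maximization step is the classic N-FINDR search sketched below: candidate endmembers are swapped in whenever they enlarge the simplex volume, computed from an augmented determinant. The dimensionality reduction here is ordinary PCA via SVD purely for illustration; the paper replaces it with adaptive local tangent space alignment and adds spatial preprocessing.

    ```python
    import numpy as np

    def simplex_volume(E):
        """Volume measure (up to a constant) of the simplex spanned by the
        p endmembers in the columns of E, with E of shape (p-1, p)."""
        p = E.shape[1]
        M = np.vstack([np.ones(p), E])        # augmented matrix, as in N-FINDR
        return abs(np.linalg.det(M))

    def nfindr(X, p, iterations=3, seed=0):
        """Plain N-FINDR on X (n_pixels x n_bands): PCA to p-1 dimensions,
        then iterative pixel swaps that maximize the simplex volume."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        Y = Xc @ Vt[: p - 1].T                # n_pixels x (p-1)
        idx = np.random.default_rng(seed).choice(len(Y), p, replace=False)
        E = Y[idx].T                          # (p-1) x p endmember matrix
        for _ in range(iterations):
            for k in range(p):
                base = simplex_volume(E)
                for i in range(len(Y)):       # try every pixel in slot k
                    trial = E.copy()
                    trial[:, k] = Y[i]
                    v = simplex_volume(trial)
                    if v > base:
                        E[:, k] = Y[i]
                        idx[k] = i
                        base = v
        return idx                            # pixel indices of the endmembers
    ```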

  14. IDP++: signal and image processing algorithms in C++ version 4.1

    SciTech Connect

    Lehman, S.K.

    1996-11-01

    IDP++ (Image and Data Processing in C++) is a collection of signal and image processing algorithms written in C++. It is a compiled signal processing environment which supports four data types of up to four dimensions. It is developed within Lawrence Livermore National Laboratory's Image and Data Processing group as a partial replacement for View. IDP++ takes advantage of the latest (implemented and actually working) object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is designed for real-time environments where interpreted processing packages are less efficient. IDP++ exists for both SUNs and Silicon Graphics, using their most current compilers.

  15. Quantum communication complexity advantage implies violation of a Bell inequality

    NASA Astrophysics Data System (ADS)

    Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii

    2016-03-01

    We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs.

  16. Quantum communication complexity advantage implies violation of a Bell inequality.

    PubMed

    Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii

    2016-03-22

    We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs.

  17. Minimally Invasive Suturectomy and Postoperative Helmet Therapy : Advantages and Limitations

    PubMed Central

    Chong, Sangjoon; Wang, Kyu-Chang; Phi, Ji Hoon; Lee, Ji Yeoun

    2016-01-01

    Various operative techniques are available for the treatment of craniosynostosis. The patient's age at presentation is one of the most important factors in the determination of the surgical modality. Minimally invasive suturectomy and postoperative helmet therapy may be performed for relatively young infants, whose age is younger than 6 months. It relies upon the potential for rapid brain growth in this age group. Its minimal invasiveness is also advantageous. In this article, we review the advantages and limitations of minimally invasive suturectomy followed by helmet therapy for the treatment of craniosynostosis. PMID:27226853

  18. Spatial Ability Explains the Male Advantage in Approximate Arithmetic

    PubMed Central

    Wei, Wei; Chen, Chuansheng; Zhou, Xinlin

    2016-01-01

    Previous research has shown that females consistently outperform males in exact arithmetic, perhaps due to the former's advantage in language processing. Much less is known about gender differences in approximate arithmetic. Given that approximate arithmetic is closely associated with visuospatial processing, which shows a male advantage, we hypothesized that males would perform better than females in approximate arithmetic. In two experiments (496 children in Experiment 1 and 554 college students in Experiment 2), we found that males showed better performance in approximate arithmetic, which was accounted for by gender differences in spatial ability. PMID:27014124

  19. Competitive Advantage in Intercollegiate Athletics: Role of Intangible Resources.

    PubMed

    Won, Doyeon; Chelladurai, Packianathan

    2016-01-01

    The present research explored the dynamics of competitive advantages in intercollegiate athletics by investigating the contribution of intangible resources (i.e., athletic and academic reputations) on the generation of more tangible resources (i.e., human and financial resources), which in turn influence the athletic performance (i.e., winning record) and academic performance (i.e., graduation rates), and gender equity. The research was based entirely on archival data of 324 NCAA Division I member institutions. The results of the SEM supported the study's basic arguments that tangible resources are the sources of competitive advantages in Division I intercollegiate athletics, and that intangible resources contribute to the generation of tangible resources.

  20. Quantum communication complexity advantage implies violation of a Bell inequality

    PubMed Central

    Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii

    2016-01-01

    We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600

  1. Romanowsky staining in cytopathology: history, advantages and limitations.

    PubMed

    Krafts, K P; Pambuccian, S E

    2011-04-01

    If the entire discipline of diagnostic cytopathology could be distilled into a single theme, it would be the Papanicolaou stain. Yet it was the Romanowsky stain upon which the discipline of cytopathology was founded. Both stains are used today in the cytopathology laboratory, each for a different and complementary purpose. We trace the history of cytopathological stains and discuss the advantages and limitations of Romanowsky-type stains for cytological evaluation. We also provide suggestions for the advantageous use of Romanowsky-type stains in cytopathology.

  2. A new FFT-based algorithm to compute Born radii in the generalized Born theory of biomolecule solvation

    SciTech Connect

    Cai, Wei; Xu, Zhenli; Baumketner, Andrij

    2008-12-20

    In this paper, a new method for calculating effective atomic radii within the generalized Born (GB) model of implicit solvation is proposed, for use in computer simulations of biomolecules. First, a new formulation for the GB radii is developed, in which smooth kernels are used to eliminate the divergence in volume integrals intrinsic in the model. Next, the fast Fourier transform (FFT) algorithm is applied to integrate smoothed functions, taking advantage of the rapid spectral decay provided by the smoothing. The total cost of the proposed algorithm scales as O(N^3 log N + M), where M is the number of atoms comprised in a molecule and N is the number of FFT grid points in one dimension, which depends only on the geometry of the molecule and the spectral decay of the smooth kernel but not on M. To validate our algorithm, numerical tests are performed for three solute models: one spherical object for which exact solutions exist and two protein molecules of differing size. The tests show that our algorithm is able to reach the accuracy of other existing GB implementations, while offering much lower computational cost.

  3. A new FFT-based algorithm to compute Born radii in the generalized Born theory of biomolecule solvation.

    PubMed

    Cai, Wei; Xu, Zhenli; Baumketner, Andrij

    2008-12-20

    In this paper, a new method for calculating effective atomic radii within the generalized Born (GB) model of implicit solvation is proposed, for use in computer simulations of bio-molecules. First, a new formulation for the GB radii is developed, in which smooth kernels are used to eliminate the divergence in volume integrals intrinsic in the model. Next, the Fast Fourier Transform (FFT) algorithm is applied to integrate smoothed functions, taking advantage of the rapid spectral decay provided by the smoothing. The total cost of the proposed algorithm scales as O(N^3 log N + M), where M is the number of atoms comprised in a molecule, and N is the number of FFT grid points in one dimension, which depends only on the geometry of the molecule and the spectral decay of the smooth kernel but not on M. To validate our algorithm, numerical tests are performed for three solute models: one spherical object for which exact solutions exist and two protein molecules of differing size. The tests show that our algorithm is able to reach the accuracy of other existing GB implementations, while offering much lower computational cost.
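
    Schematically, the computational core is that per-grid-point volume integrals of a smoothed kernel against the molecular density form a convolution, which the FFT evaluates at O(N^3 log N) cost. The sketch below shows only that structural idea on a toy spherical "solute"; the kernel, smoothing length and normalization are placeholders, not the paper's actual formulation.

    ```python
    import numpy as np

    N, h = 64, 0.5                          # grid points per axis, grid spacing
    ax = (np.arange(N) - N // 2) * h
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    # Toy solute: a solid sphere of radius 6 as the molecular density.
    density = (X**2 + Y**2 + Z**2 <= 6.0**2).astype(float)

    # Kernel sampled with its origin at index (0,0,0) for circular convolution.
    r2 = (np.fft.ifftshift(X)**2 + np.fft.ifftshift(Y)**2
          + np.fft.ifftshift(Z)**2)
    eps = 1.0                               # smoothing length (placeholder)
    kernel = 1.0 / (r2 + eps**2) ** 2       # smoothed 1/r^4-type kernel

    # All per-point integrals at once: one forward/inverse FFT pair.
    integrals = np.real(
        np.fft.ifftn(np.fft.fftn(density) * np.fft.fftn(kernel))) * h**3
    print(integrals[N // 2, N // 2, N // 2])   # value at the grid centre
    ```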

  4. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  5. On a concurrent element-by-element preconditioned conjugate gradient algorithm for multiple load cases

    NASA Technical Reports Server (NTRS)

    Watson, Brian; Kamat, M. P.

    1990-01-01

    Element-by-element preconditioned conjugate gradient (EBE-PCG) algorithms have been advocated for use in parallel/vector processing environments as being superior to the conventional LDL^T decomposition algorithm for single load cases. Although there may be some advantages in using such algorithms for a single load case, when it comes to situations involving multiple load cases, the LDL^T decomposition algorithm would appear to be decidedly more cost-effective. The authors have outlined an EBE-PCG algorithm suitable for multiple load cases and compared its effectiveness to the highly efficient LDL^T decomposition scheme. The proposed algorithm offers almost no advantages over the LDL^T algorithm for the linear problems investigated on the Alliant FX/8. However, there may be some merit in the algorithm in solving nonlinear problems with load incrementation, but that remains to be investigated.
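
    For reference, the sketch below is a standard preconditioned conjugate gradient loop with a simple diagonal (Jacobi) preconditioner standing in for the element-by-element one; in a true EBE-PCG scheme the matrix-vector product and preconditioner application are assembled element by element so the global matrix is never formed or factored. The abstract's point about multiple load cases follows directly: PCG must re-iterate for every new right-hand side, whereas an LDL^T factorization is computed once and cheaply re-used.

    ```python
    import numpy as np

    def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradients for a symmetric positive
        definite A, with a diagonal preconditioner M^{-1} = diag(M_inv_diag)."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv_diag * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv_diag * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy use: a random SPD matrix and its Jacobi preconditioner.
    rng = np.random.default_rng(0)
    G = rng.normal(size=(50, 50))
    A = G @ G.T + 50 * np.eye(50)
    b = rng.normal(size=50)
    x = pcg(A, b, 1.0 / np.diag(A))
    print("residual norm:", np.linalg.norm(A @ x - b))
    ```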

  6. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  7. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration, (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  8. Taking charge: a personal responsibility.

    PubMed Central

    Newman, D M

    1987-01-01

    Women can adopt health practices that will help them to maintain good health throughout their various life stages. Women can take charge of their health by maintaining a nutritionally balanced diet, exercising, and using common sense. Women can also employ known preventive measures against osteoporosis, stroke, lung and breast cancer and accidents. Because women experience increased longevity and may require long-term care with age, the need for restructuring the nation's care system for the elderly becomes an important women's health concern. Adult day care centers, home health aides, and preventive education will be necessary, along with sufficient insurance to maintain quality care and self-esteem without depleting a person's resources. PMID:3120224

  9. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods - k-fold cross-validation, error bounds, and the incremental halting test - for estimating the accuracy of a supervised learning algorithm. For each of the three methods we point out the problem it addresses and some of the important assumptions it is based on, and we illustrate it through an example. Finally, we discuss the relative advantages and disadvantages of each method.
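
    Of the three, k-fold cross-validation is easy to state in a few lines; the sketch below is a generic version in which `train_fn` is a hypothetical stand-in for any supervised learner that returns a prediction function.

    ```python
    import numpy as np

    def k_fold_accuracy(X, y, train_fn, k=10, seed=0):
        """Estimate accuracy by k-fold cross-validation: each fold is held
        out once for testing while the model is trained on the rest.
        `train_fn(X, y)` must return a function mapping samples to labels."""
        idx = np.random.default_rng(seed).permutation(len(y))
        folds = np.array_split(idx, k)
        accuracies = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            predict = train_fn(X[train], y[train])
            accuracies.append(np.mean(predict(X[test]) == y[test]))
        return float(np.mean(accuracies)), float(np.std(accuracies))
    ```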

  10. OPERA: Objective Prism Enhanced Reduction Algorithms

    NASA Astrophysics Data System (ADS)

    Universidad Complutense de Madrid Astrophysics Research Group

    2015-09-01

    OPERA (Objective Prism Enhanced Reduction Algorithms) automatically analyzes astronomical images taken with the objective-prism (OP) technique, which registers thousands of low-resolution spectra over large areas. It detects objects in an image, extracts one-dimensional spectra, and identifies emission-line features. The main advantages of this method are: 1) it avoids the subjectivity inherent in the visual inspection used in past studies; and 2) it can obtain physical parameters without follow-up spectroscopy.

  11. Algorithms for Real-Time Processing

    DTIC Science & Technology

    2003-04-01

    …algorithm has been mapped on an application-specific prototyping platform which contains four VLSI CORDIC ASICs and some FPGAs (Field Programmable Gate Arrays) … makes less critical the implementation of a VLSI-based systolic array. A practical application of systolic processing for classical ground-based or ship- … interferometry (ATI)-SAR to detect moving targets [18]. It can be shown that this approach offers a considerable computational advantage; FPGA technology has …

  12. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
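
    The paper's own stencils are not reproduced in the abstract; as a stand-in, the familiar fourth-order central difference below illustrates the general idea of trading a wider stencil for higher-order accuracy on a periodic grid.

    ```python
    import numpy as np

    # Standard fourth-order central difference for du/dx on a periodic
    # grid; this illustrates the accuracy-vs-resolution idea, not the
    # paper's own (higher-order) stencils.
    n = 64
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(x)

    dudx = (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

    err = np.max(np.abs(dudx - np.cos(x)))   # error shrinks as O(dx^4)
    ```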

  13. Quantum algorithms: an overview

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley

    2016-01-01

    Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

  14. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  15. Emerging Bilingualism: Dissociating Advantages for Metalinguistic Awareness and Executive Control

    ERIC Educational Resources Information Center

    Bialystok, Ellen; Barac, Raluca

    2012-01-01

    The present studies revealed different factors associated with the reported advantages found in fully bilingual children for metalinguistic awareness and executive control. Participants were 100 children in Study 1 and 80 children in Study 2 in the process of becoming bilingual by attending immersion programs. In both studies, "level of…

  16. Elasticity and Mechanical Advantage in Cables and Ropes

    ERIC Educational Resources Information Center

    O'Shea, M. J.

    2007-01-01

    The conditions under which one can gain mechanical advantage by pulling with a force F perpendicular to a cable (or rope) that is fixed at both ends are examined. While this is a commonly discussed example in introductory physics classes, its solution in terms of fundamental properties of the cable requires one to model the elasticity of…
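
    For readers who want the quantitative core, the standard statics argument (a textbook result, stated here as a summary rather than quoted from the article) is:

    ```latex
    % Transverse force F applied at the midpoint of a cable fixed at both
    % ends; \theta is the sag angle between the displaced cable and its
    % undisturbed line, T the cable tension:
    \[
      F = 2T\sin\theta
      \qquad\Longrightarrow\qquad
      \underbrace{\frac{T}{F}}_{\text{mechanical advantage}}
      = \frac{1}{2\sin\theta}
    \]
    ```

    The advantage grows without bound as the sag angle θ → 0; the cable's elasticity determines how small θ remains under load, which is why an elastic model is needed.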

  17. The UNIX/XENIX Advantage: Applications in Libraries.

    ERIC Educational Resources Information Center

    Gordon, Kelly L.

    1988-01-01

    Discusses the application of the UNIX/XENIX operating system to support administrative office automation functions--word processing, spreadsheets, database management systems, electronic mail, and communications--at the Central Michigan University Libraries. Advantages and disadvantages of the XENIX operating system and system configuration are…

  18. Career management: a competitive advantage in today's health care marketplace.

    PubMed

    Bourbeau, J

    2001-01-01

    A valuable new tool to attract and retain new employees is being used by some of the most progressive companies in Michigan. It is called career management, and it is being used with great success by businesses of all types to give themselves a competitive advantage.

  19. Binaural Advantage for Younger and Older Adults with Normal Hearing

    ERIC Educational Resources Information Center

    Dubno, Judy R.; Ahlstrom, Jayne B.; Horwitz, Amy R.

    2008-01-01

    Purpose: Three experiments measured benefit of spatial separation, benefit of binaural listening, and masking-level differences (MLDs) to assess age-related differences in binaural advantage. Method: Participants were younger and older adults with normal hearing through 4.0 kHz. Experiment 1 compared spatial benefit with and without head shadow.…

  20. Sustainable Competitive Advantage for Educational Institutions: A Suggested Model.

    ERIC Educational Resources Information Center

    Mazzarol, Tim; Soutar, Geoffrey Norman

    1999-01-01

    Outlines a model of factors critical to establishing and maintaining sustainable competitive advantage for education-services enterprises in international markets. The model, which combines industrial economics, management theory, and services marketing, seeks to explain the strategic decision-making environment in which the education exporter…

  1. Educating Students to Give Them a Sustainable Competitive Advantage

    ERIC Educational Resources Information Center

    Hopkins, Christopher D.; Raymond, Mary Anne; Carlson, Les

    2011-01-01

    With an increasingly competitive job market, this study focuses on what marketing educators can do to help students develop a sustainable competitive advantage. The authors conducted a survey of students, faculty, and recruiters to develop a better understanding of what skills and characteristics might be of value to each group of respondents and…

  2. Congruent Knowledge Management Behaviors as Discriminate Sources of Competitive Advantage

    ERIC Educational Resources Information Center

    Magnier-Watanabe, Remy; Senoo, Dai

    2009-01-01

    Purpose: While knowledge management has been shown to be a strategic source of competitive advantage, processes designed to enhance the productivity of knowledge do not, however, equally contribute to the organization's capabilities. Consequently, this research aims to focus on the relationship between each mode of the knowledge management process…

  3. Providing Homeless Adults with Advantage: A Sustainable University Degree Program

    ERIC Educational Resources Information Center

    Sinatra, Richard; Lanctot, Melissa Kim

    2016-01-01

    A university partnered with the New York City Department of Homeless Services (NYC DHS) to provide cohorts of adults a 60-credit Associate Degree Program in Business Administration over a 2-year period. Results of two cohorts of 30 Advantage Academy Program graduates revealed significant improvement in College Board AccuPlacer (ACPL) Arithmetic…

  4. A Generation Advantage for Multiplication Skill and Nonword Vocabulary Acquisition.

    DTIC Science & Technology

    1998-07-01

    …a class in introductory psychology participated for course credit. There were two experimental conditions (read and generate); subjects were… generation effect. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 669-675. Donaldson, W., & Bass, M. (1980). Relational…

  5. Strategic Mergers of Strong Institutions to Enhance Competitive Advantage

    ERIC Educational Resources Information Center

    Harman, Grant; Harman, Kay

    2008-01-01

    Strategic mergers are formal combinations or amalgamations of higher education institutions with the aim of enhancing competitive advantage, or merging for "mutual growth". Recently, in a number of countries, there has been a decided shift from mergers initiated by governments, and dealing mainly with "problem" cases, towards…

  6. Advantages of Laser Polarimetry Applied to Tequila Industrial Process Control

    NASA Astrophysics Data System (ADS)

    Fajer, V.; Rodriguez, C.; Flores, R.; Naranjo, S.; Cossio, G.; Lopez, J.

    2002-03-01

    The development of a polarimetric method for crude and cooked agave juice quality control, not only by direct polarimetric measurement but also by means of the laser polarimeter LASERPOL 101M used as a liquid chromatographic detector, is presented. The viability and advantages of this method for raw material quality control and during the Tequila industrial process are shown.

  7. Cognitive Advantages and Disadvantages in Early and Late Bilinguals

    ERIC Educational Resources Information Center

    Pelham, Sabra D.; Abrams, Lise

    2014-01-01

    Previous research has documented advantages and disadvantages of early bilinguals, defined as learning a 2nd language by school age and using both languages since that time. Relative to monolinguals, early bilinguals manifest deficits in lexical access but benefits in executive function. We investigated whether becoming bilingual "after"…

  8. Online Data Collection in Academic Research: Advantages and Limitations

    ERIC Educational Resources Information Center

    Lefever, Samuel; Dal, Michael; Matthiasdottir, Asrun

    2007-01-01

    Online data collection in academic research might be replacing paper-and-pencil surveys or questionnaires in the near future. This paper discusses the advantages and limitations of online data collection, with particular reference to the conduct of two qualitative studies involving upper secondary school teachers and students in Iceland in 2002.…

  9. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    ERIC Educational Resources Information Center

    Geers, Anne E.; Nicholas, Johanna G.

    2013-01-01

    Purpose: In this article, the authors sought to determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12 and 38 months of age. Relative advantages of receiving a bilateral CI after age 4.5 years, better…

  10. Advantages and Disadvantages of Student Loans Repayment Patterns

    ERIC Educational Resources Information Center

    Shen, Hua

    2010-01-01

    Choosing a repayment pattern for student loans is a difficult problem. "Conventional mortgage-type loans" and "income-contingent loans" have been implemented in many countries. These loan repayment manners have their own characteristics. In this paper, we discuss their advantages and disadvantages, and provide policy choice…

  11. The 'Adventist advantage'. Glendale Adventist Medical Center distinguishes itself.

    PubMed

    Botvin, Judith D

    2002-01-01

    Glendale Adventist Medical Center, Glendale, Calif., adopted an image-building campaign to differentiate the 450-bed hospital from its neighbors. This included the headline "Adventist Advantage," used in a series of sophisticated ads, printed in gold. In all their efforts, marketers consider the sensibilities of the sizable Armenian, Korean, Hispanic and Chinese populations.

  12. A Bilateral Advantage for Storage in Visual Working Memory

    ERIC Educational Resources Information Center

    Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward

    2010-01-01

    Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…

  13. Are Articulatory Settings Mechanically Advantageous for Speech Motor Control?

    PubMed Central

    Ramanarayanan, Vikram; Lammert, Adam; Goldstein, Louis; Narayanan, Shrikanth

    2014-01-01

    We address the hypothesis that postures adopted during grammatical pauses in speech production are more “mechanically advantageous” than absolute rest positions for facilitating efficient postural motor control of vocal tract articulators. We quantify vocal tract posture corresponding to inter-speech pauses, absolute rest intervals as well as vowel and consonant intervals using automated analysis of video captured with real-time magnetic resonance imaging during production of read and spontaneous speech by 5 healthy speakers of American English. We then use locally-weighted linear regression to estimate the articulatory forward map from low-level articulator variables to high-level task/goal variables for these postures. We quantify the overall magnitude of the first derivative of the forward map as a measure of mechanical advantage. We find that postures assumed during grammatical pauses in speech as well as speech-ready postures are significantly more mechanically advantageous than postures assumed during absolute rest. Further, these postures represent empirical extremes of mechanical advantage, between which lie the postures assumed during various vowels and consonants. Relative mechanical advantage of different postures might be an important physical constraint influencing planning and control of speech production. PMID:25133544

  14. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the most accurate human sensation compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator drivers. One of the main limitations of classical washout filters is that they are tuned by the worst-case-scenario method, which is based on trial and error and is affected by the driver's and programmer's experience; this is the most significant obstacle to full motion platform utilisation, as it leads to an inflexible structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account, so the production of motion cues and the impact of different parameters of classical washout filters on those cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in the MATLAB/Simulink software package. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  15. Hair testing is taking root.

    PubMed

    Cooper, Gail Audrey Ann

    2011-11-01

    An increasing number of toxicology laboratories are choosing to expand the services they offer to include hair testing in response to customer demands. Hair provides the toxicologist with many advantages over conventional matrices in that it is easy to collect, is a robust and stable matrix that does not require refrigeration, and most importantly, provides a historical profile of an individual's exposure to drugs or analytes of interest. The establishment of hair as a complementary technique in forensic toxicology is a direct result of the success of the matrix in medicolegal cases and the wide range of applications. However, before introducing hair testing, laboratories must consider what additional requirements they will need that extend beyond simply adapting methodologies already validated for blood or urine. Hair presents many challenges with respect to the lack of available quality control materials, extensive sample handling protocols and low drug concentrations requiring greater instrument sensitivity. Unfortunately, a common pitfall involves over-interpretation of the findings and must be avoided.

  16. Clustering algorithm studies

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2001-07-01

    An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.
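
    The framework itself is not listed in the abstract; a hypothetical Python rendering of the interfaces it names (abstract Cells and Clusters plus an algorithm interface) might look like the following — all identifiers are illustrative, not the original code.

    ```python
    from abc import ABC, abstractmethod
    from typing import List

    class Cell(ABC):
        """Abstract calorimeter cell: a position plus an energy deposit."""
        @abstractmethod
        def energy(self) -> float: ...
        @abstractmethod
        def position(self) -> tuple: ...

    class Cluster:
        """A cluster is simply a collection of cells with summed energy."""
        def __init__(self):
            self.cells: List[Cell] = []
        def add(self, cell: Cell) -> None:
            self.cells.append(cell)
        def energy(self) -> float:
            return sum(c.energy() for c in self.cells)

    class ClusteringAlgorithm(ABC):
        """Interface each algorithm under study implements, so different
        clustering strategies can be swapped while the jet-reconstruction
        study around them stays unchanged."""
        @abstractmethod
        def cluster(self, cells: List[Cell]) -> List[Cluster]: ...
    ```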

  17. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    NASA Astrophysics Data System (ADS)

    Asebedo, Antonio Ray

    through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.

  18. An Overview of GPM At-Launch Level 2 Precipitation Algorithms (Invited)

    NASA Astrophysics Data System (ADS)

    Munchak, S. J.; Meneghini, R.; Kummerow, C. D.; Olson, W. S.

    2013-12-01

    The Global Precipitation Measurement core satellite will carry the most advanced array of precipitation sensing instruments yet flown in space, the GPM Microwave Imager (GMI) and Dual-Frequency Precipitation Radar (DPR). Algorithms to convert the measurements from these instruments to precipitation rates have been developed and tested with data from aircraft instruments, physical model simulations, and existing satellites. These algorithms build upon the heritage of the Tropical Rainfall Measuring Mission (TRMM) algorithms to take advantage of the additional frequencies probed by GMI and DPR. As with TRMM, three instrument-specific level 2 precipitation products will be available: Radar-only, radiometer-only, and combined radar-radiometer. The radar-only product will be further subdivided into three subproducts: Ku-band-only (245 km swath), Ka-band-only (120 km swath with enhanced sensitivity), and Ku-Ka (120 km swath). The dual-frequency algorithm will provide enhanced estimation of rainfall rates and microphysical parameters such as mean raindrop size and phase identification relative to single-frequency products. The GMI precipitation product will be based upon a Bayesian algorithm that seeks to match observed brightness temperatures against those in a database. After launch, this database will be populated with observations from the GPM Core Observatory, but the at-launch database consists of profiles observed by TRMM, CloudSat, ground radars, and is augmented by model data fields to facilitate the generation of databases at non-observed frequencies. Ancillary data are used to subset the database by surface temperature, column water vapor, and surface type. This algorithm has been tested with data from the Special Sensor Microwave Imager/Sounder and comparisons with ground-based radar mosaic rainfall (NMQ) will be presented. The combined GMI-DPR algorithm uses an ensemble filtering approach to create and adjust many solutions (owing to different assumptions about the
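
    As a toy illustration of the Bayesian database-matching step described above (emphatically not the operational GPM code), a retrieval can be written as a likelihood-weighted average over database profiles; the channel count, units and all names below are invented for the example.

    ```python
    import numpy as np

    def bayesian_retrieval(tb_obs, db_tb, db_rain, sigma=2.0):
        """Toy Bayesian database retrieval: weight every database profile
        by the Gaussian likelihood of its brightness temperatures given
        the observation, then return the weighted-mean rain rate."""
        d2 = ((db_tb - tb_obs) ** 2).sum(axis=1)        # misfit per profile (K^2)
        w = np.exp(-0.5 * d2 / sigma**2)                # Gaussian likelihood
        return (w * db_rain).sum() / w.sum()

    # illustrative database: 1000 profiles with 13 GMI-like channels
    rng = np.random.default_rng(1)
    db_tb = rng.uniform(150, 290, size=(1000, 13))      # brightness temps (K)
    db_rain = rng.gamma(2.0, 1.5, size=1000)            # rain rates (mm/h)
    obs = db_tb[0] + rng.normal(0, 1, 13)               # noisy observation
    rate = bayesian_retrieval(obs, db_tb, db_rain)
    ```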

  19. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

    PubMed

    Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

    2013-02-01

    Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data without the necessity to assume a noise model. These methods have been previously combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of
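
    The shortest-path variant of BootGraph reduces, at its core, to Dijkstra's algorithm on the bootstrap-weighted voxel graph. The sketch below shows that core on a toy graph; the edge weights standing in for the cone-of-uncertainty measure are made up for the example (lower weight = stronger connection).

    ```python
    import heapq

    def dijkstra(n_vertices, edges, source):
        """Shortest-path search on an undirected weighted graph whose
        vertices stand for voxels; returns the cheapest path cost from
        `source` to every reachable vertex."""
        adj = {v: [] for v in range(n_vertices)}
        for (u, v), w in edges.items():
            adj[u].append((v, w))
            adj[v].append((u, w))
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                       # stale heap entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # 2x2 voxel patch with hypothetical uncertainty-derived weights
    edges = {(0, 1): 0.2, (1, 3): 0.4, (0, 2): 0.9, (2, 3): 0.3}
    print(dijkstra(4, edges, source=0))
    ```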

  20. Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance

    NASA Astrophysics Data System (ADS)

    Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.

    2015-04-01

    Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover

  1. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
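
    The abstract gives the inputs (SAR backscatter plus a wind-field estimate) but not the thresholding rule itself, so the sketch below is only a plausible shape for such an algorithm: dark-spot detection against a locally estimated background, with the margin adapted to wind speed. All constants and names are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def segment_oil(sar_db, wind_speed, base_offset=3.0, wind_gain=0.15):
        """Toy adaptive threshold for oil-spill segmentation.

        Oil damps capillary waves, so slicks appear dark in SAR
        backscatter. The local background is estimated with a moving
        mean, and the dark threshold is placed further below the
        background when wind is high (rough sea raises the background).
        """
        background = uniform_filter(sar_db, size=51)    # local mean (dB)
        offset = base_offset + wind_gain * wind_speed   # adaptive margin (dB)
        return sar_db < background - offset             # True = candidate slick

    rng = np.random.default_rng(6)
    sar = rng.normal(-12.0, 1.5, size=(256, 256))       # backscatter (dB)
    mask = segment_oil(sar, wind_speed=8.0)
    ```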

  2. Statistics enhancement in hyperspectral data analysis using spectral-spatial labeling, the EM algorithm, and the leave-one-out covariance estimator

    NASA Astrophysics Data System (ADS)

    Hsieh, Pi-Fuei; Landgrebe, David A.

    1998-10-01

    Hyperspectral data potentially contain more information than multispectral data because of higher dimensionality. Information extraction algorithm performance is strongly related to the quantitative precision with which the desired classes are defined, a characteristic which increases rapidly with dimensionality. Due to the limited number of training samples used in defining classes, the information extraction of hyperspectral data may not perform as well as needed. In this paper, schemes for statistics enhancement are investigated for alleviating this problem. Previous works including the EM algorithm and the Leave-One-Out covariance estimator are discussed. The HALF covariance estimator is proposed for two-class problems by using the symmetry property of the normal distribution. A spectral-spatial labeling scheme is proposed to increase the training sample sizes automatically. We also seek to combine previous works with the proposed methods so as to take full advantage of statistics enhancement. Using these techniques, improvement in classification accuracy has been observed.
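
    The exact LOOC and HALF estimators are not reproduced in the abstract; the sketch below conveys the underlying idea with a simple diagonal-shrinkage stand-in — blending the sample covariance with its diagonal keeps the estimate invertible when training pixels are scarce relative to the number of bands. The mixing weight would be chosen by leave-one-out likelihood in the original estimator.

    ```python
    import numpy as np

    def shrunk_covariance(X, alpha=0.5):
        """Stand-in for the leave-one-out covariance idea: blend the full
        sample covariance with its diagonal so the estimate stays
        well-conditioned when samples are few relative to dimensions."""
        S = np.cov(X, rowvar=False)
        return (1 - alpha) * S + alpha * np.diag(np.diag(S))

    # 20 training pixels in 100 hyperspectral bands: the raw sample
    # covariance is singular, but the shrunk estimate is invertible.
    rng = np.random.default_rng(2)
    X = rng.standard_normal((20, 100))
    C = shrunk_covariance(X, alpha=0.7)
    print(np.linalg.cond(C))   # finite condition number
    ```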

  3. A Danger-Theory-Based Immune Network Optimization Algorithm

    PubMed Central

    Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times. PMID:23483853

  4. Limited-data computed tomography algorithms for the physical sciences

    NASA Astrophysics Data System (ADS)

    Verhoeven, Dean

    1993-07-01

    Results are presented from a comparison of implementations of five computed tomography algorithms which were either designed expressly to work with, or have been shown to work with, limited data and which may be applied to a wide variety of objects. These include adapted versions of the algebraic reconstruction technique, the multiplicative algebraic reconstruction technique (MART), the Gerchberg-Papoulis algorithm, a spectral extrapolation algorithm derived from that of Harris (1964), and an algorithm based on the singular value decomposition technique. The algorithms were used to reconstruct phantom data with realistic levels of noise from a number of different imaging geometries. It was found that the MART algorithm has a combination of advantages that makes it superior to the other algorithms tested.
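
    MART, the best performer in the comparison, has a particularly compact update: each ray rescales the image multiplicatively, which preserves non-negativity — one reason it behaves well with limited data. A minimal sketch (relaxation parameter omitted; the system matrix and data are toys):

    ```python
    import numpy as np

    def mart(A, g, n_iter=50):
        """Multiplicative algebraic reconstruction technique (MART).

        A[i, j] is the contribution of pixel j to ray i, g the measured
        projections. Each update multiplies the image by the ratio of
        measured to computed projection raised to the ray weights, so a
        positive image stays positive."""
        f = np.ones(A.shape[1])                 # flat positive start
        for _ in range(n_iter):
            for i in range(A.shape[0]):         # one ray at a time
                proj = A[i] @ f
                if proj > 0 and g[i] > 0:
                    f *= (g[i] / proj) ** A[i]  # exponent = ray weights
        return f

    # toy 2-pixel, 2-ray system with known object [2, 4]
    A = np.array([[1.0, 0.0], [0.5, 0.5]])
    g = A @ np.array([2.0, 4.0])
    print(mart(A, g))   # approaches a solution consistent with g
    ```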

  5. General inference algorithm of Bayesian networks based on clique tree

    NASA Astrophysics Data System (ADS)

    Li, Haijun; Liu, Xiao

    2008-10-01

    This article puts forward a general inference algorithm based on the exact clique-tree algorithm and the importance-sampling principle. It combines the advantages of the two algorithms: information is transferred from one clique to another, but exact interim results are not computed; instead, the transferred information is calculated and processed approximately, using the current potential of each clique. Because the algorithm is an iterative course of improvement, continued running increases the potential of each clique and produces progressively more exact information. A hybrid Bayesian network inference algorithm based on a general softmax function can deal with any function for the CPD and is applicable to any model. Simulation tests showed that the classification results were good.

  6. A fast algorithm for functional mapping of complex traits.

    PubMed Central

    Zhao, Wei; Wu, Rongling; Ma, Chang-Xing; Casella, George

    2004-01-01

    By integrating the underlying developmental mechanisms for the phenotypic formation of traits into a mapping framework, functional mapping has emerged as an important statistical approach for mapping complex traits. In this note, we explore the feasibility of using the simplex algorithm as an alternative to solve the mixture-based likelihood for functional mapping of complex traits. The results from the simplex algorithm are consistent with those from the traditional EM algorithm, but the simplex algorithm has considerably reduced computational times. Moreover, because of its nonderivative nature and easy implementation with current software, the simplex algorithm enjoys an advantage over the EM algorithm in the dynamic modeling and analysis of complex traits. PMID:15342547
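
    The "easy implementation with current software" advantage is concrete: the Nelder-Mead simplex method ships with SciPy. The toy below fits a logistic growth curve by derivative-free maximum likelihood — a stand-in for the trait trajectories of functional mapping, not the paper's actual mixture likelihood; all data are simulated.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # simulated trait measurements along a logistic growth trajectory
    t = np.linspace(0, 10, 25)
    true = 30 / (1 + np.exp(-(t - 5)))
    rng = np.random.default_rng(3)
    y = true + rng.normal(0, 1.0, t.size)

    def neg_log_lik(params):
        """Gaussian negative log-likelihood of a logistic growth curve."""
        a, b, r, sigma = params
        if sigma <= 0:
            return np.inf                       # keep sigma positive
        mu = a / (1 + np.exp(-r * (t - b)))
        return 0.5 * np.sum(((y - mu) / sigma) ** 2) + y.size * np.log(sigma)

    # derivative-free simplex search: no gradients of the likelihood needed
    res = minimize(neg_log_lik, x0=[20.0, 4.0, 0.5, 2.0], method="Nelder-Mead")
    print(res.x)    # estimated growth-curve parameters
    ```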

  7. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations often consume the most CPU time, such as computational fluid dynamics. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.

  8. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
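
    The key property exploited here — that the bootstrap estimates the sampling distribution of almost any statistic even from a small sample — fits in a few lines; the 30 "solver runs" below are simulated for illustration.

    ```python
    import numpy as np

    def bootstrap_stat(x, stat=np.median, n_boot=2000, seed=0):
        """Resample with replacement to approximate the sampling
        distribution of `stat`, and return a 95% percentile interval."""
        rng = np.random.default_rng(seed)
        boots = np.array([stat(rng.choice(x, size=x.size, replace=True))
                          for _ in range(n_boot)])
        return np.percentile(boots, [2.5, 97.5])

    # final objective values from 30 runs of a hypothetical stochastic solver
    runs = np.random.default_rng(4).lognormal(mean=0.0, sigma=0.5, size=30)
    print(bootstrap_stat(runs))
    ```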

  9. Image compression algorithm using wavelet transform

    NASA Astrophysics Data System (ADS)

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory

    2016-09-01

    Within the multi-resolution analysis, the study of the image compression algorithm using the Haar wavelet has been performed. We have studied the dependence of the image quality on the compression ratio. Also, the variation of the compression level of the studied image has been obtained. It is shown that the compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7-4.2, depending on the type of images. It is shown that the algorithm used is more convenient and offers more advantages than WinRAR. The Haar wavelet algorithm has improved the method of signal and image processing.
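
    One level of the Haar analysis/synthesis pair, with hard thresholding of the detail coefficients, captures the essence of the compression step studied. The study works on 2-D images over several levels; the 1-D sketch below, with an invented threshold rule, is illustrative only.

    ```python
    import numpy as np

    def haar_compress_1d(signal, keep=0.1):
        """One level of the Haar transform plus hard thresholding: keep
        only the largest-magnitude fraction `keep` of coefficients, then
        invert the transform (2-D images apply the same step along rows
        and then columns). Requires an even-length signal."""
        s = np.asarray(signal, dtype=float)
        avg = (s[0::2] + s[1::2]) / np.sqrt(2)       # smooth coefficients
        dif = (s[0::2] - s[1::2]) / np.sqrt(2)       # detail coefficients
        coeffs = np.concatenate([avg, dif])
        cut = np.quantile(np.abs(coeffs), 1 - keep)  # magnitude threshold
        coeffs[np.abs(coeffs) < cut] = 0.0           # discard small details
        a, d = np.split(coeffs, 2)
        out = np.empty_like(s)
        out[0::2] = (a + d) / np.sqrt(2)             # inverse transform
        out[1::2] = (a - d) / np.sqrt(2)
        return out

    x = (np.sin(np.linspace(0, 4 * np.pi, 64))
         + 0.05 * np.random.default_rng(5).standard_normal(64))
    x_hat = haar_compress_1d(x, keep=0.25)
    ```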

  10. Microgravity Smoldering Combustion Takes Flight

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The Microgravity Smoldering Combustion (MSC) experiment lifted off aboard the Space Shuttle Endeavour in September 1995 on the STS-69 mission. This experiment is part of series of studies focused on the smolder characteristics of porous, combustible materials in a microgravity environment. Smoldering is a nonflaming form of combustion that takes place in the interior of combustible materials. Common examples of smoldering are nonflaming embers, charcoal briquettes, and cigarettes. The objective of the study is to provide a better understanding of the controlling mechanisms of smoldering, both in microgravity and Earth gravity. As with other forms of combustion, gravity affects the availability of air and the transport of heat, and therefore, the rate of combustion. Results of the microgravity experiments will be compared with identical experiments carried out in Earth's gravity. They also will be used to verify present theories of smoldering combustion and will provide new insights into the process of smoldering combustion, enhancing our fundamental understanding of this frequently encountered combustion process and guiding improvement in fire safety practices.

  11. Apollo - Lunar Take Off Simulator

    NASA Technical Reports Server (NTRS)

    1961-01-01

    Lunar Take Off Simulator: This simulator is used by scientists at the Langley Research Center ... to help determine human ability to control a lunar launch vehicle in vertical alignment during takeoff from the moon for rendezvous with a lunar satellite vehicle on the return trip to earth. The three-axis chair, a concept which allows the pilot to sit upright during launch, gives the navigator angular motion (pitch, roll, and yaw) cues as he operates the vehicle through a sidearm control system. The sight apparatus in front of the pilot's face enables him to align the vehicle on a course toward a chosen star, which will be followed as a guidance reference during the lunar launch. The pilot's right hand controls angular motions, while his left hand manipulates the thrust lever. The simulator is designed for operation inside an artificial planetarium, where a star field will be projected against the ceiling during 'flights'. The tests are part of an extensive NASA program at Langley in the study of problems relating to a manned lunar mission. (From a NASA Langley photo release caption.)

  12. Improved interpretation of satellite altimeter data using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Messa, Kenneth; Lybanon, Matthew

    1992-01-01

    Genetic algorithms (GA) are optimization techniques that are based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves in simulation of Darwin's 'survival of the fittest'. GA's have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves the representation of the ocean surface model as a string of parameters or coefficients from the model. The GA searches in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
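
    The population/generation/fitness loop described above is compact enough to sketch directly. The "ocean-surface model" fitness below is a stand-in, since the actual model coefficients and altimeter-based fitness function are not given in the abstract; all parameters are illustrative.

    ```python
    import numpy as np

    def genetic_algorithm(fitness, n_params, pop_size=60, n_gen=100,
                          mut_sigma=0.1, seed=0):
        """Bare-bones real-coded GA of the kind the abstract describes:
        a population of candidate coefficient vectors improves over
        generations via selection, crossover and mutation."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-1, 1, size=(pop_size, n_params))
        for _ in range(n_gen):
            fit = np.array([fitness(ind) for ind in pop])
            order = np.argsort(fit)[::-1]              # higher fitness is better
            parents = pop[order[:pop_size // 2]]       # truncation selection
            kids = []
            while len(kids) < pop_size:
                i, j = rng.integers(len(parents), size=2)
                mask = rng.random(n_params) < 0.5      # uniform crossover
                child = np.where(mask, parents[i], parents[j])
                child = child + rng.normal(0, mut_sigma, n_params)  # mutation
                kids.append(child)
            pop = np.array(kids)
        fit = np.array([fitness(ind) for ind in pop])
        return pop[fit.argmax()]

    # toy "ocean-surface model" fitness: closeness of coefficients to a target
    target = np.array([0.3, -0.7, 1.2])
    best = genetic_algorithm(lambda c: -np.sum((c - target) ** 2), n_params=3)
    ```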

  13. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  14. Algorithm-dependent fault tolerance for distributed computing

    SciTech Connect

    P. D. Hough; M. e. Goldsby; E. J. Walsh

    2000-02-01

    Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.

  15. Online Planning Algorithms for POMDPs

    PubMed Central

    Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim

    2009-01-01

    Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080

  16. Environmental structure and competitive scoring advantages in team competitions.

    PubMed

    Merritt, Sears; Clauset, Aaron

    2013-10-29

    In most professional sports, playing field structure is kept neutral so that scoring imbalances may be attributed to differences in team skill. It thus remains unknown what impact environmental heterogeneities can have on scoring dynamics or competitive advantages. Applying a novel generative model of scoring dynamics to roughly 10 million team competitions drawn from an online game, we quantify the relationship between the structure within a competition and its scoring dynamics, while controlling the impact of chance. Despite wide structural variations, we observe a common three-phase pattern in the tempo of events. Tempo and balance are highly predictable from a competition's structural features alone and teams exploit environmental heterogeneities for sustained competitive advantage. Surprisingly, the most balanced competitions are associated with specific environmental heterogeneities, not from equally skilled teams. These results shed new light on the design principles of balanced competition, and illustrate the potential of online game data for investigating social dynamics and competition.

  17. Environmental structure and competitive scoring advantages in team competitions

    NASA Astrophysics Data System (ADS)

    Merritt, Sears; Clauset, Aaron

    2013-10-01

    In most professional sports, playing field structure is kept neutral so that scoring imbalances may be attributed to differences in team skill. It thus remains unknown what impact environmental heterogeneities can have on scoring dynamics or competitive advantages. Applying a novel generative model of scoring dynamics to roughly 10 million team competitions drawn from an online game, we quantify the relationship between the structure within a competition and its scoring dynamics, while controlling the impact of chance. Despite wide structural variations, we observe a common three-phase pattern in the tempo of events. Tempo and balance are highly predictable from a competition's structural features alone and teams exploit environmental heterogeneities for sustained competitive advantage. Surprisingly, the most balanced competitions are associated with specific environmental heterogeneities, not from equally skilled teams. These results shed new light on the design principles of balanced competition, and illustrate the potential of online game data for investigating social dynamics and competition.

  18. Informal leadership support: an often overlooked competitive advantage.

    PubMed

    Peters, L H; O'Connor, E J

    2001-01-01

    As environmental pressures mount, the advantage of using the same strategies and tactics employed by competitors continues to shrink. An alternative is adapting and applying answers successfully employed in other industries to health care organizations. Working with informal influence leaders to share your change management efforts represents one such example. Informal influence leaders offer an often-overlooked source of competitive advantage--they have already earned credibility and respect from others, who regularly look to them for guidance. When sharing their views, they significantly influence the acceptance or rejection of new initiatives. Influence leaders reach into every conversation, every meeting, and every decision made in an organization. The important question is whether they will exert their leadership in support or in opposition to changes you propose. By identifying influence leaders and inviting them to join a group to discuss change initiatives, physician executives can create a positive force for change.

  19. An In–Group Advantage in Detecting Intergroup Anxiety

    PubMed Central

    Gray, Heather M.; Mendes, Wendy Berry; Denny-Brown, Carrigan

    2009-01-01

    We examined the possibility of an in-group advantage in detecting intergroup anxiety. Specifically, we videotaped White and Black participants while they engaged in same-race or interrace interactions. Then we asked White and Black observers to view these videotapes (unaware of the racial context) and provide their impressions of participants' anxiety. Two results pointed to an in-group advantage in detecting intergroup anxiety. First, only same-race observers perceived a modulation of participants' anxious behavior as a function of racial context. This held true not only for relatively subjective perceptions of global anxiety, but also for perceptions of single, discrete behaviors tied to anxiety. Second, we found that only same-race observers provided descriptions of anxiety that tracked reliably with participants' cortisol changes during the task. These results suggest that White and Black Americans may have difficulty developing a sense of shared emotional experience. PMID:19121129

  20. Competitive Advantage in Intercollegiate Athletics: Role of Intangible Resources

    PubMed Central

    Won, Doyeon; Chelladurai, Packianathan

    2016-01-01

    The present research explored the dynamics of competitive advantages in intercollegiate athletics by investigating the contribution of intangible resources (i.e., athletic and academic reputations) on the generation of more tangible resources (i.e., human and financial resources), which in turn influence the athletic performance (i.e., winning record) and academic performance (i.e., graduation rates), and gender equity. The research was based entirely on archival data of 324 NCAA Division I member institutions. The results of the SEM supported the study’s basic arguments that tangible resources are the sources of competitive advantages in Division I intercollegiate athletics, and that intangible resources contribute to the generation of tangible resources. PMID:26731118