Science.gov

Sample records for algorithm takes advantage

  1. The Take Advantage Now/Parent Education Project.

    ERIC Educational Resources Information Center

    Walters, David

    1987-01-01

    The Take Advantage Now/Parent Education Project (TAN/PEP) is based on the knowledge that college students need their parents' help for academic success. The program was developed at Brooklyn College to offer education, support, and counseling for parents and guardians of entering freshmen who have been accepted in a program known as Search,…

  2. TAKING SCIENTIFIC ADVANTAGE OF A DISASTROUS OIL SPILL

    EPA Science Inventory

    On 19 January 1996, the North Cape barge ran aground on Moonstone Beach in southern Rhode Island, releasing 828,000 gallons of refined oil. This opportunistic study was designed to take scientific advantage of the most severely affected seabird, the common loon (Gavia immer). As...

  3. Let's Do It: Taking Advantage of the Hand Calculator

    ERIC Educational Resources Information Center

    Bruni, James V.; Silverman, Helene J.

    1976-01-01

    Activities are suggested for using the hand calculator as a complement to any mathematics program. The activities focus on developing skills for using the calculator, exploring basic arithmetic operations, understanding algorithms, mental calculation and estimation, and problem solving. (DT)

  4. Neural Correlates of Traditional Chinese Medicine Induced Advantageous Risk-Taking Decision Making

    ERIC Educational Resources Information Center

    Lee, Tiffany M. Y.; Guo, Li-guo; Shi, Hong-zhi; Li, Yong-zhi; Luo, Yue-jia; Sung, Connie Y. Y.; Chan, Chetwyn C. H.; Lee, Tatia M. C.

    2009-01-01

    This fMRI study examined the neural correlates of the observed improvement in advantageous risk-taking behavior, as measured by the number of adjusted pumps in the Balloon Analogue Risk Task (BART), following a 60-day course of a Traditional Chinese Medicine (TCM) recipe, specifically designed to regulate impulsiveness in order to modulate…

  5. Student Perceptions of the Advantages and Disadvantages of Geologic Note-taking with iPads

    NASA Astrophysics Data System (ADS)

    Dohaney, J. A.; Kennedy, B.; Gravley, D. M.

    2015-12-01

    During fieldwork, students and professionals record information and hypotheses into their geologic notebook. In a pilot study, students on an upper-level volcanology field trip were given iPads, with an open-source geology note-taking application (GeoFieldBook) and volunteered to record notes at two sites (i.e., Tongariro Volcanic Complex and Orakei Korako) in New Zealand. A group of students (n=9) were interviewed several weeks after fieldwork to reflect on using this technology. We aimed to characterise their experiences and strategies, and to examine the perceived benefits and challenges of hardcopy and digital note-taking. Students reported having a diverse range of strategies when taking notes, but the most common strategies mentioned were: a) looking for/describing the differences, b) supporting note-taking with sketches, c) writing everything down, and d) focusing first on structure, texture and then composition of an outcrop. Additionally, students said that the strategies they used were context-dependent (i.e., bedrock mapping versus detailed outcrop descriptions). When using the iPad, students reported that they specifically used different strategies: varying the length of text (from more to less), increasing the number of sites described (i.e., preferring to describe sites in more spatial detail rather than summarising several features in close proximity), and taking advantage of the 'editability' of iPad notes (abandoning rigid, systematic approaches). Overall, the reported advantages of iPad note-taking included allowing the user to be more efficient and organised, and using the GPS mapping function to help them make observations and interpretations in real-time. Students also reported a range of disadvantages, but focused predominantly on the inability to annotate/draw sketches with the iPad in the same manner as pen and paper methods. These differences likely encourage different overall approaches to note-taking and cognition in the field environment, and…

  6. ATTRACT--applications in telemedicine taking rapid advantage of cable television network evolution.

    PubMed

    Anogianakis, G; Maglavera, S; Pomportsis, A

    1998-01-01

    ATTRACT is a project that intends to provide telemedicine services over Cable Television Networks. ATTRACT is a European Commission-funded project (Healthcare Telematics). The main objective of ATTRACT is to take advantage of emerging European Cable Television network infrastructures and offer cost-effective care to patients at home. This will be achieved through a set of broadband network applications that competitively provide low-cost interactive health-care services at home. The applications will be based on existing or developing European Cable Television network infrastructures in order to provide all kinds of users with affordable homecare services. It is ATTRACT's intention that citizens and users benefit from high-quality access to home telemedical services, which also implies cost savings for patients, their families and the already overburdened health institutions. In addition, the European industries will have extensive opportunities to develop, evaluate and validate broadband network infrastructures providing multimedia and interactive telemedical services at home. ATTRACT contributes to the EU telecommunications and telematics policy objectives that promote the development and validation of "applications and services" which "provide an intelligent telematic environment for the patient in institutions and other points of care that helps the patient to continue, as far as possible, normal activities and external communication".

  7. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved but is successful to various degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  8. Taking Advantage of STEM (Science, Technology, Engineering, and Math) Popularity to Enhance Student/Public Engagement

    NASA Astrophysics Data System (ADS)

    Dittrich, T. M.

    2011-12-01

    …Our goal is to expand the use of these modules to a broader public audience, including at a future campus/public event known as "All Things Water". We have also organized a walking tour/demo with 3rd-5th graders in a small mining town west of Boulder where we hiked to an old historical mine site, measured water quality (pH, dissolved lead, conductivity), and coated the inside of small bottles with silver. Organizing and hosting a conference can also be a great way to facilitate a discussion of ideas within the community. "All Things STEM" organized a broad student research conference related to water quality and water treatment which included research from 22 students from 11 different countries. We worked with 12 local engineering consultants, municipalities, and local businesses to provide $2,000 for student awards. Our presentation will focus on lessons we have learned on how to take advantage of student energy, excitement, and time on campus to receive funding opportunities for planning events that engage the public. We will also talk about our experiences in using student energy to develop partnerships with K-12 schools, community groups, and industry professionals.

  9. The pen is mightier than the keyboard: advantages of longhand over laptop note taking.

    PubMed

    Mueller, Pam A; Oppenheimer, Daniel M

    2014-06-01

    Taking notes on laptops rather than in longhand is increasingly common. Many researchers have suggested that laptop note taking is less effective than longhand note taking for learning. Prior studies have primarily focused on students' capacity for multitasking and distraction when using laptops. The present research suggests that even when laptops are used solely to take notes, they may still be impairing learning because their use results in shallower processing. In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand. We show that whereas taking more notes can be beneficial, laptop note takers' tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.

  10. Cell-Mediated Delivery of Nanoparticles: Taking Advantage of Circulatory Cells to Target Nanoparticles

    PubMed Central

    Anselmo, Aaron C.; Mitragotri, Samir

    2014-01-01

    Cellular hitchhiking leverages the use of circulatory cells to enhance the biological outcome of nanoparticle drug delivery systems, which often suffer from poor circulation time and limited targeting. Cellular hitchhiking utilizes the natural abilities of circulatory cells to: (i) navigate the vasculature while avoiding immune system clearance, (ii) remain relatively inert until needed and (iii) perform specific functions, including nutrient delivery to tissues, clearance of pathogens, and immune system surveillance. A variety of synthetic nanoparticles attempt to mimic these functional attributes of circulatory cells for drug delivery purposes. By combining the advantages of circulatory cells and synthetic nanoparticles, many advanced drug delivery systems have been developed that adopt the concept of cellular hitchhiking. Here, we review the development and specific applications of cellular hitchhiking-based drug delivery systems. PMID:24747161

  11. Social customer relationship management: taking advantage of Web 2.0 and Big Data technologies.

    PubMed

    Orenga-Roglá, Sergio; Chalmeta, Ricardo

    2016-01-01

    The emergence of Web 2.0 and Big Data technologies has allowed a new customer relationship strategy based on interactivity and collaboration called Social Customer Relationship Management (Social CRM) to be created. This enhances customer engagement and satisfaction. The implementation of Social CRM is a complex task that involves different organisational, human and technological aspects. However, there is a lack of methodologies to assist companies in these processes. This paper shows a novel methodology that helps companies to implement Social CRM, taking into account different aspects such as social customer strategy, the Social CRM performance measurement system, the Social CRM business processes, or the Social CRM computer system. The methodology was applied to one company in order to validate and refine it. PMID:27652037

  12. Astrosociology and Space Exploration: Taking Advantage of the Other Branch of Science

    NASA Astrophysics Data System (ADS)

    Pass, Jim

    2008-01-01

    The space age marches on. Following President Bush's Vision for Space Exploration (VSE) and our recent celebration of the fiftieth anniversary of spaceflight on October 4, 2007, we should now take time to contemplate where we have been as it relates to where we are going. Space exploration has depended most strongly on engineers and space scientists in the past. This made sense when crews remained small, manned missions tended to operate in low Earth orbit and on a temporary basis, and the bulk of missions were carried out by robotic spacecraft. The question one must now ask is this: What will change in the next fifty years? One fundamental answer to this question involves the strong probability that human beings will increasingly go into space to live and work on long-duration missions and begin to live in space permanently. This article addresses the need to utilize the other neglected branch of science, comprised of the social and behavioral sciences along with the humanities, as it relates to the shift to a more substantial human presence in space. It focuses on the social science perspective needed to make this possible rather than the practical aspects of doing so, such as the engineering of functional habitats. A most important consideration involves the permanent establishment of a formal collaborative mechanism between astrosociologists and the engineers and space scientists who traditionally comprise the space community. The theoretical and applied aspects of astrosociology each have much to contribute toward the human dimension of space exploration, both on the Earth and beyond its atmosphere. The bottom line is that a social species such as ours cannot determine how to live in space without the input from a social science perspective, namely astrosociology.

  13. The development of a charge protocol to take advantage of off- and on-peak demand economics at facilities

    SciTech Connect

    Jeffrey Wishart

    2012-02-01

    This document reports the work performed under Task 1.2.1.1: 'The development of a charge protocol to take advantage of off- and on-peak demand economics at facilities'. The work involved in this task included understanding the experimental results of the other tasks of SOW-5799 in order to take advantage of the economics of electricity pricing differences between on- and off-peak hours and the demonstrated charging and facility energy demand profiles. To undertake this task and to demonstrate the feasibility of plug-in hybrid electric vehicle (PHEV) and electric vehicle (EV) bi-directional electricity exchange potential, BEA has subcontracted Electric Transportation Applications (now known as ECOtality North America and hereafter ECOtality NA) to use the data from the demand and energy study to focus on reducing the electrical power demand of the charging facility. The use of delayed charging as well as vehicle-to-grid (V2G) and vehicle-to-building (V2B) operations were to be considered.

  14. Size and shape of Brain may be such as to take advantage of two Dimensions of Time

    NASA Astrophysics Data System (ADS)

    Kriske, Richard

    2014-03-01

    This author had previously Theorized that there are two non-commuting Dimensions of time. One is Clock Time and the other is Information Time (which we generally refer to as Information, like Spin Up or Spin Down). When time does not commute with another Dimension of Time, one takes the Clock Time at one point in space and the Information time is not known; that is different than if one takes the Information time at that point and the Clock time is not known--This is not explicitly about time but rather space. An example of this non-commutation is that if one knows the Spin at one point and the Time at one point of space, then simultaneously one knows the Spin at another point of Space and the Time there (It is the same time); this is a restatement of the EPR paradox. As a matter of fact two Dimensions of Time would prove the EPR paradox. It is obvious from that argument that if one needed to take advantage of Information, then a fairly large space needs to be used, a large amount of Energy needs to be Generated, and a symmetry needs to be established in Space, like the lobes of a Brain, in order to detect the fact that the Tclock and Tinfo are not Commuting. This Non-Commuting deposits a large amount of Information simultaneously in that space, and synchronizes the time there.

  15. Career Transitions. Taking Advantage of Your U.S. Air Force Military Experience To Become the Employer's Choice. Helpful Hints That Result in Employment Advantage.

    ERIC Educational Resources Information Center

    Drier, Harry N.

    This booklet outlines some points about a veteran's unique marketability, advantages acquired by working for the military, benefits earned, and some ideas for packaging a veteran's credentials. It lists worker characteristics with which employers are most impressed. Career planning steps are outlined, complete career examination is recommended,…

  16. Taking Advantage of Electric Field Induced Bacterial Aggregation for the Study of Interactions between Bacteria and Macromolecules by Capillary Electrophoresis.

    PubMed

    Sisavath, Nicolas; Got, Patrice; Charrière, Guillaume M; Destoumieux-Garzon, Delphine; Cottet, Hervé

    2015-07-01

    The quantification of interaction stoichiometry and binding constant between bacteria (or other microorganisms) and (macro)molecules remains a challenging issue for which only a few adapted methods are available. In this paper, a new methodology was developed for the determination of the interaction stoichiometry and binding constant between bacteria and (macro)molecules. The originality of this work is to take advantage of the bacterial aggregation phenomenon to directly quantify the free ligand concentration in equilibrated bacteria-ligand mixtures using frontal analysis continuous capillary electrophoresis. The described methodology does not require any sample preparation such as a filtration step or centrifugation. It was applied to the study of interactions between Erwinia carotovora and different generations of dendrigraft poly-L-lysines leading to quantitative information (i.e., stoichiometry and binding site constant). High stoichiometries in the order of 10^6-10^7 were determined between nanometric dendrimer-like ligands and the rod-shaped micrometric bacteria. The effect of the dendrimer generation on the binding constant and the stoichiometry is discussed. Stoichiometries were compared with those obtained by replacing the bacteria by polystyrene microbeads to demonstrate the internalization of the ligands inside the bacteria and the increase of the specific surface via the formation of vesicles. PMID:26086209

  17. Building Models in the Classroom: Taking Advantage of Sophisticated Geomorphic Numerical Tools Using a Simple Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.

    2014-12-01

    Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphical user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university level. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.

  18. Taking Advantage of Near-Field Heating in Order to Increase Ablated Volume during High Intensity Focused Ultrasound

    NASA Astrophysics Data System (ADS)

    Curiel, Laura; Damianou, Christakis; Komodromos, M.; Koupis, A.; Chapelon, Jean-Yves

    2006-05-01

    The goal of the study was to suggest ultrasonic parameters that take advantage of near-field heating in order to increase the volume of the ablated tissue and consequently reduce the total time of high intensity ultrasonic ablation. The main parameter evaluated as an indicator of producing controlled lesions was the ratio of the width of necrosis at the focal region over the width of necrosis in the near-field (WI/WN). A WI/WN close to 1 indicated a good reflection of the focal heating, meaning that the lesion was controlled and could be used to increase the ablated volume. The most significant ultrasonic parameter that reduced the treatment time was the delay between successive ultrasonic firings. It was found that at a spatial in situ intensity close to 1000 W/cm^2, the WI/WN is close to 1 even with a delay between successive ultrasonic firings as low as 10 s (transducer T1: 50 mm diameter, 40 mm radius of curvature, and frequency of 3 MHz). The lower the intensity or the higher the delay, the closer to unity is WI/WN. For a different transducer (T2: 40 mm diameter, 40 mm radius of curvature, and frequency of 3 MHz) the WI/WN was lower, indicating that the transducer geometry can play an important role in producing controlled lesions in the near-field. However, the same concepts were also observed for both geometries. This technique of increasing the ablated volume was verified in turkey tissue in vitro. The effect of other parameters, such as frequency, focal depth and area of the grid pattern, is still under investigation.

  19. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    PubMed

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions. PMID:25252799

  1. 5 CFR 792.207 - When does the child care subsidy program law become effective and how may agencies take advantage...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    When does the child care subsidy program law become effective and how may agencies take advantage of this law? Section 792.207, Administrative Personnel, OFFICE OF PERSONNEL MANAGEMENT (CONTINUED), CIVIL SERVICE REGULATIONS (CONTINUED), FEDERAL EMPLOYEES' HEALTH AND COUNSELING...

  2. A symbiotic gas exchange between bioreactors enhances microalgal biomass and lipid productivities: taking advantage of complementary nutritional modes.

    PubMed

    Santos, C A; Ferreira, M E; da Silva, T Lopes; Gouveia, L; Novais, J M; Reis, A

    2011-08-01

    This paper describes the association of two bioreactors, one photoautotrophic and the other heterotrophic, connected by the gas phase and allowing an exchange of O2 and CO2 gases between them, benefiting from a symbiotic effect. The association of two bioreactors was proposed with the aim of improving the microalgae oil productivity for biodiesel production. The outlet gas flow from the autotrophic (O2-enriched) bioreactor was used as the inlet gas flow for the heterotrophic bioreactor. In parallel, the outlet gas flow from another heterotrophic (CO2-enriched) bioreactor was used as the inlet gas flow for the autotrophic bioreactor. Aside from using the air supplied from the auto- and hetero-trophic bioreactors as controls, one mixotrophic bioreactor was also studied and used as a model, for its claimed advantage of CO2 and organic carbon being simultaneously assimilated. The microalga Chlorella protothecoides was chosen as a model due to its ability to grow under different nutritional modes (auto-, hetero-, and mixotrophic), and its ability to attain a high biomass productivity and lipid content, suitable for biodiesel production. The comparison between heterotrophic, autotrophic, and mixotrophic Chlorella protothecoides growth for lipid production revealed that heterotrophic growth achieved the highest biomass productivity and lipid content (>22%), and furthermore showed that these lipids had the most suitable fatty acid profile in order to produce high quality biodiesel. Both associations showed a biomass productivity 10-20% higher than the two separately operated bioreactors (controls), a gain which occurred on the fourth day. A more remarkable result would have been seen if in actuality the two bioreactors had been inter-connected in a closed loop. The biomass productivity gain would have been 30% and the lipid productivity gain would have been 100%, as seen by comparing the productivities of the symbiotic assemblage with the sum of the two…

  3. Artifacts to avoid while taking advantage of top-down mass spectrometry based detection of protein S-thiolation.

    PubMed

    Auclair, Jared R; Salisbury, Joseph P; Johnson, Joshua L; Petsko, Gregory A; Ringe, Dagmar; Bosco, Daryl A; Agar, Nathalie Y R; Santagata, Sandro; Durham, Heather D; Agar, Jeffrey N

    2014-05-01

    Bottom-up MS studies typically employ a reduction and alkylation step that eliminates a class of PTM, S-thiolation. Given that molecular oxygen can mediate S-thiolation from reduced thiols, which are abundant in the reducing intracellular milieu, we investigated the possibility that some S-thiolation modifications are artifacts of protein preparation. Cu/Zn-superoxide dismutase (SOD1) was chosen for this case study as it has a reactive surface cysteine residue, which is readily cysteinylated in vitro. The ability of oxygen to generate S-thiolation artifacts was tested by comparing purification of SOD1 from postmortem human cerebral cortex under aerobic and anaerobic conditions. S-thiolation was ∼50% higher in aerobically processed preparations, consistent with oxygen-dependent artifactual S-thiolation. The ability of endogenous small molecule disulfides (e.g. cystine) to participate in artifactual S-thiolation was tested by blocking reactive protein cysteine residues during anaerobic homogenization. A 50-fold reduction in S-thiolation occurred indicating that the majority of S-thiolation observed aerobically was artifact. Tissue-specific artifacts were explored by comparing brain- and blood-derived protein, with remarkably more artifacts observed in brain-derived SOD1. Given the potential for such artifacts, rules of thumb for sample preparation are provided. This study demonstrates that without taking extraordinary precaution, artifactual S-thiolation of highly reactive, surface-exposed, cysteine residues can result.

  4. Take Advantage of Constitution Day

    ERIC Educational Resources Information Center

    McCune, Bonnie F.

    2008-01-01

    The announcement of the mandate for Constitution and Citizenship Day shortly before September, 2005, probably led to groans of dismay. Not another "must-do" for teachers and schools already stressed by federal and state requirements for standardized tests, increasingly rigid curricula, and scrutiny from the public and officials. But the idea and…

  5. MOD* Lite: An Incremental Path Planning Algorithm Taking Care of Multiple Objectives.

    PubMed

    Oral, Tugcem; Polat, Faruk

    2016-01-01

    The need for determining a path from an initial location to a target one is a crucial task in many applications, such as virtual simulations, robotics, and computer games. Almost all of the existing algorithms are designed to find optimal or suboptimal solutions considering only a single objective, namely path length. However, in many real-life applications path length is not the sole criterion to optimize; there are often several criteria to be optimized that cannot be transformed into one another. In this paper, we introduce a novel multiobjective incremental algorithm, multiobjective D* lite (MOD* lite), built upon a well-known path planning algorithm, D* lite. A number of experiments are designed to compare the solution quality and execution time requirements of MOD* lite with the multiobjective A* algorithm, with an alternative genetic algorithm we developed (multiobjective genetic path planning), and with the strength Pareto evolutionary algorithm.
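
    The core departure from single-objective D* lite is that node costs become vectors compared by Pareto dominance rather than scalars. A minimal sketch of that comparison (illustrative only; the helper names and example objectives are our assumptions, not taken from the MOD* lite paper):

    ```python
    from typing import Tuple

    Cost = Tuple[float, ...]  # e.g. (path_length, traversal_risk)

    def dominates(a: Cost, b: Cost) -> bool:
        """True if cost vector `a` Pareto-dominates `b`: no worse in every
        objective and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(costs: list) -> list:
        """Keep only non-dominated cost vectors -- the set a multiobjective
        planner must maintain per node in place of a single scalar g-value."""
        return [c for c in costs if not any(dominates(o, c) for o in costs if o != c)]

    print(pareto_front([(10, 3), (12, 1), (11, 4), (9, 5)]))
    # -> [(10, 3), (12, 1), (9, 5)]; (11, 4) is dominated by (10, 3)
    ```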

  6. Optimized MPPT algorithm for boost converters taking into account the environmental variables

    NASA Astrophysics Data System (ADS)

    Petit, Pierre; Sawicki, Jean-Paul; Saint-Eve, Frédéric; Maufay, Fabrice; Aillerie, Michel

    2016-07-01

    This paper presents a study of the specific behavior of the boost DC-DC converters generally used for power conversion of PV panels connected to an HVDC (High Voltage Direct Current) bus. It follows earlier work pointing out that the converter MPPT (Maximum Power Point Tracker) is severely perturbed by output voltage variations, owing to the physical dependency among the input voltage, the output voltage, and the duty cycle of the PWM switching control of the MPPT. As a direct consequence, many converters connected to the same load perturb each other because of the output voltage variations induced by fluctuations on the HVDC bus, essentially due to a non-negligible bus impedance. In this paper we show that it is possible to include an internally computed variable responsible for compensating local and external variations, so as to take the environmental variables into account.
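
    The compensated algorithm itself is not given in the abstract. For reference, the baseline perturb-and-observe MPPT loop that such converters typically refine can be sketched as follows (the function names and the toy panel model are illustrative assumptions):

    ```python
    def perturb_and_observe(read_pv, set_duty, d0=0.5, step=0.01, iters=200):
        """Baseline P&O MPPT: nudge the PWM duty cycle and keep the direction
        that increases PV power. read_pv() returns (voltage, current) at the
        panel; set_duty(d) applies the duty cycle. This sketch assumes a stable
        output voltage; the paper's point is precisely that HVDC-bus
        fluctuations break that assumption and must be compensated for."""
        d, direction = d0, 1
        v, i = read_pv()
        p_prev = v * i
        for _ in range(iters):
            d = min(max(d + direction * step, 0.05), 0.95)
            set_duty(d)
            v, i = read_pv()
            p = v * i
            if p < p_prev:              # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
        return d

    # Toy demo: a fake PV/converter whose power peaks at duty = 0.62.
    state = {"d": 0.5}
    def set_duty(d): state["d"] = d
    def read_pv():
        p = max(0.0, 100 - 400 * (state["d"] - 0.62) ** 2)
        return p / 10.0, 10.0           # fake (V, I) chosen so that V * I == p

    print(round(perturb_and_observe(read_pv, set_duty), 2))  # converges near 0.62
    ```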

  7. Intra-host competition between co-infecting digeneans within a bivalve second intermediate host: dominance by priority-effect or taking advantage of others?

    PubMed

    Leung, Tommy L F; Poulin, Robert

    2011-03-01

    We experimentally investigated the interactions between two parasites known to manipulate their host's phenotype, the trematodes Acanthoparyphium sp. and Curtuteria australis, which infect the cockle Austrovenus stutchburyi. The larval stages of both species encyst within the tissue of the bivalve's muscular foot, with a preference for the tip of the foot. As more individuals accumulate at that site, they impair the burrowing behaviour of cockles and increase the probability of the parasites' transmission to a bird definitive host. However, individuals at the foot tip are also vulnerable to non-host predators in the form of foot-cropping fish which selectively bite off the foot tip of exposed cockles. Parasites encysted at the foot base are safe from such predators although they do not contribute to altering host behaviour, but nevertheless benefit from host manipulation as all parasites within the cockle are transmitted if it is ingested by a bird. Experimental infection revealed that Acanthoparyphium sp. and C. australis have different encystment patterns within the host, with proportionally fewer Acanthoparyphium metacercariae encysting at the foot tip than C. australis. This indicates that Acanthoparyphium may benefit indirectly from C. australis and incur a lower risk of non-host predation. However, in co-infections, not only did C. australis have higher infectivity than Acanthoparyphium, it also severely affected the latter's infection success. The asymmetrical strategies and interactions between the two species suggest that the advantages obtained from exploiting the host manipulation efforts of another parasite might be offset by traits such as reduced competitiveness in co-infections.

  8. Taking advantage of data on N leaching to improve estimates of N2O emission reductions from agriculture in response to management changes

    NASA Astrophysics Data System (ADS)

    Gurwick, N. P.; Tonitto, C.

    2012-12-01

    Estimates of reductions in N2O emissions from agricultural soils associated with different crop management practices often focus on in-field emissions. This is particularly true in the context of policy development for carbon offsets, which are highly relevant in California, given the state's global warming protection law (AB 32). However, data sets often do not cover an entire year, missing key times such as spring thaw, and only rarely do they span multiple years even though inter-annual variation can be large. In the most productive grain systems on tile-drained Mollisols in the U.S. there are no long-term data sets of N2O flux, although these agroecosystems have the highest application rates of N fertilizer in grain systems and are prime candidates for large reductions in N2O emissions. In contrast, estimates of the influence of management practices like cover crops are much stronger because more data are available, and downstream N2O emissions should shift proportionally. Nevertheless, these changes in downstream emissions are frequently not included in estimates of N2O flux change. As an example, cereal cover crops reduce N leakage by 70%, and leguminous cover crops reduce N leakage by 40%. These data should inform estimates of downstream N2O emissions from agricultural fields, particularly in the context of protocol development, where project developers or aggregators will have information about basic management of individual crop fields. Even the IPCC default guidelines for simple (Tier 1) emission factors could take this information into account. Despite the complexity of estimating downstream N2O emissions in the absence of site-specific hydrology data, the IPCC estimates that 30% of applied N is lost and that between 0.75% and 1.0% of lost N is converted to N2O. That single estimate should be refined based on data showing that leaching varies with management practices.
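
    The abstract's argument can be made concrete with the numbers it cites. A worked example of the IPCC-style Tier 1 arithmetic (the 200 kg N/ha application rate is our illustrative assumption; the other figures are quoted above):

    ```python
    # Indirect (downstream) N2O from leached N, using the abstract's numbers.
    applied_n = 200.0       # kg N/ha of fertilizer -- illustrative assumption
    frac_leached = 0.30     # IPCC: 30% of applied N is lost
    ef_leach = 0.0075       # low end of the 0.75-1.0% of lost N emitted as N2O-N

    def downstream_n2o(leach_reduction):
        """kg N2O-N/ha from leaching, given the fractional reduction in N
        leakage (0.70 for cereal cover crops, 0.40 for leguminous ones)."""
        return applied_n * frac_leached * (1 - leach_reduction) * ef_leach

    print(f"baseline {downstream_n2o(0.0):.3f}, "
          f"cereal cover {downstream_n2o(0.70):.3f}, "
          f"legume cover {downstream_n2o(0.40):.3f} kg N2O-N/ha")
    # baseline 0.450, cereal cover 0.135, legume cover 0.270
    ```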

  9. Taking Full Advantage of Children's Literature

    ERIC Educational Resources Information Center

    Serafini, Frank

    2012-01-01

    Teachers need a deeper understanding of the texts being discussed, in particular the various textual and visual aspects of picturebooks themselves, including the images, written text and design elements, to support how readers make sense of these texts. As teachers become familiar with aspects of literary criticism, art history, visual grammar,…

  10. Educational advantage.

    PubMed

    2006-06-01

    WHAT SPECIAL ADVANTAGE DOES JERHRE offer to research ethics education? Empirical research employs concepts and methods for understanding and addressing problems; the methods employed can be generalized to related problems in new contexts. Research published in JERHRE uses concepts and methods designed to understand and solve ethical problems in human research. These tools can be reused by JERHRE's readership as part of their learning and problem solving. Instead of telling scientists, students, ethics committee members and others what they ought to do, educators can use curriculum based on the empirical articles contained in JERHRE to enable learners to solve the particular research-related problems they confront. Each issue of JERHRE publishes curriculum based on articles published therein. The lesson plans are deliberately general so that they can be adapted to the particular learners. PMID:19385873

  11. Educational advantage.

    PubMed

    2006-03-01

    What special advantage does JERHRE offer to research ethics education? Empirical research employs concepts and methods for understanding and addressing problems; the methods employed can be generalized to related problems in new contexts. Research published in JERHRE uses concepts and methods designed to understand and solve ethical problems in human research. These tools can be reused by JERHRE's readership as part of their learning and problem solving. Instead of telling scientists, students, ethics committee members and others what they ought to do, educators can use curriculum based on the empirical articles contained in JERHRE to enable learners to solve the particular research-related problems they confront. Each issue of JERHRE publishes curriculum based on articles published therein. The lesson plans are deliberately general so that they can be adapted to the particular learners. PMID:19385863

  12. [Validation of the modified algorithm for predicting host susceptibility to viruses taking into account susceptibility parameters of primary target cell cultures and natural immunity factors].

    PubMed

    Zhukov, V A; Shishkina, L N; Safatov, A S; Sergeev, A A; P'iankov, O V; Petrishchenko, V A; Zaĭtsev, B N; Toporkov, V S; Sergeev, A N; Nesvizhskiĭ, Iu V; Vorob'ev, A A

    2010-01-01

    The paper presents results of testing a modified algorithm for predicting virus ID50 values in a host of interest by extrapolation from a model host taking into account immune neutralizing factors and thermal inactivation of the virus. The method was tested for A/Aichi/2/68 influenza virus in SPF Wistar rats, SPF CD-1 mice and conventional ICR mice. Each species was used as a host of interest while the other two served as model hosts. Primary lung and trachea cells and secretory factors of the rats' airway epithelium were used to measure parameters needed for the purpose of prediction. Predicted ID50 values were not significantly different (p = 0.05) from those experimentally measured in vivo. The study was supported by ISTC/DARPA Agreement 450p.

  13. Predictive Algorithm For Aiming An Antenna

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek K.

    1993-01-01

    Method of computing control signals to aim antenna based on predictive control-and-estimation algorithm that takes advantage of control inputs. Conceived for controlling antenna in tracking spacecraft and celestial objects, near-future trajectories of which are known. Also useful in enhancing aiming performances of other antennas and instruments that track objects that move along fairly well known paths.
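
    The report's control-and-estimation details are not reproduced here. The essential trick, commanding the antenna toward where the target will be once the control latency has elapsed, can be sketched as follows (the names and toy trajectory are our assumptions):

    ```python
    import math

    def predictive_command(trajectory, t_now, latency, current_angle, kp=0.8):
        """Lead the target: evaluate the known trajectory `latency` seconds
        ahead instead of at the current time, so the drive settles where the
        target will be. trajectory(t) gives the target angle (deg) at time t;
        kp is a simple proportional gain on the pointing error."""
        predicted = trajectory(t_now + latency)
        return current_angle + kp * (predicted - current_angle)

    # Toy target: a slow sinusoidal sweep, as for a tracked spacecraft pass.
    traj = lambda t: 45.0 + 10.0 * math.sin(0.01 * t)
    print(predictive_command(traj, t_now=100.0, latency=2.0, current_angle=52.0))
    ```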

  14. Parallelism of the SANDstorm hash algorithm.

    SciTech Connect

    Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree

    2009-09-01

    Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed, and they are not able to take advantage of the current trend of being run on multi-core platforms. The resulting speed limit reduces their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
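
    SANDstorm's internal structure is not described in the abstract. The generic pattern that any such parallelizability claim rests on, hashing independent chunks concurrently and then hashing the ordered digests, looks like this (a two-level tree-hash sketch over SHA-256, not the SANDstorm specification):

    ```python
    import hashlib
    from concurrent.futures import ProcessPoolExecutor

    def _leaf(chunk: bytes) -> bytes:
        return hashlib.sha256(chunk).digest()

    def parallel_tree_hash(data: bytes, chunk_size: int = 1 << 20) -> str:
        """Two-level tree hash: leaves are hashed in parallel across cores,
        then a root hash covers the ordered leaf digests. Unlike plain
        sequential chaining, the leaves have no dependency on one another."""
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with ProcessPoolExecutor() as pool:
            leaves = list(pool.map(_leaf, chunks))
        return hashlib.sha256(b"".join(leaves)).hexdigest()

    if __name__ == "__main__":
        print(parallel_tree_hash(b"x" * (8 << 20)))   # 8 MiB -> 8 parallel leaves
    ```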

  15. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  16. Taking advantage of acoustic inhomogeneities in photoacoustic measurements

    NASA Astrophysics Data System (ADS)

    Da Silva, Anabela; Handschin, Charles; Riedinger, Christophe; Piasecki, Julien; Mensah, Serge; Litman, Amélie; Akhouayri, Hassan

    2016-03-01

    Photoacoustics offers promising perspectives in probing and imaging subsurface optically absorbing structures in biological tissues. The optical fluence absorbed is partly dissipated into heat accompanied by microdilatations that generate acoustic pressure waves, the intensity of which is related to the amount of fluence absorbed. Hence the photoacoustic signal measured offers access, at least potentially, to a local monitoring of the absorption coefficient, in 3D if tomographic measurements are considered. However, due to both the diffusing and absorbing nature of the surrounding tissues, the major part of the fluence is deposited locally at the periphery of the tissue, generating an intense acoustic pressure wave that may hide relevant photoacoustic signals. Experimental strategies have been developed in order to measure exclusively the photoacoustic waves generated by the structure of interest (orthogonal illumination and detection). Temporal or more sophisticated filters (wavelets) can also be applied. However, the measurement of this primary acoustic wave carries a lot of information about the acoustically inhomogeneous nature of the medium. We propose a protocol that includes the processing of this primary intense acoustic wave, leading to the quantification of the surrounding medium sound speed and, if appropriate, to an acoustical parametric image of the heterogeneities. This information is then included as prior knowledge in the photoacoustic reconstruction scheme to improve the localization and quantification.

  17. Taking Advantage of Alice to Teach Programming Concepts

    ERIC Educational Resources Information Center

    Biju, Soly Mathew

    2013-01-01

    Learning the fundamentals of programming languages has always been a difficult task for students. It is equally challenging for lecturers to teach these concepts. A number of methods have been deployed by teachers to teach these concepts. This article analyses the result of a class test to identify fundamental programming concepts that students…

  18. Taking Advantage of Murder and Mayhem for Social Studies.

    ERIC Educational Resources Information Center

    Harden, G. Daniel

    1991-01-01

    Suggests the use of key historical antisocial acts to teach social studies concepts as a means of arousing the interest of adolescents. Recommends overcoming initial sensationalism by shifting emphasis to more appropriate interests. Includes discussion of the Abraham Lincoln and John F. Kennedy assassinations and the Rosenberg spy case. Suggests…

  19. Superior piezoelectric composite films: taking advantage of carbon nanomaterials.

    PubMed

    Saber, Nasser; Araby, Sherif; Meng, Qingshi; Hsu, Hung-Yao; Yan, Cheng; Azari, Sara; Lee, Sang-Heon; Xu, Yanan; Ma, Jun; Yu, Sirong

    2014-01-31

    Piezoelectric composites comprising an active phase of ferroelectric ceramic and a polymer matrix have recently found numerous sensory applications. However, it remains a major challenge to further improve their electromechanical response for advanced applications such as precision control and monitoring systems. We here investigated the incorporation of graphene platelets (GnPs) and multi-walled carbon nanotubes (MWNTs), each with various weight fractions, into PZT (lead zirconate titanate)/epoxy composites to produce three-phase nanocomposites. The nanocomposite films show markedly improved piezoelectric coefficients and electromechanical responses (50%) besides an enhancement of ~200% in stiffness. The carbon nanomaterials strengthened the impact of electric field on the PZT particles by appropriately raising the electrical conductivity of the epoxy. GnPs have been proved to be far more promising in improving the poling behavior and dynamic response than MWNTs. The superior dynamic sensitivity of GnP-reinforced composite may be caused by the GnPs' high load transfer efficiency arising from their two-dimensional geometry and good compatibility with the matrix. The reduced acoustic impedance mismatch resulting from the improved thermal conductance may also contribute to the higher sensitivity of GnP-reinforced composite. This research pointed out the potential of employing GnPs to develop highly sensitive piezoelectric composites for sensing applications. PMID:24398819

  1. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
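
    All of these algorithms compute LRU stack distances, from which the hit ratio of every cache size follows in a single pass over the trace. A simple serial version (quadratic in trace length, unlike the efficient serial algorithm the paper parallelizes) shows the quantity being computed:

    ```python
    def stack_distances(trace):
        """For each reference, return its LRU stack distance: the depth of the
        address in the LRU stack (1 = most recently used), or inf on first use.
        A reference hits in a fully associative LRU cache of size C exactly
        when its stack distance is <= C."""
        stack, out = [], []               # most recently used at the end
        for addr in trace:
            if addr in stack:
                out.append(len(stack) - stack.index(addr))
                stack.remove(addr)
            else:
                out.append(float("inf"))
            stack.append(addr)
        return out

    print(stack_distances(["a", "b", "c", "a", "b", "b"]))
    # -> [inf, inf, inf, 3, 3, 1]
    ```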

  2. Algorithms for minimization of charge sharing effects in a hybrid pixel detector taking into account hardware limitations in deep submicron technology

    NASA Astrophysics Data System (ADS)

    Maj, P.; Baumbaugh, A.; Deptuch, G.; Grybos, P.; Szczygiel, R.

    2012-12-01

    Charge sharing is the main limitation of pixel detectors used in spectroscopic applications, and this applies to both time and amplitude/energy spectroscopy. Even though charge sharing has been the subject of many studies, there is still no ultimate solution that could be implemented in hardware to suppress its negative effects. This is mainly because of the strong demand for low power dissipation and small silicon area in a single pixel. The first solution to this problem was proposed by CERN and consequently implemented in the Medipix III chip. However, due to pixel-to-pixel threshold dispersions and some imperfections of the simplified algorithm, the hit allocation was not functioning properly. We present novel algorithms which allow proper hit allocation even in the presence of charge sharing. They can be implemented in an integrated circuit using a deep submicron technology. In the performed simulations, we assumed not only the diffusive charge spread occurring as the charge drifts towards the electrodes but also limitations in the readout electronics, i.e. signal fluctuations due to noise and mismatch (gain and offsets). The simulations show that, using for example a silicon pixel detector in the low X-ray energy range, we have been able to perform proper hit position identification and use the information from summing inter-pixel nodes for spectroscopy measurements.
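
    The chip-level algorithms are not spelled out in the abstract. The basic charge-summing idea they build on, summing the deposited charge over a neighborhood and allocating the hit to a single pixel, can be illustrated offline (a simplified sketch that ignores the noise and gain/offset mismatch the paper explicitly models):

    ```python
    import numpy as np

    def allocate_hits(frame, threshold):
        """Toy charge-summing hit allocation: for each pixel, sum the charge in
        its 2x2 neighborhood (as at an inter-pixel summing node) and register a
        hit only if the pixel holds the neighborhood maximum and the summed
        charge passes the threshold."""
        hits = []
        rows, cols = frame.shape
        for r in range(rows - 1):
            for c in range(cols - 1):
                block = frame[r:r + 2, c:c + 2]
                if block.sum() >= threshold and frame[r, c] == block.max():
                    hits.append((r, c, float(block.sum())))
        return hits

    # One photon's charge shared across four pixels still yields one hit:
    f = np.zeros((4, 4))
    f[1:3, 1:3] = [[5, 2], [1, 1]]
    print(allocate_hits(f, threshold=8))   # -> [(1, 1, 9.0)]
    ```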

  3. The Certification Advantage

    ERIC Educational Resources Information Center

    Foster, John C.; Pritz, Sandra G.

    2006-01-01

    Certificates have become an important career credential and can give students an advantage when they enter the workplace. However, many types of certificates exist, and the number of people seeking them and organizations offering them are both growing rapidly. In a time of such growth, the authors review some of the basics about certification--the…

  4. Advantages of Team Teaching

    ERIC Educational Resources Information Center

    Frey, John

    1973-01-01

    Describes a high school biology program which successfully utilizes team teaching. Outlines the advantages of team teaching and how it is used in the large group lecture-discussion situation, with small groups in the laboratory and on field trips. (JR)

  5. A limited-memory algorithm for bound-constrained optimization

    SciTech Connect

    Byrd, R.H.; Peihuang, L.; Nocedal, J.

    1996-03-01

    An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
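
    This is the L-BFGS-B algorithm, which SciPy exposes directly, so exercising it on a bound-constrained problem takes only a few lines:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Bound-constrained Rosenbrock: the solver needs only f and its gradient,
    # maintaining a limited-memory BFGS approximation of the Hessian internally.
    def f(x):
        return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

    def grad(x):
        return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                         200 * (x[1] - x[0]**2)])

    res = minimize(f, x0=[-1.2, 1.0], jac=grad, method="L-BFGS-B",
                   bounds=[(-2.0, 0.8), (-2.0, 2.0)])  # box keeps x[0] <= 0.8
    print(res.x, res.fun)   # the optimum is pinned to the bound at x[0] = 0.8
    ```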

  6. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
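
    The patented combinatorial reorganization itself is not reproduced here, but the problem it accelerates, nonnegative least squares over many observation vectors sharing one design matrix, and the naive column-at-a-time baseline it outperforms are easy to show:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def nnls_multiple_rhs(A, B):
        """Solve min ||A x - b||, x >= 0 independently for every column b of B.
        The fast combinatorial algorithm gains its speed by grouping columns
        that share the same active/passive set, so the expensive factorizations
        are done once per group; this baseline redoes them for every column."""
        return np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])

    rng = np.random.default_rng(0)
    A = rng.random((50, 4))
    X_true = rng.random((4, 200))          # nonnegative ground truth
    B = A @ X_true
    print(np.allclose(nnls_multiple_rhs(A, B), X_true, atol=1e-6))  # True
    ```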

  7. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
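
    Computational refocusing is, at heart, a shift-and-add over sub-aperture views. A minimal sketch of the idea, with the post-reconstruction filter reduced to a bare intensity threshold (the names and details are our simplifications, not the paper's implementation):

    ```python
    import numpy as np

    def refocus(subapertures, offsets, alpha):
        """Shift-and-add refocusing: translate each sub-aperture image in
        proportion to its (u, v) offset and the focal parameter alpha, then
        average. Particles at the chosen depth align and reinforce; particles
        at other depths smear out across the average."""
        acc = np.zeros_like(subapertures[0], dtype=float)
        for img, (du, dv) in zip(subapertures, offsets):
            acc += np.roll(img, (round(alpha * du), round(alpha * dv)), axis=(0, 1))
        return acc / len(subapertures)

    def filtered_refocus(subapertures, offsets, alpha, thresh):
        """Filtered refocusing, reduced to its essence: keep only the
        reinforced (in-focus) intensity in the reconstructed depth slice."""
        slice_ = refocus(subapertures, offsets, alpha)
        return np.where(slice_ >= thresh, slice_, 0.0)
    ```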

  8. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
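
    The flavor of such a sampling-based stopping rule can be shown in a few lines: keep drawing realizations of a gap estimator until the confidence interval on its mean is tight enough (a generic sequential-sampling sketch; the dissertation's contribution is precisely the conditions under which such a rule is asymptotically valid):

    ```python
    import math, random

    def stop_when_tight(sample_gap, eps=0.05, z=1.96, n0=30, n_max=100_000):
        """Draw i.i.d. samples of an optimality-gap estimator until the
        half-width of the z-level confidence interval falls below eps.
        sample_gap() returns one realization of the gap estimate."""
        xs = [sample_gap() for _ in range(n0)]
        while True:
            n = len(xs)
            mean = sum(xs) / n
            var = sum((x - mean) ** 2 for x in xs) / (n - 1)
            half = z * math.sqrt(var / n)
            if half <= eps or n >= n_max:
                return mean, half, n
            xs.extend(sample_gap() for _ in range(n))   # double the sample size

    random.seed(1)
    print(stop_when_tight(lambda: random.gauss(0.3, 1.0)))
    ```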

  9. Taking Risks.

    ERIC Educational Resources Information Center

    Merson, Martha, Ed.; Reuys, Steve, Ed.

    1999-01-01

    Following an introduction on "Taking Risks" (Martha Merson), this journal contains 11 articles on taking risks in teaching adult literacy, mostly by educators in the Boston area. The following are included: "My Dreams Are Bigger than My Fears Now" (Sharon Carey); "Making a Pitch for Poetry in ABE [Adult Basic Education]" (Marie Hassett); "Putting…

  10. Creating corporate advantage.

    PubMed

    Collis, D J; Montgomery, C A

    1998-01-01

    What differentiates truly great corporate strategies from the merely adequate? How can executives at the corporate level create tangible advantage for their businesses that makes the whole more than the sum of the parts? This article presents a comprehensive framework for value creation in the multibusiness company. It addresses the most fundamental questions of corporate strategy: What businesses should a company be in? How should it coordinate activities across businesses? What role should the corporate office play? How should the corporation measure and control performance? Through detailed case studies of Tyco International, Sharp, the Newell Company, and Saatchi and Saatchi, the authors demonstrate that the answers to all those questions are driven largely by the nature of a company's special resources--its assets, skills, and capabilities. These range along a continuum from the highly specialized at one end to the very general at the other. A corporation's location on the continuum constrains the set of businesses it should compete in and limits its choices about the design of its organization. Applying the framework, the authors point out the common mistakes that result from misaligned corporate strategies. Companies mistakenly enter businesses based on similarities in products rather than the resources that contribute to competitive advantage in each business. Instead of tailoring organizational structures and systems to the needs of a particular strategy, they create plain-vanilla corporate offices and infrastructures. The company examples demonstrate that one size does not fit all. One can find great corporate strategies all along the continuum.

  11. Binocular advantages in reading.

    PubMed

    Jainta, Stephanie; Blythe, Hazel I; Liversedge, Simon P

    2014-03-01

    Reading, an essential skill for successful function in today's society, is a complex psychological process involving vision, memory, and language comprehension. Variability in fixation durations during reading reflects the ease of text comprehension, and increased word frequency results in reduced fixation times. Critically, readers not only process the fixated foveal word but also preprocess the parafoveal word to its right, thereby facilitating subsequent foveal processing. Typically, text is presented binocularly, and the oculomotor control system precisely coordinates the two frontally positioned eyes online. Binocular, compared to monocular, visual processing typically leads to superior performance, termed the "binocular advantage"; few studies have investigated the binocular advantage in reading. We used saccade-contingent display change methodology to demonstrate the benefit of binocular relative to monocular text presentation for both parafoveal and foveal lexical processing during reading. Our results demonstrate that denial of a unified visual signal derived from binocular inputs provides a cost to the efficiency of reading, particularly in relation to high-frequency words. Our findings fit neatly with current computational models of eye movement control during reading, wherein successful word identification is a primary determinant of saccade initiation.

  12. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding.

    PubMed

    Saleh, Marwan D; Eswaran, C

    2012-01-01

    Retinal blood vessel detection and analysis play vital roles in the early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtering and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.
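
    A hedged sketch of such a pipeline (contrast enhancement, filtering, h-maxima-style background flattening via morphological reconstruction, then thresholding); the h value and filter size are illustrative, not the paper's parameters:

    ```python
    from scipy import ndimage as ndi
    from skimage import exposure, filters, morphology

    def segment_vessels(green_channel, h=0.05):
        # Vessels are dark in the green channel, so invert after enhancement.
        img = 1.0 - exposure.equalize_adapthist(green_channel)
        img = ndi.median_filter(img, size=3)               # noise filtering
        # h-maxima-style flattening: suppress peaks shallower than h ...
        background = morphology.reconstruction(img - h, img, method='dilation')
        detail = img - background                          # ... keep what stands out
        return detail > filters.threshold_otsu(detail)     # thresholding
    ```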

  13. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels, which plays a vital role in reducing proliferative diabetic retinopathy and preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is best suited for fast processing applications.

  14. Analysis of a parallel multigrid algorithm

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tuminaro, Ray S.

    1989-01-01

    The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.

  15. Taking antacids

    MedlinePlus

    ... magnesium may cause diarrhea. Brands with calcium or aluminum may cause constipation. Rarely, brands with calcium may ... you take large amounts of antacids that contain aluminum, you may be at risk for calcium loss, ...

  16. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a kind of random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on standard benchmark sequences, both Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the calculated protein sequence energy value, proving it to be an effective way to predict the structure of proteins. PMID:25069136
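
    For illustration, a generic PSO velocity update with the kind of stochastic disturbance term the abstract mentions (the coefficients are illustrative, not the paper's):

    ```python
    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, sigma=0.05,
                 rng=np.random.default_rng()):
        """One particle-swarm update with an added noise term to improve search."""
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
             + sigma * rng.standard_normal(x.shape))   # stochastic disturbance
        return x + v, v
    ```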

  17. A Hybrid Graph Representation for Recursive Backtracking Algorithms

    NASA Astrophysics Data System (ADS)

    Abu-Khzam, Faisal N.; Langston, Michael A.; Mouawad, Amer E.; Nolan, Clinton P.

    Many exact algorithms for NP-hard graph problems adopt the old Davis-Putnam branch-and-reduce paradigm. The performance of these algorithms often suffers from the increasing number of graph modifications, such as deletions, that reduce the problem instance and have to be "taken back" frequently during the search process. The use of efficient data structures is necessary for fast graph modification modules as well as fast take-back procedures. In this paper, we investigate practical implementation-based aspects of exact algorithms by providing a hybrid graph representation that addresses the take-back challenge and combines the advantage of O(1) adjacency queries in adjacency matrices with the advantage of efficient neighborhood traversal in adjacency lists.
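
    A minimal sketch of such a hybrid structure (illustrative, not the authors' implementation): a boolean matrix answers adjacency queries in O(1), per-vertex neighbor sets support fast traversal, and a trail stack lets deletions be taken back during backtracking.

    ```python
    class HybridGraph:
        def __init__(self, n):
            self.adj = [[False] * n for _ in range(n)]  # O(1) adjacency queries
            self.nbrs = [set() for _ in range(n)]       # fast neighborhood traversal
            self.trail = []                             # modifications to take back

        def adjacent(self, u, v):
            return self.adj[u][v]                       # O(1) matrix lookup

        def add_edge(self, u, v):
            self.adj[u][v] = self.adj[v][u] = True
            self.nbrs[u].add(v)
            self.nbrs[v].add(u)

        def delete_vertex(self, v):
            removed = list(self.nbrs[v])
            for u in removed:
                self.adj[u][v] = self.adj[v][u] = False
                self.nbrs[u].discard(v)
            self.nbrs[v].clear()
            self.trail.append((v, removed))             # record for take-back

        def take_back(self):
            v, removed = self.trail.pop()               # undo the last deletion
            for u in removed:
                self.add_edge(u, v)
    ```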

  18. Taking Turns

    ERIC Educational Resources Information Center

    Hopkins, Brian

    2010-01-01

    Two people take turns selecting from an even number of items. Their relative preferences over the items can be described as a permutation, then tools from algebraic combinatorics can be used to answer various questions. We describe each person's optimal selection strategies including how each could make use of knowing the other's preferences. We…

  19. Double Take

    ERIC Educational Resources Information Center

    Educational Leadership, 2011

    2011-01-01

    This paper begins by discussing the results of two studies recently conducted in Australia. According to the two studies, taking a gap year between high school and college may help students complete a degree once they return to school. The gap year can involve such activities as travel, service learning, or work. Then, the paper presents links to…

  1. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  2. Evaluation of TCP congestion control algorithms.

    SciTech Connect

    Long, Robert Michael

    2003-12-01

    Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high-speed, wide area network links to permit remote access to their supercomputer systems. The current TCP congestion algorithm does not take full advantage of high-delay, large-bandwidth environments. This report involves evaluating alternative TCP congestion algorithms and comparing them with the currently used congestion algorithm. The goal was to find if an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30 ms delay, and two-to-one with a 30 ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
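
    For reference, the textbook window-update rules that distinguish the evaluated algorithms (the published Scalable TCP constants are shown; this is not the lab configuration):

    ```python
    def std_tcp(cwnd, loss):
        """Standard congestion avoidance: +1 MSS per RTT, halve on loss."""
        return cwnd / 2.0 if loss else cwnd + 1.0 / cwnd

    def scalable_tcp(cwnd, loss, a=0.01, b=0.125):
        """Scalable TCP: fixed per-ACK increase, gentler multiplicative decrease,
        so recovery time no longer grows with the window size."""
        return cwnd * (1.0 - b) if loss else cwnd + a
    ```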

  3. A new algorithm for agile satellite-based acquisition operations

    NASA Astrophysics Data System (ADS)

    Bunkheila, Federico; Ortore, Emiliano; Circi, Christian

    2016-06-01

    Taking advantage of the high manoeuvrability and accurate pointing of so-called agile satellites, an algorithm which allows efficient management of the operations concerning optical acquisitions is described. Fundamentally, this algorithm can be subdivided into two parts: in the first, the algorithm performs a geometric classification of the areas of interest and partitions these areas into stripes which develop along the optimal scan directions; in the second, it computes the succession of the time windows in which the acquisition operations of the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and performance of the proposed algorithm have been determined and discussed considering the case of periodic Sun-synchronous orbits.

  4. A real-time FORTRAN implementation of a sensor failure detection, isolation and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.

    1984-01-01

    An advanced sensor failure detection, isolation, and accommodation algorithm has been developed by NASA for the F100 turbofan engine. The algorithm takes advantage of the analytical redundancy of the sensors to improve the reliability of the sensor set. The method requires the controls computer to determine when a sensor failure has occurred without the help of redundant hardware sensors in the control system. The controls computer provides an estimate of the correct value of the output of the failed sensor. The algorithm has been programmed in FORTRAN using a real-time microprocessor-based controls computer. A detailed description of the algorithm and its implementation on a microprocessor is given.
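
    A generic sketch of the analytical-redundancy idea (not the NASA algorithm itself): each sensor is compared against a model-based estimate, and the estimate is substituted when the residual exceeds a threshold.

    ```python
    def accommodate(measurements, estimates, thresholds):
        """Pass each sensor reading through, unless its residual against the
        analytical (model-based) estimate flags a failure; then substitute."""
        accommodated = {}
        for name, reading in measurements.items():
            failed = abs(reading - estimates[name]) > thresholds[name]
            accommodated[name] = estimates[name] if failed else reading
        return accommodated
    ```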

  5. Home advantage in professional tennis.

    PubMed

    Koning, Ruud H

    2011-01-01

    Home advantage is a pervasive phenomenon in sport. It has been established in team sports such as basketball, baseball, American football, and European soccer. Attention to home advantage in individual sports has so far been limited. The aim of this study was to examine home advantage in professional tennis. Match-level data are used to measure home advantage. The test used is based on logit models, and consistent specification is addressed explicitly. Depending on the interpretation of home advantage, restrictions on the specification of the model need to be imposed. We find that although significant home advantage exists for men, the performance of women tennis players appears to be unaffected by home advantage.

  6. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
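
    As a simplified stand-in for the report's semidefinite-programming formulation, a bounded least-squares allocation captures the same inputs and outputs (the actuator map B and the limits are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    def allocate(B, wrench_cmd, lower, upper):
        """Choose actuator efforts u minimizing ||B @ u - wrench_cmd||
        subject to per-actuator lower/upper limits."""
        return lsq_linear(B, wrench_cmd, bounds=(lower, upper)).x
    ```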

  7. Modeling words with subword units in an articulatorily constrained speech recognition algorithm

    SciTech Connect

    Hogden, J.

    1997-11-20

    The goal of speech recognition is to find the most probable word given the acoustic evidence, i.e. a string of VQ codes or acoustic features. Speech recognition algorithms typically take advantage of the fact that the probability of a word, given a sequence of VQ codes, can be calculated.

  8. Home advantage in sport: an overview of studies on the advantage of playing at home.

    PubMed

    Nevill, A M; Holder, R L

    1999-10-01

    This review identifies the most likely causes of home advantage. The results of previous studies have identified 4 factors thought to be responsible for the home advantage. These can be categorised under the general headings of crowd, learning, travel and rule factors. From the accumulated evidence, rule factors were found to play only a minor role (in a limited number of sports) in contributing to home advantage. Studies investigating the effect of learning factors found that little benefit was to be gained from being familiar with the local conditions when playing at home. There was evidence to suggest that travel factors were responsible for part of the home advantage, provided the journey involved crossing a number of time zones. However, since high levels of home advantage are observed within countries where travel distances are not great, travel factors were not thought to be a major cause of home advantage. The evidence from studies investigating crowd factors appeared to provide the most dominant causes of home advantage. A number of studies provide strong evidence that home advantage increases with crowd size, until the crowd reaches a certain size or consistency (a more balanced number of home and away supporters), after which a peak in home advantage is observed. Two possible mechanisms were proposed to explain these observations: either (i) the crowd is able to raise the performance of the home competitors relative to the away competitors; or (ii) the crowd is able to influence the officials to subconsciously favour the home team. The literature supports the latter to be the most important and dominant explanation. Clearly, it only takes 2 or 3 crucial decisions to go against the away team or in favour of the home team to give the side playing at home the 'edge'.

  9. Implicit, nonswitching, vector-oriented algorithm for steady transonic flow

    NASA Technical Reports Server (NTRS)

    Lottati, I.

    1983-01-01

    A rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The employment of low-cost vector array processors makes such calculations economically feasible. However, for full utilization of the new hardware, the developed algorithms must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.

  10. Flow mediated endothelium function: advantages of an automatic measuring technique

    NASA Astrophysics Data System (ADS)

    Maio, Yamila; Casciaro, Mariano E.; José Urcola y, Maria; Craiem, Damian

    2007-11-01

    The objective of this work is to show the advantages of a non-invasive automated method for measuring flow-mediated dilation (FMD) in the forearm. This dilation takes place in response to the shear stress generated by increased blood flow, sensed by the endothelium, after the release of an occlusion sustained over time. The method consists of three stages: the continuous acquisition of images of the brachial artery using ultrasound techniques, the pulse-to-pulse measurement of the vessel's diameter by means of a border detection algorithm, and the later analysis of the results. By means of this technique one can obtain not only the maximum dilation percentage (FMD%), but a continuous diameter curve that allows evaluation of other relevant aspects such as dilation speed, how long dilation is sustained, and overall maneuver performance. The simplicity of this method, the robustness of the technique, and the accessibility of the required elements make it a viable alternative of great clinical value for diagnosis in the early detection of numerous cardiovascular pathologies.

  11. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is given in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and of intersecting, and prunes candidate frequent item sets as Apriori does. Finally, the new algorithm is applied in a simulation of a car crash test analysis system. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
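
    The vertical layout reduces support counting to set intersection, which is the source of the speedup the abstract credits; a minimal sketch with toy data:

    ```python
    def support(tidsets, itemset):
        """Support of an itemset = size of the intersection of its items' tidsets."""
        return len(set.intersection(*(tidsets[i] for i in itemset)))

    # Each item maps to the set of transaction ids that contain it.
    tidsets = {'a': {1, 2, 3}, 'b': {2, 3}, 'c': {1, 3}}
    assert support(tidsets, ('a', 'b')) == 2   # transactions 2 and 3 contain both
    ```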

  12. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    NASA Astrophysics Data System (ADS)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem which causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm, combining the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and a simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are presented to show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.
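
    A hedged sketch of the simulated-annealing side of such a hybrid (a generic Metropolis acceptance rule, not ARGSA's exact operators): offspring that score worse than the parent are still accepted with a temperature-dependent probability, which helps the rule search escape local optima.

    ```python
    import math
    import random

    def accept(parent_fitness, child_fitness, temperature):
        """Metropolis acceptance for a maximization problem."""
        if child_fitness >= parent_fitness:
            return True
        return random.random() < math.exp((child_fitness - parent_fitness) / temperature)
    ```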

  13. Home advantage in Greek football.

    PubMed

    Armatas, Vasilis; Pollard, Richard

    2014-01-01

    Home advantage as it relates to team performance at football was examined in Superleague Greece using nine seasons of game-by-game performance data, a total of 2160 matches. After adjusting for team ability and annual fluctuations in home advantage, there were significant differences between teams. Previous findings regarding the role of territorial protection were strengthened by the fact that home advantage was above average for the team from Xanthi (P = 0.015), while lower for teams from the capital city Athens (P = 0.008). There were differences between home and away teams in the incidence of most of the 13 within-game match variables, but associated effect sizes were only moderate. In contrast, outcome ratios derived from these variables, and measuring shot success, had negligible effect sizes. This supported a previous finding that home and away teams differed in the incidence of on-the-ball behaviours, but not in their outcomes. By far the most important predictor of home advantage, as measured by goal difference, was the difference between home and away teams in terms of kicked shots from inside the penalty area. Other types of shots had little effect on the final score. The absence of a running track between spectators and the playing field was also a significant predictor of goal difference, worth an average of 0.102 goals per game to the home team. Travel distance did not affect home advantage.

  14. Advantages of proteins being disordered

    PubMed Central

    Liu, Zhirong; Huang, Yongqi

    2014-01-01

    The past decade has witnessed great advances in our understanding of protein structure-function relationships in terms of the ubiquitous existence of intrinsically disordered proteins (IDPs) and intrinsically disordered regions (IDRs). The structural disorder of IDPs/IDRs enables them to play essential functions that are complementary to those of ordered proteins. In addition, IDPs/IDRs are persistent in evolution. Therefore, they are expected to possess some advantages over ordered proteins. In this review, we summarize and survey nine possible advantages of IDPs/IDRs: economizing genome/protein resources, overcoming steric restrictions in binding, achieving high specificity with low affinity, increasing binding rate, facilitating posttranslational modifications, enabling flexible linkers, preventing aggregation, providing resistance to non-native conditions, and allowing compatibility with more available sequences. Some potential advantages of IDPs/IDRs are not well understood and require both experimental and theoretical approaches to decipher. The connection with protein design is also briefly discussed. PMID:24532081

  15. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix

    SciTech Connect

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-06-01

    We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
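
    For scale, this is the kind of block eigensolver call the abstract compares against (SciPy's LOBPCG on a stand-in matrix, not the authors' PPCG implementation):

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import lobpcg

    A = diags(np.arange(1.0, 1001.0))   # stand-in sparse Hermitian matrix
    X = np.random.rand(1000, 20)        # block of 20 starting vectors
    # Block iteration: all 20 algebraically smallest eigenpairs converge together.
    vals, vecs = lobpcg(A, X, largest=False, tol=1e-6, maxiter=200)
    ```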

  16. Energy Advantages for Green Schools

    ERIC Educational Resources Information Center

    Griffin, J. Tim

    2012-01-01

    Because of many advantages associated with central utility systems, school campuses, from large universities to elementary schools, have used district energy for decades. District energy facilities enable thermal and electric utilities to be generated with greater efficiency and higher system reliability, while requiring fewer maintenance and…

  17. Competitive Intelligence and Social Advantage.

    ERIC Educational Resources Information Center

    Davenport, Elisabeth; Cronin, Blaise

    1994-01-01

    Presents an overview of issues concerning civilian competitive intelligence (CI). Topics discussed include competitive advantage in academic and research environments; public domain information and libraries; covert and overt competitive intelligence; data diversity; use of the Internet; cooperative intelligence; and implications for library and…

  18. Achieving a sustainable service advantage.

    PubMed

    Coyne, K P

    1993-01-01

    Many managers believe that superior service should play little or no role in competitive strategy; they maintain that service innovations are inherently copiable. However, the author states that this view is too narrow. For a company to achieve a lasting service advantage, it must base a new service on a capability gap that competitors cannot or will not copy.

  19. Information Technology: Tomorrow's Advantage Today.

    ERIC Educational Resources Information Center

    Haag, Stephen; Keen, Peter

    This textbook is designed for a one-semester introductory course in which the goal is to give students a foundation in the basics of information technology (IT). It focuses on how the technology works, issues relating to its use and development, how it can lend personal and business advantages, and how it is creating a globally networked society.…

  1. An Experiment in Comparative Advantage.

    ERIC Educational Resources Information Center

    Haupert, Michael J.

    1996-01-01

    Describes an undergraduate economics course experiment designed to teach the concepts of comparative advantage and opportunity costs. Students have a limited number of labor hours and can choose to produce either wheat or steel. As the project progresses, the students trade commodities in an attempt to maximize use of their labor hours. (MJP)

  2. Selective advantage for sexual reproduction

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel

    2006-03-01

    We develop a simplified model for sexual replication within the quasispecies formalism. We assume that the genomes of the replicating organisms are two-chromosomed and diploid, and that the fitness is determined by the number of chromosomes that are identical to a given master sequence. We also assume that there is a cost to sexual replication, given by a characteristic time τseek during which haploid cells seek out a mate with which to recombine. If the mating strategy is such that only viable haploids can mate, then when τseek = 0, it is possible to show that sexual replication will always outcompete asexual replication. However, as τseek increases, sexual replication only becomes advantageous at progressively higher mutation rates. Once the time cost for sex reaches a critical threshold, the selective advantage for sexual replication disappears entirely. The results of this talk suggest that sexual replication is not advantageous in small populations per se, but rather in populations with low replication rates. In this regime, the cost for sex is sufficiently low that the selective advantage obtained through recombination leads to the dominance of the strategy. In fact, at a given replication rate and for a fixed environment volume, sexual replication is selected for in large populations because of the reduced time spent finding a reproductive partner.

  3. Selective advantage for sexual reproduction

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel

    2006-06-01

    This paper develops a simplified model for sexual reproduction within the quasispecies formalism. The model assumes a diploid genome consisting of two chromosomes, where the fitness is determined by the number of chromosomes that are identical to a given master sequence. We also assume that there is a cost to sexual reproduction, given by a characteristic time τseek during which haploid cells seek out a mate with which to recombine. If the mating strategy is such that only viable haploids can mate, then when τseek = 0, it is possible to show that sexual reproduction will always outcompete asexual reproduction. However, as τseek increases, sexual reproduction only becomes advantageous at progressively higher mutation rates. Once the time cost for sex reaches a critical threshold, the selective advantage for sexual reproduction disappears entirely. The results of this paper suggest that sexual reproduction is not advantageous in small populations per se, but rather in populations with low replication rates. In this regime, the cost for sex is sufficiently low that the selective advantage obtained through recombination leads to the dominance of the strategy. In fact, at a given replication rate and for a fixed environment volume, sexual reproduction is selected for in large populations because of the reduced time spent finding a reproductive partner.

  4. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new optimization strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are reserved in the process of evolving adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP.
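
    A hedged sketch of the pheromone-update idea (generic ACO evaporation and deposit plus an extra boost on the critical edges retained by the Physarum model; the blend factor eps is illustrative):

    ```python
    import numpy as np

    def update_pheromone(tau, deposits, critical_mask, rho=0.5, eps=0.2):
        """Standard ACO evaporation/deposit, plus extra reinforcement of the
        critical paths identified by the Physarum-inspired model (PMM)."""
        tau = (1.0 - rho) * tau + deposits   # evaporate, then deposit ant trails
        tau[critical_mask] *= 1.0 + eps      # reinforce PMM critical edges
        return tau
    ```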

  5. Algorithms to Automate LCLS Undulator Tuning

    SciTech Connect

    Wolf, Zachary

    2010-12-03

    Automation of the LCLS undulator tuning offers many advantages to the project. Automation can substantially reduce the amount of time the tuning takes. Undulator tuning is fairly complex, and automation makes the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents the algorithms implemented in a computer program written to automate LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 × 10⁻⁶ Tm. The second field integral must be below 50 × 10⁻⁶ Tm². The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 × 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements; the field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing the tuning algorithms is described, and the algorithms used to determine the required number, sizes, and placement of the shims are discussed in detail.

  6. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise together with the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape, while Perlin noise and Simplex noise are used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and the impact on low-lying areas of global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes; hence, they can also be used as tools to assist science education. The algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world: the random algorithms used in terrain generation not only generate the terrains themselves but are also capable of simulating weather patterns.
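
    A minimal diamond-square implementation on a (2^n + 1)-sized grid, for readers unfamiliar with the algorithm (halving the noise scale at each level is the standard choice, not specific to this study):

    ```python
    import numpy as np

    def diamond_square(n, roughness=1.0, seed=0):
        """Generate a (2**n + 1) x (2**n + 1) fractal heightmap."""
        rng = np.random.default_rng(seed)
        size = 2**n + 1
        g = np.zeros((size, size))
        g[0, 0], g[0, -1], g[-1, 0], g[-1, -1] = rng.normal(size=4)  # seed corners
        step, scale = size - 1, roughness
        while step > 1:
            half = step // 2
            # Diamond step: each square's centre gets its corner average plus noise.
            for y in range(half, size, step):
                for x in range(half, size, step):
                    avg = (g[y - half, x - half] + g[y - half, x + half] +
                           g[y + half, x - half] + g[y + half, x + half]) / 4.0
                    g[y, x] = avg + rng.normal() * scale
            # Square step: each edge midpoint gets its diamond-neighbour average.
            for y in range(0, size, half):
                for x in range((y + half) % step, size, step):
                    total, count = 0.0, 0
                    for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < size and 0 <= xx < size:
                            total += g[yy, xx]
                            count += 1
                    g[y, x] = total / count + rng.normal() * scale
            step, scale = half, scale / 2.0   # finer grid, smaller displacements
        return g
    ```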

  7. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the issue that fusion rules cannot be self-adaptively adjusted by the available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes it by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.•This text puts forward the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA. PMID:27408827

  8. An Assessment of Current Satellite Precipitation Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2007-01-01

    The H-SAF Program requires an experimental operational European-centric Satellite Precipitation Algorithm System (E-SPAS) that produces medium spatial resolution and high temporal resolution surface rainfall and snowfall estimates over the Greater European Region including the Greater Mediterranean Basin. Currently, there are various types of experimental operational algorithm methods of differing spatiotemporal resolutions that generate global precipitation estimates. This address will first assess the current status of these methods and then recommend a methodology for the H-SAF Program that deviates somewhat from the current approach under development but one that takes advantage of existing techniques and existing software developed for the TRMM Project and available through the public domain.

  9. CAST: Contraction Algorithm for Symmetric Tensors

    SciTech Connect

    Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-09-22

    Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.

  10. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  11. [Psychological aspects of home advantage].

    PubMed

    Renáta, Valach; Dezso, Németh

    2006-01-01

    It has been shown that home teams supported by their audience win over 50% of the games in sports competitions. Researchers have also been paying increased attention to this topic during the last 10-15 years. Their main goal, in addition to verifying the existence of this phenomenon, was to find explanatory factors which can be associated - at least partly - with the development of home advantage. Our study demonstrates the biological basis of this phenomenon through the connection between the hormone system and territoriality, moreover, it discusses in detail the four possible contributing factors: noise of the supporting audience; familiarity; travel and rules. Latest research has emphasized an evolutionary explanation of home advantage, which, beyond the context of sports competitions, tries to give an answer to the differences found between male and female coping strategies.

  12. Faculty practice: advantages and disadvantages.

    PubMed

    Klooster, J

    1978-10-01

    In considering the advantages and disadvantages of full-time dental faculty members involved in private practice, some suggestions for coping with problems represented by the disadvantages have been cited. Faculty members may find a more satisfactory climate for the patient service aspect of their professional activity in a system where several options are made available from which a teacher may select his preferred office environment and practice style.

  13. Home advantage in Turkish professional soccer.

    PubMed

    Seçkin, Aylin; Pollard, Richard

    2008-08-01

    Home advantage is known to play an important role in the outcome of professional soccer games and to vary considerably worldwide. In the Turkish Super League over the last 12 years, 61.5% of the total points gained have been won by the home team, a figure similar to the worldwide average and to the Premier League in England. It is lower (57.7%) for games played between teams from Istanbul and especially high for games involving teams from cities in the more remote and ethnically distinct parts of Turkey (Van and Diyarbakir). Match performance data show that although home teams in Turkey take 26% more shots at goal than away teams, the success rates for shots do not differ. For fouls and disciplinary cards, home and away teams do not differ significantly in Turkey, a finding that differs from games in England, perhaps due to less referee bias.

  14. March 2013: Medicare Advantage update.

    PubMed

    Sayavong, Sarah; Kemper, Leah; Barker, Abigail; McBride, Timothy

    2013-09-01

    Key Data Findings. (1) From March 2012 to March 2013, rural enrollment in Medicare Advantage (MA) and other prepaid plans increased by over 200,000 enrollees, to more than 1.9 million. (2) Preferred provider organization (PPO) plan enrollment increased to nearly one million enrollees, accounting for more than 51% of the rural MA market (up from 48% in March 2012). (3) Health maintenance organization (HMO) enrollment continued to grow in 2013, with over 31% of the rural MA market, while private fee-for-service (PFFS) plan enrollment decreased to less than 10% of market share. (4) Despite recent changes to MA payment, rural MA enrollment continues to increase.

  15. Advantages of GPU technology in DFT calculations of intercalated graphene

    NASA Astrophysics Data System (ADS)

    Pešić, J.; Gajić, R.

    2014-09-01

    Over the past few years, the expansion of general-purpose graphics-processing unit (GPGPU) technology has had a great impact on computational science. GPGPU is the utilization of a graphics-processing unit (GPU) to perform calculations in applications usually handled by the central processing unit (CPU). Use of GPGPUs as a way to increase computational power in the materials sciences has significantly decreased computational costs in already highly demanding calculations. The level of acceleration and parallelization depends on the problem itself. Some problems can benefit from GPU acceleration and parallelization, such as the finite-difference time-domain (FDTD) algorithm and density-functional theory (DFT), while others cannot take advantage of these modern technologies. A number of GPU-supported applications have emerged in the past several years (www.nvidia.com/object/gpu-applications.html). Quantum Espresso (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale, based on DFT, a plane-wave basis and a pseudopotential approach. Since QE version 5.0, GPU support has been implemented as a plug-in component for the standard QE packages, which allows the capabilities of Nvidia GPU graphics cards to be exploited (www.qe-forge.org/gf/proj). In this study, we have examined the impact of GPU acceleration and parallelization on the numerical performance of DFT calculations. Graphene has been attracting attention worldwide and has already shown some remarkable properties. We have studied intercalated graphene using the QE package PHonon, which employs the GPU. The term 'intercalation' refers to a process whereby foreign adatoms are inserted onto a graphene lattice. In addition, by intercalating different atoms between graphene layers, it is possible to tune their physical properties. Our experiments have shown there are benefits from using GPUs, and we reached an…

  16. Generation of multi-million element meshes for solid model-based geometries: The Dicer algorithm

    SciTech Connect

    Melander, D.J.; Benzley, S.E.; Tautges, T.J.

    1997-06-01

    The Dicer algorithm generates a fine mesh by refining each element in a coarse all-hexahedral mesh generated by any existing all-hexahedral mesh generation algorithm. The fine mesh is geometry-conforming. Using existing all-hexahedral meshing algorithms to define the initial coarse mesh simplifies the overall meshing process and allows dicing to take advantage of improvements in other meshing algorithms immediately. The Dicer algorithm will be used to generate large meshes in support of the ASCI program. The authors also plan to use dicing as the basis for parallel mesh generation. Dicing strikes a careful balance between the interactive mesh generation and multi-million element mesh generation processes for complex 3D geometries, providing an efficient means for producing meshes of varying refinement once the coarse mesh is obtained.

  17. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix]

    SciTech Connect

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.

  18. Advanced signal separation and recovery algorithms for digital x-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Mahmoud, Imbaby I.; El Tokhy, Mohamed S.

    2015-02-01

    X-ray spectroscopy is widely used for in-situ sample analysis. Therefore, spectrum drawing and assessment of x-ray spectroscopy with high accuracy is the main scope of this paper. A lithium-drifted silicon Si(Li) detector, cooled with nitrogen, is used for signal extraction. The resolution of the ADC is 12 bits and its sampling rate is 5 MHz. Hence, different algorithms are implemented; these algorithms were run on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz. The algorithms comprise signal preprocessing, signal separation and recovery, and spectrum drawing, and statistical measurements are used for their evaluation. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction was done using the minimum value of the radiation signal, while signal de-noising was implemented using fourth-order finite impulse response (FIR) filter, linear-phase least-squares FIR filter, complex wavelet transform (CWT) and Kalman filter methods. We noticed that the Kalman filter achieves a large peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas the CWT takes a much longer execution time. Moreover, three different algorithms that allow correction of x-ray signal overlapping are presented: a 1D non-derivative peak search algorithm, a second-derivative peak search algorithm and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured, and a comparison between these algorithms is introduced. The obtained results confirm that the second-derivative peak search algorithm as well as the extrema algorithm have very small errors in comparison with the 1D non-derivative peak search algorithm; however, the second-derivative peak search algorithm takes a much longer execution time. Therefore, the extrema algorithm introduces better results than the other algorithms. It has the advantage of recovering and…
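
    A hedged sketch of a second-derivative peak search of the kind compared above (a generic smoothed-second-derivative method with illustrative parameters, not the paper's code):

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def second_derivative_peaks(spectrum, window=11, poly=3, thresh=-0.5):
        """Indices where the smoothed second derivative has a sufficiently
        negative local minimum, marking (possibly overlapping) peaks."""
        d2 = savgol_filter(spectrum, window, poly, deriv=2)
        return np.where((d2 < thresh) &
                        (d2 <= np.roll(d2, 1)) & (d2 <= np.roll(d2, -1)))[0]
    ```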

  20. Upward Wealth Mobility: Exploring the Roman Catholic Advantage

    ERIC Educational Resources Information Center

    Keister, Lisa A.

    2007-01-01

    Wealth inequality is among the most extreme forms of stratification in the United States, and upward wealth mobility is not common. Yet mobility is possible, and this paper takes advantage of trends among a unique group to explore the processes that generate mobility. I show that non-Hispanic whites raised in Roman Catholic families have been…

  1. Systematic Risk-Taking.

    ERIC Educational Resources Information Center

    Neihart, Maureen

    1999-01-01

    Describes systematic risk-taking, a strategy designed to develop skills and increase self-esteem, confidence, and courage in gifted youth. The steps of systematic risk-taking include understanding the benefits, initial self-assessment for risk-taking categories, identifying personal needs, determining a risk to take, taking the risk, and…

  2. Enhanced Deep Blue Aerosol Retrieval Algorithm: The Second Generation

    NASA Technical Reports Server (NTRS)

    Hsu, N. C.; Jeong, M.-J.; Bettenhausen, C.; Sayer, A. M.; Hansell, R.; Seftor, C. S.; Huang, J.; Tsay, S.-C.

    2013-01-01

    The aerosol products retrieved using the MODIS Collection 5.1 Deep Blue algorithm have provided useful information about aerosol properties over bright-reflecting land surfaces, such as desert, semi-arid, and urban regions. However, many components of the C5.1 retrieval algorithm needed to be improved, for example the use of a static surface database to estimate surface reflectances. This is particularly important over regions of mixed vegetated and non-vegetated surfaces, which may undergo strong seasonal changes in land cover. In order to address this issue, we developed a hybrid approach that combines a pre-calculated surface reflectance database with the normalized difference vegetation index (NDVI) to determine the surface reflectance for aerosol retrievals. As a result, the spatial coverage of aerosol data generated by the enhanced Deep Blue algorithm has been extended from arid and semi-arid regions to all land areas.
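
    A schematic sketch of the hybrid idea, not the operational Deep Blue code: per pixel, take the surface reflectance from the pre-calculated database over bright, sparsely vegetated scenes, and from a vegetation-based estimate where NDVI indicates dense vegetation. The 0.3 cutoff and the toy arrays are illustrative assumptions.

    ```python
    import numpy as np

    def hybrid_surface_reflectance(db_refl, veg_refl, red, nir, ndvi_cut=0.3):
        """Choose a surface-reflectance source per pixel based on NDVI."""
        ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
        return np.where(ndvi < ndvi_cut, db_refl, veg_refl)

    # toy 2x2 scene: column 0 is bright desert, column 1 is vegetated
    red = np.array([[0.30, 0.05], [0.28, 0.06]])
    nir = np.array([[0.35, 0.40], [0.33, 0.45]])
    db_refl  = np.full((2, 2), 0.25)    # pre-calculated database value
    veg_refl = np.full((2, 2), 0.05)    # vegetation-based estimate
    print(hybrid_surface_reflectance(db_refl, veg_refl, red, nir))
    ```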

  3. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained from the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
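
    The benchmark kernel, plain image correlation for template matching, can be expressed in a few lines with OpenCV, the library the prototypes conform to. This host-side sketch uses a synthetic image and crops its own template; partitioning the work between the ARM and DSP cores requires the OMAP toolchain and is not shown.

    ```python
    import numpy as np
    import cv2

    scene = np.random.default_rng(0).integers(0, 255, (240, 320), np.uint8)  # stand-in image
    templ = scene[100:140, 150:200].copy()               # crop a patch as the template
    scores = cv2.matchTemplate(scene, templ, cv2.TM_CCORR_NORMED)  # normalized correlation
    _, best, _, loc = cv2.minMaxLoc(scores)              # best score and its (x, y) location
    print(best, loc)                                     # ~1.0 at (150, 100)
    ```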

  4. Evolutionary advantages of adaptive rewarding

    NASA Astrophysics Data System (ADS)

    Szolnoki, Attila; Perc, Matjaž

    2012-09-01

    Our well-being depends on both our personal success and the success of our society. The realization of this fact makes cooperation an essential trait. Experiments have shown that rewards can elevate our readiness to cooperate, but since giving a reward inevitably entails paying a cost for it, the emergence and stability of such behavior remains elusive. Here we show that allowing for the act of rewarding to self-organize in dependence on the success of cooperation creates several evolutionary advantages that instill new ways through which collaborative efforts are promoted. Ranging from indirect territorial battle to the spontaneous emergence and destruction of coexistence, phase diagrams and the underlying spatial patterns reveal fascinatingly rich social dynamics that explain why this costly behavior has evolved and persevered. Comparisons with adaptive punishment, however, uncover an Achilles heel of adaptive rewarding, coming from over-aggression, which in turn hinders optimal utilization of network reciprocity. This may explain why, despite its success, rewarding is not as firmly embedded into our societal organization as punishment.

  5. Advantages and Uses of AMTEC

    NASA Astrophysics Data System (ADS)

    Lodhi, M. A. K.

    2012-10-01

    Static conversion systems are gaining importance because of newer applications of electricity in spacecraft, hybrid-electric vehicles, and military and domestic settings. Of the many new static energy conversion systems being considered, one is the Alkali Metal Thermal Electric Converter (AMTEC). It is a thermally regenerative, electrochemical device for the direct conversion of heat to electrical power. As the name suggests, this system uses an alkali metal in its process. The electrochemical process involved in the working of AMTEC is ionization of alkali metal atoms at the interface of electrode and electrolyte. The electrons produced as a result flow through the external load, thus doing work, and finally recombine with the metal ions at the cathode. AMTECs convert the work done during the nearly isothermal expansion of metal vapor to produce a high-current, low-voltage electron flow. Owing to its working principle, it has many inherent advantages over conventional generators. These will be discussed briefly.

  6. Network-Control Algorithm

    NASA Technical Reports Server (NTRS)

    Chan, Hak-Wai; Yan, Tsun-Yee

    1989-01-01

    Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.

  7. Home advantage in the Australian Football League.

    PubMed

    Clarke, Stephen R

    2005-04-01

    The results of this study on home advantage in Australian rules football demonstrate that individual clubs have different home advantages. Traditional measures of home advantage as applied to whole competitions such as percentage of games won, and alternative measures such as average margin of victory for the home team, are calculated. Problems with these measures are discussed. Individual home advantages for each team are obtained using a linear model fitted to individual match margins; the resultant home advantages are analysed, and variations and possible causes or groupings of home advantage are proposed. It is shown that some models allowing different home advantages for different clubs are a significant improvement over previous models assuming a common home advantage. The results show a strong isolation effect, with non-Victorian teams having large home advantages, and lend support to the conclusion that crowd effects and ground familiarity are a major determinant of home advantage.

  8. Quantum computation with classical light: Implementation of the Deutsch-Jozsa algorithm

    NASA Astrophysics Data System (ADS)

    Perez-Garcia, Benjamin; McLaren, Melanie; Goyal, Sandeep K.; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas

    2016-05-01

    We propose an optical implementation of the Deutsch-Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.
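
    A small classical sketch of the Deutsch-Jozsa logic that the optical scheme implements (a numerical simulation, not the cavity setup): prepare a uniform superposition, apply the oracle as a phase, apply Hadamards again, and read the amplitude of |0...0>, which is 1 for a constant function and 0 for a balanced one.

    ```python
    import numpy as np
    from itertools import product

    def deutsch_jozsa(f, n):
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        Hn = H
        for _ in range(n - 1):
            Hn = np.kron(Hn, H)                       # n-qubit Hadamard
        state = Hn @ np.eye(2 ** n)[0]                # uniform superposition from |0...0>
        phases = np.array([(-1) ** f(x) for x in product((0, 1), repeat=n)])
        state = Hn @ (phases * state)                 # phase oracle, then Hadamards
        return 'constant' if abs(state[0]) > 0.5 else 'balanced'

    print(deutsch_jozsa(lambda x: 0, 3))              # -> 'constant'
    print(deutsch_jozsa(lambda x: x[0], 3))           # -> 'balanced'
    ```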

  9. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
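
    A minimal generational GA sketch of the concepts described: fitness-proportional selection, one-point crossover, and bit-flip mutation, applied to the toy "one-max" problem. All parameters are illustrative.

    ```python
    import random

    def one_max(bits):                       # toy fitness: count of 1-bits
        return sum(bits)

    def evolve(n_bits=30, pop_size=40, generations=60, p_mut=0.02):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            weights = [one_max(ind) + 1e-9 for ind in pop]          # fitness-proportional
            parents = random.choices(pop, weights=weights, k=pop_size)
            nxt = []
            for a, b in zip(parents[::2], parents[1::2]):
                cut = random.randrange(1, n_bits)                   # one-point crossover
                for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                    nxt.append([bit ^ (random.random() < p_mut) for bit in child])
            pop = nxt
        return max(pop, key=one_max)

    print(one_max(evolve()))                 # typically close to 30
    ```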

  10. How Successful Is Medicare Advantage?

    PubMed Central

    Newhouse, Joseph P; McGuire, Thomas G

    2014-01-01

    Context Medicare Part C, or Medicare Advantage (MA), now almost 30 years old, has generally been viewed as a policy disappointment. Enrollment has vacillated but has never come close to the penetration of managed care plans in the commercial insurance market or in Medicaid, and because of payment policy decisions and selection, the MA program is viewed as having added to cost rather than saving funds for the Medicare program. Recent changes in Medicare policy, including improved risk adjustment, however, may have changed this picture. Methods This article summarizes findings from our group's work evaluating MA's recent performance and investigating payment options for improving its performance even more. We studied the behavior of both beneficiaries and plans, as well as the effects of Medicare policy. Findings Beneficiaries make “mistakes” in their choice of MA plan options that can be explained by behavioral economics. Few beneficiaries make an active choice after they enroll in Medicare. The high prevalence of “zero-premium” plans signals inefficiency in plan design and in the market's functioning. That is, Medicare premium policies interfere with economically efficient choices. The adverse selection problem, in which healthier, lower-cost beneficiaries tend to join MA, appears much diminished. The available measures, while limited, suggest that, on average, MA plans offer care of equal or higher quality and for less cost than traditional Medicare (TM). In counties, greater MA penetration appears to improve TM's performance. Conclusions Medicare policies regarding lock-in provisions and risk adjustment that were adopted in the mid-2000s have mitigated the adverse selection problem previously plaguing MA. On average, MA plans appear to offer higher value than TM, and positive spillovers from MA into TM imply that reimbursement should not necessarily be neutral. Policy changes in Medicare that reform the way that beneficiaries are charged for MA plan

  11. Tailored logistics: the next advantage.

    PubMed

    Fuller, J B; O'Conor, J; Rawlinson, R

    1993-01-01

    How many top executives have ever visited with managers who move materials from the factory to the store? How many still reduce the costs of logistics to the rent of warehouses and the fees charged by common carriers? To judge by hours of senior management attention, logistics problems do not rank high. But logistics have the potential to become the next governing element of strategy. Whether they know it or not, senior managers of every retail store and diversified manufacturing company compete in logistically distinct businesses. Customer needs vary, and companies can tailor their logistics systems to serve their customers better and more profitably. Companies do not create value for customers and sustainable advantage for themselves merely by offering varieties of goods. Rather, they offer goods in distinct ways. A particular can of Coca-Cola, for example, might be a can of Coca-Cola going to a vending machine, or a can of Coca-Cola that comes with billing services. There is a fortune buried in this distinction. The goal of logistics strategy is building distinct approaches to distinct groups of customers. The first step is organizing a cross-functional team to proceed through the following steps: segmenting customers according to purchase criteria, establishing different standards of service for different customer segments, tailoring logistics pipelines to support each segment, and creating economics of scale to determine which assets can be shared among various pipelines. The goal of establishing logistically distinct businesses is familiar: improved knowledge of customers and improved means of satisfying them.

  13. The Take Action Project

    ERIC Educational Resources Information Center

    Boudreau, Sue

    2010-01-01

    The Take Action Project (TAP) was created to help middle school students take informed and effective action on science-related issues. The seven steps of TAP ask students to (1) choose a science-related problem of interest to them, (2) research their problem, (3) select an action to take on the problem, (4) plan that action, (5) take action, (6)…

  14. Nurses’ Creativity: Advantage or Disadvantage

    PubMed Central

    Shahsavari Isfahani, Sara; Hosseini, Mohammad Ali; Fallahi Khoshknab, Masood; Peyrovi, Hamid; Khanke, Hamid Reza

    2015-01-01

    Background Recently, global nursing experts have been aggressively encouraging nurses to pursue creativity and innovation in nursing to improve nursing outcomes. Nurses’ creativity plays a significant role in health and well-being. In most health systems across the world, nurses provide up to 80% of the primary health care; therefore, they are critically positioned to provide creative solutions for current and future global health challenges. Objectives The purpose of this study was to explore Iranian nurses’ perceptions and experiences toward the expression of creativity in clinical settings and the outcomes of their creativity for health care organizations. Patients and Methods A qualitative approach using content analysis was adopted. Data were collected through in-depth semistructured interviews with 14 nurses who were involved in the creative process in educational hospitals affiliated to Jahrom and Tehran Universities of Medical Sciences in Iran. Results Four themes emerged from the data analysis, including a) Improvement in quality of patient care, b) Improvement in nurses’ quality of work, personal and social life, c) Promotion of organization, and d) Unpleasant outcomes. Conclusions The findings indicated that nurses’ creativity in health care organizations can lead to major changes of nursing practice, improvement of care and organizational performance. Therefore, policymakers, nurse educators, nursing and hospital managers should provide a nurturing environment that is conducive to creative thinking, giving the nurses opportunity for flexibility, creativity, support for change, and risk taking. PMID:25793116

  15. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the cloud model's strengths in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
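
    For context, a compact sketch of the standard bat-algorithm skeleton that CBA builds on, minimizing the sphere function; the cloud-model operators, Lévy flights, and pulse-rate schedule of the actual CBA are omitted, and all constants are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, dim, iters = 25, 5, 200
    fmin_, fmax_, alpha = 0.0, 2.0, 0.9               # frequency range, loudness decay
    obj = lambda x: np.sum(x ** 2, axis=-1)           # sphere test function

    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    loud = np.ones(n)                                 # per-bat loudness
    best = x[np.argmin(obj(x))].copy()
    for _ in range(iters):
        f = fmin_ + (fmax_ - fmin_) * rng.random((n, 1))   # echolocation frequency
        v += (x - best) * f
        cand = x + v
        walk = best + 0.01 * rng.standard_normal((n, dim)) # local search near the best
        cand = np.where(rng.random((n, 1)) < 0.5, walk, cand)
        accept = (obj(cand) < obj(x)) & (rng.random(n) < loud)
        x[accept] = cand[accept]
        loud[accept] *= alpha                         # accepted bats grow quieter
        best = x[np.argmin(obj(x))].copy()
    print(obj(best))                                  # near 0 after convergence
    ```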

  16. A quantum search algorithm for future spacecraft attitude determination

    NASA Astrophysics Data System (ADS)

    Tsai, Jack; Hsiao, Fu-Yuen; Li, Yi-Ju; Shen, Jen-Fu

    2011-04-01

    In this paper we study the potential application of a quantum search algorithm to spacecraft navigation, with a focus on attitude determination. Traditionally, attitude determination is achieved by recognizing the relative position/attitude with respect to the background stars using sun sensors, earth limb sensors, or star trackers. However, due to the massive celestial database, star pattern recognition is a complicated and power-consuming job. We propose a new method of attitude determination that applies the quantum search algorithm to the search for a specific star or star pattern. The quantum search algorithm, proposed by Grover in 1996, can find a specific item in an unstructured database of N entries in only O(√N) steps, compared with an average of N/2 steps on a conventional computer. As a result, by taking advantage of matching a particular star in a vast celestial database in very few steps, we derive a new algorithm for attitude determination that combines Grover's search algorithm with star catalogues of apparent magnitude and absorption spectra. Numerical simulations and examples are also provided to demonstrate the feasibility and robustness of our new algorithm.
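
    A toy statevector illustration of the Grover amplification step the scheme relies on: the amplitude of one marked "star" index among N is amplified in about (π/4)√N iterations. This is a numerical sketch, not the paper's attitude-determination pipeline.

    ```python
    import numpy as np

    N, marked = 256, 73
    state = np.full(N, 1 / np.sqrt(N))                # uniform superposition
    iters = int(np.round(np.pi / 4 * np.sqrt(N)))     # 13 iterations for N = 256
    for _ in range(iters):
        state[marked] *= -1                           # oracle: flip the marked phase
        state = 2 * state.mean() - state              # diffusion: invert about the mean
    print(np.argmax(state ** 2), state[marked] ** 2)  # 73, success probability ~0.99
    ```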

  17. Fast conjugate gradient algorithm extension for analyzer-based imaging reconstruction

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Brankov, Jovan G.

    2016-04-01

    This paper presents an extension of the classic Conjugate Gradient Algorithm. Motivated by the Analyzer-Based Imaging (ABI) inverse problem, the novel method maximizes the Poisson-regularized log-likelihood under a non-linear transformation of the parameters, faster than other solutions. The new approach takes advantage of the special properties of the Poisson log-likelihood to conjugate each ascent direction with respect to all previous directions taken by the algorithm. Our solution is compared with the general solution for non-quadratic unconstrained problems, the Polak-Ribière formula. Both methods are applied to the ABI reconstruction problem.
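
    For reference, a minimal nonlinear conjugate-gradient ascent sketch using the Polak-Ribière (PR+) formula, the baseline named above; it is generic, not the authors' ABI solver, and uses a fixed step in place of a line search.

    ```python
    import numpy as np

    def ncg_maximize(grad, x0, steps=200, lr=1e-2):
        x = x0.copy()
        g = grad(x)
        d = g.copy()
        for _ in range(steps):
            x = x + lr * d                                   # fixed step, no line search
            g_new = grad(x)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ with restart at beta<0
            d = g_new + beta * d                             # new conjugate direction
            g = g_new
        return x

    # toy concave objective -||x - 1||^2 with gradient 2(1 - x)
    grad = lambda x: 2 * (np.ones_like(x) - x)
    print(ncg_maximize(grad, np.zeros(4)))                   # approaches all-ones
    ```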

  18. Transport implementation of the Bernstein–Vazirani algorithm with ion qubits

    NASA Astrophysics Data System (ADS)

    Fallek, S. D.; Herold, C. D.; McMahon, B. J.; Maller, K. M.; Brown, K. R.; Amini, J. M.

    2016-08-01

    Using trapped ion quantum bits in a scalable microfabricated surface trap, we perform the Bernstein–Vazirani algorithm. Our architecture takes advantage of the ion transport capabilities of such a trap. The algorithm is demonstrated using two- and three-ion chains. For three ions, an improvement is achieved compared to a classical system using the same number of oracle queries. For two ions and one query, we correctly determine an unknown bit string with probability 97.6(8)%. For three ions, we succeed with probability 80.9(3)%.
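
    A classical statevector sketch of the Bernstein-Vazirani circuit run on the trap: a single phase-oracle query followed by Hadamards reveals the hidden bit string exactly, whereas a classical algorithm needs one oracle query per bit.

    ```python
    import numpy as np
    from itertools import product

    def bernstein_vazirani(s):
        n = len(s)
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        Hn = np.array([[1.0]])
        for _ in range(n):
            Hn = np.kron(Hn, H)                       # n-qubit Hadamard
        state = Hn @ np.eye(2 ** n)[0]                # uniform superposition
        phases = np.array([(-1) ** (np.dot(s, x) % 2) for x in product((0, 1), repeat=n)])
        state = Hn @ (phases * state)                 # one oracle query, then Hadamards
        idx = int(np.argmax(np.abs(state)))           # all weight sits on |s>
        return [int(b) for b in format(idx, f'0{n}b')]

    print(bernstein_vazirani([1, 0, 1]))              # -> [1, 0, 1]
    ```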

  19. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations, together with the algorithm of a high-performance memory compiler, are presented.

  20. Fast algorithms for transport models. Final report

    SciTech Connect

    Manteuffel, T.A.

    1994-10-01

    This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed, along with a parallel version. At first glance, the shifted transport sweep has limited parallelism, but once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).
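
    The parallelization trick for the sweep can be illustrated with a first-order recurrence x[i] = a[i]·x[i-1] + b[i], the serial structure of a 1D sweep. The sketch below uses a logarithmic-doubling prefix scan over affine maps, which is in the same spirit as (though not identical to) the author's cyclic reduction: it reaches all M values in O(log M) vectorized steps instead of M serial ones.

    ```python
    import numpy as np

    def affine_prefix_scan(a, b):
        """Return A, B such that x[i] = A[i]*x0 + B[i] for x[i] = a[i]*x[i-1] + b[i]."""
        A, B = a.astype(float).copy(), b.astype(float).copy()
        s = 1
        while s < len(a):
            A2, B2 = A.copy(), B.copy()
            A2[s:] = A[s:] * A[:-s]              # compose each affine map with the one s back
            B2[s:] = A[s:] * B[:-s] + B[s:]
            A, B, s = A2, B2, 2 * s              # doubling: O(log M) rounds
        return A, B

    rng = np.random.default_rng(0)
    a, b = rng.random(8), rng.random(8)
    A, B = affine_prefix_scan(a, b)
    x, serial = 2.0, []
    for ai, bi in zip(a, b):                     # the O(M) serial sweep, for checking
        x = ai * x + bi
        serial.append(x)
    print(np.allclose(A * 2.0 + B, serial))      # True
    ```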

  1. Medicare Advantage Plans Pay Hospitals Less Than Traditional Medicare Pays.

    PubMed

    Baker, Laurence C; Bundorf, M Kate; Devlin, Aileen M; Kessler, Daniel P

    2016-08-01

    There is ongoing debate about how prices paid to providers by Medicare Advantage plans compare to prices paid by fee-for-service Medicare. We used data from Medicare and the Health Care Cost Institute to identify the prices paid for hospital services by fee-for-service (FFS) Medicare, Medicare Advantage plans, and commercial insurers in 2009 and 2012. We calculated the average price per admission, and its trend over time, in each of the three types of insurance for fixed baskets of hospital admissions across metropolitan areas. After accounting for differences in hospital networks, geographic areas, and case-mix between Medicare Advantage and FFS Medicare, we found that Medicare Advantage plans paid 5.6 percent less for hospital services than FFS Medicare did. Without taking into account the narrower networks of Medicare Advantage, the program paid 8.0 percent less than FFS Medicare. We also found that the rates paid by commercial plans were much higher than those of either Medicare Advantage or FFS Medicare, and growing. At least some of this difference comes from the much higher prices that commercial plans pay for profitable service lines.

  3. 77 FR 67433 - Community Advantage Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-09

    ... ADMINISTRATION Community Advantage Pilot Program AGENCY: U.S. Small Business Administration. ACTION: Notice of extension of and changes to Community Advantage Pilot Program and request for comments. SUMMARY: The Community Advantage (``CA'') Pilot Program is a pilot program to increase SBA-guaranteed loans to...

  4. 76 FR 56262 - Community Advantage Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... ADMINISTRATION Community Advantage Pilot Program AGENCY: U.S. Small Business Administration (SBA). ACTION: Notice of change to Community Advantage Pilot Program. SUMMARY: On February 18, 2011, SBA published a... Community Advantage Pilot Program (``CA Pilot Program'') (76 FR 9626). Pursuant to the authority provided to...

  5. How to obtain efficient GPU kernels: An illustration using FMM & FGT algorithms

    NASA Astrophysics Data System (ADS)

    Cruz, Felipe A.; Layton, Simon K.; Barba, L. A.

    2011-10-01

    Computing on graphics processors is perhaps one of the most important developments in computational science to happen in decades. Not since the arrival of the Beowulf cluster, which combined open source software with commodity hardware to truly democratize high-performance computing, has the community been so electrified. Like then, the opportunity comes with challenges. The formulation of scientific algorithms to take advantage of the performance offered by the new architecture requires rethinking core methods. Here, we have tackled fast summation algorithms (the fast multipole method and the fast Gauss transform) and applied algorithmic redesign to attain performance on GPUs. The progression of performance improvements attained illustrates the exercise of formulating algorithms for the massively parallel architecture of the GPU. The end result has been GPU kernels that run at over 500 Gop/s on one NVIDIA Tesla C1060 card, thereby reaching close to practical peak.

  6. Image enhancement based on edge boosting algorithm

    NASA Astrophysics Data System (ADS)

    Ngernplubpla, Jaturon; Chitsobhuk, Orachat

    2015-12-01

    In this paper, a technique for image enhancement based on a proposed edge boosting algorithm, which reconstructs a high-quality image from a single low-resolution image, is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low-resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum-likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select appropriate enhancement weights. Larger weights are applied to the higher-frequency details, while the low-frequency details are smoothed. The experimental results illustrate significant quantitative and perceptual performance improvements: the proposed edge boosting algorithm produces high-quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.

  7. Vector processor algorithms for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr.; Keller, J. D.; Hafez, M. M.

    1979-01-01

    This paper discusses a number of algorithms for solving the transonic full-potential equation in conservative form on a vector computer, such as the CDC STAR-100 or the CRAY-1. Recent research with the 'artificial density' method for transonics has led to development of some new iteration schemes which take advantage of vector-computer architecture without suffering significant loss of convergence rate. Several of these more promising schemes are described and 2-D and 3-D results are shown comparing the computational rates on the STAR and CRAY vector computers, and the CYBER-175 serial computer. Schemes included are: (1) Checkerboard SOR, (2) Checkerboard Leapfrog, (3) odd-even vertical line SOR, and (4) odd-even horizontal line SOR.
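
    Scheme (1), checkerboard SOR, illustrates why these schemes vectorize: updates of same-colored points are mutually independent, so each half-sweep is one long vector operation. A minimal sketch for a Laplace problem follows; the relaxation factor and grid are illustrative, and this is not the paper's transonic solver.

    ```python
    import numpy as np

    def checkerboard_sor(u, omega=1.8, sweeps=200):
        ii, jj = np.indices(u.shape)
        interior = (ii > 0) & (ii < u.shape[0] - 1) & (jj > 0) & (jj < u.shape[1] - 1)
        for _ in range(sweeps):
            for color in (0, 1):                          # red points, then black points
                mask = interior & ((ii + jj) % 2 == color)
                nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
                u[mask] += omega * (nb[mask] - u[mask])   # one fully vectorized half-sweep
        return u

    u = np.zeros((33, 33))
    u[0, :] = 1.0                                         # fixed boundary values
    u = checkerboard_sor(u)                               # converges to a harmonic field
    ```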

  8. Taking Advantage of Automated Assessment of Student-Constructed Graphs in Science

    ERIC Educational Resources Information Center

    Vitale, Jonathan M.; Lai, Kevin; Linn, Marcia C.

    2015-01-01

    We present a new system for automated scoring of graph construction items that address complex science concepts, feature qualitative prompts, and support a range of possible solutions. This system utilizes analysis of spatial features (e.g., slope of a line) to evaluate potential student ideas represented within graphs. Student ideas are then…

  9. Taking Advantage: The Rural Competitive Preference in the Investing in Innovation Program

    ERIC Educational Resources Information Center

    Strange, Marty

    2011-01-01

    The Investing in Innovation (i3) program is a U.S. Department of Education competitive grant program supporting innovation in public schools. To encourage projects focusing on rural education in its first round of grants in 2010 the Department offered two bonus points in the scoring system for "projects that would implement innovative practices,…

  10. Taking advantage of surface proximity effects with aero-marine vehicles

    NASA Astrophysics Data System (ADS)

    Trillo, Robert L.

    A comprehensive account is given of the operational characteristics and limiting operational conditions of 'wing-in-ground' surface effect marine aircraft. It is emphasized that operation must be restricted to the calmest of river and lake waters, not merely for the sake of safety margin enhancement but for the maximization of air cushion effect efficiency, and its consequent economic benefits. Quietness of operation in these environments is also essential, strongly recommending reliance on shrouded propellers as the primary means of propulsion.

  11. Take advantage of mycorrhizal fungi for improved soil fertility and plant health

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Arbuscular mycorrhizal [AM] fungi are naturally-occurring soil fungi that form a beneficial symbiosis with the roots of most crops. The plants benefit because the symbiosis increases mineral nutrient uptake, drought resistance, and disease resistance. These characteristics make utilization of AM f...

  12. Taking advantage of the ESA G-POD service to study deformation processes in mountain areas

    NASA Astrophysics Data System (ADS)

    Manconi, Andrea; Cignetti, Martina; Ardizzone, Francesca; Giordan, Daniele; Allasia, Paolo; De Luca, Claudio; Manunta, Michele; Casu, Francesco

    2015-04-01

    In mountain environments, the analysis of surface displacements is extremely important for a better understanding of the effects of mass wasting phenomena, such as landslides, rock glaciers, and glacier activity. In this scenario, the use of straightforward tools and approaches to monitor surface displacements at high spatial and temporal resolutions is a real need. Here we use the Parallel-SBAS service recently released within ESA's Grid Processing On Demand environment (G-POD, http://gpod.eo.esa.int/) to generate Earth-surface deformation time series and interferometric products. This service performs the full SBAS-DInSAR chain starting from Level 0 data and generates displacement time series. We use the data available on the Virtual Archive 4 (http://eo-virtual-archive4.esa.int/), in the framework of the Supersite initiative. In the framework of the HAMMER project (part of the NextData initiative, http://www.nextdataproject.it/), we produced mean deformation velocity maps, as well as deformation time series, for a regional-scale case (Aosta Valley region, northern Italy) and at the local landslide scale (Puy landslide, Piedmont, northern Italy). The possibility of gathering the final results in less than 24 h (by processing an average of about 30 SAR images for each frame considered) allowed us to perform a large number of attempts in a relatively short time. By "tuning" the processing, we maximized the final coverage of coherent points for both datasets, analysing the effect of SAR images acquired in the winter season as well as the impact of perpendicular and temporal baseline constraints. The results obtained with the P-SBAS G-POD service for the Aosta Valley region have been compared with the Deep Seated Gravitational Slope Deformations (DGSD, reference IFFI project), finding a good correlation between the anomalous areas of surface deformation and the catalogued DGSD. In addition, the results obtained for the Aosta Valley and Piedmont regions show good agreement with the mean velocity maps retrieved from the "Portale Cartografico Nazionale" (http://www.pcn.minambiente.it/GN/), which were instead processed with the PSInSAR technique on the same Envisat ASAR dataset. Finally, we discuss possible future developments of the P-SBAS G-POD service in the Sentinel-1 scenario, when a large number of SAR images will be available to a wider audience, and how this may affect the analysis of surface deformation at different spatial and temporal scales.

  13. Science Learning and Instruction: Taking Advantage of Technology to Promote Knowledge Integration

    ERIC Educational Resources Information Center

    Linn, Marcia C.; Eylon, Bat-Sheva

    2011-01-01

    "Science Learning and Instruction" describes advances in understanding the nature of science learning and their implications for the design of science instruction. The authors show how design patterns, design principles, and professional development opportunities coalesce to create and sustain effective instruction in each primary scientific…

  14. Unidirectional movement of an actin filament taking advantage of temperature gradients.

    PubMed

    Kawaguchi, Tomoaki; Honda, Hajime

    2007-01-01

    An actin filament with heat acceptors attached to the Cys374 residue of each actin monomer can move unidirectionally under heat pulsation alone, in the total absence of both ATP and myosin. The prime driver of the movement is the temperature gradients operating between locally heated portions of the actin filament and its cooler surroundings. In this report, we investigated how the mitigation of these temperature gradients induces unidirectional movement of an actin filament. We observed the transversal fluctuations of the filament in response to heat pulsation and their transition into longitudinally unidirectional movement. The transition was significantly accelerated when Cys374 and Lys336 were simultaneously excited within an actin monomer. These results suggest that the mitigation of the temperature gradients within each actin monomer is first transformed into transversal fluctuations of the filament and then further into longitudinal movements. The faster mitigation of temperature gradients within an actin monomer helps build up the transition from transversal to longitudinal movements by coordinating the interaction between neighboring monomers. PMID:17030086

  15. Taking advantage of reduced droplet-surface interaction to optimize transport of bioanalytes in digital microfluidics.

    PubMed

    Freire, Sergio L S; Thorne, Nathaniel; Wutkowski, Michael; Dao, Selina

    2014-01-01

    Digital microfluidics (DMF), a technique for manipulation of droplets, is a promising alternative for the development of "lab-on-a-chip" platforms. Often, droplet motion relies on the wetting of a surface, directly associated with the application of an electric field; surface interactions, however, make motion dependent on droplet contents, limiting the breadth of applications of the technique. Some alternatives have been presented to minimize this dependence. However, they rely on the addition of extra chemical species to the droplet or its surroundings, which could potentially interact with droplet moieties. Addressing this challenge, our group recently developed Field-DW devices to allow the transport of cells and proteins in DMF, without extra additives. Here, the protocol for device fabrication and operation is provided, including the electronic interface for motion control. We also continue the studies with the devices, showing that multicellular, relatively large, model organisms can also be transported, arguably unaffected by the electric fields required for device operation. PMID:25407533

  16. When curiosity breeds intimacy: taking advantage of intimacy opportunities and transforming boring conversations.

    PubMed

    Kashdan, Todd B; McKnight, Patrick E; Fincham, Frank D; Rose, Paul

    2011-12-01

    Curious people seek knowledge and new experiences. In 3 studies, we examined whether, when, and how curiosity contributes to positive social outcomes between unacquainted strangers. Study 1 (98 college students) showed that curious people expect to generate closeness during intimate conversations but not during small talk; less curious people anticipated poor outcomes in both situations. We hypothesized that curious people underestimate their ability to bond with unacquainted strangers during mundane conversations. Studies 2 (90 college students) and 3 (106 college students) showed that curious people felt close to partners during intimate and small-talk conversations; less curious people only felt close when the situation offered relationship-building exercises. Surprise at the pleasure felt during this novel, uncertain situation partially mediated the benefits linked to curiosity. We found evidence of slight asymmetry between self and partner reactions. Results could not be attributed to physical attraction or positive affect. Collectively, results suggest that positive social interactions benefit from an open and curious mind-set.

  17. Pd-Pb Alloy Nanocrystals with Tailored Composition for Semihydrogenation: Taking Advantage of Catalyst Poisoning.

    PubMed

    Niu, Wenxin; Gao, Yongjun; Zhang, Weiqing; Yan, Ning; Lu, Xianmao

    2015-07-01

    Metallic nanocrystals (NCs) with well-defined sizes and shapes represent a new family of model systems for establishing structure-function relationships in heterogeneous catalysis. In this study, we show that catalyst poisoning can be utilized as an efficient strategy for controlling nanocrystal shape and composition, as well as a way to tune catalytic activity. Lead, a well-known poison for noble-metal catalysts, was investigated in the growth of Pd NCs. We discovered that Pb atoms can be incorporated into the lattice of Pd NCs to form Pd-Pb alloy NCs with tunable composition and crystal facets. As model catalysts, the alloy NCs with different compositions showed different selectivity in the semihydrogenation of phenylacetylene, and Pd-Pb alloy NCs with better selectivity than the commercial Lindlar catalyst were discovered. This study exemplifies how poisoning species can be exploited as efficient shape-directing agents in NC growth and, more importantly, as a strategy to tailor the performance of highly selective catalysts.

  18. SeDiCi: An Authentication Service Taking Advantage of Zero-Knowledge Proofs

    NASA Astrophysics Data System (ADS)

    Grzonkowski, Sławomir

    Transmission of users' profiles over insecure communication means is a crucial task of today's e-commerce applications. In addition, users have to create many profiles and remember many credentials, so they retype the same information over and over again. Each time users type their credentials, they expose them to phishing or eavesdropping attempts. These problems could be solved by using Single Sign-On (SSO). The idea of SSO is that users keep using the same set of credentials when visiting different websites. For web applications, OpenID is the most prominent solution that partially implements SSO. However, OpenID is prone to phishing attempts and does not preserve users' privacy [1].

  19. Taking Advantage of a Corrosion Problem to Solve a Pollution Problem

    ERIC Educational Resources Information Center

    Palomar-Ramirez, Carlos F.; Bazan-Martinez, Jose A.; Palomar-Pardave, Manuel E.; Romero-Romo, Mario A.; Ramirez-Silva, Maria Teresa

    2011-01-01

    Some simple chemistry is used to demonstrate how Fe(II) ions, formed during iron corrosion in acid aqueous solution, can reduce toxic Cr(VI) species, forming soluble Cr(III) and Fe(III) ions. These ions, in turn, can be precipitated by neutralizing the solution. The procedure provides a treatment for industrial wastewaters commonly found in…

  20. Taking advantage of the systemic immune system to cure brain diseases.

    PubMed

    Yong, V Wee; Rivest, Serge

    2009-10-15

    The systemic immune system has the ability to modulate multiple brain functions, including autonomic responses, glial reactivity following neural injuries, and neuronal excitability. Immune stimuli also influence microglia subpopulations originating from blood progenitors, and neuroprotective and reparative capacities of blood-derived microglia were recently described in mouse models of spinal cord injury and brain disorders. Furthermore, reparative roles for various immune subsets have been recognized, such as in inducing myelin repair. Nonetheless, uncontrolled and excessive activation of immune responses can be detrimental. The development of strategies to stimulate the systemic immune system safely to protect or repair brain disorders remains a major challenge ahead, but important inroads have been made. We discuss here some of the mechanisms underlying the neuroprotective and reparative effects of the systemic immune system and the most promising immunotherapies tested in mouse models of injuries and diseases, such as Alzheimer's disease, amyotrophic lateral sclerosis, and multiple sclerosis.

  1. Simulation for emergency management; Taking advantage of automation in emergency preparedness

    SciTech Connect

    Walker, J.A.; Ruberg, G.E.; O'Dell, J.J. . Management Systems Labs.)

    1989-09-01

    Currently, emergency responders are better prepared than emergency managers to accomplish their critically important functions. Emergency responders are those responsible for operational tasks during an emergency, e.g., fire fighters, emergency medical technicians, and rescue workers. Emergency managers are strategic or tactical decision-makers in leadership roles, either in the field or in remote command and control centers, who must manage an emergency, best characterized as an ill-defined problem with potentially severe consequences. Training and standards of proficiency currently exist for emergency responders in specific activities. In contrast, training and standards of proficiency are less advanced for emergency managers. Fortunately, understanding the reasons for this difference, and combining those reasons with the increased availability of computers in emergency management, offer real potential for improvement. This paper elaborates on the need for improvements in emergency management training. The reasons why training for emergency managers is difficult to develop and what currently available methods exist, including their usefulness and shortcomings, are discussed.

  2. Extend Instruction outside the Classroom: Take Advantage of Your Learning Management System

    ERIC Educational Resources Information Center

    Jensen, Lauren A.

    2010-01-01

    Numerous institutions of higher education have implemented a learning management system (LMS) or are considering doing so. This web-based software package provides self-service and quick (often personalized) access to content in a dynamic environment. Learning management systems support administrative, reporting, and documentation activities. LMSs…

  3. Taking advantage of reduced droplet-surface interaction to optimize transport of bioanalytes in digital microfluidics.

    PubMed

    Freire, Sergio L S; Thorne, Nathaniel; Wutkowski, Michael; Dao, Selina

    2014-11-10

    Digital microfluidics (DMF), a technique for manipulation of droplets, is a promising alternative for the development of "lab-on-a-chip" platforms. Often, droplet motion relies on the wetting of a surface, directly associated with the application of an electric field; surface interactions, however, make motion dependent on droplet contents, limiting the breadth of applications of the technique. Some alternatives have been presented to minimize this dependence. However, they rely on the addition of extra chemical species to the droplet or its surroundings, which could potentially interact with droplet moieties. Addressing this challenge, our group recently developed Field-DW devices to allow the transport of cells and proteins in DMF, without extra additives. Here, the protocol for device fabrication and operation is provided, including the electronic interface for motion control. We also continue the studies with the devices, showing that multicellular, relatively large, model organisms can also be transported, arguably unaffected by the electric fields required for device operation.

  4. Taking advantage of selective change driven processing for 3D scanning.

    PubMed

    Vegara, Francisco; Zuccarello, Pedro; Boluda, Jose A; Pardo, Fernando

    2013-09-27

    This article deals with the application of the principles of SCD (Selective Change Driven) vision to 3D laser scanning. Two experimental sets have been implemented: one with a classical CMOS (Complementary Metal-Oxide Semiconductor) sensor, and the other one with a recently developed CMOS SCD sensor for comparative purposes, both using the technique known as Active Triangulation. An SCD sensor only delivers the pixels that have changed most, ordered by the magnitude of their change since their last readout. The 3D scanning method is based on the systematic search through the entire image to detect pixels that exceed a certain threshold, showing the SCD approach to be ideal for this application. Several experiments for both capturing strategies have been performed to try to find the limitations in high speed acquisition/processing. The classical approach is limited by the sequential array acquisition, as predicted by the Nyquist-Shannon sampling theorem, and this has been experimentally demonstrated in the case of a rotating helix. These limitations are overcome by the SCD 3D scanning prototype achieving a significantly higher performance. The aim of this article is to compare both capturing strategies in terms of performance in the time and frequency domains, so they share all the static characteristics including resolution, 3D scanning method, etc., thus yielding the same 3D reconstruction in static scenes.
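
    A toy software model of the SCD readout described above (an illustration, not the sensor's firmware): only pixels whose change since their last readout exceeds a threshold are delivered, ordered by the magnitude of that change.

    ```python
    import numpy as np

    def scd_readout(frame, last_read, thresh=10):
        """Return (row, col, value) events, largest change first; update per-pixel memory."""
        delta = frame.astype(int) - last_read
        ys, xs = np.nonzero(np.abs(delta) > thresh)        # pixels that changed enough
        order = np.argsort(-np.abs(delta[ys, xs]))         # biggest change first
        events = [(int(ys[i]), int(xs[i]), int(frame[ys[i], xs[i]])) for i in order]
        for y, x, val in events:
            last_read[y, x] = val                          # remember the delivered value
        return events

    f0 = np.zeros((8, 8), np.uint8)
    f1 = f0.copy()
    f1[2, 3], f1[5, 5] = 200, 40                           # two pixels change strongly
    print(scd_readout(f1, f0.astype(int)))                 # [(2, 3, 200), (5, 5, 40)]
    ```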

  5. Cognitive strategies take advantage of the cooperative potential of heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Vukov, Jeromos; Santos, Francisco C.; Pacheco, Jorge M.

    2012-06-01

    Understanding the emergence and maintenance of cooperation is one of the most challenging topics of our time. Evolutionary game theory offers a very flexible framework within which to address this challenge. Here we use the prisoner's dilemma game to investigate the performance of individuals who are capable of adopting reactive strategies in communities structurally organized by means of Barabási-Albert scale-free networks. We find that basic cognitive abilities, such as the capability to distinguish their partners and act according to their previous actions, enable cooperation to thrive. This result is particularly significant whenever fear is the leading social tension, as this fosters retaliation, thus enforcing and sustaining cooperation. Being able to simultaneously reward fellow cooperators and punish defectors proves instrumental in achieving cooperation and the welfare of the community. As a result, central individuals can successfully lead the community and turn defective players into cooperative ones. Finally, even when participation costs—known to be detrimental to cooperation in scale-free networks—are explicitly included, we find that basic cognitive abilities have enough potential to help cooperation to prevail.

  6. Taking advantage of the positive side-effects of smallpox vaccination.

    PubMed

    Mayr, A

    2004-06-01

    From the introduction of smallpox vaccination approximately 200 years ago right up to its discontinuation (1980), reports by physicians and scientists about positive side-effects such as healing of chronic skin rashes, reduced susceptibility to various infectious diseases, e.g. measles, scarlet fever and whooping cough, and even the prophylactic use of the vaccination, e.g. against syphilis, were published again and again. Comparison with the period after cessation of vaccination confirms the experiences of the above vaccinators. As early as 1956, targeted research on these observations led to evidence of the 'ring-zone phenomenon', i.e. the production of soluble antiviral substances in infected chicken embryos and cell cultures. With the help of modern immunological and bioengineering methods, it was later possible to demonstrate that these effects are based on the activation of lymphoreticular cells and the regulatory effect of certain cytokines within the context of the non-specific immune system. These findings led to the development of paramunization with paraspecific vaccines from highly attenuated animal pox viruses. During attenuation, deletions in the virus DNA occur. Attenuated animal pox strains are therefore suited for the production of vector vaccines. The fact that these vector vaccines demonstrate an especially high level of paraspecific efficacy and lack harmful effects is likewise the result of the attenuated animal pox viruses. Optimum regulation of the entire immune system leads to increased paramunity already in the first few days after vaccination and to enhanced antigen recognition and thus accelerated commencement of specific immunity.

  7. Point-of-sale marketing of tobacco products: taking advantage of the socially disadvantaged?

    PubMed

    John, Robert; Cheney, Marshall K; Azad, M Raihan

    2009-05-01

    With increasing regulation of tobacco industry marketing practices, point-of-sale advertising has become an important channel for promoting tobacco products. One hundred and ten convenience stores in Oklahoma County were surveyed for tobacco-related advertising. There were significantly more point-of-sale tobacco advertisements in low-income and minority neighborhoods than in better educated, higher-income, predominantly White neighborhoods. Storeowners or managers were also interviewed to determine who has decision-making power regarding store signage and placement, and to elicit perceptions of industry tactics. Contracts with tobacco companies leave storeowners with little or no control over promotion of tobacco products within their store, and many are unaware of the implications of the tobacco industry point-of-sale practices. Local ordinances that regulated outdoor signage reduced outdoor tobacco advertisements, as well as tobacco signage and promotions within the store. Policy change, rather than education targeting storeowners, is recommended as the most effective strategy for reducing point-of-sale tobacco advertising.

  8. Taking Advantage of Selective Change Driven Processing for 3D Scanning

    PubMed Central

    Vegara, Francisco; Zuccarello, Pedro; Boluda, Jose A.; Pardo, Fernando

    2013-01-01

    This article deals with the application of the principles of SCD (Selective Change Driven) vision to 3D laser scanning. Two experimental sets have been implemented: one with a classical CMOS (Complementary Metal-Oxide Semiconductor) sensor, and the other one with a recently developed CMOS SCD sensor for comparative purposes, both using the technique known as Active Triangulation. An SCD sensor only delivers the pixels that have changed most, ordered by the magnitude of their change since their last readout. The 3D scanning method is based on the systematic search through the entire image to detect pixels that exceed a certain threshold, showing the SCD approach to be ideal for this application. Several experiments for both capturing strategies have been performed to try to find the limitations in high speed acquisition/processing. The classical approach is limited by the sequential array acquisition, as predicted by the Nyquist–Shannon sampling theorem, and this has been experimentally demonstrated in the case of a rotating helix. These limitations are overcome by the SCD 3D scanning prototype achieving a significantly higher performance. The aim of this article is to compare both capturing strategies in terms of performance in the time and frequency domains, so they share all the static characteristics including resolution, 3D scanning method, etc., thus yielding the same 3D reconstruction in static scenes. PMID:24084110

  9. How Can Monolingual Teachers Take Advantage of Learners' Native Language in Class?

    ERIC Educational Resources Information Center

    Pappamihiel, Eleni; Lynn, C. Allen

    2014-01-01

    With the increasing linguistic diversity of students in many classrooms around the world, teachers need to be well-equipped with strategies to address the learning needs of students with limited proficiency in the dominant language of the classroom. This article outlines various strategies that might help teachers reach that goal by taking…

  10. Competitive advantage on a warming planet.

    PubMed

    Lash, Jonathan; Wellington, Fred

    2007-03-01

    Whether you're in a traditional smokestack industry or a "clean" business like investment banking, your company will increasingly feel the effects of climate change. Even people skeptical about global warming's dangers are recognizing that, simply because so many others are concerned, the phenomenon has wide-ranging implications. Investors already are discounting share prices of companies poorly positioned to compete in a warming world. Many businesses face higher raw material and energy costs as more and more governments enact policies placing a cost on emissions. Consumers are taking into account a company's environmental record when making purchasing decisions. There's also a burgeoning market in greenhouse gas emission allowances (the carbon market), with annual trading in these assets valued at tens of billions of dollars. Companies that manage and mitigate their exposure to the risks associated with climate change while seeking new opportunities for profit will generate a competitive advantage over rivals in a carbon-constrained future. This article offers a systematic approach to mapping and responding to climate change risks. According to Jonathan Lash and Fred Wellington of the World Resources Institute, an environmental think tank, the risks can be divided into six categories: regulatory (policies such as new emissions standards), products and technology (the development and marketing of climate-friendly products and services), litigation (lawsuits alleging environmental harm), reputational (how a company's environmental policies affect its brand), supply chain (potentially higher raw material and energy costs), and physical (such as an increase in the incidence of hurricanes). The authors propose a four-step process for responding to climate change risk: Quantify your company's carbon footprint; identify the risks and opportunities you face; adapt your business in response; and do it better than your competitors. PMID:17348173

  12. Teachable Moment: Google Earth Takes Us There

    ERIC Educational Resources Information Center

    Williams, Ann; Davinroy, Thomas C.

    2015-01-01

    In the current educational climate, where clearly articulated learning objectives are required, it is clear that the spontaneous teachable moment still has its place. Authors Ann Williams and Thomas Davinroy think that instructors from almost any discipline can employ Google Earth as a tool to take advantage of teachable moments through the…

  13. Algorithms for physical segregation of coal

    NASA Astrophysics Data System (ADS)

    Ganguli, Rajive

    The capability for on-line measurement of the quality characteristics of conveyed coal now enables mine operators to take advantage of the inherent heterogeneity of those streams and split them into wash and no-wash stocks. Relative to processing the entire stream, this reduces the amount of coal that must be washed at the mine and thereby reduces processing costs, recovery losses, and refuse generation levels. In this dissertation, two classes of segregation algorithms, using time series models and moving windows, are developed and demonstrated using field and simulated data. In all of the developed segregation algorithms, a "cut-off" ash value was computed for coal scanned on the running conveyor belt by the ash analyzer; it determined whether the coal was sent to the wash pile or to the no-wash pile. Forecasts from time series models, at various lead times ahead, were used in one class of the developed algorithms to determine the cut-off ash levels. The time series models were updated from time to time to reflect changes in the process. Statistical Process Control (SPC) techniques were used to determine if an update was necessary at a given time; when an update was deemed necessary, optimization techniques were used to determine the next best set of model parameters. In the other class of segregation algorithms, a "few" of the most recent observations were used to determine the cut-off ash value; the number of these observations was called the window width. The window width was kept constant in some variants of this class of algorithms. The other variants improved on the fixed-window-width algorithms by varying the window width rather than keeping it constant; in these cases, SPC was used to determine the window width at any instant. Statistics of the empirical distribution and the normal distribution were used in computing the cut-off ash value in all variants of this class of algorithms. The good performance of the developed algorithms…
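
    A minimal sketch of the fixed-window variant described above (hypothetical Python; the window size, wash fraction and quantile rule are illustrative assumptions, not the dissertation's parameters):

        from collections import deque

        def segregate(ash_stream, window=50, wash_fraction=0.4):
            # Cut-off ash value = empirical quantile of the last `window`
            # observations; each scanned increment is routed accordingly.
            recent = deque(maxlen=window)
            decisions = []
            for ash in ash_stream:
                if len(recent) == window:
                    cutoff = sorted(recent)[int(wash_fraction * window)]
                    decisions.append("wash" if ash > cutoff else "no-wash")
                else:
                    decisions.append("no-wash")  # warm-up until the window fills
                recent.append(ash)
            return decisions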

  14. A Flexible Reservation Algorithm for Advance Network Provisioning

    SciTech Connect

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-04-12

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits with guaranteed bandwidth for a requested period of time. However, users currently cannot inquire about bandwidth availability, nor are they offered alternatives when reservation requests fail. In general, the number of reservation options grows exponentially with the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided total-volume and time constraints and produces options for earliest completion and shortest duration. The theoretical complexity is only O(n²r²) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
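
    A one-link sketch of the earliest-completion idea (hypothetical Python; the paper's algorithm searches whole time-dependent paths, and the segment format here is an assumption):

        def earliest_completion(segments, volume, start_time):
            # segments: time-sorted [(t0, t1, available_bw), ...] for one link.
            # Returns the earliest time a transfer of `volume` can finish.
            remaining = volume
            for t0, t1, bw in segments:
                if t1 <= start_time or bw <= 0:
                    continue
                t0 = max(t0, start_time)
                capacity = bw * (t1 - t0)
                if capacity >= remaining:
                    return t0 + remaining / bw
                remaining -= capacity
            return None  # cannot finish within the known timeline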

  15. Discovering sequence similarity by the algorithmic significance method

    SciTech Connect

    Milosavljevic, A.

    1993-02-01

    The minimal-length encoding approach is applied to define a concept of sequence similarity. A sequence is defined to be similar to another sequence or to a set of keywords if it can be encoded in a small number of bits by taking advantage of common subwords. Minimal-length encoding of a sequence is computed in linear time, using a data compression algorithm that is based on a dynamic programming strategy and the directed acyclic word graph data structure. No assumptions about common word ("k-tuple") length are made in advance, and common words of any length are considered. The newly proposed algorithmic significance method provides an exact upper bound on the probability that sequence similarity has occurred by chance, thus eliminating the need for any arbitrary choice of similarity thresholds. Preliminary experiments indicate that a small number of keywords can positively identify a DNA sequence, which is extremely relevant in the context of partial sequencing by hybridization.
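
    A simplified sketch of the significance bound (hypothetical Python; a 2-bit-per-base null model and greedy parsing stand in for the paper's minimal-length encoding):

        import math

        def encoding_gain(sequence, keywords):
            # Bits saved versus encoding A,C,G,T at 2 bits each; if d bits
            # are saved, the chance probability is bounded by 2**-d.
            null_bits = 2 * len(sequence)
            i, bits = 0, 0
            while i < len(sequence):
                hit = next((k for k in keywords if sequence.startswith(k, i)), None)
                if hit:  # 1-bit flag + keyword index
                    bits += 1 + math.ceil(math.log2(len(keywords)))
                    i += len(hit)
                else:    # 1-bit flag + literal base
                    bits += 3
                    i += 1
            d = null_bits - bits
            return d, 2.0 ** -d  # bits saved, upper bound on p-value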

  17. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
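
    For reference, the textbook form of the update discussed above (a minimal Python sketch; units and field sampling are left out):

        import numpy as np

        def boris_push(x, v, E, B, q, m, dt):
            # Half electric kick, magnetic rotation, half electric kick.
            v_minus = v + 0.5 * q * E * dt / m
            t = 0.5 * q * B * dt / m
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)
            v_new = v_plus + 0.5 * q * E * dt / m
            return x + v_new * dt, v_new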

  18. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong; et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  19. Analytical advantages of multivariate data processing. One, two, three, infinity?

    PubMed

    Olivieri, Alejandro C

    2008-08-01

    Multidimensional data are being abundantly produced by modern analytical instrumentation, calling for new and powerful data-processing techniques. Research in the last two decades has resulted in the development of a multitude of different processing algorithms, each equipped with its own sophisticated artillery. Analysts have slowly discovered that this body of knowledge can be appropriately classified, and that common aspects pervade all these seemingly different ways of analyzing data. As a result, going from univariate data (a single datum per sample, employed in the well-known classical univariate calibration) to multivariate data (data arrays per sample of increasingly complex structure and number of dimensions) is known to provide a gain in sensitivity and selectivity, combined with analytical advantages which cannot be overestimated. The first-order advantage, achieved using vector sample data, allows analysts to flag new samples which cannot be adequately modeled with the current calibration set. The second-order advantage, achieved with second- (or higher-) order sample data, allows one not only to mark new samples containing components which do not occur in the calibration phase but also to model their contribution to the overall signal, and most importantly, to accurately quantitate the calibrated analyte(s). No additional analytical advantages appear to be known for third-order data processing. Future research may permit, among other interesting issues, to assess if this "1, 2, 3, infinity" situation of multivariate calibration is really true. PMID:18613646

  20. Taking the Long View

    ERIC Educational Resources Information Center

    Bennett, Robert B., Jr.

    2010-01-01

    Legal studies faculty need to take the long view in their academic and professional lives. Taking the long view would seem to be a cliched piece of advice, but too frequently legal studies faculty, like their students, get focused on meeting the next short-term hurdle--getting through the next class, grading the next stack of papers, making it…

  1. The Down Syndrome Advantage: Fact or Fiction?

    ERIC Educational Resources Information Center

    Corrice, April M.; Glidden, Laraine Masters

    2009-01-01

    The "Down syndrome advantage" is the popular conception that children with Down syndrome are easier to rear than children with other developmental disabilities. We assessed whether mothers of children with developmental disabilities would demonstrate a consistent Down syndrome advantage as their children aged from 12 to 18 years. Results did not…

  2. A possible heterozygous advantage in muscular dystrophy.

    PubMed

    Emery, A E H

    2016-01-01

    In certain autosomal recessive disorders there is suggestive evidence that heterozygous carriers may have some selective advantage over normal homozygotes. These include, for example, cystic fibrosis, Tay-Sachs disease and phenylketonuria. The best example so far, however, is that of significant heterozygous advantage in sickle-cell anaemia with increased resistance to falciparum malaria. PMID:27245530

  3. Bilateral Advantages in Subitizing With Visual Masking.

    PubMed

    Pryor, Campbell G; Howe, Piers D L

    2015-01-01

    Performance on a range of visual-processing tasks has been shown to improve when information is split bilaterally across the left and right visual hemifields rather than being restricted to a single visual hemifield. However, a recent study by Delvenne et al. found no such bilateral advantage for subitizing, which is our ability to rapidly and accurately enumerate small quantities of objects. This finding is particularly surprising, as it contradicts the prediction of FINgers of INSTantiation theory that subitizing should benefit from bilateral presentation. Our study investigated the issue by determining if there are any circumstances where a bilateral advantage for subitization occurs. Contrary to Delvenne et al., we found that subitizing could show bilateral advantages, but only when the display was backward-masked. We discuss these findings in relation to how the rate of encoding and the time available for this encoding may affect bilateral advantages in subitizing. A general model is proposed under which bilateral advantages could be explained.

  4. [Algorithm for assessment of exposure to asbestos].

    PubMed

    Martines, V; Fioravanti, M; Anselmi, A; Attili, F; Battaglia, D; Cerratti, D; Ciarrocca, M; D'Amelio, R; De Lorenzo, G; Ferrante, E; Gaudioso, F; Mascia, E; Rauccio, A; Siena, S; Palitti, T; Tucci, L; Vacca, D; Vigliano, R; Zelano, V; Tomei, F; Sancini, A

    2010-01-01

    There is no universally approved method in the scientific literature to identify subjects exposed to asbestos and divide them into classes according to intensity of exposure. The aim of our work is to develop an algorithm based on occupational-history information provided by a large group of workers. The algorithm discriminates, in a probabilistic way, the risk of exposure by attributing a code to each worker (the ELSA code--work-estimated exposure to asbestos). The ELSA code is obtained through a synthesis of the information that the international scientific literature identifies as most predictive for the onset of asbestos-related abnormalities. Four dimensions are analyzed and described: 1) present and/or past occupation; 2) type of materials and equipment used in performing working activity; 3) environment where these activities are carried out; 4) period of time when activities are performed. Although the information is gathered subjectively, the decision procedure is objective and is based on a systematic evaluation of asbestos exposure. From the combination of the four dimensions it is possible to derive 108 ELSA codes, divided into three typological profiles of estimated exposure risk. The application of the algorithm offers some advantages compared to other methods used for identifying individuals exposed to asbestos: 1) it can be computed for both present and past exposure to asbestos; 2) the classification of workers exposed to asbestos using the ELSA code is more detailed than the one obtained with a Job Exposure Matrix (JEM), because the ELSA code takes into account other risk indicators besides those considered in the JEM. This algorithm was developed for a project sponsored by the Italian Armed Forces and is adaptable to other work settings in which the risk of asbestos exposure must be assessed.

  5. Give/Take

    SciTech Connect

    2007-09-12

    Give and Take are a set of companion utilities that allow a secure transfer of files from one user to another without exposing the files to third parties. The named files are copied to a spool area. The receiver can retrieve the files by running the "take" program. Ownership of the files remains with the giver until they are taken. Certain users may be limited to taking files only from specific givers. For these users, files may only be taken from givers who are members of the gt-uid-group, where uid is the UNIX id of the limited user.

  7. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
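
    A much-simplified sketch of the shift-and-mask search (hypothetical Python; the actual synthesizer also rotates masks and applies offsets):

        def synthesize_hash(keys, max_shift=32, mask_bits=8):
            # Try shift/mask combinations until every key maps to a unique
            # value, i.e. a collision-free constant-time membership test.
            mask = (1 << mask_bits) - 1
            for shift in range(max_shift):
                mapped = [(k >> shift) & mask for k in keys]
                if len(set(mapped)) == len(keys):
                    return lambda k, s=shift, m=mask: (k >> s) & m
            return None  # no solution here; widen the parameter search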

  8. Take Your Medicines Safely

    MedlinePlus Videos and Cool Tools

    ... better, the antibiotic is working in killing the bacteria, but it might not completely give what they call a "bactericidal effect." That means taking the bacteria completely out of the system. It might be ...

  9. Self-Advantage in the Online World

    PubMed Central

    Yang, Hongsheng; Wang, Fang; Gu, Nianjun; Zhang, Ying

    2015-01-01

    In the current research, screen name was employed to explore the possible cognitive advantage for self-related online material. The results showed that one’s own screen name and real name were detected faster than famous names in both visual search and discrimination tasks. In comparison, there was no difference in visual search speed for the two kinds of self-related names. These findings extend self-advantage from the physical world to the virtual online environment and confirm its robustness. In addition, the present findings also suggest that familiarity might not be the determining factor for self-advantage. PMID:26461490

  10. The advantage of first mention in Spanish

    PubMed Central

    CARREIRAS, MANUEL; GERNSBACHER, MORTON ANN; VILLA, VICTOR

    2015-01-01

    An advantage of first mention—that is, faster access to participants mentioned first in a sentence—has previously been demonstrated only in English. We report three experiments demonstrating that the advantage of first mention occurs also in Spanish sentences, regardless of whether the first-mentioned participants are syntactic subjects, and regardless, too, of whether they are proper names or inanimate objects. Because greater word-order flexibility is allowed in Spanish than in English (e.g., nonpassive object-verb-subject constructions exist in Spanish), these findings provide additional evidence that the advantage of first mention is a general cognitive phenomenon. PMID:24203596

  12. Paedomorphic facial expressions give dogs a selective advantage.

    PubMed

    Waller, Bridget M; Peirce, Kate; Caeiro, Cátia C; Scheider, Linda; Burrows, Anne M; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process. PMID:24386109

  13. The Democratic Take

    ERIC Educational Resources Information Center

    Lehane, Christopher S.

    2008-01-01

    The 2008 presidential election stands as a "change" election. The public's anxiety over the challenges globalization poses to the future of the American Dream is driving a desire for the country to change direction. The American people understand that what will give the nation a competitive advantage in a global marketplace are the skills,…

  15. Women's Memory Advantage Might Skew Alzheimer's Diagnosis

    MedlinePlus

    Women tend to hold on to better verbal memory skills as they age compared to men, study ...

  16. THE HOME ADVANTAGE IN MAJOR LEAGUE BASEBALL.

    PubMed

    Jones, Marshall B

    2015-12-01

    Home advantage is smaller in baseball than in other major professional sports for men, specifically football, basketball, or soccer. This paper advances an explanation. It begins by reviewing the main observations to support the view that there is little or no home advantage in individual sports. It then presents the case that home advantage originates in impaired teamwork among the away players. The need for teamwork and the extent of it vary from sport to sport. To the extent that a sport requires little teamwork it is more like an individual sport, and the home team would be expected to enjoy only a small advantage. Interactions among players on the same side (teamwork) are much less common in baseball than in the other sports considered. PMID:26654988

  17. Medicare advantage plans at a crossroads--yet again.

    PubMed

    Berenson, Robert A; Dowd, Bryan E

    2009-01-01

    Since risk-taking private health insurance plans were introduced into Medicare twenty-five years ago, policymakers have disagreed on these plans' fundamental purposes. Articulated objectives, which include improving quality, reducing government spending, providing additional benefits (without expanding the entitlement), increasing choices for beneficiaries, and providing benchmark competition for traditional Medicare, are plausible but sometimes conflicting. The program's history demonstrates continuous shifts in emphasis among these objectives. We enumerate the differing advantages of public and private plans in Medicare and argue that policymakers should focus their efforts on leveling the public-private playing field, thereby dealing forthrightly with the reality of growing fiscal problems.

  19. A grammar-based semantic similarity algorithm for natural language sentences.

    PubMed

    Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to "artificial language", such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure.
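
    A crude stand-in for the flavor of such measures (hypothetical Python; the paper's algorithm uses corpus-based word similarity and grammatical rules rather than raw overlap):

        import math

        def sentence_similarity(s1, s2, alpha=0.8):
            # Combine lexical overlap with a word-order term.
            w1, w2 = s1.lower().split(), s2.lower().split()
            joint = sorted(set(w1) | set(w2))
            overlap = len(set(w1) & set(w2)) / len(joint)
            order = lambda ws: [ws.index(w) + 1 if w in ws else 0 for w in joint]
            r1, r2 = order(w1), order(w2)
            diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
            norm = math.sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2))) or 1.0
            return alpha * overlap + (1 - alpha) * (1 - diff / norm)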

  1. Robust interacting multiple model algorithms based on multi-sensor fusion criteria

    NASA Astrophysics Data System (ADS)

    Zhou, Weidong; Liu, Mengmeng

    2016-01-01

    This paper is concerned with the state estimation problem for a class of Markov jump linear discrete-time stochastic systems. Three novel interacting multiple model (IMM) algorithms are proposed based on the H∞ technique, the correlation among estimation errors of mode-conditioned filters and the multi-sensor optimal information fusion criteria. Mode probabilities in the novel algorithms are derived based on the error cross-covariances instead of likelihood functions. The H∞ technique taking the place of Kalman filtering is applied to enhance the robustness of the new approaches. Theoretical analysis and Monte Carlo simulation results indicate that the proposed algorithms are effective and have an obvious advantage in velocity estimation when tracking a maneuvering target.
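
    The standard likelihood-based form of the mode-probability step, for orientation (minimal Python sketch; the paper replaces the likelihoods with weights derived from error cross-covariances):

        import numpy as np

        def update_mode_probs(prior, trans, likelihoods):
            # trans[i, j] = P(switch to mode j | mode i); Markov prediction
            # followed by a Bayesian update and renormalization.
            predicted = trans.T @ prior
            posterior = predicted * likelihoods
            return posterior / posterior.sum()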

  2. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.
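
    A toy sketch of the GA half of the scheme (hypothetical Python; the residual function, population size and mutation scale are assumptions):

        import numpy as np

        def ga_bias_search(residual, n_sensors, pop=40, gens=60, sigma=0.1, seed=0):
            # Evolve candidate sensor-bias vectors to minimize `residual`,
            # e.g. the mismatch between model-predicted and measured values.
            rng = np.random.default_rng(seed)
            population = rng.normal(0.0, 1.0, (pop, n_sensors))
            for _ in range(gens):
                fitness = np.array([residual(b) for b in population])
                parents = population[np.argsort(fitness)[: pop // 2]]  # keep best half
                children = parents + rng.normal(0.0, sigma, parents.shape)
                population = np.vstack([parents, children])
            return min(population, key=residual)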

  3. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.

    PubMed

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei

    2016-09-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509

  4. The Oilheat Manufacturers Association's Oilheat Advantages Project

    SciTech Connect

    Hedden, R.; Bately, J.E.

    1995-04-01

    The Oilheat Advantages Project is the Oilheat Manufacturers Association's first project. It involves the creation and dissemination of a unified, well documented, compellingly packaged oilheat story. The project involves three steps: the first step is to pull together all the existing data on the advantages of oilheat into a single, well documented engineering report. The second step will be to rewrite and package the technical document into a consumer piece and a scripted presentation supported with overheads, and to disseminate the information throughout the industry. The third step will be to fund new research to update existing information and discover new advantages of oilheat; this step will begin next year. The information will be packaged in the following formats: the Engineering Document, which will include all the technical information, including the creditable third-party sources for all the findings on the many advantages of oilheat; the Consumer Booklet, which summarizes the findings of the Engineering Document in simple language with easy-to-understand illustrations and graphs; a series of single-topic Statement Stuffers on each of the advantages; an Overhead Transparency-Supported Scripted Show that can be used by industry representatives for presentations to the general public, schools, civic groups, and service clubs; and the periodic publication of updates to the Oilheat Advantages Study.

  5. "Greenbook Algorithms and Hardware Needs Analysis"

    SciTech Connect

    De Jong, Wibe A.; Oehmen, Chris S.; Baxter, Douglas J.

    2007-01-09

    "This document describes the algorithms, and hardware balance requirements needed to enable the solution of real scientific problems in the DOE core mission areas of environmental and subsurface chemistry, computational and systems biology, and climate science. The MSCF scientific drivers have been outlined in the Greenbook, which is available online at http://mscf.emsl.pnl.gov/docs/greenbook_for_web.pdf . Historically, the primary science driver has been the chemical and the molecular dynamics of the biological science area, whereas the remaining applications in the biological and environmental systems science areas have been occupying a smaller segment of the available hardware resources. To go from science drivers to hardware balance requirements, the major applications were identified. Major applications on the MSCF resources are low- to high-accuracy electronic structure methods, molecular dynamics, regional climate modeling, subsurface transport, and computational biology. The algorithms of these applications were analyzed to identify the computational kernels in both sequential and parallel execution. This analysis shows that a balanced architecture is needed with respect to processor speed, peak flop rate, peak integer operation rate, and memory hierarchy, interprocessor communication, and disk access and storage. A single architecture can satisfy the needs of all of the science areas, although some areas may take greater advantage of certain aspects of the architecture. "

  6. Learning to take actions

    SciTech Connect

    Khardon, R.

    1996-12-31

    We formalize a model for supervised learning of action strategies in dynamic stochastic domains and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a particularly useful bias for action strategies based on production rule systems. We show that a subset of production rule systems, including rules in predicate-calculus style, small hidden state, and unobserved support predicates, is properly learnable. The bias we introduce enables the learning algorithm to invent the recursive support predicates that are used in the action strategy and to reconstruct the internal state of the strategy. It is also shown that hierarchical strategies are learnable if a helpful teacher is available, but that otherwise the problem is computationally hard.

  7. Categorizing Variations of Student-Implemented Sorting Algorithms

    ERIC Educational Resources Information Center

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

    In this study, we examined freshmen students' sorting algorithm implementations in data structures and algorithms' course in two phases: at the beginning of the course before the students received any instruction on sorting algorithms, and after taking a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  8. Simulating Price-Taking

    ERIC Educational Resources Information Center

    Engelhardt, Lucas M.

    2015-01-01

    In this article, the author presents a price-takers' market simulation geared toward principles-level students. This simulation demonstrates that price-taking behavior is a natural result of the conditions that create perfect competition. In trials, there is a significant degree of price convergence in just three or four rounds. Students find this…

  9. Take Pride in America.

    ERIC Educational Resources Information Center

    Indiana State Dept. of Education, Indianapolis. Center for School Improvement and Performance.

    During the 1987-88 school year the Indiana Department of Education assisted the United States Department of the Interior and the Indiana Department of Natural Resources with a program which asked students to become involved in activities to maintain and manage public lands. The 1987 Take Pride in America (TPIA) school program encouraged volunteer…

  10. Take a Bow

    ERIC Educational Resources Information Center

    Spitzer, Greg; Ogurek, Douglas J.

    2009-01-01

    Performing-arts centers can provide benefits at the high school and collegiate levels, and administrators can take steps now to get the show started. When a new performing-arts center comes to town, local businesses profit. Events and performances draw visitors to the community. Ideally, a performing-arts center will play many roles: entertainment…

  11. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
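
    The RNG itself is simple to state (a minimal Python sketch of the standard definition; brute force for clarity):

        from math import dist

        def relative_neighborhood_graph(points):
            # Keep edge (u, v) only if no third node w is closer to both
            # u and v than they are to each other.
            edges = []
            for i, u in enumerate(points):
                for j, v in enumerate(points):
                    if j <= i:
                        continue
                    d_uv = dist(u, v)
                    if not any(max(dist(u, w), dist(v, w)) < d_uv
                               for k, w in enumerate(points) if k not in (i, j)):
                        edges.append((i, j))
            return edges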

  12. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  13. Is There an Islamist Political Advantage?

    PubMed Central

    Cammett, Melani; Luong, Pauline Jones

    2014-01-01

    There is a widespread presumption that Islamists have an advantage over their opponents when it comes to generating mass appeal and winning elections. The question remains, however, as to whether these advantages—or, what we refer to collectively as an Islamist political advantage—actually exist. We argue that—to the extent that Islamists have a political advantage—the primary source of this advantage is reputation rather than the provision of social services, organizational capacity, or ideological hegemony. Our purpose is not to dismiss the main sources of the Islamist governance advantage identified in scholarly literature and media accounts, but to suggest a different causal path whereby each of these factors individually and sometimes jointly promotes a reputation for Islamists as competent, trustworthy, and pure. It is this reputation for good governance that enables Islamists to distinguish themselves in the streets and at the ballot box. PMID:25767370

  14. The Spillover Effects of Medicare Managed Care: Medicare Advantage and Hospital Utilization

    PubMed Central

    Baicker, Katherine; Chernew, Michael; Robbins, Jacob

    2013-01-01

    More than a quarter of Medicare beneficiaries are enrolled in Medicare Advantage, which was created in large part to improve the efficiency of health care delivery by promoting competition among private managed care plans. This paper explores the spillover effects of the Medicare Advantage program on the traditional Medicare program and other patients, taking advantage of changes in Medicare Advantage payment policy to isolate exogenous increases in Medicare Advantage enrollment and trace out the effects of greater managed care penetration on hospital utilization and spending throughout the health care system. We find that when more seniors enroll in Medicare managed care, hospital costs decline for all seniors and for commercially insured younger populations. Greater managed care penetration is not associated with fewer hospitalizations, but is associated with lower costs and shorter stays per hospitalization. These spillovers are substantial – offsetting more than 10% of increased payments to Medicare Advantage plans. PMID:24308880

  16. Take the "C" Train

    ERIC Educational Resources Information Center

    Lawton, Rebecca

    2008-01-01

    In this essay, the author recalls several of her experiences in which she successfully pulled her boats out of river holes by throwing herself into the water as a sea-anchor. She learned this trick from her senior guides at a spring training. Her guides told her, "When you're stuck in a hole, take the 'C' train." "Meaning?" the author asked her…

  17. Small fenestra stapedectomy: technique and advantages.

    PubMed

    Pappas, J J; Bailey, H A; Graham, S S

    1984-11-01

    We discuss the rationale and advantages of the small fenestra technique (SFT) of stapedectomy. When results from conventional stapedectomy techniques are compared with those of SFT, the small fenestra technique shows improved hearing in the high frequencies of 2,000, 4,000, and 8,000 Hz, improved speech discrimination, reduced vestibular disturbance, and reduced iatrogenic trauma to the cochlea.

  18. Creating Competitive Advantage through Effective Management Education.

    ERIC Educational Resources Information Center

    Longenecker, Clinton O.; Ariss, Sonny S.

    2002-01-01

    Managers trained in executive education programs (n=203) identified ways in which management education can increase an organization's competitive advantage: exposure to new ideas and practices, skill development, and motivation. Characteristics of effective management education included experience-based learning orientation, credible instructors,…

  19. The Advantages of Using a Network.

    ERIC Educational Resources Information Center

    McCarthy, Robert

    1989-01-01

    Discusses the growth of computer networks in elementary and secondary schools and describes numerous benefits for both instructional and management functions. Topics discussed include ease of use; educational advantages; examples of use in physics, writing, and journalism classes; student records management; cost benefits; and greater efficiency.…

  20. Robustness of the Sequential Lineup Advantage

    ERIC Educational Resources Information Center

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  1. Advantages of Studying Processes in Educational Research

    ERIC Educational Resources Information Center

    Schmitz, Bernhard

    2006-01-01

    It is argued that learning and instruction could be conceptualized from a process-analytic perspective. Important questions from the field of learning and instruction are presented which can be answered using our approach of process analyses. A classification system of process concepts and methods is given. One main advantage of this kind of…

  2. Improvements to previous algorithms to predict gene structure and isoform concentrations using Affymetrix Exon arrays

    PubMed Central

    2010-01-01

    Background: Exon arrays provide a way to measure the expression of different isoforms of genes in an organism. Most of the procedures to deal with these arrays are focused on gene expression or on exon expression. Although the only biological analytes that can properly be assigned a concentration are transcripts, there are very few algorithms that focus on them. The reason is that previously developed summarization methods do not work well if applied to transcripts. In addition, gene structure prediction, i.e., the correspondence between probes and novel isoforms, is a field which is still unexplored. Results: We have modified and adapted a previous algorithm to take advantage of the special characteristics of the Affymetrix exon arrays. The structure and concentration of transcripts--some of them possibly unknown--in microarray experiments were predicted using this algorithm. Simulations showed that the suggested modifications improved both specificity (SP) and sensitivity (ST) of the predictions. The algorithm was also applied to different real datasets, showing its effectiveness and concordance with PCR-validated results. Conclusions: The proposed algorithm shows a substantial improvement in performance over the previous version. This improvement is mainly due to the exploitation of the redundancy of the Affymetrix exon arrays. An R package of SPACE with the updated algorithms has been developed and is freely available. PMID:21110835
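
    A sketch of the core estimation step (hypothetical Python; the incidence model and the use of non-negative least squares are a simplification of the paper's method):

        import numpy as np
        from scipy.optimize import nnls

        def isoform_concentrations(Y, structure):
            # Y: probes x samples intensities; structure: probes x isoforms
            # 0/1 incidence matrix (which probe interrogates which isoform).
            A = structure.astype(float)
            return np.column_stack([nnls(A, Y[:, j])[0] for j in range(Y.shape[1])])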

  3. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
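
    A hedged sketch of a thermal-free test cascade of this kind (hypothetical Python; the bands and thresholds are illustrative, not the algorithm's actual tests):

        import numpy as np

        def simple_cloud_mask(blue, nir, swir, bright_thresh=0.3, flat_thresh=0.8):
            # Clouds are bright in the visible and spectrally flat across bands.
            bright = blue > bright_thresh
            flat = np.minimum(nir, swir) / np.maximum(blue, 1e-6) > flat_thresh
            return bright & flat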

  5. The half-truth of first-mover advantage.

    PubMed

    Suarez, Fernando; Lanzolla, Gianvito

    2005-04-01

    Many executives take for granted that the first company in a new product category gets an unbeatable head start and reaps long-lasting benefits. But that doesn't always happen. The authors of this article discovered that much depends on the pace at which the category's technology is changing and the speed at which the market is evolving. By analyzing these two factors, companies can improve their odds of succeeding as first movers with the resources they possess. Gradual evolution in both the technology and the market provides a first mover with the best conditions for creating a dominant position that is long lasting (Hoover in the vacuum cleaner industry is a good example). In such calm waters, a company can defend its advantages even without exceptional skills or extensive financial resources. When the market is changing rapidly and the product isn't, a first entrant with extensive resources can obtain a long-lasting advantage (as Sony did with its Walkman personal stereo); a company with only limited resources probably must settle for a short-term benefit. When the market is static but the product is changing constantly, first-mover advantages of either kind--durable or short-lived--are unlikely. Only companies with very deep pockets can survive (think of Sony and the digital cameras it pioneered). Rapid churn in both the technology and the market creates the worst conditions. But if companies have an acute sense of when to exit--as Netscape demonstrated when it agreed to be acquired by AOL--a worthwhile short-term gain is possible. Before venturing into a newly forming market, you need to analyze the environment, assess your resources, then determine which type of first-mover advantage is most achievable. Once you've gone into the water, you have no choice but to swim. PMID:15807045

  6. Clinical algorithms as a tool for psychotherapy with Latino clients.

    PubMed

    Manoleas, Peter; Garcia, Betty

    2003-04-01

    Clinical algorithms have the advantage of being able to integrate clinical, cultural, and environmental factors into a unified method of planning and implementing treatment. A model for practice is proposed that uses 3 algorithms as guides for conducting psychotherapy with Latino clients, the uses of which are illustrated in a single, ongoing case vignette. The algorithm format has the additional advantage of easily adapting itself for data gathering for research purposes.

  7. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  8. Advantages of Parallel Processing and the Effects of Communications Time

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Allman, Mark

    2000-01-01

    Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
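
    To make the trade-off concrete, the sketch below (not from the report; all numbers are invented) models total run time as computation divided across hosts plus a communication term that grows with host count, synchronization rounds, and link latency, showing why a long-delay satellite link shifts the optimal number of hosts downward.

        def run_time(work_s, n_hosts, rounds, bytes_per_host, bw_bps, latency_s):
            """Toy model: perfectly divisible work plus per-round latency cost."""
            compute = work_s / n_hosts
            communicate = (rounds * n_hosts * latency_s
                           + n_hosts * bytes_per_host * 8 / bw_bps)
            return compute + communicate

        # Compare a terrestrial network with a long-delay satellite link.
        for latency, label in [(0.001, "LAN"), (0.6, "satellite")]:
            times = [(n, run_time(3600, n, 100, 1e6, 1e7, latency))
                     for n in (1, 2, 4, 8, 16)]
            n_best, t_best = min(times, key=lambda nt: nt[1])
            print(f"{label}: best host count = {n_best}, total time = {t_best:.0f} s")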

  9. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    SciTech Connect

    Huang, Peng; Andersson, Sean B.

    2014-06-15

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides.

  10. Reconstruction algorithm for limited-angle diffraction tomography for microwave NDE

    SciTech Connect

    Paladhi, P. Roy; Klaser, J.; Tayebi, A.; Udpa, L.; Udpa, S.

    2014-02-18

    Microwave tomography is becoming a popular imaging modality in nondestructive evaluation and medicine. A commonly encountered challenge in tomography in general is that in many practical situations full 360° angular access is not possible, and with limited access the quality of the reconstructed image is compromised. This paper presents an approach for reconstruction with limited angular access in diffraction tomography. The algorithm takes advantage of redundancies in the image Fourier space data obtained from diffracted field measurements and couples them to an error minimization technique based on constrained total variation (CTV) minimization. Initial results from simulated data are presented here to validate the approach.
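
    As a loose illustration of the idea (not the authors' CTV algorithm), the sketch below alternates a gradient step on a smoothed total-variation penalty with a Fourier-domain data-consistency step, recovering a piecewise-constant phantom from incomplete frequency samples; the random sampling mask stands in for the missing angular coverage.

        import numpy as np

        def tv_grad(x, eps=1e-6):
            """Gradient of a smoothed total-variation penalty."""
            dx = np.diff(x, axis=0, append=x[-1:, :])
            dy = np.diff(x, axis=1, append=x[:, -1:])
            mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
            px, py = dx / mag, dy / mag
            # Negative divergence of the normalized gradient field.
            return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

        rng = np.random.default_rng(0)
        truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0  # piecewise-constant phantom
        mask = rng.random(truth.shape) < 0.3                   # incomplete Fourier coverage
        data = np.fft.fft2(truth) * mask

        x = np.zeros_like(truth)
        for _ in range(300):
            x -= 0.1 * tv_grad(x)                 # descend the TV penalty
            fx = np.fft.fft2(x)
            fx[mask] = data[mask]                 # re-impose the measured samples
            x = np.fft.ifft2(fx).real

        err = np.linalg.norm(x - truth) / np.linalg.norm(truth)
        print(f"relative reconstruction error: {err:.3f}")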

  11. Flap reconstruction of the knee: A review of current concepts and a proposed algorithm

    PubMed Central

    Gravvanis, Andreas; Kyriakopoulos, Antonios; Kateros, Konstantinos; Tsoutsos, Dimosthenis

    2014-01-01

    A literature search focusing on flap knee reconstruction revealed much controversy regarding the optimal management of defects around the knee. Muscle flaps are the preferred option, mainly in infected wounds. Perforator flaps have recently been introduced in knee coverage with significant advantages due to low donor morbidity and long pedicles with a wide arc of rotation. In the case of a free flap, the choice of recipient vessels is the key point of the reconstruction. Taking the published experience into account, a reconstructive algorithm is proposed according to the size and location of the wound, the presence of infection and/or a 3-dimensional defect. PMID:25405089

  12. Developments of aerosol retrieval algorithm for Geostationary Environmental Monitoring Spectrometer (GEMS) and the retrieval accuracy test

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, J.; Jeong, U.; Ahn, C.; Bhartia, P. K.; Torres, O.

    2013-12-01

    A scanning UV-Visible spectrometer, the GEMS (Geostationary Environment Monitoring Spectrometer) onboard the GEO-KOMPSAT2B (Geostationary Korea Multi-Purpose Satellite), is planned to be launched into geostationary orbit in 2018. The GEMS employs hyper-spectral imaging with 0.6 nm resolution to observe solar backscatter radiation in the UV and Visible range. In the UV range, the low surface contribution to the backscattered radiation and the strong interaction between aerosol absorption and molecular scattering can be advantageous in retrieving aerosol optical properties such as aerosol optical depth (AOD) and single scattering albedo (SSA). Taking advantage of this, the OMI UV aerosol algorithm has provided information on absorbing aerosols (Torres et al., 2007; Ahn et al., 2008). This study presents a UV-VIS algorithm to retrieve AOD and SSA from GEMS. The algorithm is based on the general inversion method, which uses a pre-calculated look-up table with assumed aerosol properties and measurement conditions. To assess the retrieval accuracy, the error of the look-up table method introduced by the interpolation of pre-calculated radiances is estimated using a reference dataset, and the uncertainties associated with aerosol type and height are evaluated. Also, the GEMS aerosol algorithm is tested with measured normalized radiances from OMI, a provisional dataset for GEMS measurements, and the results are compared with values from AERONET measurements over Asia. Additionally, a method for simultaneous retrieval of AOD and aerosol height is discussed.
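
    A minimal sketch of the look-up-table inversion step, with an invented one-dimensional forward model: radiance is tabulated against AOD and a measured radiance is inverted by interpolation. The real LUT spans many more dimensions (geometry, aerosol type, SSA, height).

        import numpy as np

        aod_grid = np.linspace(0.0, 3.0, 16)              # LUT nodes
        lut = 0.08 + 0.12 * (1 - np.exp(-aod_grid))       # toy monotonic forward model

        measured = 0.15                                   # observed normalized radiance
        aod = np.interp(measured, lut, aod_grid)          # invert the table
        exact = -np.log(1 - (measured - 0.08) / 0.12)     # closed-form inverse of the toy model
        print(f"retrieved AOD: {aod:.3f} (exact: {exact:.3f})")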

  13. An Evolutionary Algorithm with Double-Level Archives for Multiobjective Optimization.

    PubMed

    Chen, Ni; Chen, Wei-Neng; Gong, Yue-Jiao; Zhan, Zhi-Hui; Zhang, Jun; Li, Yun; Tan, Yu-Song

    2015-09-01

    Existing multiobjective evolutionary algorithms (MOEAs) tackle a multiobjective problem either as a whole or as several decomposed single-objective sub-problems. Though the problem decomposition approach generally converges faster through optimizing all the sub-problems simultaneously, two issues are not fully addressed: the distribution of solutions often depends on the a priori problem decomposition, and population diversity among sub-problems can be lacking. In this paper, an MOEA with double-level archives is developed. The algorithm takes advantage of both the multiobjective-problem-level and the sub-problem-level approaches by introducing two types of archives, i.e., the global archive and the sub-archive. In each generation, self-reproduction with the global archive and cross-reproduction between the global archive and sub-archives both breed new individuals. The global archive and sub-archives communicate through cross-reproduction, and are updated using the reproduced individuals. Such a framework thus retains fast convergence, and at the same time handles solution distribution along the Pareto front (PF) with scalability. To test the performance of the proposed algorithm, experiments are conducted on both the widely used benchmarks and a set of truly disconnected problems. The results verify that, compared with state-of-the-art MOEAs, the proposed algorithm offers competitive advantages in distance to the PF, solution coverage, and search speed. PMID:25343775
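
    The following simplified sketch (not the authors' code) illustrates the double-archive idea on a toy bi-objective problem: each sub-problem keeps a scalarized sub-archive entry, a global non-dominated archive is maintained alongside, and cross-reproduction blends members of the two archive levels.

        import numpy as np

        rng = np.random.default_rng(1)
        DIM, K, GENS = 5, 10, 200

        def objectives(x):                         # simple convex bi-objective problem
            return np.array([x[0], 1.0 - x[0] + np.sum(x[1:] ** 2)])

        def dominates(a, b):
            return bool(np.all(a <= b) and np.any(a < b))

        def scalar(fit, w):                        # weighted-sum scalarization
            return w[0] * fit[0] + w[1] * fit[1]

        weights = [(k / (K - 1), 1 - k / (K - 1)) for k in range(K)]
        sub_archive = [rng.random(DIM) for _ in range(K)]   # one entry per sub-problem
        global_archive = list(sub_archive)

        for _ in range(GENS):
            for k, w in enumerate(weights):
                g = global_archive[rng.integers(len(global_archive))]
                # Cross-reproduction: blend a global member with the sub-archive member.
                child = np.clip(0.5 * (g + sub_archive[k])
                                + 0.1 * rng.normal(size=DIM), 0, 1)
                if scalar(objectives(child), w) < scalar(objectives(sub_archive[k]), w):
                    sub_archive[k] = child                  # sub-archive update
                fc = objectives(child)
                if not any(dominates(objectives(s), fc) for s in global_archive):
                    global_archive = [s for s in global_archive
                                      if not dominates(fc, objectives(s))] + [child]

        front = np.array([objectives(s) for s in global_archive])
        print(f"{len(front)} non-dominated points, f1 spans "
              f"[{front[:, 0].min():.3f}, {front[:, 0].max():.3f}]")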

  14. A Note on Evolutionary Algorithms and Its Applications

    ERIC Educational Resources Information Center

    Bhargava, Shifali

    2013-01-01

    This paper introduces evolutionary algorithms with their applications in multi-objective optimization. Here, elitist and non-elitist multiobjective evolutionary algorithms are discussed with their advantages and disadvantages. We also discuss constrained multiobjective evolutionary algorithms and their applications in various areas.

  15. Spatial mapping takes time.

    PubMed

    Whishaw, I Q

    1998-01-01

    The experiment tested the prediction that spatial mapping takes time and asked whether time use is reflected in the overt behavior of a performing animal. The study examines this question by exploiting the expected behavioral differences of control rats and rats with hippocampal formation damage induced with fimbria-fornix (FF) lesions on a spatial navigation task. Previous studies have shown that control rats use a mapping strategy, in which they use the relative positions of environmental cues to reach places in space, whereas FF rats use a cue-based strategy, in which they are guided by a single cue or their own body orientation. Therefore, control and FF rats were overtrained on a complex foraging task in which they left a burrow to retrieve eight food pellets hidden around the perimeter of a circular table. The control rats retrieved the food pellets in order of their distance from the burrow, took direct routes to the food, and made few errors, all of which suggested they used a spatial strategy. The FF rats were less likely to retrieve food as a function of its distance, took a circular path around the perimeter of the table, and made many errors, suggesting they used a cue-based strategy. Despite taking shorter routes than the FF rats, the control rats had proportionally slower response speeds. Their slow response speeds support the hypothesis that spatial mapping takes time and that mapping time is reflected in behavior. The results are discussed in relation to their relevance to spatial mapping theory, hippocampal function, and the evolution of foraging strategies.

  16. Physics Take-Outs

    NASA Astrophysics Data System (ADS)

    Riendeau, Diane; Hawkins, Stephanie; Beutlich, Scott

    2016-03-01

    Most teachers want students to think about their course content not only during class but also throughout their day. So, how do you get your students to see how what they learn in class applies to their lives outside of class? As physics teachers, we are fortunate that our students are continually surrounded by our content. How can we get them to notice the physics around them? How can we get them to make connections between the classroom content and their everyday lives? We would like to offer a few suggestions, Physics Take-Outs, to solve this problem.

  17. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. To minimize constraint violations during the time integration process, penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, a parallel implementation of the constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
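
    A minimal sketch of the two-stage staggered idea on a single rigid body (invented parameters, prescribed angular velocity): translation is advanced with an explicit central-difference step, while the Euler-parameter kinematics q_dot = (1/2) Omega(w) q are advanced with an implicit trapezoidal solve, which keeps the quaternion norm constant because Omega is skew-symmetric.

        import numpy as np

        def omega(w):
            """4x4 skew matrix so that q_dot = 0.5 * omega(w) @ q."""
            wx, wy, wz = w
            return np.array([[0., -wx, -wy, -wz],
                             [wx,  0.,  wz, -wy],
                             [wy, -wz,  0.,  wx],
                             [wz,  wy, -wx,  0.]])

        dt, steps = 0.01, 1000
        x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])   # translational state
        q = np.array([1.0, 0.0, 0.0, 0.0])              # Euler parameters
        w = np.array([0.0, 0.0, 2.0])                   # prescribed angular velocity
        g = np.array([0.0, 0.0, -9.81])

        for _ in range(steps):
            # Stage 1 (explicit central difference) for translation.
            v_half = v + 0.5 * dt * g
            x = x + dt * v_half
            v = v_half + 0.5 * dt * g
            # Stage 2 (implicit trapezoidal) for orientation: solve
            # (I - dt/4 * Omega) q_new = (I + dt/4 * Omega) q_old.
            A = np.eye(4) - 0.25 * dt * omega(w)
            b = (np.eye(4) + 0.25 * dt * omega(w)) @ q
            q = np.linalg.solve(A, b)

        print("final |q| =", np.linalg.norm(q))         # stays at 1 for this update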

  18. Were there evolutionary advantages to premenstrual syndrome?

    PubMed

    Gillings, Michael R

    2014-09-01

    Premenstrual syndrome (PMS) affects up to 80% of women, often leading to significant personal, social and economic costs. When apparently maladaptive states are widespread, they sometimes confer a hidden advantage, or did so in our evolutionary past. We suggest that PMS had a selective advantage because it increased the chance that infertile pair bonds would dissolve, thus improving the reproductive outcomes of women in such partnerships. We confirm predictions arising from the hypothesis: PMS has high heritability; gene variants associated with PMS can be identified; animosity exhibited during PMS is preferentially directed at current partners; and behaviours exhibited during PMS may increase the chance of finding a new partner. Under this view, the prevalence of PMS might result from genes and behaviours that are adaptive in some societies, but are potentially less appropriate in modern cultures. Understanding this evolutionary mismatch might help depathologize PMS, and suggests solutions, including the choice to use cycle-stopping contraception.

  19. Were there evolutionary advantages to premenstrual syndrome?

    PubMed Central

    Gillings, Michael R

    2014-01-01

    Premenstrual syndrome (PMS) affects up to 80% of women, often leading to significant personal, social and economic costs. When apparently maladaptive states are widespread, they sometimes confer a hidden advantage, or did so in our evolutionary past. We suggest that PMS had a selective advantage because it increased the chance that infertile pair bonds would dissolve, thus improving the reproductive outcomes of women in such partnerships. We confirm predictions arising from the hypothesis: PMS has high heritability; gene variants associated with PMS can be identified; animosity exhibited during PMS is preferentially directed at current partners; and behaviours exhibited during PMS may increase the chance of finding a new partner. Under this view, the prevalence of PMS might result from genes and behaviours that are adaptive in some societies, but are potentially less appropriate in modern cultures. Understanding this evolutionary mismatch might help depathologize PMS, and suggests solutions, including the choice to use cycle-stopping contraception. PMID:25469168

  20. Sustainable competitive advantage for accountable care organizations.

    PubMed

    Macfarlane, Michael Alex

    2014-01-01

    In the current period of health industry reform, accountable care organizations (ACOs) have emerged as a new model for the delivery of high-quality and cost-effective healthcare. However, few ACOs operate in direct competition with one another, and the accountable care business model has yet to present a means of continually developing new marginal value for patients and network partners. With value-based purchasing and patient consumerism strengthening as market forces, ACOs must build organizational sustainability and competitive advantage to meet the value demands set by customers and competitors. This essay proposes a strategy, adapted from the disciplines of agile software development and Lean product development, through which ACOs can engage internal and external customers in the development of new products that will provide sustainability and competitive advantage to the organization by decreasing waste in development, promoting specialized knowledge, and closely targeting customer value.

  1. [Internet research methods: advantages and challenges].

    PubMed

    Liu, Yi; Tien, Yueh-Hsuan

    2009-12-01

    Compared to traditional research methods, using the Internet to conduct research offers a number of advantages to the researcher, which include increased access to sensitive issues and vulnerable / hidden populations; decreased data entry time requirements; and enhanced data accuracy. However, Internet research also presents certain challenges to the researcher. In this article, the advantages and challenges of Internet research methods are discussed in four principal issue areas: (a) recruitment, (b) data quality, (c) practicality, and (d) ethics. Nursing researchers can overcome problems related to sampling bias and data truthfulness using creative methods; resolve technical problems through collaboration with other disciplines; and protect participants' privacy, confidentiality and data security by maintaining a high level of vigilance. Once such issues have been satisfactorily addressed, the Internet should open a new window for Taiwan nursing research.

  2. Adaptive memory: animacy processing produces mnemonic advantages.

    PubMed

    VanArsdall, Joshua E; Nairne, James S; Pandeirada, Josefa N S; Blunt, Janell R

    2013-01-01

    It is adaptive to remember animates, particularly animate agents, because they play an important role in survival and reproduction. Yet, surprisingly, the role of animacy in mnemonic processing has received little direct attention in the literature. In two experiments, participants were presented with pronounceable nonwords and properties characteristic of either living (animate) or nonliving (inanimate) things. The task was to rate the likelihood that each nonword-property pair represented a living thing or a nonliving object. In Experiment 1, a subsequent recognition memory test for the nonwords revealed a significant advantage for the nonwords paired with properties of living things. To generalize this finding, Experiment 2 replicated the animate advantage using free recall. These data demonstrate a new phenomenon in the memory literature - a possible mnemonic tuning for animacy - and add to growing data supporting adaptive memory theory. PMID:23261948

  3. Sustainable competitive advantage for accountable care organizations.

    PubMed

    Macfarlane, Michael Alex

    2014-01-01

    In the current period of health industry reform, accountable care organizations (ACOs) have emerged as a new model for the delivery of high-quality and cost-effective healthcare. However, few ACOs operate in direct competition with one another, and the accountable care business model has yet to present a means of continually developing new marginal value for patients and network partners. With value-based purchasing and patient consumerism strengthening as market forces, ACOs must build organizational sustainability and competitive advantage to meet the value demands set by customers and competitors. This essay proposes a strategy, adapted from the disciplines of agile software development and Lean product development, through which ACOs can engage internal and external customers in the development of new products that will provide sustainability and competitive advantage to the organization by decreasing waste in development, promoting specialized knowledge, and closely targeting customer value. PMID:25154124

  4. The selective advantage of crypsis in mice.

    PubMed

    Vignieri, Sacha N; Larson, Joanna G; Hoekstra, Hopi E

    2010-07-01

    The light color of mice that inhabit the sandy dunes of Florida's coast has served as a textbook example of adaptation for nearly a century, despite the fact that the selective advantage of crypsis has never been directly tested or quantified in nature. Using plasticine mouse models of light and dark color, we demonstrate a strong selective advantage for mice that match their local background substrate. Further, our data suggest that stabilizing selection maintains color matching within a single habitat, as models that are either lighter or darker than their local environment are selected against. These results provide empirical evidence in support of the hypothesis that visual hunting predators shape color patterning in Peromyscus mice and suggest a mechanism by which selection drives the pronounced color variation among populations. PMID:20163447

  5. Women and nurse executives. Finally, some advantages.

    PubMed

    Borman, J S

    1993-10-01

    How do chief nurse executives (CNEs) and chief executive officers (CEOs) compare on selected components of organizational socialization, and do differences exist between genders? To answer these questions, the author compared 127 male CEOs, 127 female CEOs, 232 female CNEs, and 117 male CNEs on their self-reported leadership styles, managerial values, and skills. The differences found between both genders and positions on all measures are largely advantageous to women and nurses in healthcare administration. PMID:8410326

  6. Sinus pericranii: advantages of MR imaging.

    PubMed

    Bigot, J L; Iacona, C; Lepreux, A; Dhellemmes, P; Motte, J; Gomes, H

    2000-10-01

    Sinus pericranii is a rare vascular anomaly involving an abnormal communication between the extracranial and intracranial circulations. A 3-year-old girl presented with a 2 x 2-cm, midline soft-tissue mass at the vertex. Plain skull films and CT using bone windows showed erosion of the parietal bones. MRI confirmed the clinical diagnosis by identifying communication of the vascular mass with the intracranial dural venous sinus. The advantages of MRI are discussed. PMID:11075608

  7. Women and nurse executives. Finally, some advantages.

    PubMed

    Borman, J S

    1993-10-01

    How do chief nurse executives (CNEs) and chief executive officers (CEOs) compare on selected components of organizational socialization, and do differences exist between genders? To answer these questions, the author compared 127 male CEOs, 127 female CEOs, 232 female CNEs, and 117 male CNEs on their self-reported leadership styles, managerial values, and skills. The differences found between both genders and positions on all measures are largely advantageous to women and nurses in healthcare administration.

  8. Transnasal endoscopy: Technical considerations, advantages and limitations.

    PubMed

    Atar, Mustafa; Kadayifci, Abdurrahman

    2014-02-16

    Transnasal endoscopy (TNE) is an upper endoscopy method which is performed by the nasal route using a thin endoscope less than 6 mm in diameter. The primary goal of this method is to improve patient tolerance and convenience of the procedure. TNE can be performed without sedation and thus eliminates the risks associated with general anesthesia. In this way, TNE decreases the cost and total duration of endoscopic procedures, while maintaining the image quality of standard caliber endoscopes, providing good results for diagnostic purposes. However, the small working channel of the ultra-thin endoscope used for TNE makes it difficult to use for therapeutic procedures except in certain conditions which require a thinner endoscope. Biopsy is possible with special forceps less than 2 mm in diameter. Recently, TNE has been used for screening endoscopy in Far East Asia, including Japan. In most controlled studies, TNE was found to have better patient tolerance when compared to unsedated endoscopy. Nasal pain is the most significant symptom associated with endoscopic procedures but can be reduced with nasal pretreatment. Despite the potential advantage of TNE, it is not common in Western countries, usually due to a lack of training in the technique and a lack of awareness of its potential advantages. This paper briefly reviews the technical considerations as well as the potential advantages and limitations of TNE with ultra-thin scopes.

  9. Assessing the binocular advantage in aided vision.

    PubMed

    Harrington, Lawrence K; McIntire, John P; Hopper, Darrel G

    2014-09-01

    Advances in microsensors, microprocessors, and microdisplays are creating new opportunities for improving vision in degraded environments through the use of head-mounted displays. Initially, the cutting-edge technology used in these new displays will be expensive. Inevitably, the cost of providing the additional sensor and processing required to support binocularity brings the value of binocularity into question. Several assessments comparing binocular, biocular, and monocular head-mounted displays for aided vision have concluded that the additional performance, if any, provided by binocular head-mounted displays does not justify the cost. The selection of a biocular [corrected] display for use in the F-35 is a current example of this recurring decision process. It is possible that the human binocularity advantage does not carry over to the aided vision application, but more likely the experimental approaches used in the past have been too coarse to measure its subtle but important benefits. Evaluating the value of binocularity in aided vision applications requires an understanding of the characteristics of both human vision and head-mounted displays. With this understanding, the value of binocularity in aided vision can be estimated and experimental evidence can be collected to confirm or reject the presumed binocular advantage, enabling improved decisions in aided vision system design. This paper describes four computational approaches--geometry of stereopsis, modulation transfer function area for stereopsis, probability summation, and binocular summation--that may be useful in quantifying the advantage of binocularity in aided vision.
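
    As a small worked example of one of the listed approaches (probability summation, under the usual independence assumption), the predicted binocular detection probability from a monocular detection probability p is 1 - (1 - p)^2:

        def probability_summation(p_mono: float) -> float:
            """Two independent chances to detect: 1 - (1 - p)^2."""
            return 1.0 - (1.0 - p_mono) ** 2

        for p in (0.5, 0.7, 0.9):
            print(f"monocular {p:.2f} -> predicted binocular {probability_summation(p):.3f}")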

  10. Explaining Asian Americans’ academic advantage over whites

    PubMed Central

    Hsin, Amy; Xie, Yu

    2014-01-01

    The superior academic achievement of Asian Americans is a well-documented phenomenon that lacks a widely accepted explanation. Asian Americans’ advantage in this respect has been attributed to three groups of factors: (i) socio-demographic characteristics, (ii) cognitive ability, and (iii) academic effort as measured by characteristics such as attentiveness and work ethic. We combine data from two nationally representative cohort longitudinal surveys to compare Asian-American and white students in their educational trajectories from kindergarten through high school. We find that the Asian-American educational advantage is attributable mainly to Asian students exerting greater academic effort and not to advantages in tested cognitive abilities or socio-demographics. We test explanations for the Asian–white gap in academic effort and find that the gap can be further attributed to (i) cultural differences in beliefs regarding the connection between effort and achievement and (ii) immigration status. Finally, we highlight the potential psychological and social costs associated with Asian-American achievement success. PMID:24799702

  11. The TOMS V9 Algorithm for OMPS Nadir Mapper Total Ozone: An Enhanced Design That Ensures Data Continuity

    NASA Astrophysics Data System (ADS)

    Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.

    2015-12-01

    The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and re-processing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 mainly concern information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced by improvements both in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and also by limitations in the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement is that the algorithm also apply to these discrete-band spectrometers, which date back to 1978. To achieve continuity for all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high-accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite being designed to use limited wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content; for example, SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and mention potential improvements which aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.
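
    For illustration, a one-step linear Rodgers optimal-estimation retrieval of the kind the abstract refers to is sketched below; the Jacobian, covariances, and measurements are invented numbers, not TOMS/OMPS values.

        import numpy as np

        np.random.seed(0)
        n_state, n_meas = 3, 4                    # coarse profile, few discrete bands
        K = np.random.rand(n_meas, n_state)       # linearized forward model dF/dx
        Sa = np.diag([0.5] * n_state)             # a priori covariance
        Se = np.diag([0.01] * n_meas)             # measurement-noise covariance
        x_a = np.ones(n_state)                    # a priori state
        x_true = np.array([1.2, 0.8, 1.1])
        y = K @ x_true + 0.01 * np.random.randn(n_meas)

        Se_inv, Sa_inv = np.linalg.inv(Se), np.linalg.inv(Sa)
        S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)    # retrieval covariance
        x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)  # retrieved state

        print("retrieved state:", np.round(x_hat, 3))
        print("1-sigma uncertainty:", np.round(np.sqrt(np.diag(S_hat)), 3))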

  12. Parallelization of the Red-Black Algorithm on Solving the Second-Order PN Transport Equation with the Hybrid Finite Element Method

    SciTech Connect

    Yaqi Wang; Cristian Rabiti; Giuseppe Palmiotti

    2011-06-01

    The Red-Black algorithm has been successfully applied to solving the second-order parity transport equation with the PN approximation in angle and the Hybrid Finite Element Method (HFEM) in space, i.e., the Variational Nodal Method (VNM) [1,2,3,4,5]. Any transport solution technique, including the Red-Black algorithm, needs to be parallelized in order to take advantage of the development of supercomputers with multiple processors for advanced modeling and simulation. To our knowledge, one attempt [6] was made to parallelize it, but it was devoted only to the z-axis planes in three-dimensional calculations. General parallelization of the Red-Black algorithm with spatial domain decomposition has not been reported in the literature. In this summary, we present our implementation of the parallelization of the Red-Black algorithm and its efficiency results.
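
    The parallelism that red-black ordering exposes can be seen in a toy setting: in the sketch below (a 2-D Laplace problem, unrelated to the transport solver itself), every red point depends only on black neighbors and vice versa, so each half-sweep is an embarrassingly parallel update.

        import numpy as np

        n = 64
        u = np.zeros((n, n)); u[0, :] = 1.0               # hot top boundary
        i, j = np.meshgrid(range(n), range(n), indexing="ij")
        red = (i + j) % 2 == 0
        interior = np.zeros_like(red); interior[1:-1, 1:-1] = True

        def half_sweep(u, colour):
            """Update one colour everywhere at once -- safe to do in parallel."""
            mask = colour & interior
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                          + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = avg[mask]

        for _ in range(2000):
            half_sweep(u, red)        # all red points (i + j even)
            half_sweep(u, ~red)       # then all black points

        print(f"centre value: {u[n // 2, n // 2]:.3f}")   # ~0.25 for this geometry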

  13. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis, only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely code-word and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
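
    For reference, a minimal hard-decision Viterbi decoder on a standard textbook trellis (the 4-state, rate-1/2 convolutional code with generators 7 and 5 in octal) is sketched below; the RMLD algorithm itself builds metric tables recursively rather than sweeping the full trellis, so this is only the baseline it improves on.

        # (state, input) -> (next_state, output_bits); the state holds the last two bits.
        TRANS = {}
        for s in range(4):
            for bit in (0, 1):
                b1, b0 = (s >> 1) & 1, s & 1
                out = (bit ^ b1 ^ b0, bit ^ b0)           # generators 111 and 101
                TRANS[(s, bit)] = ((bit << 1) | b1, out)

        def viterbi(received):
            """Hard-decision Viterbi decoding; received is a list of 2-bit tuples."""
            INF = float("inf")
            metric = [0.0, INF, INF, INF]                 # start in state 0
            paths = [[], [], [], []]
            for r in received:
                new_metric, new_paths = [INF] * 4, [None] * 4
                for s in range(4):
                    if metric[s] == INF:
                        continue
                    for bit in (0, 1):
                        ns, out = TRANS[(s, bit)]
                        m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                        if m < new_metric[ns]:            # keep the survivor per state
                            new_metric[ns], new_paths[ns] = m, paths[s] + [bit]
                metric, paths = new_metric, new_paths
            return paths[min(range(4), key=lambda s: metric[s])]

        # Encode 1,0,1,1, flip one channel bit, and decode it back.
        msg, state, tx = [1, 0, 1, 1], 0, []
        for bit in msg:
            state, out = TRANS[(state, bit)]
            tx.append(out)
        tx[1] = (tx[1][0] ^ 1, tx[1][1])                  # one channel error
        print("decoded:", viterbi(tx))                    # -> [1, 0, 1, 1]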

  14. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile Ad Hoc Networks play an increasingly important part in disaster relief, military battlefields and scientific exploration. However, routing difficulties are increasingly prominent owing to their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It elaborately designs the route-calculation method used by the protocol and the transmission mechanism for communication packets. In calculating optimal routes with the cuckoo search algorithm, adding a QoS constraint makes the routes found conform to specified bandwidth and time-delay requirements, and a balance can be obtained among computational cost, bandwidth and time delay. NS2 simulation software was used to test the protocol's performance under three scenarios and to validate the feasibility and validity of the CSAODV protocol. The results show that the CSAODV routing protocol adapts to changes in network topology better than AODV: it effectively improves the packet delivery fraction, reduces network transmission delay, reduces the extra burden that control information places on the network, and improves routing efficiency.

  15. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization formation algorithm is proposed. It restrains bad performers effectively based on consideration of the global experience of the evaluator, and evaluates the direct trust relation between two grid nodes accurately by rationally consulting the previous trust value. It also consults and improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship, and combines it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake transaction attacks and badmouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.

  16. Learning tensegrity locomotion using open-loop control signals and coevolutionary algorithms.

    PubMed

    Iscen, Atil; Caluwaerts, Ken; Bruce, Jonathan; Agogino, Adrian; SunSpiral, Vytas; Tumer, Kagan

    2015-01-01

    Soft robots offer many advantages over traditional rigid robots. However, soft robots can be difficult to control with standard control methods. Fortunately, evolutionary algorithms can offer an elegant solution to this problem. Instead of creating controls to handle the intricate dynamics of these robots, we can simply evolve the controls using a simulation to provide an evaluation function. In this article, we show how such a control paradigm can be applied to an emerging field within soft robotics: robots based on tensegrity structures. We take the model of the Spherical Underactuated Planetary Exploration Robot ball (SUPERball), an icosahedron tensegrity robot under production at NASA Ames Research Center, develop a rolling locomotion algorithm, and study the learned behavior using an accurate model of the SUPERball simulated in the NASA Tensegrity Robotics Toolkit. We first present the historical-average fitness-shaping algorithm for coevolutionary algorithms to speed up learning while favoring robustness over optimality. Second, we use a distributed control approach by coevolving open-loop control signals for each controller. Being simple and distributed, open-loop controllers can be readily implemented on SUPERball hardware without the need for sensor information or precise coordination. We analyze signals of different complexities and frequencies. Among the learned policies, we take one of the best and use it to analyze different aspects of the rolling gait, such as lengths, tensions, and energy consumption. We also discuss the correlation between the signals controlling different parts of the tensegrity robot. PMID:25951199
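
    A loose sketch of historical-average fitness shaping, on an invented two-population toy problem (not the SUPERball controller): each member's fitness is the running mean of every noisy team evaluation it has participated in, which rewards members that perform robustly across many partners rather than in one lucky pairing.

        import random

        random.seed(3)
        amps = [random.uniform(0, 2) for _ in range(8)]    # population 1
        freqs = [random.uniform(0, 2) for _ in range(8)]   # population 2
        history = {("a", i): [] for i in range(8)}
        history.update({("f", i): [] for i in range(8)})

        def team_score(a, f):                              # noisy toy objective, peak at (1.0, 0.5)
            return -((a - 1.0) ** 2 + (f - 0.5) ** 2) + random.gauss(0, 0.05)

        for gen in range(100):
            for i in range(8):                             # every amp plays at least once
                j = random.randrange(8)
                s = team_score(amps[i], freqs[j])
                history[("a", i)].append(s); history[("f", j)].append(s)
            for j in range(8):                             # every freq plays at least once
                i = random.randrange(8)
                s = team_score(amps[i], freqs[j])
                history[("a", i)].append(s); history[("f", j)].append(s)
            fit_a = lambda i: sum(history[("a", i)]) / len(history[("a", i)])
            fit_f = lambda j: sum(history[("f", j)]) / len(history[("f", j)])
            # Replace the worst member of each population with a mutated best member.
            wa, ba = min(range(8), key=fit_a), max(range(8), key=fit_a)
            wf, bf = min(range(8), key=fit_f), max(range(8), key=fit_f)
            amps[wa] = amps[ba] + random.gauss(0, 0.1); history[("a", wa)] = []
            freqs[wf] = freqs[bf] + random.gauss(0, 0.1); history[("f", wf)] = []

        print(f"best amplitude {amps[ba]:.2f}, best frequency {freqs[bf]:.2f}")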

  17. Taking human life.

    PubMed

    Brock, Dan W

    1985-07-01

    Alan Donagan's position regarding the morality of taking innocent human life, that it is impermissible regardless of the wishes of the victim, is criticized by Brock who argues for a rights-based alternative. His argument appeals to the nature of persons' actual interest in life and gives them an additional element of control which they lack if a nonwaivable moral duty not to kill prevails. The author rejects Donagan's view that stopping a life-sustaining treatment, even when a competent patient has consented, is morally wrong and that there is no moral difference between killing and allowing to die. A rights-based position permits stopping treatment of incompetent patients based on what the patient would have wanted or what is in his or her best interest, and allows the withholding of treatment from a terminally ill person, with the patient's consent and for a benevolent motive, to be evaluated as morally different from killing that patient.

  18. Efficient Implementation of the Backpropagation Algorithm in FPGAs and Microcontrollers.

    PubMed

    Ortega-Zamorano, Francisco; Jerez, Jose M; Urda Munoz, Daniel; Luque-Baena, Rafael M; Franco, Leonardo

    2016-09-01

    The well-known backpropagation learning algorithm is implemented on a field-programmable gate array (FPGA) board and a microcontroller, focusing on obtaining efficient implementations in terms of resource usage and computational speed. The algorithm was implemented in both cases using a training/validation/testing scheme in order to avoid overfitting problems. For the FPGA implementation, a new neuron representation that drastically reduces resource usage was introduced by combining the input and first hidden layer units in a single module. Further, a time-division multiplexing scheme was implemented for carrying out product computations, taking advantage of the built-in digital signal processor cores. In both implementations, the floating-point data type representation normally used in a personal computer (PC) has been changed to a more efficient one based on a fixed-point scheme, reducing system memory variable usage and leading to an increase in computation speed. The results show that the modifications proposed produced a clear increase in computation speed in comparison with the standard PC-based implementation, demonstrating the usefulness of the intrinsic parallelism of FPGAs in neurocomputational tasks and the suitability of both implementations of the algorithm for application to real-world problems. PMID:26277004
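
    The fixed-point idea can be sketched in a few lines (the Q4.12 format below is illustrative, not the paper's exact word length): weights and activations become scaled integers, and each product is rescaled with a shift, so multiply-accumulate needs only integer hardware.

        FRAC_BITS = 12
        SCALE = 1 << FRAC_BITS                        # Q4.12: 12 fractional bits

        def to_fixed(x: float) -> int:
            """Quantize a float to a Q4.12 integer."""
            return int(round(x * SCALE))

        def fixed_mul(a: int, b: int) -> int:
            """Multiply two Q4.12 numbers; shift the double-width product back."""
            return (a * b) >> FRAC_BITS

        def to_float(x: int) -> float:
            return x / SCALE

        w, act = 0.37, -1.25                          # weight and activation
        exact = w * act
        fixed = to_float(fixed_mul(to_fixed(w), to_fixed(act)))
        print(f"float: {exact:.6f}  fixed: {fixed:.6f}  error: {abs(exact - fixed):.2e}")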

  19. An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies

    PubMed Central

    Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia

    2015-01-01

    The differential evolution algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE suffers from premature convergence, especially DE/best/1/bin. In order to take advantage of the direction guidance information of the best individual in DE/best/1/bin while avoiding being trapped in local optima, an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE, is proposed in this paper. The EDE algorithm integrates an initialization technique, opposition-based learning initialization, for improving the initial solution quality; a new combined mutation strategy composed of DE/current/1/bin together with DE/pbest/1/bin for accelerating standard DE and preventing DE from clustering around the global best individual; and a perturbation scheme for further avoiding premature convergence. In addition, we also introduce two linear time-varying functions, which are used to decide which solution search equation is chosen at the phases of mutation and perturbation, respectively. Experimental results on twenty-five benchmark functions show that EDE is far better than standard DE. In further comparisons, EDE is compared with five other state-of-the-art approaches, and the results show that EDE is still superior or at least equal to these methods on most of the benchmark functions. PMID:26609304
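
    For orientation, a sketch of the standard DE/best/1/bin baseline that EDE builds on is given below (the EDE additions -- opposition-based initialization, the combined mutation strategy, and the perturbation scheme -- are omitted); the sphere function is a stand-in objective.

        import numpy as np

        rng = np.random.default_rng(42)
        NP, DIM, F, CR, GENS = 20, 10, 0.5, 0.9, 300
        f = lambda x: np.sum(x ** 2)                     # stand-in objective

        pop = rng.uniform(-5, 5, (NP, DIM))
        fit = np.array([f(x) for x in pop])

        for _ in range(GENS):
            best = pop[np.argmin(fit)]
            for i in range(NP):
                r1, r2 = rng.choice([j for j in range(NP) if j != i], 2, replace=False)
                mutant = best + F * (pop[r1] - pop[r2])  # DE/best/1 mutation
                cross = rng.random(DIM) < CR
                cross[rng.integers(DIM)] = True          # binomial crossover, >= 1 gene
                trial = np.where(cross, mutant, pop[i])
                if f(trial) <= fit[i]:                   # greedy selection
                    pop[i], fit[i] = trial, f(trial)

        print("best value found:", fit.min())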

  20. Computational Discovery of Materials Using the Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Avendaño-Franco, Guillermo; Romero, Aldo

    Our current ability to model physical phenomena accurately, the increase in computational power and better algorithms are the driving forces behind the computational discovery and design of novel materials, allowing for virtual characterization before their realization in the laboratory. We present the implementation of a novel firefly algorithm, a population-based algorithm for global optimization, for searching the structure/composition space. This novel computation-intensive approach naturally takes advantage of concurrency and targeted exploration while still keeping enough diversity. We apply the new method to both periodic and non-periodic structures, and we present the implementation challenges and solutions used to improve efficiency. The implementation makes use of computational materials databases and network analysis to optimize the search and gain insight into the geometric structure of local minima on the energy landscape. The method has been implemented in our software PyChemia, an open-source package for materials discovery. We acknowledge the support of DMREF-NSF 1434897 and the Donors of the American Chemical Society Petroleum Research Fund for partial support of this research under Contract 54075-ND10.
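
    A generic sketch of the firefly update rule (not the PyChemia implementation) is shown below: each candidate moves toward every brighter one with an attractiveness that decays with squared distance, plus an annealed random step, which is what balances targeted exploration against diversity.

        import numpy as np

        rng = np.random.default_rng(11)
        N, DIM, GENS = 15, 4, 100
        beta0, gamma, alpha = 1.0, 1.0, 0.05
        f = lambda x: np.sum(x ** 2)                     # stand-in "energy" to minimize

        X = rng.uniform(-2, 2, (N, DIM))
        for _ in range(GENS):
            bright = np.array([f(x) for x in X])         # lower energy = brighter
            for i in range(N):
                for j in range(N):
                    if bright[j] < bright[i]:            # move i toward brighter j
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=DIM)
            alpha *= 0.97                                # anneal the random walk

        print("best energy found:", min(f(x) for x in X))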

  1. Comparison Between Four Detection Algorithms for GEO Objects

    NASA Astrophysics Data System (ADS)

    Yanagisawa, T.; Uetsuhara, M.; Banno, H.; Kurosaki, H.; Kinoshita, D.; Kitazawa, Y.; Hanada, T.

    2012-09-01

    Four detection algorithms for GEO objects are being developed under a collaboration between Kyushu University, IHI Corporation and JAXA. Each algorithm is designed to process CCD images to detect GEO objects. The first is the PC-based stacking method, which has been developed at JAXA since 2000. Numerous CCD images are used to detect faint GEO objects below the limiting magnitude of a single CCD image. Sub-images are cropped from many CCD images to fit the movement of the objects, and a median image of all the sub-images is then created. Although this method has the ability to detect faint objects, the analysis is time-consuming. The second is the line-identifying technique, which also uses many CCD frames and finds any series of objects arrayed on a straight line from the first frame to the last frame. This analyzes data faster than the stacking method, but cannot detect objects as faint as the stacking method can. The third is the robust stacking method developed by IHI Corporation, which uses the average instead of the median to reduce analysis time. It has the same analysis speed as the line-identifying technique and better capability for detecting faint objects. The fourth is the FPGA-based stacking method, which uses binarized images and a new algorithm installed on an FPGA board that reduces analysis time to about one-thousandth. All four algorithms analyzed the same sets of data to evaluate their advantages and disadvantages. By comparing their analysis times and results, an optimal usage of these algorithms is considered.
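
    A toy version of the stacking idea (synthetic frames, assumed known drift rate): shift each frame to cancel the object's motion, then median-combine, so the sub-noise object accumulates while noise and star streaks are rejected.

        import numpy as np

        rng = np.random.default_rng(7)
        n_frames, size, rate = 30, 64, 1                 # object drifts 1 px/frame
        frames = rng.normal(0, 1.0, (n_frames, size, size))
        for t in range(n_frames):
            frames[t, 32, (10 + rate * t) % size] += 1.0 # object below single-frame noise peaks

        # Shift each frame to cancel the assumed motion, then median-combine.
        aligned = np.stack([np.roll(frames[t], -rate * t, axis=1)
                            for t in range(n_frames)])
        stacked = np.median(aligned, axis=0)

        peak = np.unravel_index(np.argmax(stacked), stacked.shape)
        print("object recovered at:", peak)              # expect (32, 10)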

  2. Does Medicare Advantage Cost Less Than Traditional Medicare?

    PubMed

    Biles, Brian; Casillas, Giselle; Guterman, Stuart

    2016-01-01

    The costs of providing benefits to enrollees in private Medicare Advantage (MA) plans are slightly less, on average, than what traditional Medicare spends per beneficiary in the same county. However, MA plans that are able to keep their costs comparatively low are concentrated in a fairly small number of U.S. counties. In the 25 counties where the cost differences between MA plans and traditional Medicare are largest, MA plans spent a total of $5.2 billion less than what traditional Medicare would have been expected to spend on the same beneficiaries, with health maintenance organizations (HMOs) accounting for all of that difference. In the rest of the country, MA plans spent $4.8 billion above the expected costs under traditional Medicare. Broad determinations about the relative efficiency of MA plans and traditional Medicare can therefore be misleading, as they fail to take into account local conditions and individual plans' performance.

  3. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    SciTech Connect

    Chao, R.M.; Ko, S.H.; Lin, I.H.; Pai, F.S.; Chang, C.C.

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second-order polynomial formula, which converges faster than existing MPPT algorithms. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
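
    A sketch of the quadratic step described above (the PV curve and numbers are invented, not the paper's hardware): fit a parabola through the three most recent (duty cycle, power) samples and move to its vertex.

        def quadratic_mppt_step(d1, p1, d2, p2, d3, p3):
            """Duty cycle at the vertex of the parabola through three samples."""
            denom = (d1 - d2) * (d1 - d3) * (d2 - d3)
            a = (d3 * (p2 - p1) + d2 * (p1 - p3) + d1 * (p3 - p2)) / denom
            b = (d3 ** 2 * (p1 - p2) + d2 ** 2 * (p3 - p1) + d1 ** 2 * (p2 - p3)) / denom
            return -b / (2 * a)                          # vertex of a*d^2 + b*d + c

        pv_power = lambda d: 60 * d - 60 * d ** 4        # toy PV curve, peak near d = 0.63

        duties = [0.3, 0.5, 0.7]                         # three previous duty cycles
        for _ in range(4):
            powers = [pv_power(d) for d in duties]
            d_next = quadratic_mppt_step(duties[0], powers[0], duties[1], powers[1],
                                         duties[2], powers[2])
            duties = duties[1:] + [d_next]
            print(f"next duty cycle: {d_next:.4f}")      # converges toward ~0.630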

  4. Enforced Clonality Confers a Fitness Advantage

    PubMed Central

    Martínková, Jana; Klimešová, Jitka

    2016-01-01

    In largely clonal plants, splitting of a maternal plant into potentially independent plants (ramets) is usually spontaneous; however, such fragmentation also occurs in otherwise non-clonal species due to application of external force. This process might play an important yet largely overlooked role for otherwise non-clonal plants by providing a mechanism to regenerate after disturbance. Here, in a 5-year garden experiment on two short-lived, otherwise non-clonal species, Barbarea vulgaris and Barbarea stricta, we compared the fitness of plants fragmented by simulated disturbance (“enforced ramets”) both with plants that contemporaneously originated from seed and with individuals unscathed by the disturbance event. Because the ability to regrow from fragments is related to plant age and stored reserves, we compared the effects of disturbance applied during three different ontogenetic stages of the plants. In B. vulgaris, enforced ramet fitness was higher than the measured fitness values of both uninjured plants and plants established from seed after the disturbance. This advantage decreased with increasing plant age at the time of fragmentation. In B. stricta, enforced ramet fitness was lower than or similar to fitness of uninjured plants and plants grown from seed. Our results likely reflect the habitat preferences of the study species, as B. vulgaris occurs in anthropogenic, disturbed habitats where body fragmentation is more probable and enforced clonality thus more advantageous than in the more natural habitats preferred by B. stricta. Generalizing from our results, we see that increased fitness yielded by enforced clonality would confer an evolutionary advantage in the face of disturbance, especially in habitats where a seed bank has not been formed, e.g., during invasion or colonization. Our results thus imply that enforced clonality should be taken into account when studying population dynamics and life strategies of otherwise non-clonal species in disturbed habitats.

  5. Tightly Coupled Multiphysics Algorithm for Pebble Bed Reactors

    SciTech Connect

    HyeongKae Park; Dana Knoll; Derek Gaston; Richard Martineau

    2010-10-01

    We have developed a tightly coupled multiphysics simulation tool for the pebble-bed reactor (PBR) concept, a type of Very High-Temperature gas-cooled Reactor (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation Environment library, and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly with a Newton-based approach. Expensive Jacobian matrix formation is alleviated via the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to minimize Krylov iterations. Motivation for the work is provided via analysis and numerical experiments on simpler multiphysics reactor models. We then provide details of the physical models and numerical methods in PRONGHORN. Finally, PRONGHORN's algorithmic capability is demonstrated on a number of PBR test cases.
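
    The Jacobian-free building block can be sketched independently of PRONGHORN: the Krylov solver only needs Jacobian-vector products, which a finite difference of the residual supplies without ever forming the Jacobian. The toy two-equation system below is invented for illustration.

        import numpy as np

        def residual(u):                                 # toy nonlinear system F(u) = 0
            return np.array([u[0] ** 2 + u[1] - 3.0,
                             u[0] + u[1] ** 2 - 5.0])

        def jv(F, u, v, eps=1e-7):
            """Matrix-free Jacobian-vector product: (F(u + eps*v) - F(u)) / eps."""
            return (F(u + eps * v) - F(u)) / eps

        u = np.array([1.0, 1.0])
        v = np.array([0.5, -0.25])
        J_exact = np.array([[2 * u[0], 1.0],
                            [1.0, 2 * u[1]]])            # analytic Jacobian, for comparison

        print("matrix-free Jv:", jv(residual, u, v))
        print("exact       Jv:", J_exact @ v)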

  6. Cropping and noise resilient steganography algorithm using secret image sharing

    NASA Astrophysics Data System (ADS)

    Juarez-Sandoval, Oswaldo; Fierro-Radilla, Atoany; Espejel-Trujillo, Angelina; Nakano-Miyatake, Mariko; Perez-Meana, Hector

    2015-03-01

    This paper proposes an image steganography scheme in which a secret image is hidden in a cover image using a secret image sharing (SIS) scheme. Taking advantage of the fault-tolerant property of (k,n)-threshold SIS, in which any k of n shares (k≤n) recover the secret data without ambiguity, the proposed steganography algorithm becomes resilient to cropping and impulsive noise contamination. Among the many SIS schemes proposed until now, Lin and Chan's scheme is selected due to its lossless recovery of a large amount of secret data. The proposed scheme is evaluated from several points of view, such as the imperceptibility of the stegoimage with respect to its original cover image and the robustness of hidden data to cropping and impulsive noise contamination. The evaluation results show the high quality of the secret image extracted from the stegoimage even when it has suffered more than 20% cropping or high-density noise contamination.
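
    To show the fault-tolerance mechanism, the sketch below implements plain Shamir (k,n)-threshold sharing over GF(257) on a single byte -- the idea underlying SIS schemes, though not Lin and Chan's exact construction: any k shares reconstruct the secret, so losing shares to cropping is harmless.

        import random

        P = 257                                          # prime field, holds one byte

        def make_shares(secret, k, n):
            """Split one byte into n shares, any k of which recover it."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, poly(x)) for x in range(1, n + 1)]

        def recover(shares):
            """Lagrange interpolation at x = 0 over GF(P)."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        random.seed(5)
        shares = make_shares(173, k=3, n=5)              # one secret pixel value
        print("three shares:", recover(shares[1:4]))     # -> 173
        print("another three:", recover([shares[0], shares[2], shares[4]]))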

  7. Establishing a competitive advantage through quality management.

    PubMed

    George, R J

    1996-06-01

    The successful dentist of the future will establish a sustainable competitive advantage in the marketplace by recognising that patients undergoing dental treatment cannot see the result before purchase, and that they therefore look for signs of service quality to reduce uncertainty. Thus the successful dentist will implement a quality programme that recognises not only that quality is defined by meeting patients' needs and expectations, but also that quality service is fundamental to successful business strategy. Finally, the successful dentist of the future will realise that the pursuit of quality is a never-ending process which requires leadership by example.

  8. Using information networks for competitive advantage.

    PubMed

    Rothenberg, R L

    1995-01-01

    Although the healthcare "information superhighway" has received considerable attention, the use of information technology to create a sustainable competitive advantage is not new to other industries. Economic survival in the new world of managed care may depend on a healthcare delivery system's ability to use network-based communications technologies to differentiate itself in the market, especially through cost savings and demonstration of desirable outcomes. The adaptability of these technologies can help position healthcare organizations to break the paradigms of the past and thrive in a market environment that stresses coordination, efficiency, and quality in various settings.

  9. Complexity, Competitive Intelligence and the "First Mover" Advantage

    NASA Astrophysics Data System (ADS)

    Fellman, Philip Vos; Post, Jonathan Vos

    In the following paper we explore some of the ways in which competitive intelligence and game theory can be employed to assist firms in deciding whether or not to undertake international market diversification and whether or not there is an advantage to being a market leader or a market follower overseas. In attempting to answer these questions, we take a somewhat unconventional approach. We first examine how some of the most recent advances in the physical and biological sciences can contribute to the ways in which we understand how firms behave. Subsequently, we propose a formal methodology for competitive intelligence. While space considerations here do not allow for a complete game-theoretic treatment of competitive intelligence and its use with respect to understanding first and second mover advantage in firm internationalization, that treatment can be found in its entirety in the on-line proceedings of the 6th International Conference on Complex Systems at http://knowledgetoday.org/wiki/indec.php/ICCS06/89

  10. 2015: Rural Medicare Advantage Enrollment Update.

    PubMed

    Finegan, Chance; Ullrich, Fred; Mueller, Keith

    2015-07-01

    Key Findings. (1) Rural enrollment in Medicare Advantage (MA) and other prepaid plans increased by 6.8 percent between March 2014 and March 2015 to 2.1 million members, or 21.2 percent of all rural residents eligible for Medicare. This compares to a national enrollment in MA and other prepaid plans of 31.1 percent (16.7 million) of enrollees. (2) Rural enrollment in Health Maintenance Organization (HMO) plans (including point-of-service, or POS, plans), Preferred Provider Organization (PPO) plans, and other pre-paid plans (including Medicare Cost and Program of All-Inclusive Care for the Elderly Plans) all increased by 5-13 percent. (3) Enrollment in private fee-for-service (PFFS) plans continued to decline (decreasing nationally by 15.8 percent and 12.1 percent in rural counties over the period March 2014-2015). Only eight states showed an increase in PFFS plan enrollment. Five states experienced decreases of 50 percent or more. (4) The five states with the highest percentages of rural beneficiaries enrolled in a Medicare Advantage plan are Minnesota (51.8 percent), Hawaii (39.4 percent), Pennsylvania (36.2 percent), Wisconsin (35.5 percent), and New York (31.5 percent).

  11. Rural Medicare Advantage Plan Payment in 2015.

    PubMed

    Kemper, Leah; Barker, Abigail R; McBride, Timothy D; Mueller, Keith

    2015-12-01

    Payment to Medicare Advantage (MA) plans was fundamentally altered in the Patient Protection and Affordable Care Act of 2010 (ACA). MA plans now operate under a new formula for county-level payment area benchmarks, and in 2012 began receiving quality-based bonus payments. The Medicare Advantage Quality Bonus Payment Demonstration expanded the bonus payments to most MA plans through 2014; however, with the end of the demonstration, bonus payments have been reduced for intermediate-quality MA plans. This brief examines the impact that these changes in MA baseline payment are having on MA plans and beneficiaries in rural and urban areas. Key Data Findings. (1) Payments to plans in rural areas were 3.9 percent smaller under ACA payment policies in 2015 than they would have been in the absence of the ACA. For plans in urban areas, the payments were 8.8 percent smaller than they would have been. These figures were determined using hypothetical pre-ACA and actual ACA-mandated benchmarks for 2015. (2) MA plans in rural areas received an average annual bonus payment of $326.77 per enrollee in 2014, but only $63.76 per enrollee in 2015, with the conclusion of the demonstration. (3) In 2014, 92 percent of rural MA beneficiaries were in a plan that received quality-based bonus payments under the demonstration, while in March 2015, 56 percent of rural MA beneficiaries were in a plan that was eligible for quality-based bonus payments.

  12. Testosterone, territoriality, and the 'home advantage'.

    PubMed

    Neave, Nick; Wolfson, Sandy

    2003-02-01

    The consistently better performance seen by teams in various sporting contexts when playing at home is referred to as the 'home advantage'. Various explanations have been put forward to account for this robust phenomenon, though none has yet focussed on possible hormonal factors. In an initial study, we showed that salivary testosterone levels in soccer players were significantly higher before a home game than an away game. In a second study involving a different group of soccer players, this finding was replicated over two home games, two away games, and three training sessions. Perceived rivalry of the opposing team was important, as testosterone levels were higher before playing an 'extreme' rival than a 'moderate' rival. Self-reported measures of mood in both studies were not linked to testosterone level. The present results corroborate and extend earlier findings on the relationships between testosterone, territoriality, and dominance in human competitive encounters and further suggest an important role for testosterone in the home advantage seen in various team sports.

  13. Relatively fast! Efficiency advantages of comparative thinking.

    PubMed

    Mussweiler, Thomas; Epstude, Kai

    2009-02-01

    Comparison is a ubiquitous process in information processing. Seven studies examine whether, how, and when comparative thinking increases the efficiency of judgment and choice. Studies 1-4 demonstrate that procedurally priming participants to engage in more vs. less comparison influences how they process information about a target. Specifically, they retrieve less information about the target (Studies 1A, 1B), think more about an information-rich standard (Study 2) about which they activate judgment-relevant information (Study 3), and use this information to compensate for missing target information (Study 4). Studies 2-5 demonstrate the ensuing efficiency advantages. Participants who are primed on comparative thinking are faster in making a target judgment (Studies 2A, 2B, 4, 5) and have more residual processing capacity for a secondary task (Study 5). Studies 6 and 7 establish two boundary conditions by demonstrating that comparative thinking holds efficiency advantages only if target and standard are partly characterized by alignable features (Study 6) that are difficult to evaluate in isolation (Study 7). These findings indicate that comparative thinking may often constitute a useful mechanism to simplify information processing.

  14. [Biomarkers for liver fibrosis: advances, advantages and disadvantages].

    PubMed

    Cequera, A; García de León Méndez, M C

    2014-01-01

    Liver cirrhosis in Mexico is one of the most important causes of death in persons between the ages of 25 and 50 years. One of the reasons for therapeutic failure is the lack of knowledge about the molecular mechanisms that cause liver disorder and make it irreversible. One of its prevalent anatomical characteristics is an excessive deposition of fibrous tissue that takes different forms depending on etiology and disease stage. Liver biopsy, traditionally regarded as the gold standard of fibrosis staging, has been brought into question over the past decade, resulting in the proposal for developing non-invasive technologies based on different, but complementary, approaches: a biological one that takes the serum levels of products arising from the fibrosis into account, and a more physical one that evaluates scarring of the liver by methods such as ultrasound and magnetic resonance elastography; some of the methods were originally studied and validated in patients with hepatitis C. There is great interest in determining non-invasive markers for the diagnosis of liver fibrosis, since at present there is no panel or parameter efficient and reliable enough for diagnostic use. In this paper, we describe the biomarkers that are currently being used for studying liver fibrosis in humans, their advantages and disadvantages, as well as the implementation of new-generation technologies and the evaluation of their possible use in the diagnosis of fibrosis.

  15. Concomitant vs. Comparative Advantages: Sufficient vs. Necessary Conditions.

    ERIC Educational Resources Information Center

    Flaningam, Carl D.

    1981-01-01

    Discusses the concomitant advantages case in academic debate. Examines the distinction between concomitant and comparative advantages and the implications of this distinction for concomitant advantages as a form of argument. (PD)

  16. An automatic geo-spatial object recognition algorithm for high resolution satellite images

    NASA Astrophysics Data System (ADS)

    Ergul, Mustafa; Alatan, A. Aydın.

    2013-10-01

    This paper proposes a novel automatic geo-spatial object recognition algorithm for high resolution satellite imagery. The proposed algorithm consists of two main steps: a hypothesis generation step with a local feature-based algorithm and a verification step with a shape-based approach. In the hypothesis generation step, a set of hypotheses for possible object locations is generated with a Bag of Visual Words type approach, aiming at fewer missed detections at the cost of more false positives. In the verification step, the foreground objects are first extracted by a semi-supervised image segmentation algorithm that utilizes the detection results from the previous step, and then shape descriptors of the segmented objects are used to prune out the false positives. Based on simulation results, it can be argued that the proposed algorithm achieves both high precision and high recall rates by taking advantage of both the local feature-based and the shape-based object detection approaches. The strength of the method lies in its ability to minimize the false alarm rate, and in the fact that most object shapes carry characteristic, discriminative information about their identity and functionality.

  17. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, to demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel-produced meshes to conventionally paved meshes in terms of mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  18. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were genuinely related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, owing to the increase in spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, which uses the trust-score algorithm and finds linkfarms by employing name servers, is the most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
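
    Since the variants above all build on Kleinberg's original hub/authority iteration, a minimal sketch of that baseline may help readers experiment; the adjacency matrix and iteration count are illustrative, and the linkfarm detection and trust-score extensions described in the record are not shown (Python with NumPy):

        import numpy as np

        def hits(adj, iters=50):
            # adj[i, j] = 1 if page i links to page j
            hubs = np.ones(adj.shape[0])
            auths = np.ones(adj.shape[0])
            for _ in range(iters):
                auths = adj.T @ hubs              # good authorities are cited by good hubs
                auths /= np.linalg.norm(auths)
                hubs = adj @ auths                # good hubs cite good authorities
                hubs /= np.linalg.norm(hubs)
            return hubs, auths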

  19. Effects of Virtual Education on Academic Culture: Perceived Advantages and Disadvantages

    ERIC Educational Resources Information Center

    Jefferson, Renee N.; Arnold, Liz W.

    2009-01-01

    The perceived advantages and disadvantages of courses taught in online and face-to-face learning environments were explored for students taking an accounting and a data collection and analysis course. Both courses were taught in a face-to-face learning environment at the main or satellite campus. It was hypothesized that there would be…

  1. Children's Understanding of Certainty and Evidentiality: Advantage of Grammaticalized Forms over Lexical Alternatives

    ERIC Educational Resources Information Center

    Matsui, Tomoko; Miura, Yui

    2009-01-01

    In verbal communication, the hearer takes advantage of the linguistic expressions of certainty and evidentiality to assess how committed the speaker might be to the truth of the informational content of the utterance. Little is known, however, about the precise developmental mechanism of this ability. In this chapter, we approach the question by…

  2. Development of a Distributed Routing Algorithm for a Digital Telephony Switch.

    NASA Astrophysics Data System (ADS)

    Al-Wakeel, Sami Saleh

    This research has developed a distributed routing algorithm and distributed control software to be implemented in modular digital telephone switching systems. The routing algorithm allows the routing information and the computer calculations for determining the route of switch calls to be divided evenly among the individual units of the digital switch, thus eliminating the need for centralized, complex routing logic. In addition, a "routing language" for the storage of routing information has been developed that both compresses the routing information, to conserve computer memory, and speeds up the search through it. A fully modular microprocessor-based digital switch that takes advantage of the routing algorithm was designed. The switch design achieves several objectives, including the reduction of digital telephone switch cost by taking full advantage of VLSI technology, enabling manufacture by developing countries. By utilizing the technical advantages of the distributed routing algorithm, the modular switch can easily reach a capacity of 400,000 lines without degrading call processing or exceeding the system loading limits. Distributed control software was also designed to provide the main software protocols and routines necessary for a fully modular telephone switch. The design has several advantages over conventional stored-program-control switches, since it eliminates the need for centralized control software and allows the switch units to operate in any signaling environment. As a result, the possibility of total system breakdown is reduced, the switch software can be easily tested or modified, and the switch can interface with any of the currently available communication technologies, namely cable, VHF, satellite, R-1 or R-2 trunks, and trunked radio phones. A second development of this research is a mathematical scheme to evaluate the performance of microprocessor-based digital telephone switches. The scheme evaluates various

  3. Improved algorithm for processing grating-based phase contrast interferometry image sets

    SciTech Connect

    Marathe, Shashidhara; Assoufid, Lahsen; Xiao, Xianghui; Ham, Kyungmin; Johnson, Warren W.; Butler, Leslie G.

    2014-01-15

    Grating-based X-ray and neutron interferometry tomography using phase-stepping methods generates large data sets. An improved algorithm is presented for solving for the parameters used to calculate transmission, differential phase contrast, and dark-field images. The method takes advantage of the vectorization inherent in high-level languages such as Mathematica and MATLAB and can solve a 16 × 1k × 1k data set in less than a second. In addition, the algorithm can function with partial data sets. This is demonstrated by processing a 16-step grating data set using only a subset of the original data, chosen without any restriction. We have also calculated the reduced chi-square of the fit, noted the effect of grating support structural elements on the differential phase contrast image, and explored expanded basis set representations to mitigate their impact.

  4. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
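
    As a rough illustration of the kind of pivot modification discussed above (the exact rule used in interior-point codes differs, and the clamp threshold here is our assumption), a dense Cholesky factorization that guards against tiny or negative pivots can be sketched as follows; in an interior-point code the input would be the increasingly ill-conditioned normal-equations matrix formed at each iteration:

        import numpy as np

        def modified_cholesky(M, delta=1e-12):
            # Cholesky with pivot clamping: tiny or negative pivots are raised to
            # a floor so the factorization always completes (a generic rule).
            n = M.shape[0]
            L = np.zeros_like(M, dtype=float)
            for j in range(n):
                d = M[j, j] - L[j, :j] @ L[j, :j]
                d = max(d, delta * max(1.0, abs(M[j, j])))   # the pivot modification
                L[j, j] = np.sqrt(d)
                for i in range(j + 1, n):
                    L[i, j] = (M[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
            return L   # L @ L.T reproduces M up to the modified pivots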

  5. Modified hyperspheres algorithm to trace homotopy curves of nonlinear circuits composed by piecewise linear modelled devices.

    PubMed

    Vazquez-Leal, H; Jimenez-Fernandez, V M; Benhammouda, B; Filobello-Nino, U; Sarmiento-Reyes, A; Ramirez-Pinero, A; Marin-Hernandez, A; Huerta-Chua, J

    2014-01-01

    We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight line equation.
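
    The final point, that a PWL trajectory reduces the solution-finding stage to the parametric straight line equation, can be illustrated with a short sketch (function and variable names are ours; the path is assumed to be given as (lambda, x) vertices produced by the tracking algorithm, with operating points at lambda = 1):

        import numpy as np

        def solutions_on_pwl_path(path):
            # path: list of (lam, x) vertices of the traced homotopy trajectory.
            # Because the trajectory of a PWL circuit is itself piecewise linear,
            # linear interpolation between bracketing vertices is exact.
            sols = []
            for (l0, x0), (l1, x1) in zip(path[:-1], path[1:]):
                if (l0 - 1.0) * (l1 - 1.0) <= 0 and l0 != l1:  # segment crosses lam = 1
                    t = (1.0 - l0) / (l1 - l0)                 # parametric straight line
                    sols.append(np.asarray(x0) + t * (np.asarray(x1) - np.asarray(x0)))
            return sols

        # illustrative path whose lam-coordinate crosses 1 three times,
        # i.e. three operating points on a single trajectory
        path = [(0.0, [0.0]), (0.7, [1.4]), (1.3, [2.0]), (0.8, [2.6]), (1.2, [3.4])]
        print(solutions_on_pwl_path(path))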

  6. Estimation of Distribution Algorithm Based on Probabilistic Grammar with Latent Annotations

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko; Iba, Hitoshi

    Evolutionary algorithms (EAs) are optimization methods based on the concept of natural evolution. Recently, growing interest has been observed in applying estimation of distribution techniques to EAs (EDAs). Although probabilistic context free grammar (PCFG) is a widely used model in EDAs for program evolution, it is not able to estimate building blocks from promising solutions because it relies on the context freedom assumption. We have proposed a new program evolution algorithm based on PCFG with latent annotations, which weaken the context freedom assumption. Computational experiments on two subjects (the royal tree problem and the DMAX problem) demonstrate that our new approach is highly effective compared with prior approaches, including conventional GP.

  7. Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sémon, P.; Yee, Chuck-Hou; Haule, Kristjan; Tremblay, A.-M. S.

    2014-08-01

    The solution of a generalized impurity model lies at the heart of electronic structure calculations with dynamical mean field theory. In the strongly correlated regime, the method of choice for solving the impurity model is the hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB). Enhancements to the CT-HYB algorithm are critical for bringing new physical regimes within reach of current computational power. Taking advantage of the fact that the bottleneck in the algorithm is a product of hundreds of matrices, we present optimizations based on the introduction and combination of two concepts of more general applicability: (a) skip lists and (b) fast rejection of proposed configurations based on matrix bounds. Considering two very different test cases with d electrons, we find speedups of ˜25 up to ˜500 compared to the direct evaluation of the matrix product. Even larger speedups are likely with f electron systems and with clusters of correlated atoms.
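
    The caching idea behind the skip-list optimization, that changing one factor of a long matrix product should not force recomputation of the whole chain, can be illustrated with a binary product tree; this is a simplification of ours (skip lists additionally support cheap insertion and deletion of factors, and the matrix-bound fast rejection is not shown):

        import numpy as np

        class ProductChain:
            # Caches all partial products of M[0] @ M[1] @ ... in a binary tree,
            # so replacing one factor costs O(log n) matrix multiplies.
            def __init__(self, mats):
                d = mats[0].shape[0]
                self.size = 1
                while self.size < len(mats):
                    self.size *= 2                       # pad to a power of two
                self.tree = [None] * (2 * self.size)
                for i in range(self.size):
                    self.tree[self.size + i] = mats[i] if i < len(mats) else np.eye(d)
                for i in range(self.size - 1, 0, -1):
                    self.tree[i] = self.tree[2 * i] @ self.tree[2 * i + 1]

            def replace(self, k, mat):
                i = self.size + k
                self.tree[i] = mat
                i //= 2
                while i:                                 # walk up to the root
                    self.tree[i] = self.tree[2 * i] @ self.tree[2 * i + 1]
                    i //= 2

            def product(self):
                return self.tree[1]                      # ordered product of the chain

        # replacing one of 100 factors now costs ~log2(128) = 7 multiplies
        chain = ProductChain([np.random.rand(4, 4) for _ in range(100)])
        chain.replace(37, np.eye(4))
        full = chain.product()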

  8. Public health advantages of biological insect controls.

    PubMed Central

    Bosch, R

    1976-01-01

    Biological control is not new; it is simply newly appreciated. This renewed appreciation stems from the widespread insecticide treadmill, which is largely a product of insecticide disruption of the balance of insect communities. Biological control is a natural phenomenon: the regulation of plant and animal numbers by natural enemies. In this broad sense, biological control is vital to public health because it keeps the myriad insect species from out-competing us. It also has direct public health advantages, as when natural enemies are manipulated to control disease-vectoring insects. Disruption of biological control by insecticides and the resulting pesticide treadmill have serious public health implications. One is the increased pesticide load in the environment. The other is the acceleration of pesticide resistance in disease-vectoring insects. The treadmill and its associated hazards will not abate so long as chemical control dominates our pest management strategy. PMID:976223

  9. The kinematic advantage of electric cars

    NASA Astrophysics Data System (ADS)

    Meyn, Jan-Peter

    2015-11-01

    The acceleration of a common car with a turbocharged diesel engine is compared with that of the same model fitted with an electric motor, in terms of kinematics. Starting from a state of rest, the electric car reaches a distant spot earlier than the diesel car, even though the latter has a better specification for engine power and average acceleration from 0 to 100 km h-1. A three-phase model of acceleration as a function of time fits the data of the electric car accurately. The first phase is a quadratic growth of acceleration in time. It is shown that the tenfold higher coefficient of the first phase accounts for most of the kinematic advantage of the electric car.
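
    The first phase of the model can be written out explicitly. Assuming the phase starts from rest at t = 0 with acceleration coefficient k, integrating twice gives (in LaTeX notation)

        a(t) = k\,t^{2}, \qquad
        v(t) = \int_0^t a(\tau)\,d\tau = \tfrac{1}{3}\,k\,t^{3}, \qquad
        x(t) = \int_0^t v(\tau)\,d\tau = \tfrac{1}{12}\,k\,t^{4},

    so a tenfold larger k yields a tenfold larger distance x(t) at every instant of this phase, which is the head start described above.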

  10. The academic advantage: gender disparities in patenting.

    PubMed

    Sugimoto, Cassidy R; Ni, Chaoqun; West, Jevin D; Larivière, Vincent

    2015-01-01

    We analyzed gender disparities in patenting by country, technological area, and type of assignee using the 4.6 million utility patents issued between 1976 and 2013 by the United States Patent and Trademark Office (USPTO). Our analyses of fractionalized inventorships demonstrate that women's rate of patenting has increased from 2.7% of total patenting activity to 10.8% over the nearly 40-year period. Our results show that, in every technological area, female patenting is proportionally more likely to occur in academic institutions than in corporate or government environments. However, women's patents have a lower technological impact than men's, and that gap is wider in the case of academic patents. We also provide evidence that patents to which women contributed, and in particular academic women, are associated with a higher number of International Patent Classification (IPC) codes and co-inventors than those of men. The policy implications of these disparities and academic setting advantages are discussed.

  11. Advantages and Challenges of Superconducting Accelerators

    NASA Astrophysics Data System (ADS)

    Krischel, Detlef

    After a short review of the history of high-energy superconducting (SC) accelerators for ion beam therapy (IBT), an overview is given of the material properties and technical developments that enable the use of SC components in a medical accelerator for full-body cancer treatment. The design concept and the assembly of a commercially available SC cyclotron for proton therapy (PT) are described, and the potential advantages of applying superconductivity are assessed. The discussion includes the first years of operating experience with regard to cryogenic and magnetic performance, automated beam control, and maintenance aspects. An outlook is given on alternative machine concepts for protons only or for heavier ions. Finally, it is discussed whether the application of superconductivity might be expanded in the future to a broader range of subsystems of clinical IBT accelerators, such as SC magnets for transfer beam lines or gantries.

  12. The POP Program: the patient education advantage.

    PubMed

    Claeys, M; Mosher, C; Reesman, D

    1998-01-01

    In 1992, a preoperative education program was developed for total joint replacement patients in a small community hospital. The goals of the program were to increase educational opportunities for the joint replacement patients, prepare patients for hospitalization, plan for discharge needs, and increase efficiency of the orthopaedic program. Since 1992, approximately 600 patients have attended the education program. Outcomes have included positive responses from patients regarding their preparedness for surgery, increased participation in their plan of care, coordinated discharge planning, decreased length of stay, and progression across the continuum of care. A multidisciplinary approach to preparing patients for surgery allows for a comprehensive and efficient education program. Marketing of successful programs can enhance an institution's competitive advantage and help ensure the hospital's viability in the current health care arena.

  13. The SSRIs: advantages, disadvantages and differences.

    PubMed

    Lane, R; Baldwin, D; Preskorn, S

    1995-01-01

    The highly specific mechanism of action of the selective serotonin re-uptake inhibitors (SSRIs) confers advantages on this group, relative to other classes of antidepressant, and thus represents a significant advance in the pharmacotherapy of depression. Whilst their clinical efficacy is equivalent to that of the tricyclic antidepressants (TCAs), the SSRIs have a greatly reduced risk of toxicity in overdose and have been shown to be significantly better tolerated. Specifically, the SSRIs have a low incidence of anticholinergic effects and are essentially devoid of cardiotoxicity. This tolerability advantage may be of significance in improving compliance and hence cost-effectiveness of treatment, particularly in the long term. Despite a lack of sedative effect, there is evidence that SSRIs are more effective than TCAs in the treatment of depression with anxiety. In addition, the SSRIs have been shown to be effective in obsessive-compulsive disorder, panic disorder and social phobia. Although superior efficacy has not been demonstrated for any one of the SSRIs, the structural diversity of this group is reflected in emerging qualitative and quantitative differences in side effects and drug interaction potential. Many of these differential features reflect important variations in pharmacological and pharmacokinetic profiles, including dosage flexibility, washout times, dose-plasma level proportionality and age-related changes in plasma levels. Fluoxetine, for example, has a considerably longer half-life than other SSRIs and side effects and drug interactions may thus occur for an extended period following discontinuation of treatment. Significant differences in the potential for drug interactions in this group are related to their relative potency for inhibition of important liver drug-metabolising enzymes including CYPIID6, CYPIA2 and CYPIIIA4. Large comparative clinical trials of the different SSRIs have yet to be undertaken; however, the differences that have

  14. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. Addressing the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we divide context into public context and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm; we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about mobile users. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389
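
    A minimal version of the GA-over-tree idea can be sketched with off-the-shelf tools: a small genetic loop tuning decision-tree hyperparameters by cross-validated accuracy (the dataset, gene encoding, and GA settings below are our illustrative assumptions, not the paper's):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # hypothetical stand-in for the paper's mobile-user data (4 classes)
        X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                                   n_classes=4, random_state=0)
        rng = np.random.default_rng(0)
        DEPTHS, LEAVES = np.arange(2, 16), np.arange(1, 31)

        def fitness(g):  # gene g = (max_depth, min_samples_leaf)
            tree = DecisionTreeClassifier(max_depth=int(g[0]),
                                          min_samples_leaf=int(g[1]), random_state=0)
            return cross_val_score(tree, X, y, cv=5).mean()

        pop = np.column_stack([rng.choice(DEPTHS, 12), rng.choice(LEAVES, 12)])
        for _ in range(20):
            scores = np.array([fitness(g) for g in pop])
            parents = pop[np.argsort(scores)[-6:]]                # selection
            kids = parents[rng.integers(6, size=(6, 2)), [0, 1]]  # uniform crossover
            flip = rng.random(kids.shape) < 0.2                   # mutation
            kids[:, 0] = np.where(flip[:, 0], rng.choice(DEPTHS, 6), kids[:, 0])
            kids[:, 1] = np.where(flip[:, 1], rng.choice(LEAVES, 6), kids[:, 1])
            pop = np.vstack([parents, kids])
        best = pop[np.argmax([fitness(g) for g in pop])]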

  15. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 60's, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655
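
    To make the review's headline result concrete, here is a minimal DE/rand/1/bin loop run on one standard benchmark, the Rastrigin function; population size, F, and CR are conventional defaults rather than values from the paper:

        import numpy as np

        def differential_evolution(f, bounds, pop=30, iters=200, F=0.8, CR=0.9, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, float).T
            dim = len(lo)
            X = rng.uniform(lo, hi, (pop, dim))
            fX = np.array([f(x) for x in X])
            for _ in range(iters):
                for i in range(pop):
                    a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                           replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)      # DE/rand/1 mutation
                    cross = rng.random(dim) < CR                   # binomial crossover
                    cross[rng.integers(dim)] = True
                    trial = np.where(cross, mutant, X[i])
                    ft = f(trial)
                    if ft <= fX[i]:                                # greedy selection
                        X[i], fX[i] = trial, ft
            best = fX.argmin()
            return X[best], fX[best]

        # Rastrigin, a standard multimodal benchmark; global minimum 0 at the origin
        rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
        print(differential_evolution(rastrigin, [(-5.12, 5.12)] * 5))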

  1. Atmospheric Correction Algorithm for Hyperspectral Imagery

    SciTech Connect

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  2. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is to use cross correlation, taking the highest value of the correlation image. The shift resolution is then given in whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is then significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Subpixel shifts of the original images can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is time consuming because every candidate shift requires new calculations. The algorithm, being highly parallelizable, is very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images with a first approach based on FFT correlation (pixel resolution), and then refine it at subpixel resolution using the technique described above; we consider this a 'brute force' method. We present a benchmark of the algorithm consisting of the first pixel-resolution approach followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in a few steps. The program was executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
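
    The 'brute force' scheme described above, coarse pixel-level correlation followed by phase-ramp subpixel refinement, can be sketched in a few lines of NumPy; sign conventions and the search schedule are illustrative and should be validated on images with a known shift:

        import numpy as np

        def coarse_shift(a, b):
            # whole-pixel estimate from the peak of the FFT cross correlation
            corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            peak = np.unravel_index(corr.argmax(), corr.shape)
            return [(p + n // 2) % n - n // 2 for p, n in zip(peak, corr.shape)]

        def refine(a, b, center, span=1.0, steps=21):
            # score candidate subpixel shifts by applying a linear phase ramp to b
            fy = np.fft.fftfreq(a.shape[0])[:, None]
            fx = np.fft.fftfreq(a.shape[1])[None, :]
            B = np.fft.fft2(b)
            best, best_score = tuple(center), -np.inf
            for dy in np.linspace(center[0] - span, center[0] + span, steps):
                for dx in np.linspace(center[1] - span, center[1] + span, steps):
                    ramp = np.exp(-2j * np.pi * (fy * dy + fx * dx))
                    score = (a * np.fft.ifft2(B * ramp).real).sum()
                    if score > best_score:
                        best, best_score = (dy, dx), score
            return best  # repeat with a smaller span around best for finer resolution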

  3. Taking centre stage...

    NASA Astrophysics Data System (ADS)

    1998-11-01

    HAMLET (Highly Automated Multimedia Light Enhanced Theatre) was the star performance at the recent finals of the 'Young Engineer for Britain' competition, held at the Commonwealth Institute in London. This state-of-the-art computer-controlled theatre lighting system won the title 'Young Engineers for Britain 1998' for David Kelnar, Jonathan Scott, Ramsay Waller and John Wyllie (all aged 16) from Merchiston Castle School, Edinburgh. HAMLET replaces conventional manually-operated controls with a special computer program, and should find use in the thousands of small theatres, schools and amateur drama productions that operate with limited resources and without specialist expertise. The four students received a £2500 prize between them, along with £2500 for their school, and in addition they were invited to spend a special day with the Royal Engineers. A project designed to improve car locking systems enabled Ian Robinson of Durham University to take the 'Working in industry award' worth £1000. He was also given the opportunity of a day at sea with the Royal Navy. Other prizewinners with their projects included: Jun Baba of Bloxham School, Banbury (a cardboard armchair which converts into a desk and chair); Kobika Sritharan and Gemma Hancock, Bancroft's School, Essex (a rain warning system for a washing line); and Alistair Clarke, Sam James and Ruth Jenkins, Bishop of Llandaff High School, Cardiff (a mechanism to open and close the retractable roof of the Millennium Stadium in Cardiff). The two principal national sponsors of the competition, which is organized by the Engineering Council, are Lloyd's Register and GEC. Industrial companies, professional engineering institutions and educational bodies also provided national and regional prizes and support. During this year's finals, various additional activities took place, allowing the students to surf the Internet and navigate individual engineering websites on a network of computers. They also visited the

  4. [Decision on the rational algorithm in treatment of kidney cysts].

    PubMed

    Antonov, A V; Ishutin, E Iu; Guliev, R N

    2012-01-01

    The article presents an algorithm for the diagnosis and treatment of renal cysts and other fluid-filled lesions of the retroperitoneal space, based on an analysis of 270 case histories. The algorithm takes into account the achievements of modern medical technologies developed in recent years. Application of the proposed algorithm should increase the efficiency of diagnosis and the quality of treatment of patients with renal cysts.

  5. Navigation Algorithms for Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Huxel, Paul J.; Bishop, Robert H.

    2004-01-01

    The objective of the investigations is to develop navigation algorithms to support formation flying missions. In particular, we examine the advantages and concerns associated with the use of combinations of inertial and relative measurements, as well as address observability issues. In our analysis we consider the interaction between measurement types, update frequencies, and trajectory geometry and their cumulative impact on observability. Furthermore, we investigate how relative measurements affect inertial navigation in terms of algorithm performance.
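
    The observability interaction between inertial (absolute) and relative measurements can be seen in a toy one-dimensional, two-vehicle example of our own construction; the rank of the least-squares information matrix tells whether the combined measurements pin down the full state:

        import numpy as np

        # state s = [x1, x2]: positions of two vehicles on a line
        H_inertial = np.array([[1.0, 0.0]])    # inertial fix on vehicle 1
        H_relative = np.array([[-1.0, 1.0]])   # relative range x2 - x1

        def info_rank(rows):
            H = np.vstack(rows)
            return np.linalg.matrix_rank(H.T @ H)   # full rank = observable state

        print(info_rank([H_relative]))              # 1: relative-only is unobservable
        print(info_rank([H_relative, H_inertial]))  # 2: one inertial fix restores it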

  6. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
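
    The problem setting, a large block of algebraically smallest eigenpairs of a sparse Hermitian matrix, can be reproduced with SciPy's LOBPCG, one of the existing algorithms the paper compares against; the Laplacian test matrix and block size below are illustrative:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import lobpcg

        n, k = 2000, 50                     # matrix size, number of smallest eigenpairs
        A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # toy Laplacian

        rng = np.random.default_rng(0)
        X = rng.standard_normal((n, k))     # block of k starting vectors

        # largest=False asks for the algebraically smallest end of the spectrum
        vals, vecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)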

  7. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published results obtained manually by an expert.
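
    The PSO half of the hybrid is easy to sketch in isolation; below is a minimal global-best PSO (the snake model's image-energy objective is problem specific, so any test function can be plugged in as f; parameter values are conventional defaults, not the paper's):

        import numpy as np

        def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, float).T
            x = rng.uniform(lo, hi, (n, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()           # global best
            for _ in range(iters):
                r1, r2 = rng.random((2,) + x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()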

  8. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. In first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence. PMID:17649913

  9. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, while not completely unlabeled there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
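
    A stripped-down version of the idea, clamping the E-step responsibilities of samples whose class is known while soft-assigning the rest, is sketched below for a two-component exponential mixture; the label encoding and initialization are our assumptions, and the paper's four variants model richer missing-label mechanisms:

        import numpy as np

        def em_exponential_mixture(t, labels, n_iter=200):
            # labels[i] in {0, 1} when known, -1 when unknown
            rng = np.random.default_rng(0)
            pi = np.array([0.5, 0.5])                    # mixing proportions
            lam = rng.uniform(0.5, 2.0, size=2)          # exponential rates
            for _ in range(n_iter):
                dens = pi * lam * np.exp(-np.outer(t, lam))       # (n, 2)
                r = dens / dens.sum(axis=1, keepdims=True)        # E-step
                for k in (0, 1):
                    r[labels == k] = np.eye(2)[k]                 # clamp known labels
                nk = r.sum(axis=0)                                # M-step
                pi = nk / len(t)
                lam = nk / (r * t[:, None]).sum(axis=0)
            return pi, lam

        # toy data: two exponential groups with only 30% of labels observed
        rng = np.random.default_rng(1)
        t = np.concatenate([rng.exponential(1.0, 300), rng.exponential(5.0, 200)])
        labels = np.concatenate([np.zeros(300, int), np.ones(200, int)])
        labels[rng.random(500) > 0.3] = -1
        print(em_exponential_mixture(t, labels))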

  10. A Breeder Algorithm for Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Wang, S.; Ware, A. S.; Hirshman, S. P.; Spong, D. A.

    2003-10-01

    An optimization algorithm that combines the global parameter space search properties of a genetic algorithm (GA) with the local parameter search properties of a Levenberg-Marquardt (LM) algorithm is described. Optimization algorithms used in the design of stellarator configurations are often classified as either global (such as GA and differential evolution algorithm) or local (such as LM). While nonlinear least-squares methods such as LM are effective at minimizing a cost-function based on desirable plasma properties such as quasi-symmetry and ballooning stability, whether or not this is a local or global minimum is unknown. The advantage of evolutionary algorithms such as GA is that they search a wider range of parameter space and are not susceptible to getting stuck in a local minimum of the cost function. Their disadvantage is that in some cases the evolutionary algorithms are ineffective at finding a minimum state. Here, we describe the initial development of the Breeder Algorithm (BA). BA consists of a genetic algorithm outer loop with an inner loop in which each generation is refined using a LM step. Initial results for a quasi-poloidal stellarator optimization will be presented, along with a comparison to existing optimization algorithms.
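
    The outer-global/inner-local structure can be sketched generically; in this sketch the LM step is approximated by SciPy's bounded least-squares polish (method 'trf' rather than true LM, since LM does not handle bounds), and the blend crossover and mutation scheme is our choice rather than the paper's:

        import numpy as np
        from scipy.optimize import least_squares

        def breeder(residuals, bounds, pop=20, gens=15, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, float).T
            X = rng.uniform(lo, hi, (pop, len(lo)))
            cost = lambda x: float(residuals(x) @ residuals(x))
            for _ in range(gens):
                X = X[np.argsort([cost(x) for x in X])]          # rank by cost
                elite = X[: pop // 2]
                pa, pb = elite[rng.integers(len(elite), size=(2, pop - len(elite)))]
                w = rng.random((len(pa), 1))                     # blend crossover
                kids = np.clip(w * pa + (1 - w) * pb +
                               rng.normal(0, 0.05, pa.shape), lo, hi)
                # local refinement: one bounded least-squares polish per child
                kids = np.array([least_squares(residuals, k, bounds=(lo, hi)).x
                                 for k in kids])
                X = np.vstack([elite, kids])
            best = min(X, key=cost)
            return best, cost(best)

        # toy usage: Rosenbrock residuals inside a box
        res = lambda x: np.array([10 * (x[1] - x[0] ** 2), 1 - x[0]])
        print(breeder(res, [(-2, 2), (-2, 2)]))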

  11. Data classification with radial basis function networks based on a novel kernel density estimation algorithm.

    PubMed

    Oyang, Yen-Jen; Hwang, Shien-Ching; Ou, Yu-Yen; Chen, Chien-Yu; Chen, Zhi-Wei

    2005-01-01

    This paper presents a novel learning algorithm for efficient construction of the radial basis function (RBF) networks that can deliver the same level of accuracy as the support vector machines (SVMs) in data classification applications. The proposed learning algorithm works by constructing one RBF subnetwork to approximate the probability density function of each class of objects in the training data set. With respect to algorithm design, the main distinction of the proposed learning algorithm is the novel kernel density estimation algorithm that features an average time complexity of O(n log n), where n is the number of samples in the training data set. One important advantage of the proposed learning algorithm, in comparison with the SVM, is that the proposed learning algorithm generally takes far less time to construct a data classifier with an optimized parameter setting. This feature is of significance for many contemporary applications, in particular, for those applications in which new objects are continuously added into an already large database. Another desirable feature of the proposed learning algorithm is that the RBF networks constructed are capable of carrying out data classification with more than two classes of objects in one single run. In other words, unlike with the SVM, there is no need to resort to mechanisms such as one-against-one or one-against-all for handling datasets with more than two classes of objects. The comparison with SVM is of particular interest, because it has been shown in a number of recent studies that SVM generally are able to deliver higher classification accuracy than the other existing data classification algorithms. As the proposed learning algorithm is instance-based, the data reduction issue is also addressed in this paper. One interesting observation in this regard is that, for all three data sets used in data reduction experiments, the number of training samples remaining after a naive data reduction mechanism is
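
    The overall construction, one density estimate per class and classification by the largest prior-weighted density, with any number of classes handled in a single run, can be imitated with off-the-shelf tools; here SciPy's Gaussian KDE stands in for the paper's O(n log n) estimator and the RBF subnetworks built from it:

        import numpy as np
        from scipy.stats import gaussian_kde

        class DensityClassifier:
            # one density model per class; predict the class with the largest
            # prior-weighted density, for any number of classes in one run
            def fit(self, X, y):
                self.classes = np.unique(y)
                self.kdes = [gaussian_kde(X[y == c].T) for c in self.classes]
                self.priors = [np.mean(y == c) for c in self.classes]
                return self

            def predict(self, X):
                scores = np.stack([p * k(X.T)
                                   for p, k in zip(self.priors, self.kdes)])
                return self.classes[scores.argmax(axis=0)]

        # usage sketch with two Gaussian blobs in 2-D
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
        y = np.repeat([0, 1], 100)
        print(DensityClassifier().fit(X, y).predict(X[:5]))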

  12. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
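
    The nonlinear gain idea can be made concrete with a small example: a cubic scaling that passes small inputs with little distortion but saturates smoothly at the actuator limit. The boundary conditions used to fix the coefficients are our assumption, not the report's:

        import numpy as np

        def nonlinear_gain(u, u_max, y_max):
            # cubic y = a*u + b*u**3 with y(u_max) = y_max and dy/du(u_max) = 0,
            # so small inputs are scaled almost linearly while large ones
            # saturate smoothly at the motion-system limit
            a = 1.5 * y_max / u_max
            b = -0.5 * y_max / u_max ** 3
            return a * u + b * u ** 3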

  13. Advantages and disadvantages of pinless external fixation.

    PubMed

    Thomas, S R; Giele, H; Simpson, A H

    2000-12-01

    The AO pinless external fixator (PEF) uses trocar-tipped clamps to grip the outer tibial cortex rather than pins to transfix it. Its main advantage is to avoid further contamination of the medullary canal in open tibial fractures where a nail may subsequently be used. We tested the anatomical safety of this device and its effect on plastic surgical procedures compared with a standard unilateral external fixator (UEF). The PEF and UEF were placed on two amputated limbs which were then dissected. Structures at risk were traced on ten cadaver limbs. We found that important anatomical structures were endangered by the PEF and that safe zones could not always be defined. The UEF avoided these structures. Plastic surgical approaches were made more difficult by the PEF, which imposed limitations on local flap design and endangered the arterial perforators which supply them. We conclude that safety is compromised by the PEF because margins for error are small. It poses additional problems in soft tissue reconstruction and highlights the need for co-operation between plastic surgical and orthopaedic teams in the choice of fixation device.

  14. [Coronary revascularization by arterial bypasses: advantages, disadvantages].

    PubMed

    Bical, O; Deleuze, P; Sousa Uva, M

    1997-01-01

    Coronary vein grafts frequently become occluded or develop atherosclerotic lesions in the long term. In contrast, the internal mammary artery has a very satisfactory long-term patency rate. The use of an internal mammary artery on the LAD consequently increases the benefit of coronary surgery. The benefit of using two internal mammary arteries or other arterial grafts for coronary artery bypass surgery is more controversial. The advantages and disadvantages of the various coronary artery grafts are reported together with the clinical experience of several teams in this area. Coronary artery surgery should be reserved for patients in good general condition, who are likely to benefit from this type of revascularization. The right internal mammary artery is unsuitable for revascularization of the right coronary network, and the two internal mammary arteries must be used to revascularize the left coronary network in order to obtain a good result. However, surgeons must be aware of the limitations of coronary artery surgery, and these techniques should be used cautiously.

  16. Quantifying the Magnetic Advantage in Magnetotaxis

    PubMed Central

    Smith, M. J.; Sheehan, P. E.; Perry, L. L.; O'Connor, K.; Csonka, L. N.; Applegate, B. M.; Whitman, L. J.

    2006-01-01

    Magnetotactic bacteria are characterized by the production of magnetosomes, nanoscale particles of lipid-bilayer-encapsulated magnetite that act to orient the bacteria in magnetic fields. These magnetosomes allow magneto-aerotaxis, which is the motion of the bacteria along a magnetic field and toward preferred concentrations of oxygen. Magneto-aerotaxis has been shown to direct the motion of these bacteria downward toward sediments and microaerobic environments favorable for growth. Herein, we compare the magneto-aerotaxis of wild-type, magnetic Magnetospirillum magneticum AMB-1 with a nonmagnetic mutant we have engineered. Using an applied magnetic field and an advancing oxygen gradient, we have quantified the magnetic advantage in magneto-aerotaxis as a more rapid migration to preferred oxygen levels. Magnetic, wild-type cells swimming in an applied magnetic field more quickly migrate away from the advancing oxygen than either wild-type cells in a zero field or the nonmagnetic cells in any field. We find that the responses of the magnetic and mutant strains are well described by a relatively simple analytical model, an analysis of which indicates that the key benefit of magnetotaxis is an enhancement of a bacterium's ability to detect oxygen, not an increase in its average speed moving away from high oxygen concentrations. PMID:16714352

  17. Clinical advantages of carbon-ion radiotherapy

    NASA Astrophysics Data System (ADS)

    Tsujii, Hirohiko; Kamada, Tadashi; Baba, Masayuki; Tsuji, Hiroshi; Kato, Hirotoshi; Kato, Shingo; Yamada, Shigeru; Yasuda, Shigeo; Yanagi, Takeshi; Kato, Hiroyuki; Hara, Ryusuke; Yamamoto, Naotaka; Mizoe, Junetsu

    2008-07-01

    Carbon-ion radiotherapy (C-ion RT) possesses physical and biological advantages. It was started at NIRS in 1994 using the Heavy Ion Medical Accelerator in Chiba (HIMAC); since then more than 50 protocol studies have been conducted on almost 4000 patients with a variety of tumors. Clinical experiences have demonstrated that C-ion RT is effective in such regions as the head and neck, skull base, lung, liver, prostate, bone and soft tissues, and pelvic recurrence of rectal cancer, as well as for histological types including adenocarcinoma, adenoid cystic carcinoma, malignant melanoma and various types of sarcomas, against which photon therapy could be less effective. Furthermore, when compared with photon and proton RT, a significant reduction of overall treatment time and fractions has been accomplished without enhancing toxicities. Currently, the number of irradiation sessions per patient averages 13 fractions spread over approximately three weeks. This means that in a carbon therapy facility a larger number of patients than is possible with other modalities can be treated over the same period of time.

  18. Advantageous grain boundaries in iron pnictide superconductors

    PubMed Central

    Katase, Takayoshi; Ishimaru, Yoshihiro; Tsukamoto, Akira; Hiramatsu, Hidenori; Kamiya, Toshio; Tanabe, Keiichi; Hosono, Hideo

    2011-01-01

    High critical temperature superconductors have zero power consumption and could be used to produce ideal electric power lines. The principal obstacle in fabricating superconducting wires and tapes is grain boundaries—the misalignment of crystalline orientations at grain boundaries, which is unavoidable for polycrystals, largely deteriorates critical current density. Here we report that high critical temperature iron pnictide superconductors have advantages over cuprates with respect to these grain boundary issues. The transport properties through well-defined bicrystal grain boundary junctions with various misorientation angles (θGB) were systematically investigated for cobalt-doped BaFe2As2 (BaFe2As2:Co) epitaxial films fabricated on bicrystal substrates. The critical current density through bicrystal grain boundary (JcBGB) remained high (>1 MA cm−2) and nearly constant up to a critical angle θc of ∼9°, which is substantially larger than the θc of ∼5° for YBa2Cu3O7–δ. Even at θGB>θc, the decay of JcBGB was much slower than that of YBa2Cu3O7–δ. PMID:21811238

  19. Competitive advantages of Caedibacter-infected Paramecia.

    PubMed

    Kusch, Jürgen; Czubatinski, Lars; Wegmann, Silke; Hubner, Markus; Alter, Margret; Albrecht, Petra

    2002-03-01

    Intracellular bacteria of the genus Caedibacter limit the reproduction of their host, the freshwater ciliate Paramecium. Reproduction rates of infected strains of paramecia were significantly lower than those of genetically identical strains that had lost their parasites after treatment with an antibiotic. Interference competition occurs when infected paramecia release a toxic form of the parasitic bacterium that kills uninfected paramecia. In mixed cultures of infected and uninfected strains of either P. tetraurelia or P. novaurelia, the infected strains outcompeted the uninfected strains. Infection of new host paramecia seems to be rare. Infection of new hosts was not observed either in mixtures of infected with uninfected strains, or after incubation of paramecia with isolated parasites. The competitive advantages of the host paramecia, in combination with their vegetative reproduction, make infection of new hosts by the bacterial parasites unnecessary, and could be responsible for the continued existence of "killer paramecia" in nature. Caedibacter parasites are not a defensive adaptation. Feeding rates and reproduction of the predators Didinium nasutum (Ciliophora) and Amoeba proteus (Amoebozoa, Gymnamoebia) were not influenced by whether or not their paramecia prey were infected. Infection of the predators frequently occurred when they preyed on infected paramecia. Caedibacter-infected predators may influence competition between Paramecium strains by release of toxic parasites into the environment that are harmful to uninfected strains.

  20. The Academic Advantage: Gender Disparities in Patenting

    PubMed Central

    Sugimoto, Cassidy R.; Ni, Chaoqun; West, Jevin D.; Larivière, Vincent

    2015-01-01

    We analyzed gender disparities in patenting by country, technological area, and type of assignee using the 4.6 million utility patents issued between 1976 and 2013 by the United States Patent and Trademark Office (USPTO). Our analyses of fractionalized inventorships demonstrate that women’s rate of patenting has increased from 2.7% of total patenting activity to 10.8% over the nearly 40-year period. Our results show that, in every technological area, female patenting is proportionally more likely to occur in academic institutions than in corporate or government environments. However, women’s patents have a lower technological impact than those of men, and that gap is wider in the case of academic patents. We also provide evidence that patents to which women—and in particular academic women—contributed are associated with a higher number of International Patent Classification (IPC) codes and co-inventors than those of men. The policy implications of these disparities and academic setting advantages are discussed. PMID:26017626

  1. Thoracoscopy versus thoracotomy: indications and advantages.

    PubMed

    Weatherford, D A; Stephenson, J E; Taylor, S M; Blackhurst, D

    1995-01-01

    Although the diagnosis and treatment of intrathoracic diseases have been affected by the use of thoracoscopy, the indications and advantages of this procedure are poorly defined. To review the indications and results in a community practice, 52 consecutive cases of thoracoscopy were reviewed and the postoperative courses were compared to a control group of 43 simultaneous thoracotomies. Operative indications for thoracoscopy included investigation or treatment of a lung mass (n = 33), spontaneous pneumothorax (n = 10), mediastinal mass (n = 4), pleural effusion (n = 2), mesothelioma (n = 2), and a ruptured hemidiaphragm (n = 1). General endotracheal anesthesia was used in each case. Overall, thoracoscopy was successful in 40 cases (77%). Conversion to formal thoracotomy was required in 14 cases (27%) secondary to poor visualization or to aid in further dissection. Compared to thoracotomy, complication rates were lower (7.6% vs 16.2%), hospital stays were shorter (5.5 vs 8 days), ICU stays were shorter (0 vs 2 days), and pleural drainage times were shorter (2 vs 5 days) in the thoracoscopy group. In summary, 73% of the patients in this study who formerly would have undergone thoracotomy were successfully managed with thoracoscopy alone, with acceptable morbidity and mortality. These data define the indications, morbidity, and mortality of thoracoscopy and suggest that thoracoscopy may emerge as the procedure of choice in the diagnosis and management of many thoracic diseases.

  2. Vegetarian diets: what are the advantages?

    PubMed

    Leitzmann, Claus

    2005-01-01

    A growing body of scientific evidence indicates that wholesome vegetarian diets offer distinct advantages compared to diets containing meat and other foods of animal origin. The benefits arise from lower intakes of saturated fat, cholesterol and animal protein as well as higher intakes of complex carbohydrates, dietary fiber, magnesium, folic acid, vitamins C and E, carotenoids and other phytochemicals. Since vegetarians consume widely divergent diets, a differentiation between various types of vegetarian diets is necessary. Indeed, many contradictions and misunderstandings concerning vegetarianism are due to scientific data from studies without this differentiation. In the past, vegetarian diets have been described as being deficient in several nutrients including protein, iron, zinc, calcium, vitamins B12 and A, n-3 fatty acids and iodine. Numerous studies have demonstrated that the observed deficiencies are usually due to poor meal planning. Well-balanced vegetarian diets are appropriate for all stages of the life cycle, including children, adolescents, pregnant and lactating women, the elderly and competitive athletes. In most cases, vegetarian diets are beneficial in the prevention and treatment of certain diseases, such as cardiovascular disease, hypertension, diabetes, cancer, osteoporosis, renal disease and dementia, as well as diverticular disease, gallstones and rheumatoid arthritis. The reasons for choosing a vegetarian diet often go beyond health and well-being and include among others economical, ecological and social concerns. The influences of these aspects of vegetarian diets are the subject of the new field of nutritional ecology that is concerned with sustainable life styles and human development.

  3. Azelastine and fluticasone nasal spray: any advantage?

    PubMed

    2014-02-01

    Allergic rhinitis affects over 20% of the UK population. It can have a significant impact on quality of life and interferes with both attendance and performance at school and at work.1 Intranasal corticosteroids are widely recognised as the most effective symptomatic treatment available, but oral or intranasal new generation antihistamines are usually offered as first-line treatment for intermittent symptoms.1,2 Patients with moderate to severe allergic rhinitis may require a combination of drugs, and many patients only achieve limited control of their symptoms.3 Dymista is described as a novel intranasal formulation combining the antihistamine azelastine hydrochloride with the corticosteroid fluticasone propionate.3 It is licensed for the relief of symptoms of moderate to severe seasonal and perennial allergic rhinitis in adults and adolescents if monotherapy with either intranasal antihistamine or glucocorticoid is not considered sufficient.4 The manufacturer claims that compared with fluticasone or azelastine alone, Dymista is twice as effective (when placebo effect is excluded) in providing relief from both nasal and ocular symptoms, and leads to greater overall relief from nasal symptoms. It also claims that Dymista controls nasal symptoms up to 6 days faster than fluticasone.5 Here we consider the evidence for Dymista and whether it represents a significant advantage in the management of patients with allergic rhinitis.

  4. Advantages of a leveled commitment contracting protocol

    SciTech Connect

    Sandholm, T.W.; Lesser, V.R.

    1996-12-31

    In automated negotiation systems consisting of self-interested agents, contracts have traditionally been binding. Such contracts do not allow agents to efficiently accommodate future events. Game theory has proposed contingency contracts to solve this problem. Among computational agents, contingency contracts are often impractical due to large numbers of interdependent and unanticipated future events to be conditioned on, and because some events are not mutually observable. This paper proposes a leveled commitment contracting protocol that allows self-interested agents to efficiently accommodate future events by having the possibility of unilaterally decommitting from a contract based on local reasoning. A decommitment penalty is assigned to both agents in a contract: to be freed from the contract, an agent only pays this penalty to the other party. It is shown through formal analysis of several contracting settings that this leveled commitment feature in a contracting protocol increases Pareto efficiency of deals and can make contracts individually rational when no full commitment contract can. This advantage holds even if the agents decommit manipulatively.
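
    A minimal sketch of the decommitment rule at the heart of a leveled commitment protocol as described above: each agent later observes an outside offer and reasons locally about whether paying the penalty is worthwhile. The class, names and numbers below are illustrative, not the paper's formal model.

    ```python
    # Leveled commitment decision rule (illustrative): decommit only when the
    # outside offer, net of the penalty owed to the counterparty, beats
    # honoring the contract.

    from dataclasses import dataclass

    @dataclass
    class Contract:
        payoff: float   # agent's payoff if the contract is honored
        penalty: float  # fee paid to the counterparty on unilateral decommitment

    def should_decommit(contract: Contract, outside_offer: float) -> bool:
        return outside_offer - contract.penalty > contract.payoff

    c = Contract(payoff=10.0, penalty=3.0)
    print(should_decommit(c, outside_offer=12.0))  # False: 12 - 3 < 10
    print(should_decommit(c, outside_offer=14.0))  # True:  14 - 3 > 10
    ```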

  5. Azelastine and fluticasone nasal spray: any advantage?

    PubMed

    2014-02-01

    Allergic rhinitis affects over 20% of the UK population. It can have a significant impact on quality of life and interferes with both attendance and performance at school and at work.1 Intranasal corticosteroids are widely recognised as the most effective symptomatic treatment available, but oral or intranasal new generation antihistamines are usually offered as first-line treatment for intermittent symptoms.1,2 Patients with moderate to severe allergic rhinitis may require a combination of drugs, and many patients only achieve limited control of their symptoms.3 Dymista is described as a novel intranasal formulation combining the antihistamine azelastine hydrochloride with the corticosteroid fluticasone propionate.3 It is licensed for the relief of symptoms of moderate to severe seasonal and perennial allergic rhinitis in adults and adolescents if monotherapy with either intranasal antihistamine or glucocorticoid is not considered sufficient.4 The manufacturer claims that compared with fluticasone or azelastine alone, Dymista is twice as effective (when placebo effect is excluded) in providing relief from both nasal and ocular symptoms, and leads to greater overall relief from nasal symptoms. It also claims that Dymista controls nasal symptoms up to 6 days faster than fluticasone.5 Here we consider the evidence for Dymista and whether it represents a significant advantage in the management of patients with allergic rhinitis. PMID:24504481

  6. Competitive advantages of Caedibacter-infected Paramecia.

    PubMed

    Kusch, Jürgen; Czubatinski, Lars; Wegmann, Silke; Hubner, Markus; Alter, Margret; Albrecht, Petra

    2002-03-01

    Intracellular bacteria of the genus Caedibacter limit the reproduction of their host, the freshwater ciliate Paramecium. Reproduction rates of infected strains of paramecia were significantly lower than those of genetically identical strains that had lost their parasites after treatment with an antibiotic. Interference competition occurs when infected paramecia release a toxic form of the parasitic bacterium that kills uninfected paramecia. In mixed cultures of infected and uninfected strains of either P. tetraurelia or P. novaurelia, the infected strains outcompeted the uninfected strains. Infection of new host paramecia seems to be rare. Infection of new hosts was not observed either in mixtures of infected with uninfected strains, or after incubation of paramecia with isolated parasites. The competitive advantages of the host paramecia, in combination with their vegetative reproduction, make infection of new hosts by the bacterial parasites unnecessary, and could be responsible for the continued existence of "killer paramecia" in nature. Caedibacter parasites are not a defensive adaptation. Feeding rates and reproduction of the predators Didinium nasutum (Ciliophora) and Amoeba proteus (Amoebozoa, Gymnamoebia) were not influenced by whether or not their paramecia prey were infected. Infection of the predators frequently occurred when they preyed on infected paramecia. Caedibacter-infected predators may influence competition between Paramecium strains by release of toxic parasites into the environment that are harmful to uninfected strains. PMID:12022275

  7. Pharyngeal Packing during Rhinoplasty: Advantages and Disadvantages

    PubMed Central

    Razavi, Majid; Taghavi Gilani, Mehryar; Bameshki, Ali Reza; Behdani, Reza; Khadivi, Ehsan; Bakhshaee, Mahdi

    2015-01-01

    Introduction: Controversy remains as to the advantages and disadvantages of pharyngeal packing during septorhinoplasty. Our study investigated the effect of pharyngeal packing on postoperative nausea and vomiting and sore throat following this type of surgery or septorhinoplasty. Materials and Methods: This clinical trial was performed on 90 American Society of Anesthesiologists (ASA) I or II patients who were candidates for septorhinoplasty. They were randomly divided into two groups. Patients in the study group had received pharyngeal packing while those in the control group had not. The incidence of nausea and vomiting and sore throat based on the visual analog scale (VAS) was evaluated postoperatively in the recovery room as well as at 2, 6 and 24 hours. Results: The incidence of postoperative nausea and vomiting (PONV) was 12.3%, with no significant difference between the study and control groups. Sore throat was reported in 50.5% of cases overall (56.8% on pack group and 44.4% on control). Although the severity of pain was higher in the study group at all times, the incidence in the two groups did not differ significantly. Conclusion: The use of pharyngeal packing has no effect in reducing the incidence of nausea and vomiting and sore throat after surgery. Given that induced hypotension is used as the routine method of anesthesia in septorhinoplasty surgery, with a low incidence of hemorrhage and a high risk of unintended retention of pharyngeal packing, its routine use is not recommended for this procedure. PMID:26788486

  8. SR-71 Taking Off

    NASA Technical Reports Server (NTRS)

    1990-01-01

    One of three U.S. Air Force SR-71 reconnaissance aircraft originally retired from operational service and loaned to NASA for a high-speed research program retracts its landing gear after taking off from NASA's Ames-Dryden Flight Research Facility (later Dryden Flight Research Center), Edwards, California, on a 1990 research flight. One of the SR-71As was later returned to the Air Force for active duty in 1995. Data from the SR-71 high-speed research program will be used to aid designers of future supersonic/hypersonic aircraft and propulsion systems. Two SR-71 aircraft have been used by NASA as testbeds for high-speed and high-altitude aeronautical research. The aircraft, an SR-71A and an SR-71B pilot trainer aircraft, have been based here at NASA's Dryden Flight Research Center, Edwards, California. They were transferred to NASA after the U.S. Air Force program was cancelled. As research platforms, the aircraft can cruise at Mach 3 for more than one hour. For thermal experiments, this can produce heat soak temperatures of over 600 degrees Fahrenheit (F). This operating environment makes these aircraft excellent platforms to carry out research and experiments in a variety of areas -- aerodynamics, propulsion, structures, thermal protection materials, high-speed and high-temperature instrumentation, atmospheric studies, and sonic boom characterization. The SR-71 was used in a program to study ways of reducing sonic booms or over pressures that are heard on the ground, much like sharp thunderclaps, when an aircraft exceeds the speed of sound. Data from this Sonic Boom Mitigation Study could eventually lead to aircraft designs that would reduce the 'peak' overpressures of sonic booms and minimize the startling effect they produce on the ground. One of the first major experiments to be flown in the NASA SR-71 program was a laser air data collection system. It used laser light instead of air pressure to produce airspeed and attitude reference data, such as angle of

  9. Is Concentrated Advantage the Cause? The Relative Contributions of Neighborhood Advantage and Disadvantage to Educational Inequality

    ERIC Educational Resources Information Center

    Johnson, Odis, Jr.

    2013-01-01

    Supported by persistent educational inequality and growth of the field of neighborhood effects research, this meta-analysis investigates the relative association of neighborhood advantage and disadvantage to educational outcomes; the consistency of associations across different educational indicators; and the moderating influence of model…

  10. Calculating Home Advantage in the First Decade of the 21th Century UEFA Soccer Leagues.

    PubMed

    García, Miguel Saavedra; Aguilar, Oscar Gutiérrez; Marques, Paulo Sa; Tobío, Gabriel Torres; Fernández Romero, Juan J

    2013-01-01

    Home advantage has been studied in different sports, establishing its existence and its possible causes. This article analyzes the home advantage in soccer leagues of UEFA countries in the first part of the 21st century. The sample of 52 countries monitored during a period of 10 years allows us to study 520 leagues and 111,030 matches of the highest level in each country associated with UEFA. Home advantage exists and is significant in 32 of the 52 UEFA countries, where it equals 55.6%. A decrease can be observed in the tendency towards home advantage between the years 2000 and 2010. Values between 55% and 56% were observed for home advantage in the top ten leagues in Europe. It has also been observed that home advantage depends on the level of the league evaluated using UEFA's 2010/11 Country coefficients. The home advantage is calculated taking into account the teams' position and the points obtained in each of the leagues. A direct relationship was observed with the number of points gained and an inverse relationship was observed with the team position.
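
    For concreteness, one standard way to express home advantage as a percentage (in the spirit of Pollard's measure) is the share of points won at home; whether this matches the authors' exact computation is an assumption.

    ```python
    # Home advantage as points won at home divided by all points won,
    # expressed as a percentage; 50% indicates no home advantage.

    def home_advantage(home_points: int, away_points: int) -> float:
        return 100.0 * home_points / (home_points + away_points)

    # A team earning 45 of its 78 points at home:
    print(round(home_advantage(45, 33), 1))  # 57.7
    ```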

  11. Calculating Home Advantage in the First Decade of the 21th Century UEFA Soccer Leagues

    PubMed Central

    García, Miguel Saavedra; Aguilar, Óscar Gutiérrez; Marques, Paulo Sa; Tobío, Gabriel Torres; Fernández Romero, Juan J.

    2013-01-01

    Home advantage has been studied in different sports, establishing its existence and its possible causes. This article analyzes the home advantage in soccer leagues of UEFA countries in the first part of the 21st century. The sample of 52 countries monitored during a period of 10 years allows us to study 520 leagues and 111,030 matches of the highest level in each country associated with UEFA. Home advantage exists and is significant in 32 of the 52 UEFA countries, where it equals 55.6%. A decrease can be observed in the tendency towards home advantage between the years 2000 and 2010. Values between 55% and 56% were observed for home advantage in the top ten leagues in Europe. It has also been observed that home advantage depends on the level of the league evaluated using UEFA’s 2010/11 Country coefficients. The home advantage is calculated taking into account the teams’ position and the points obtained in each of the leagues. A direct relationship was observed with the number of points gained and an inverse relationship was observed with the team position. PMID:24235990

  12. Virtual online consultations: advantages and limitations (VOCAL) study

    PubMed Central

    Greenhalgh, Trisha; Vijayaraghavan, Shanti; Wherton, Joe; Shaw, Sara; Byrne, Emma; Campbell-Richards, Desirée; Bhattacharya, Satya; Hanson, Philippa; Ramoutar, Seendy; Gutteridge, Charles; Hodkinson, Isabel; Collard, Anna; Morris, Joanne

    2016-01-01

    Introduction Remote video consultations between clinician and patient are technically possible and increasingly acceptable. They are being introduced in some settings alongside (and occasionally replacing) face-to-face or telephone consultations. Methods To explore the advantages and limitations of video consultations, we will conduct in-depth qualitative studies of real consultations (microlevel) embedded in an organisational case study (mesolevel), taking account of national context (macrolevel). The study is based in 2 contrasting clinical settings (diabetes and cancer) in a National Health Service (NHS) acute trust in London, UK. Main data sources are: microlevel—audio, video and screen capture to produce rich multimodal data on 45 remote consultations; mesolevel—interviews, ethnographic observations and analysis of documents within the trust; macrolevel—key informant interviews of national-level stakeholders and document analysis. Data will be analysed and synthesised using a sociotechnical framework developed from structuration theory. Ethics approval City Road and Hampstead NHS Research Ethics Committee, 9 December 2014, reference 14/LO/1883. Planned outputs We plan outputs for 5 main audiences: (1) academics: research publications and conference presentations; (2) service providers: standard operating procedures, provisional operational guidance and key safety issues; (3) professional bodies and defence societies: summary of relevant findings to inform guidance to members; (4) policymakers: summary of key findings; (5) patients and carers: ‘what to expect in your virtual consultation’. Discussion The research literature on video consultations is sparse. Such consultations offer potential advantages to patients (who are spared the cost and inconvenience of travel) and the healthcare system (eg, they may be more cost-effective), but fears have been expressed that they may be clinically risky and/or less acceptable to patients or staff, and they

  13. Copper-phosphorus alloys offer advantages in brazing copper

    SciTech Connect

    Rupert, W.D.

    1996-05-01

    Copper-phosphorus brazing alloys are used extensively for joining copper, especially refrigeration and air-conditioning copper tubing and electrical conductors. What is the effect of phosphorus when alloyed with copper? The following are some of the major effects: (1) It lowers the melt temperature of copper (a temperature depressant). (2) It increases the fluidity of the copper when in the liquid state. (3) It acts as a deoxidant or a fluxing agent with copper. (4) It lowers the ductility of copper (embrittles). There is a misconception that silver improves the ductility of the copper-phosphorus alloys. In reality, silver added to copper acts in a similar manner as phosphorus. The addition of silver to copper lowers the melt temperature (temperature depressant) and decreases the ductility. Fortunately, the rate and amount at which silver lowers copper ductility is significantly less than that of phosphorus. Therefore, taking advantage of the temperature depressant property of silver, a Ag-Cu-P alloy can be selected at approximately the same melt temperature as a Cu-P alloy, but at a lower phosphorus content. The lowering of the phosphorus content actually makes the alloy more ductile, not the silver addition. A major advantage of the copper-phosphorus alloys is the self-fluxing characteristic when joining copper to copper. They may also be used with the addition of a paste flux on brass, bronze, and specialized applications on silver, tungsten and molybdenum. Whether it is selection of the proper BCuP alloy or troubleshooting an existing problem, the suggested approach is a review of the desired phosphorus content in the liquid metal and how it is being altered during application. In torch brazing, a slight change in the oxygen-fuel ratio can affect the joint quality or leak tightness.

  14. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
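
    The sketch below caricatures the "Turing gas": a fixed-size ensemble of functions in which randomly chosen pairs interact by composition and the product replaces a random member. The toy string functions stand in for lambda-calculus expressions and are purely illustrative.

    ```python
    # Toy "Turing gas": random pairs of functions collide, composition is the
    # reaction product, and the ensemble size stays fixed.

    import random

    def swap(s):   return s[::-1]
    def double(s): return s + s
    def head(s):   return s[:1]

    gas = [swap, double, head]

    def interact(f, g):
        # Function composition f . g is the product of a collision.
        return lambda s: f(g(s))

    random.seed(0)
    for _ in range(5):
        f, g = random.sample(gas, 2)
        gas[random.randrange(len(gas))] = interact(f, g)

    print(gas[0]("ab"))  # output depends on the sampled interaction history
    ```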

  15. [The precautionary principle: advantages and risks].

    PubMed

    Tubiana, M

    2001-04-01

    The extension of the precautionary principle to the field of healthcare is the social response to two demands of the population: improved health safety and the inclusion of an informed public in the decision-making process. The necessary balance between cost (treatment-induced risk) and benefit (therapeutic effect) underlies all healthcare decisions. An underestimation or an overestimation of cost, i.e. risk, is equally harmful in public healthcare. A vaccination should be prescribed when its beneficial effect outweighs its inevitable risk. Mandatory vaccination, such as in the case of the Hepatitis B virus, is a health policy requiring some courage because those who benefit will never be aware of its positive effect while those who are victims of the risk could resort to litigation. Defense against such accusations requires an accurate assessment of risk and benefit, which underlines the importance of expertise. Even within the framework of the precautionary principle, it is impossible to act without knowledge, or at least a plausible estimation, of expected effects. Recent affairs (blood contamination, transmissible spongiform encephalopathy transmitted by growth hormone, and the new variant of Creutzfeldt-Jakob disease) illustrate that in such cases the precautionary principle would have had limited impact and that it is only when enough knowledge was available that effective action could be taken. Likewise, in current debates concerning the possible risks of electromagnetic fields, cellular phones and radon, research efforts must be given priority. The general public understands intuitively the concept of cost and benefit. For example, the possible health risks of oral contraceptives and hormone replacement therapy were not ignored, but the public has judged that their advantages justify the risk. Estimating risk and benefit and finding a balance between risk and preventive measures could help avoid the main drawbacks of the precautionary principle, i.e. inaction and refusal of

  16. Competitive advantage of PET/MRI.

    PubMed

    Jadvar, Hossein; Colletti, Patrick M

    2014-01-01

    Multimodality imaging has made great strides in the imaging evaluation of patients with a variety of diseases. Positron emission tomography/computed tomography (PET/CT) is now established as the imaging modality of choice in many clinical conditions, particularly in oncology. While the initial development of combined PET/magnetic resonance imaging (PET/MRI) was in the preclinical arena, hybrid PET/MR scanners are now available for clinical use. PET/MRI combines the unique features of MRI including excellent soft tissue contrast, diffusion-weighted imaging, dynamic contrast-enhanced imaging, fMRI and other specialized sequences as well as MR spectroscopy with the quantitative physiologic information that is provided by PET. Most evidence for the potential clinical utility of PET/MRI is based on studies performed with side-by-side comparison or software-fused MRI and PET images. Data on distinctive utility of hybrid PET/MRI are rapidly emerging. There are potential competitive advantages of PET/MRI over PET/CT. In general, PET/MRI may be preferred over PET/CT where the unique features of MRI provide more robust imaging evaluation in certain clinical settings. The exact role and potential utility of simultaneous data acquisition in specific research and clinical settings will need to be defined. It may be that simultaneous PET/MRI will be best suited for clinical situations that are disease-specific, organ-specific, related to diseases of the children or in those patients undergoing repeated imaging for whom cumulative radiation dose must be kept as low as reasonably achievable. PET/MRI also offers interesting opportunities for use of dual modality probes. Upon clear definition of clinical utility, other important and practical issues related to business operational model, clinical workflow and reimbursement will also be resolved.

  17. Fluorescence advantages with microscopic spatiotemporal control

    NASA Astrophysics Data System (ADS)

    Goswami, Debabrata; Roy, Debjit; De, Arijit K.

    2013-03-01

    We present a design concept for using femtosecond laser pulses in microscopy to selectively excite or de-excite one fluorophore over another, spectrally overlapping one. Using either a simple pair of femtosecond pulses with variable delay or a train of laser pulses at 20-50 GHz repetition rates, we show controlled fluorescence excitation or suppression of one fluorophore with respect to the other through wave-packet interference, an effect that persists beyond the fluorophore coherence timescale. Such an approach can be used under both single-photon and multi-photon excitation conditions, resulting in effectively higher spatial resolution. This spatial resolution advantage with broadband pulsed excitation is of immense benefit to multi-photon microscopy and can also serve as an effective detection scheme for nanoparticles trapped with near-infrared light. Such sub-diffraction-limit trapping of nanoparticles is challenging, and two-photon fluorescence diagnostics allow direct observation of a single nanoparticle in a femtosecond high-repetition-rate laser trap, which promises new directions for spectroscopy at the single-molecule level in solution. The enormous peak power of femtosecond laser pulses at high repetition rates, even at low average powers, provides a large instantaneous gradient force that results in a stable optical trap for spatial control below the diffraction limit. These studies have also enabled us to explore simultaneous control of internal and external degrees of freedom, coupling various control parameters to achieve spatiotemporal control, which promises to be a versatile tool for the microscopic world.

  18. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per
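
    The three data forms compared in the abstract can be written down directly; the sketch below shows them for a single complex datum, with a first-order note on why log-amplitude errors stay roughly constant under a fixed relative amplitude error. The datum and error values are illustrative, and the error propagation is the standard first-order approximation, not necessarily the authors' exact formulation.

    ```python
    # Three forms of a complex EM datum d = A*exp(i*phi): real/imaginary,
    # amplitude and phase, and log-amplitude and phase.

    import numpy as np

    d = 2.5e-12 * np.exp(1j * 0.7)  # an illustrative complex datum

    real_imag     = (d.real, d.imag)
    amp_phase     = (np.abs(d), np.angle(d))
    log_amp_phase = (np.log10(np.abs(d)), np.angle(d))

    # For a fixed relative amplitude error e, the error in log10-amplitude is
    # roughly e / ln(10) regardless of how small the amplitude is, which is
    # one reason log-amplitude misfits do not flatten at low resistivities.
    e = 0.05
    print(log_amp_phase, e / np.log(10))
    ```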

  19. Advantages of using flat-panel LCD for projection displays

    NASA Astrophysics Data System (ADS)

    Wu, Dean C.

    1995-04-01

    The advantages of applying flat panel Liquid Crystal Displays (LCDs) for projection displays will be extensively discussed. The selection and fabrication of flat panel LCDs to meet the specific requirements of projection displays through various technologies will be suggested and explored in detail. The compact, flexible size and easy portability of flat panel LCDs are well known. For practical reasons, it is desirable to take advantage of some of these useful properties in projection displays. The recent popularity of large display formats, high information content and practical considerations all increase the demand for projection enlargement with high-level performance and comfortable viewing. As a result, projection displays are becoming the chosen technological option for effective presentation of visual information. In general, the Liquid Crystal Light Valves (LCLVs) used in projection displays are simply transmissive flat panel liquid crystal displays. For example, at the low end, monochromatic LCD projection panels are simply transmissive LCDs used in combination with laptops or PCs and light sources such as overhead projectors. These projection panels are becoming popular for their portability, readability and low cost. However, due to the passive nature of the LCDs used in these projection panels, the response time, contrast ratio and color gamut are relatively limited. Whether the newly developed Active Addressing technology will be able to improve the response time, contrast ratio and color gamut of these passive matrix LCDs remains to be proven. In the middle range of projection displays, Liquid Crystal Light Valves using color Active Matrix LCDs are rapidly replacing the dominant CRT-based projectors. LCLVs have a number of advantages including portability, easy set-up and data readability. There are several new developments using single-crystal polysilicon as the active matrix for LCDs with improved performance. Since single crystal active matrix

  20. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Block subdivision algorithms are evaluated by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. The authors who developed each algorithm have evaluated it, but each used their own metrics and block types, making it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty, with the aim of finding the algorithm best suited to subdividing each block type. The hypothesis is that, given the different approaches block subdivision algorithms take, different algorithms are likely better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used to identify the better algorithm, and the probability that it will perform well, for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites; it also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites; it also produces more similar parcel shapes and patterns.
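
    As a simplified illustration of the oriented-bounding-box idea, the sketch below subdivides an axis-aligned rectangular block by recursively splitting across its longer side until parcels reach a target size; real implementations handle arbitrary block polygons, parcel orientation, and street frontage.

    ```python
    # Recursive rectangle subdivision: split across the longer side until
    # pieces fall below max_side, then treat each piece as a parcel.

    def subdivide(x0, y0, x1, y1, max_side=25.0):
        w, h = x1 - x0, y1 - y0
        if max(w, h) <= max_side:
            return [(x0, y0, x1, y1)]  # small enough: this piece is a parcel
        if w >= h:                     # split across the longer side
            xm = (x0 + x1) / 2.0
            return subdivide(x0, y0, xm, y1, max_side) + \
                   subdivide(xm, y0, x1, y1, max_side)
        ym = (y0 + y1) / 2.0
        return subdivide(x0, y0, x1, ym, max_side) + \
               subdivide(x0, ym, x1, y1, max_side)

    parcels = subdivide(0.0, 0.0, 100.0, 40.0)  # a 100 m x 40 m block
    print(len(parcels))                         # 8 parcels of 25 m x 20 m
    ```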

  1. The competitive advantage of corporate philanthropy.

    PubMed

    Porter, Michael E; Kramer, Mark R

    2002-12-01

    When it comes to philanthropy, executives increasingly see themselves as caught between critics demanding ever higher levels of "corporate social responsibility" and investors applying pressure to maximize short-term profits. In response, many companies have sought to make their giving more strategic, but what passes for strategic philanthropy is almost never truly strategic, and often isn't particularly effective as philanthropy. Increasingly, philanthropy is used as a form of public relations or advertising, promoting a company's image through high-profile sponsorships. But there is a more truly strategic way to think about philanthropy. Corporations can use their charitable efforts to improve their competitive context--the quality of the business environment in the locations where they operate. Using philanthropy to enhance competitive context aligns social and economic goals and improves a company's long-term business prospects. Addressing context enables a company to not only give money but also leverage its capabilities and relationships in support of charitable causes. This produces social benefits far exceeding those provided by individual donors, foundations, or even governments. Taking this new direction requires fundamental changes in the way companies approach their contribution programs. For example, philanthropic investments can improve education and local quality of life in ways that will benefit the company. Such investments can also improve the company's competitiveness by contributing to expanding the local market and helping to reduce corruption in the local business environment. Adopting a context-focused approach goes against the grain of current philanthropic practice, and it requires a far more disciplined approach than is prevalent today. But it can make a company's philanthropic activities far more effective. PMID:12510538

  2. The competitive advantage of corporate philanthropy.

    PubMed

    Porter, Michael E; Kramer, Mark R

    2002-12-01

    When it comes to philanthropy, executives increasingly see themselves as caught between critics demanding ever higher levels of "corporate social responsibility" and investors applying pressure to maximize short-term profits. In response, many companies have sought to make their giving more strategic, but what passes for strategic philanthropy is almost never truly strategic, and often isn't particularly effective as philanthropy. Increasingly, philanthropy is used as a form of public relations or advertising, promoting a company's image through high-profile sponsorships. But there is a more truly strategic way to think about philanthropy. Corporations can use their charitable efforts to improve their competitive context--the quality of the business environment in the locations where they operate. Using philanthropy to enhance competitive context aligns social and economic goals and improves a company's long-term business prospects. Addressing context enables a company to not only give money but also leverage its capabilities and relationships in support of charitable causes. This produces social benefits far exceeding those provided by individual donors, foundations, or even governments. Taking this new direction requires fundamental changes in the way companies approach their contribution programs. For example, philanthropic investments can improve education and local quality of life in ways that will benefit the company. Such investments can also improve the company's competitiveness by contributing to expanding the local market and helping to reduce corruption in the local business environment. Adopting a context-focused approach goes against the grain of current philanthropic practice, and it requires a far more disciplined approach than is prevalent today. But it can make a company's philanthropic activities far more effective.

  3. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
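
    For reference, the algorithm the tool visualizes is standard; a textbook Dijkstra implementation (not the tool's own code) looks like this:

    ```python
    # Dijkstra's algorithm: shortest distances from a source in a graph with
    # non-negative edge weights, using a binary heap as the priority queue.

    import heapq

    def dijkstra(graph, source):
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale entry; a shorter path was already found
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
    print(dijkstra(g, "a"))  # {'a': 0, 'b': 2, 'c': 3}
    ```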

  4. Does Medicare Advantage Cost Less Than Traditional Medicare?

    PubMed

    Biles, Brian; Casillas, Giselle; Guterman, Stuart

    2016-01-01

    The costs of providing benefits to enrollees in private Medicare Advantage (MA) plans are slightly less, on average, than what traditional Medicare spends per beneficiary in the same county. However, MA plans that are able to keep their costs comparatively low are concentrated in a fairly small number of U.S. counties. In the 25 counties where the cost differences between MA plans and traditional Medicare are largest, MA plans spent a total of $5.2 billion less than what traditional Medicare would have been expected to spend on the same beneficiaries, with health maintenance organizations (HMOs) accounting for all of that difference. In the rest of the country, MA plans spent $4.8 billion above the expected costs under traditional Medicare. Broad determinations about the relative efficiency of MA plans and traditional Medicare can therefore be misleading, as they fail to take into account local conditions and individual plans' performance. PMID:26934756

  5. [Communication server in the hospital--advantages, expenses and limitations].

    PubMed

    Jendrysiak, U

    1997-01-01

    The common situation in a hospital with multiple departments is a heterogeneous set of subsystems, one or more for each department. Today, there is a rising number of requests for information interchange between these independent systems. The exchange of patient data has a technical and a conceptual part. Establishing a connection between more than two subsystems requires links from one system to all the others, each with its own code translation, interface and message transfer. A communication server is an important tool for significantly reducing the amount of work required for the technical realisation. It reduces the number of interfaces, facilitates the definition, maintenance and documentation of the message structure and translation tables, and helps to keep control of the message pipelines. Existing interfaces can be adapted for similar purposes. However, a communication server needs a lot of configuration, and it is necessary to know about low-level internetworking on different hardware and software to take advantage of its features. The code for writing files on a remote system and for process communication via TCP/IP sockets or similar techniques has to be written specifically for each communication task. First experiences have been gained at the university school of medicine in Mainz in setting up a communication server to connect different departments. We also provide a checklist for the selection of such a product. PMID:9381841
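
    As an illustration of the kind of low-level socket code the abstract alludes to, the sketch below sends one line-delimited message over TCP/IP; the host, port, and message format are invented for the example.

    ```python
    # Point-to-point message transfer over a TCP/IP socket, the sort of task
    # a communication server centralizes across many subsystem pairs.

    import socket

    def send_message(host: str, port: int, message: str) -> None:
        # One connection per message keeps the example short; a communication
        # server would pool connections, queue messages, and translate codes.
        with socket.create_connection((host, port)) as sock:
            sock.sendall((message + "\n").encode("utf-8"))

    # e.g. forwarding an admission record from one subsystem to another:
    # send_message("lab-system.local", 5000, "ADT|A01|patient-0042")
    ```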

  6. The Chorus Conflict and Loss of Separation Resolution Algorithms

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.

    2013-01-01

    The Chorus software is designed to investigate near-term, tactical conflict and loss of separation detection and resolution concepts for air traffic management. This software is currently being used in two different problem domains: en-route self-separation and sense and avoid for unmanned aircraft systems. This paper describes the core resolution algorithms that are part of Chorus. The combination of several features of the Chorus program distinguishes this software from other approaches to conflict and loss of separation resolution. First, the program stores a history of state information over time, which enables it to handle communication dropouts and take advantage of previous input data. Second, the underlying conflict algorithms find resolutions that solve the most urgent conflict, but also seek to prevent secondary conflicts with the other aircraft. Third, if the program is run on multiple aircraft, and two aircraft maneuver at the same time, the result will be implicitly coordinated. This implicit coordination property is established by ensuring that a resolution produced by Chorus will comply with a mathematically defined criterion whose correctness has been formally verified. Fourth, the program produces both instantaneous solutions and kinematic solutions, which are based on simple acceleration models. Finally, the program provides resolutions for recovery from loss of separation. The software is implemented in both Java and C++.
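
    To make the problem domain concrete, the sketch below implements a standard pairwise horizontal conflict test based on the closest point of approach; it is illustrative only and is not one of the formally verified Chorus algorithms. The distance and time thresholds are arbitrary example values.

    ```python
    # Closest-point-of-approach conflict test: s is the 2-D relative position,
    # v the relative velocity; a conflict is predicted if separation drops
    # below D within the lookahead time T (units are whatever s, v use).

    import numpy as np

    def horizontal_conflict(s, v, D=5.0, T=300.0):
        s, v = np.asarray(s, float), np.asarray(v, float)
        if np.dot(v, v) == 0.0:
            return bool(np.dot(s, s) < D**2)  # no relative motion
        t_cpa = -np.dot(s, v) / np.dot(v, v)  # time of closest approach
        t_cpa = min(max(t_cpa, 0.0), T)       # clamp to the lookahead window
        closest = s + t_cpa * v
        return bool(np.dot(closest, closest) < D**2)

    print(horizontal_conflict(s=[10.0, 0.0], v=[-0.05, 0.001]))  # True
    ```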

  7. Feature Subset Selection, Class Separability, and Genetic Algorithms

    SciTech Connect

    Cantu-Paz, E

    2004-01-21

    The performance of classification algorithms in machine learning is affected by the features used to describe the labeled examples presented to the inducers. Therefore, the problem of feature subset selection has received considerable attention. Genetic approaches to this problem usually follow the wrapper approach: treat the inducer as a black box that is used to evaluate candidate feature subsets. These evaluations might take considerable time, making the traditional approach impractical for large data sets. This paper describes a hybrid of a simple genetic algorithm and a method based on class separability applied to the selection of feature subsets for classification problems. The proposed hybrid was compared against each of its components and two other widely used feature selection wrappers. The objective of this paper is to determine if the proposed hybrid presents advantages over the other methods in terms of accuracy or speed in this problem. The experiments used a Naive Bayes classifier and public-domain and artificial data sets. The experiments suggest that the hybrid usually finds compact feature subsets that give the most accurate results, while beating the execution time of the other wrappers.
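
    A minimal wrapper-style genetic algorithm for feature subset selection, in the spirit the abstract describes: individuals are feature bitmasks scored by a supplied evaluation function, which stands in for cross-validated classifier accuracy. The GA operators and parameters below are illustrative, not the paper's hybrid.

    ```python
    # Tiny GA over feature bitmasks: truncation selection, uniform crossover,
    # bit-flip mutation. 'evaluate' plays the role of the wrapped inducer.

    import random

    def fitness(mask, evaluate):
        return evaluate(mask) if any(mask) else 0.0  # empty subsets score 0

    def ga_select(n_features, evaluate, pop=20, gens=30, p_mut=0.05, seed=1):
        rng = random.Random(seed)
        population = [[rng.random() < 0.5 for _ in range(n_features)]
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=lambda m: fitness(m, evaluate), reverse=True)
            parents = population[: pop // 2]          # keep the better half
            children = []
            while len(children) < pop - len(parents):
                a, b = rng.sample(parents, 2)
                child = [ai if rng.random() < 0.5 else bi
                         for ai, bi in zip(a, b)]     # uniform crossover
                children.append([not bit if rng.random() < p_mut else bit
                                 for bit in child])   # bit-flip mutation
            population = parents + children
        return max(population, key=lambda m: fitness(m, evaluate))

    # Toy evaluation: reward masks that pick features 0 and 2 and stay compact.
    best = ga_select(6, lambda m: (m[0] + m[2]) - 0.1 * sum(m))
    print(best)
    ```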

  8. Host manipulation by an ichneumonid spider ectoparasitoid that takes advantage of preprogrammed web-building behaviour for its cocoon protection.

    PubMed

    Takasuka, Keizo; Yasui, Tomoki; Ishigami, Toru; Nakata, Kensuke; Matsumoto, Rikio; Ikeda, Kenichi; Maeto, Kaoru

    2015-08-01

    Host manipulation by parasites and parasitoids is a fascinating phenomenon within evolutionary ecology, representing an example of extended phenotypes. To elucidate the mechanism of host manipulation, revealing the origin and function of the invoked actions is essential. Our study focused on the ichneumonid spider ectoparasitoid Reclinervellus nielseni, which turns its host spider (Cyclosa argenteoalba) into a drugged navvy, to modify the web structure into a more persistent cocoon web so that the wasp can pupate safely on this web after the spider's death. We focused on whether the cocoon web originated from the resting web that an unparasitized spider builds before moulting, by comparing web structures, building behaviour and silk spectral/tensile properties. We found that both resting and cocoon webs have reduced numbers of radii decorated by numerous fibrous threads and specific decorating behaviour was identical, suggesting that the cocoon web in this system has roots in the innate resting web and ecdysteroid-related components may be responsible for the manipulation. We also show that these decorations reflect UV light, possibly to prevent damage by flying web-destroyers such as birds or large insects. Furthermore, the tensile test revealed that the spider is induced to repeat certain behavioural steps in addition to resting web construction so that many more threads are laid down for web reinforcement.

  9. Taking Advantage of the "Big Mo"--Momentum in Everyday English and Swedish and in Physics Teaching

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Ahrenberg, Lars

    2015-01-01

    Science education research suggests that our everyday intuitions of motion and interaction of physical objects fit well with how physicists use the term "momentum". Corpus linguistics provides an easily accessible approach to study language in different domains, including everyday language. Analysis of language samples from English text…

  10. How can we take advantage of halophyte properties to cope with heavy metal toxicity in salt-affected areas?

    PubMed Central

    Lutts, Stanley; Lefèvre, Isabelle

    2015-01-01

    Background Many areas throughout the world are simultaneously contaminated by high concentrations of soluble salts and by high concentrations of heavy metals that constitute a serious threat to human health. The use of plants to extract or stabilize pollutants is an interesting alternative to classical expensive decontamination procedures. However, suitable plant species still need to be identified for reclamation of substrates presenting a high electrical conductivity. Scope Halophytic plant species are able to cope with several abiotic constraints occurring simultaneously in their natural environment. This review considers their putative interest for remediation of polluted soil in relation to their ability to sequester absorbed toxic ions in trichomes or vacuoles, to perform efficient osmotic adjustment and to limit the deleterious impact of oxidative stress. These physiological adaptations are considered in relation to the impact of salt on heavy metal bioavailability in two types of ecosystem: (1) salt marshes and mangroves, and (2) mine tailings in semi-arid areas. Conclusions Numerous halophytes exhibit a high level of heavy metal accumulation and external NaCl may directly influence heavy metal speciation and absorption rate. Maintenance of biomass production and plant water status makes some halophytes promising candidates for further management of heavy-metal-polluted areas in both saline and non-saline environments. PMID:25672360

  11. Host manipulation by an ichneumonid spider ectoparasitoid that takes advantage of preprogrammed web-building behaviour for its cocoon protection.

    PubMed

    Takasuka, Keizo; Yasui, Tomoki; Ishigami, Toru; Nakata, Kensuke; Matsumoto, Rikio; Ikeda, Kenichi; Maeto, Kaoru

    2015-08-01

    Host manipulation by parasites and parasitoids is a fascinating phenomenon within evolutionary ecology, representing an example of extended phenotypes. To elucidate the mechanism of host manipulation, revealing the origin and function of the invoked actions is essential. Our study focused on the ichneumonid spider ectoparasitoid Reclinervellus nielseni, which turns its host spider (Cyclosa argenteoalba) into a drugged navvy, to modify the web structure into a more persistent cocoon web so that the wasp can pupate safely on this web after the spider's death. We focused on whether the cocoon web originated from the resting web that an unparasitized spider builds before moulting, by comparing web structures, building behaviour and silk spectral/tensile properties. We found that both resting and cocoon webs have reduced numbers of radii decorated by numerous fibrous threads and specific decorating behaviour was identical, suggesting that the cocoon web in this system has roots in the innate resting web and ecdysteroid-related components may be responsible for the manipulation. We also show that these decorations reflect UV light, possibly to prevent damage by flying web-destroyers such as birds or large insects. Furthermore, the tensile test revealed that the spider is induced to repeat certain behavioural steps in addition to resting web construction so that many more threads are laid down for web reinforcement. PMID:26246608

  12. Antisense-mediated exon skipping: taking advantage of a trick from Mother Nature to treat rare genetic diseases.

    PubMed

    Veltrop, Marcel; Aartsma-Rus, Annemieke

    2014-07-01

    Rare diseases can be caused by genetic mutations that disrupt normal pre-mRNA splicing. Antisense oligonucleotide treatment that modulates splicing thus has therapeutic potential for many rare diseases. In this review we focus on the state of the art of exon skipping using antisense oligonucleotides as a potential therapy for rare genetic diseases, outlining how this versatile approach can be exploited to correct for different mutations.

  13. A Content Analysis of Kindergarten-12th Grade School-Based Nutrition Interventions: Taking Advantage of Past Learning

    ERIC Educational Resources Information Center

    Roseman, Mary G.; Riddell, Martha C.; Haynes, Jessica N.

    2011-01-01

    Objective: To review the literature, identifying proposed recommendations for school-based nutrition interventions, and evaluate kindergarten through 12th grade school-based nutrition interventions conducted from 2000-2008. Design: Proposed recommendations from school-based intervention reviews were developed and used in conducting a content…

  14. Mentoring the Next Generation of AACRAO Leaders: Taking Advantage of Routines, Exceptions, and Challenges for Developing Leadership Skills

    ERIC Educational Resources Information Center

    Cramer, Sharon F.

    2012-01-01

    As members of enrollment management units look ahead to the next few years, they anticipate many institution-wide challenges: (1) implementation of a new student information system; (2) major upgrade of an existing system; and (3) re-configuring an existing system to reflect changes in academic policies or to accommodate new federal or state…

  15. Taking ad-Vantage of lax advertising regulation in the USA and Canada: reassuring and distracting health-concerned smokers.

    PubMed

    Anderson, Stacey J; Pollay, Richard W; Ling, Pamela M

    2006-10-01

    We explored the evolution from cigarette product attributes to psychosocial needs in advertising campaigns for low-tar cigarettes. Analysis of previously secret tobacco industry documents and print advertising images indicated that low-tar brands targeted smokers who were concerned about their health with advertising images intended to distract them from the health hazards of smoking. Advertising first emphasized product characteristics (filtration, low tar) that implied health benefits. Over time, advertising emphasis shifted to salient psychosocial needs of the target markets. A case study of Vantage cigarettes in the USA and Canada showed that advertising presented images of intelligent, upward-striving people who had achieved personal success and intentionally excluded the act of smoking from the imagery, while minimal product information was provided. This illustrates one strategy to appeal to concerned smokers by not describing the product itself (which may remind smokers of the problems associated with smoking), but instead using evocative imagery to distract smokers from these problems. Current advertising for potential reduced-exposure products (PREPs) emphasizes product characteristics, but these products have not delivered on the promise of a healthier alternative cigarette. Our results suggest that the tobacco control community should be on the alert for a shift in advertising focus for PREPs to the image of the user rather than the cigarette. Global Framework Convention on Tobacco Control-style advertising bans that prohibit all user imagery in tobacco advertising could preempt a psychosocial needs-based advertising strategy for PREPs and maintain public attention on the health hazards of smoking.

  16. Taking Advantage of the "Big Mo"—Momentum in Everyday English and Swedish and in Physics Teaching

    NASA Astrophysics Data System (ADS)

    Haglund, Jesper; Jeppsson, Fredrik; Ahrenberg, Lars

    2015-06-01

    Science education research suggests that our everyday intuitions of motion and interaction of physical objects fit well with how physicists use the term "momentum". Corpus linguistics provides an easily accessible approach to study language in different domains, including everyday language. Analysis of language samples from English text corpora reveals a trend of increasing metaphorical use of "momentum" in non-science domains, and through conceptual metaphor analysis, we show that the use of the word in everyday language, as opposed to for instance "force", is largely adequate from a physics point of view. In addition, "momentum" has recently been borrowed into Swedish as a metaphor in domains such as sports, politics and finance, with meanings similar to those in physics. As an implication for educational practice, we find support for the suggestion to introduce the term "momentum" to English-speaking pupils at an earlier age than is typically done in the educational system today, thereby capitalising on their intuitions and experiences of everyday language. For Swedish-speaking pupils, and possibly also for other languages, the parallel between "momentum" and the corresponding physics term in the students' mother tongue could be made explicit.

  17. Optimizing Hydraulic Fracture Spacing and Frac Timing in Unconventionals - Taking Advantage of Time-Dependent Pressure Diffusion

    NASA Astrophysics Data System (ADS)

    Sheibani, F.

    2014-12-01

    Due to low natural gas prices, low production rates, and increased development costs, many operators have shifted operations from shale gas to liquid-rich shale plays. One means to make shale gas plays more attractive is to enhance well production through stimulation optimization. In numerous previous works, the authors have highlighted the geomechanical causes and important parameters for hydraulic fracture optimization in naturally fractured shale plays. The authors have, for example, emphasized the impact that stress shadows from multiple hydraulic fractures have on increasing the resistance of natural fractures and weakness planes to shear stimulation. The authors have also shown the critical role that in-situ pressure and pressure changes have on natural fracture shear stimulation. In this paper, we present the results of a discrete element model numerical study of both hydraulic fracture spacing and hydraulic fracture timing in a fully hydro-mechanically coupled fashion. The pressure changes in the natural fracture system of an unconventional play, due to hydraulic fracturing, often follow a diffusion-type process, which means the pressure changes are time dependent. As shown in previous works by the authors and others, the time-dependent changes in the in-situ pressure can have a marked impact on shear stimulation. The study quantitatively examined the impact of hydraulic fracture spacing as a function of in-situ pressure change and time for key parameters such as the in-situ stress ratio, natural fracture characteristics, and natural fracture mechanical properties. The results of the study help improve the understanding of in-situ pressure and hydraulic fracture timing on stimulation optimization and enhanced hydrocarbon production. The study also provides a means to optimize hydraulic fracture spacing and increase shear stimulation for unconventional wells.

  18. How Users Take Advantage of Different Forms of Interactivity on Online News Sites: Clicking, E-Mailing, and Commenting

    ERIC Educational Resources Information Center

    Boczkowski, Pablo J.; Mitchelstein, Eugenia

    2012-01-01

    This study examines the uptake of multiple interactive features on news sites. It looks at the thematic composition of the most clicked, most e-mailed, and most commented stories during periods of heightened and routine political activity. Results show that (a) during the former period, the most commented stories were more likely to be focused on…

  19. Improving the Reactivity of Zerovalent Iron by Taking Advantage of Its Magnetic Memory: Implications for Arsenite Removal.

    PubMed

    Li, Jinxiang; Shi, Zhong; Ma, Bin; Zhang, Pingping; Jiang, Xiao; Xiao, Zhongjin; Guan, Xiaohong

    2015-09-01

    Premagnetization was employed to enhance the reactivity of zerovalent iron (ZVI) toward As(III) sequestration for the first time. Compared to the pristine ZVI (Pri-ZVI), the rate of As(III) elimination by the premagnetized ZVI (Mag-ZVI) was greater over the initial-pH range of 4.0-9.0 and increased progressively with increasing intensity of the magnetic field used for premagnetization. Mag-ZVI kept its reactivity for a long time and showed better performance than Pri-ZVI for As(III) removal from synthetic groundwater in column tests. The Fe K-edge XAFS analysis of As(III)-treated ZVI samples revealed that premagnetization promoted the transformation of ZVI to iron (hydr)oxides and shifted the corrosion products from maghemite and magnetite to lepidocrocite, which favored arsenic sequestration. The arsenic species analysis revealed that premagnetization facilitated the oxidation of As(III) to As(V). ZVI pretreated by grinding behaved very differently from Mag-ZVI with regard to As(III) removal, indicating that the improved reactivity of Mag-ZVI should not be associated with a physical squeezing effect on the ZVI grains during magnetization. The positive correlation between the remanence of Mag-ZVI and the rate constants of total arsenic removal indicated that the enhanced reactivity of Mag-ZVI was mainly ascribed to its magnetic memory, i.e., the remanence kept by Mag-ZVI. PMID:26221911

  20. The Exposure Advantage: Early Exposure to a Multilingual Environment Promotes Effective Communication.

    PubMed

    Fan, Samantha P; Liberman, Zoe; Keysar, Boaz; Kinzler, Katherine D

    2015-07-01

    Early language exposure is essential to developing a formal language system, but may not be sufficient for communicating effectively. To understand a speaker's intention, one must take the speaker's perspective. Multilingual exposure may promote effective communication by enhancing perspective taking. We tested children on a task that required perspective taking to interpret a speaker's intended meaning. Monolingual children failed to interpret the speaker's meaning dramatically more often than both bilingual children and children who were exposed to a multilingual environment but were not bilingual themselves. Children who were merely exposed to a second language performed as well as bilingual children, despite having lower executive-function scores. Thus, the communicative advantages demonstrated by the bilinguals may be social in origin, and not due to enhanced executive control. For millennia, multilingual exposure has been the norm. Our study shows that such an environment may facilitate the development of perspective-taking tools that are critical for effective communication.

  1. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: Can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, the binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which comprises two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from left camera and right camera). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logical regression). The system performance is measured by probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image …

  2. An ant colony optimization based algorithm for identifying gene regulatory elements.

    PubMed

    Liu, Wei; Chen, Hanwu; Chen, Ling

    2013-08-01

    It is one of the most important tasks in bioinformatics to identify the regulatory elements in gene sequences. Most of the existing algorithms for identifying regulatory elements are inclined to converge to a local optimum and have high time complexity. Ant Colony Optimization (ACO) is a meta-heuristic method based on swarm intelligence, derived from a model inspired by the collective foraging behavior of real ants. Taking advantage of ACO traits such as self-organization and robustness, this paper designs and implements an ACO-based algorithm named ACRI (ant-colony-regulatory-identification) for identifying all possible binding sites of a transcription factor from the upstream regions of co-expressed genes. To accelerate the ants' searching process, a local-optimization strategy is presented to adjust the ants' start positions on the searched sequences. By exploiting the powerful optimization ability of ACO, the ACRI algorithm can not only improve the precision of the results, but also achieve very high speed. Experimental results on real-world datasets show that ACRI can outperform other traditional algorithms in both speed and quality of solutions. PMID:23746735
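
    The core ACO mechanic that ACRI builds on, ants sampling candidate start positions in proportion to pheromone and good choices being reinforced, can be sketched in a few lines of Python. The toy below is illustrative only: the score function, parameters and names are assumptions, and ACRI's local optimization of start positions is not reproduced.

        # Toy ACO position search (not ACRI itself): ants pick start positions
        # with probability proportional to pheromone; deposits reward high scores.
        import random

        def aco_positions(seq_len, motif_len, score, n_ants=20, n_iter=50, rho=0.1):
            positions = list(range(seq_len - motif_len + 1))
            tau = {p: 1.0 for p in positions}        # pheromone per start position
            for _ in range(n_iter):
                for _ in range(n_ants):
                    p = random.choices(positions, weights=[tau[q] for q in positions])[0]
                    tau[p] = (1 - rho) * tau[p] + rho * score(p)  # evaporate + deposit
            return max(tau, key=tau.get)

        # pretend position 42 is the true binding site
        best = aco_positions(200, 8, score=lambda p: 10.0 if p == 42 else 1.0)
        print("best start position:", best)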

  3. Toward a practical ultrasound waveform tomography algorithm for improving breast imaging

    NASA Astrophysics Data System (ADS)

    Li, Cuiping; Sandhu, Gursharan S.; Roy, Olivier; Duric, Neb; Allada, Veerendra; Schmidt, Steven

    2014-03-01

    Ultrasound tomography is an emerging modality for breast imaging. However, most current ultrasonic tomography imaging algorithms, historically hindered by the limited memory and processor speed of computers, are based on ray theory and assume a homogeneous background, which is inaccurate for complex heterogeneous regions. Therefore, wave theory, which accounts for diffraction effects, must be used in ultrasonic imaging algorithms to properly handle the heterogeneous nature of breast tissue and accurately image small lesions. However, application of waveform tomography to medical imaging has been limited by extreme computational cost and convergence difficulties. By taking advantage of the computational architecture of Graphics Processing Units (GPUs), the intensive processing burden of waveform tomography can be greatly alleviated. In this study, using breast imaging methods, we implement a frequency-domain waveform tomography algorithm on GPUs with the goal of producing high-accuracy and high-resolution breast images on clinically relevant time scales. We present simulation results and assess the resolution and accuracy of our waveform tomography algorithms based on the simulation data.

  4. Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM

    SciTech Connect

    Lin, Jian; Hamidouche, Khaled; Zheng, Jie; Lu, Xiaoyi; Vishnu, Abhinav; Panda, Dhabaleswar

    2015-08-05

    Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in large-scale environments with InfiniBand networks, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation, based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand), shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varying numbers of cores show that our design maintains good scalability.
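
    As a rough illustration of the data-parallel pattern such designs build on, the sketch below shards the training data across MPI ranks with mpi4py and merges per-rank candidate neighbours on rank 0. This is a plain-MPI toy with assumed shapes; the paper's one-sided MPI+OpenSHMEM buffer-management schemes are not reproduced.

        # Run with: mpirun -n 4 python knn_mpi.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        rng = np.random.default_rng(rank)

        k = 5
        local_train = rng.normal(size=(1000, 8))   # each rank owns one shard
        query = np.ones(8)                         # same query on every rank

        # local candidates: the k smallest distances within this shard
        d = np.linalg.norm(local_train - query, axis=1)
        candidates = np.sort(d[np.argpartition(d, k)[:k]])

        # gather every rank's candidates and keep the global k smallest
        all_c = comm.gather(candidates, root=0)
        if rank == 0:
            print("global k-NN distances:", np.sort(np.concatenate(all_c))[:k])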

  5. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    PubMed Central

    Mora-Pascual, Jerónimo M.; García-García, Alberto; Martínez-González, Pablo

    2016-01-01

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the point that it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some of its applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low-computational-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results prove that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results. PMID:27768714
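
    A minimal sketch of the proposal, assuming a brute-force matching phase: only the point-to-point metric changes, and the Manhattan and Chebyshev variants avoid the square root of the Euclidean case. All names and shapes are illustrative, not taken from the paper's code.

        import numpy as np

        def match(data, model, metric="euclidean"):
            # data: (n, 3), model: (m, 3); returns the closest model index per point
            diff = data[:, None, :] - model[None, :, :]   # (n, m, 3) differences
            if metric == "euclidean":
                d = np.sqrt((diff ** 2).sum(-1))
            elif metric == "manhattan":                   # no square root needed
                d = np.abs(diff).sum(-1)
            elif metric == "chebyshev":                   # max of |dx|, |dy|, |dz|
                d = np.abs(diff).max(-1)
            return np.argmin(d, axis=1)

        data, model = np.random.rand(100, 3), np.random.rand(120, 3)
        for m in ("euclidean", "manhattan", "chebyshev"):
            print(m, match(data, model, m)[:5])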

  6. Staged optimization algorithms based MAC dynamic bandwidth allocation for OFDMA-PON

    NASA Astrophysics Data System (ADS)

    Liu, Yafan; Qian, Chen; Cao, Bingyao; Dun, Han; Shi, Yan; Zou, Junni; Lin, Rujian; Wang, Min

    2016-06-01

    Orthogonal frequency division multiple access passive optical network (OFDMA-PON) has been considered a promising solution for next-generation PONs due to its high spectral efficiency and flexible bandwidth allocation scheme. In order to take full advantage of these merits of OFDMA-PON, a high-efficiency medium access control (MAC) dynamic bandwidth allocation (DBA) scheme is needed. In this paper, we propose two DBA algorithms that act on two different stages of the resource allocation process. To achieve higher bandwidth utilization and ensure fairness among ONUs, we propose a DBA algorithm based on frame structure for the physical layer mapping stage. Targeting the global quality of service (QoS) of OFDMA-PON, we propose a full-range DBA algorithm with service level agreement (SLA) and class of service (CoS) for the bandwidth allocation arbitration stage. The performance of the proposed MAC DBA scheme containing these two algorithms is evaluated using numerical simulations. Simulations of a 15 Gbps network with 1024 sub-carriers and 32 ONUs demonstrate a maximum network throughput of 14.87 Gbps and a maximum packet delay of 1.45 ms for the highest-priority CoS under high load conditions.

  7. The study on the synthesis approximate algorithm of GPS height conversion

    NASA Astrophysics Data System (ADS)

    Wu, Xiang-Yang; Wang, Qing; Gao, Bing; Liang, Hongbao

    2009-12-01

    Improving the accuracy of GPS height conversion has always been a hot topic in the field of geodesy. At present there are many methods for converting GPS (ellipsoidal) height into normal height, the most common of which is numerical approximation, where the algorithm is mostly confined to choosing a suitable function model or statistical model for the approximation. To overcome the one-sidedness of single-model GPS height conversion methods, this article presents a comprehensive approximation algorithm that combines functional approximation models with statistical approximation models, in order to take full advantage of the regularity of the function approximation model and the flexibility of the statistical approximation model. Analysis of actual engineering data shows that the accuracy of GPS height transformation based on the integrated approximation approach is superior to that of a single function transformation model or a single statistical model. At the same time, it also relaxes the selection requirements for the function model and the statistical model.
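
    One plausible reading of such a combined scheme, stated as an assumption rather than as the paper's exact models: a low-order polynomial surface plays the functional role, capturing the regular trend of the height anomaly, while a statistical interpolator (an RBF here) absorbs the residuals at the control points.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(4)
        xy = rng.uniform(0, 10, (40, 2))                      # control points
        anomaly = (0.3 + 0.02 * xy[:, 0] - 0.01 * xy[:, 1]
                   + 0.05 * np.sin(xy[:, 0]))                 # "observed" N = h - H

        # stage 1: functional model, a plane fitted by least squares
        A = np.c_[np.ones(len(xy)), xy]
        coef, *_ = np.linalg.lstsq(A, anomaly, rcond=None)

        # stage 2: statistical model interpolating the residuals
        resid = RBFInterpolator(xy, anomaly - A @ coef)

        def predict(points):
            points = np.atleast_2d(points)
            return np.c_[np.ones(len(points)), points] @ coef + resid(points)

        print(predict([5.0, 5.0]))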

  8. Through-Wall Multiple Targets Vital Signs Tracking Based on VMD Algorithm

    PubMed Central

    Yan, Jiaming; Hong, Hong; Zhao, Heng; Li, Yusheng; Gu, Chen; Zhu, Xiaohua

    2016-01-01

    Targets located at the same distance are easily neglected in most through-wall multiple-target detection applications that use a single-input single-output (SISO) ultra-wideband (UWB) radar system. In this paper, a novel multiple-target vital signs tracking algorithm for through-wall detection using SISO UWB radar is proposed. Taking advantage of the high-resolution decomposition of the Variational Mode Decomposition (VMD) based algorithm, the respiration signals of different targets can be decomposed into different sub-signals, so that the time-varying respiration signals can be tracked accurately even when human targets are located at the same distance. Intensive evaluation has been conducted to show the effectiveness of our scheme with a 0.15 m thick concrete brick wall. Constant, piecewise-constant and time-varying vital signs could be separated and tracked successfully with the proposed VMD-based algorithm for two targets, and even up to three targets. For multiple-target vital signs tracking issues such as urban search and rescue missions, our algorithm has superior capability in most detection applications. PMID:27537880

  9. Lesion detection in magnetic resonance brain images by hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Wang, Lin; Li, Hsiao-Chi; Chen, Hsian Min; Chang, Chein-I.

    2016-05-01

    Magnetic Resonance (MR) images can be considered multispectral images, so that MR imaging can be processed by multispectral imaging techniques such as maximum likelihood classification. Unfortunately, most multispectral imaging techniques are not particularly designed for target detection. On the other hand, hyperspectral imaging is primarily developed to address subpixel detection and mixed-pixel classification, for which multispectral imaging is generally not effective. This paper takes advantage of hyperspectral imaging techniques to develop target detection algorithms to find lesions in MR brain images. Since MR images are collected from only three image sequences, T1, T2 and PD, a hyperspectral imaging technique used to process MR images suffers from insufficient dimensionality. To address this issue, two approaches to nonlinear dimensionality expansion are proposed: nonlinear correlation expansion and nonlinear band ratio expansion. Once dimensionality is expanded, hyperspectral imaging algorithms are readily applied. The hyperspectral detection algorithm investigated for lesion detection in MR brain images is the well-known subpixel target detection algorithm called Constrained Energy Minimization (CEM). In order to demonstrate the effectiveness of the proposed CEM in lesion detection, synthetic images provided by BrainWeb are used for experiments.
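
    The CEM detector named above has a compact standard form, w = R^-1 d / (d^T R^-1 d), where R is the sample correlation matrix of the (dimensionality-expanded) pixel vectors and d is the target signature; each pixel's detection score is then w^T x. A minimal numpy sketch, with toy data standing in for the expanded T1/T2/PD bands:

        import numpy as np

        def cem(pixels, target):
            # pixels: (n_pixels, n_bands); target: (n_bands,)
            R = pixels.T @ pixels / len(pixels)              # sample correlation matrix
            Rinv = np.linalg.inv(R + 1e-6 * np.eye(len(R)))  # regularised inverse
            w = Rinv @ target / (target @ Rinv @ target)     # CEM filter weights
            return pixels @ w                                # detection score per pixel

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 3))    # toy "bands"; expansions would add more
        d = np.array([1.0, 0.5, -0.2])    # assumed target signature
        print(cem(X, d).shape)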

  10. Through-Wall Multiple Targets Vital Signs Tracking Based on VMD Algorithm.

    PubMed

    Yan, Jiaming; Hong, Hong; Zhao, Heng; Li, Yusheng; Gu, Chen; Zhu, Xiaohua

    2016-01-01

    Targets located at the same distance are easily neglected in most through-wall multiple-target detection applications that use a single-input single-output (SISO) ultra-wideband (UWB) radar system. In this paper, a novel multiple-target vital signs tracking algorithm for through-wall detection using SISO UWB radar is proposed. Taking advantage of the high-resolution decomposition of the Variational Mode Decomposition (VMD) based algorithm, the respiration signals of different targets can be decomposed into different sub-signals, so that the time-varying respiration signals can be tracked accurately even when human targets are located at the same distance. Intensive evaluation has been conducted to show the effectiveness of our scheme with a 0.15 m thick concrete brick wall. Constant, piecewise-constant and time-varying vital signs could be separated and tracked successfully with the proposed VMD-based algorithm for two targets, and even up to three targets. For multiple-target vital signs tracking issues such as urban search and rescue missions, our algorithm has superior capability in most detection applications. PMID:27537880

  11. jClustering, an Open Framework for the Development of 4D Clustering Algorithms

    PubMed Central

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J.

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913

  13. GPU Accelerated Event Detection Algorithm

    SciTech Connect

    2011-05-25

    Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce the computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
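
    One way to read the windowed-SVD reduction described above: summarise each window by its dominant singular vector and score how far that pattern rotates between successive windows. The numpy sketch below is a hedged toy; the perturbation-theory, incremental-SVD and tensor-decomposition accelerations are omitted.

        import numpy as np

        def svd_change_series(X, win=32, step=8):
            # X: (time, channels); returns one change score per window step
            prev_v, scores = None, []
            for start in range(0, len(X) - win + 1, step):
                _, _, vt = np.linalg.svd(X[start:start + win], full_matrices=False)
                v = vt[0]                                  # dominant channel pattern
                if prev_v is not None:
                    scores.append(1.0 - abs(prev_v @ v))   # 0 means pattern unchanged
                prev_v = v
            return np.array(scores)

        rng = np.random.default_rng(1)
        t = rng.normal(size=(600, 1))
        pattern = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
        X = t * pattern + 0.1 * rng.normal(size=(600, 5))  # stable channel mixing
        X[400:] = X[400:, ::-1]                            # anomaly: mixing reverses
        print("peak change score:", svd_change_series(X).max())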

  14. Take Your Leadership Role Seriously.

    ERIC Educational Resources Information Center

    School Administrator, 1986

    1986-01-01

    The principal authors of a new book, "Profiling Excellence in America's Schools," state that leadership is the single most important element for effective schools. The generic skills of leaders are flexibility, autonomy, risk taking, innovation, and commitment. Exceptional principals and teachers take their leadership and management roles…

  15. Taking Over a Broken Program

    ERIC Educational Resources Information Center

    Grabowski, Carl

    2008-01-01

    Taking over a broken program can be one of the hardest tasks to take on. However, working towards a vision and a common goal--and eventually getting there--makes it all worth it in the end. In this article, the author shares the lessons she learned as the new director for the Bright Horizons Center in Ashburn, Virginia. She suggests that new…

  16. Taking Chances in Romantic Relationships

    ERIC Educational Resources Information Center

    Elliott, Lindsey; Knox, David

    2016-01-01

    A 64 item Internet questionnaire was completed by 381 undergraduates at a large southeastern university to assess taking chances in romantic relationships. Almost three fourths (72%) self-identified as being a "person willing to take chances in my love relationship." Engaging in unprotected sex, involvement in a "friends with…

  17. The Day-1 GPM Combined Precipitation Algorithm: IMERG

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.; Bolvin, D. T.; Braithwaite, D.; Hsu, K.; Joyce, R.; Kidd, C.; Sorooshian, S.; Xie, P.

    2012-12-01

    The Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG) algorithm will provide the at-launch combined-sensor precipitation dataset being produced by the U.S. GPM Science Team. IMERG is being developed as a unified U.S. algorithm that takes advantage of strengths in three current U.S. algorithms: - the TRMM Multi-satellite Precipitation Analysis (TMPA), which addresses inter-satellite calibration of precipitation estimates and monthly scale combination of satellite and gauge analyses; - the CPC Morphing algorithm with Kalman Filtering (KF-CMORPH), which provides quality-weighted time interpolation of precipitation patterns following storm motion; and - the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS), which provides a neural-network-based scheme for generating microwave-calibrated precipitation estimates from geosynchronous infrared brightness temperatures, and filters out some non-raining cold clouds. The goal is to provide a long-term, fine-scale record of global precipitation from the entire constellation of precipitation-relevant satellite sensors, with input from surface precipitation gauges. The record will begin January 1998 at the start of the Tropical Rainfall Measuring Mission (TRMM) and extend as GPM records additional data. Although homogeneity is considered desirable, the use of diverse and evolving data sources works against the strict long-term homogeneity that characterizes a Climate Data Record (CDR). This talk will briefly review the design requirements for IMERG, including multiple runs at different latencies (most likely around 4 hours, 12 hours, and 2 months after observation time), various intermediate data fields as part of the IMERG data file, and the plans to bring up IMERG with calibration by TRMM initially, transitioning to GPM when its individual-sensor precipitation algorithms are fully functional.

  18. NASA Team 2 Sea Ice Concentration Algorithm Retrieval Uncertainty

    NASA Technical Reports Server (NTRS)

    Brucker, Ludovic; Cavalieri, Donald J.; Markus, Thorsten; Ivanoff, Alvaro

    2014-01-01

    Satellite microwave radiometers are widely used to estimate sea ice cover properties (concentration, extent, and area) through the use of sea ice concentration (IC) algorithms. Few algorithms provide associated IC uncertainty estimates. Algorithm uncertainty estimates are needed to assess accurately global and regional trends in IC (and thus extent and area), and to improve sea ice predictions on seasonal to interannual timescales using data assimilation approaches. This paper presents a method to provide relative IC uncertainty estimates using the enhanced NASA Team (NT2) IC algorithm. The proposed approach takes advantage of the NT2 calculations and solely relies on the brightness temperatures (TBs) used as input. NT2 IC and its associated relative uncertainty are obtained for both the Northern and Southern Hemispheres using the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) TB. NT2 IC relative uncertainties estimated on a footprint-by-footprint swath-by-swath basis were averaged daily over each 12.5-km grid cell of the polar stereographic grid. For both hemispheres and throughout the year, the NT2 relative uncertainty is less than 5%. In the Southern Hemisphere, it is low in the interior ice pack, and it increases in the marginal ice zone up to 5%. In the Northern Hemisphere, areas with high uncertainties are also found in the high IC area of the Central Arctic. Retrieval uncertainties are greater in areas corresponding to NT2 ice types associated with deep snow and new ice. Seasonal variations in uncertainty show larger values in summer as a result of melt conditions and greater atmospheric contributions. Our analysis also includes an evaluation of the NT2 algorithm sensitivity to AMSR-E sensor noise. There is a 60% probability that the IC does not change (to within the computed retrieval precision of 1%) due to sensor noise, and the cumulated probability shows that there is a 90% chance that the IC varies by less than …

  19. Home advantage in retractable-roof baseball stadia.

    PubMed

    Romanowich, Paul

    2012-10-01

    This study examined whether the home advantage varies across open-air, domed, and retractable-roof baseball stadia, and whether having the roof open or closed affects the home advantage in retractable-roof stadia. Data from Major League Baseball (MLB) games played between 2001 and 2009 were analyzed to determine whether the home advantage depends on the type of home stadium. The home advantage was robust for all three types of stadia. A significant effect of stadium type on home advantage was found, with a greater home advantage for teams playing home games in domed stadia relative to open-air stadia, replicating a previous study. There was also a greater home advantage for teams playing home games in domed stadia relative to retractable-roof stadia. No other differences in the home advantage were found; results are discussed in terms of familiarity with the facility.

  20. Hyperspectral image compressive projection algorithm

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Allen, David W.

    2009-05-01

    We describe a compressive projection algorithm and experimentally assess its performance when used with a Hyperspectral Image Projector (HIP). The HIP is being developed by NIST for system-level performance testing of hyperspectral and multispectral imagers. It projects a two-dimensional image into the unit under test (UUT), whereby each pixel can have an independently programmable arbitrary spectrum. To efficiently project a single frame of dynamic realistic hyperspectral imagery through the collimator into the UUT, a compression algorithm has been developed whereby the series of abundance images and corresponding endmember spectra that comprise the image cube of that frame are first computed using an automated endmember-finding algorithm such as the Sequential Maximum Angle Convex Cone (SMACC) endmember model. These endmember spectra are then projected sequentially on the HIP spectral engine, in sync with the projection of the abundance images on the HIP spatial engine, during the single-frame exposure time of the UUT. The integrated spatial image captured by the UUT is the endmember-weighted sum of the abundance images, which results in the formation of a datacube for that frame. Compressive projection enables a much smaller set of broadband spectra to be projected than monochromatic projection, and thus utilizes the inherent multiplex advantage of the HIP spectral engine. As a result, radiometric brightness and projection frame rate are enhanced. In this paper, we use a visible breadboard HIP to experimentally assess the compressive projection algorithm performance.
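
    The summation at the heart of compressive projection is simple: the unit under test integrates the abundance images, each weighted by its endmember spectrum, so the captured frame is a full datacube. A toy numpy sketch with illustrative shapes (SMACC itself is not reimplemented here):

        import numpy as np

        n_end, h, w, n_bands = 4, 64, 64, 100
        rng = np.random.default_rng(2)
        abundances = rng.random((n_end, h, w))     # shown on the spatial engine
        spectra = rng.random((n_end, n_bands))     # shown on the spectral engine

        # integrated result seen by the UUT: an (h, w, n_bands) datacube
        cube = np.einsum('khw,kb->hwb', abundances, spectra)
        print(cube.shape)                          # (64, 64, 100)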

  1. Did Babe Ruth Have a Comparative Advantage as a Pitcher?

    ERIC Educational Resources Information Center

    Scahill, Edward M.

    1990-01-01

    Advocates using baseball statistics to illustrate the advantages of specialization in production. Using Babe Ruth's record as an analogy, suggests a methodology for determining a player's comparative advantage as a teaching illustration. Includes the team's statistical profile in five tables to explain comparative advantage and profit maximizing.…

  2. Back to Basics: A Bilingual Advantage in Infant Visual Habituation

    ERIC Educational Resources Information Center

    Singh, Leher; Fu, Charlene S. L.; Rahman, Aishah A.; Hameed, Waseem B.; Sanmugam, Shamini; Agarwal, Pratibha; Jiang, Binyan; Chong, Yap Seng; Meaney, Michael J.; Rifkin-Graboi, Anne

    2015-01-01

    Comparisons of cognitive processing in monolinguals and bilinguals have revealed a bilingual advantage in inhibitory control. Recent studies have demonstrated advantages associated with exposure to two languages in infancy. However, the domain specificity and scope of the infant bilingual advantage in infancy remains unclear. In the present study,…

  3. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, and that the algorithm tests other configurations with the goal of finding the globally optimal …
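
    The conventional SA baseline described in the abstract is compact enough to sketch. The version below shows only that baseline on an illustrative 1-D objective; RBSA's recursive branching and shrinking search region are not reproduced.

        import math, random

        def anneal(objective, start, neighbor, t0=1.0, cooling=0.995, steps=5000):
            x, fx, t = start, objective(start), t0
            for _ in range(steps):
                y = neighbor(x)
                fy = objective(y)
                # accept improvements; accept worse moves with prob exp(-dF/T)
                if fy < fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy
                t *= cooling                       # lower the annealing temperature
            return x, fx

        best, val = anneal(lambda v: (v - 3.2) ** 2,
                           start=random.uniform(-10, 10),
                           neighbor=lambda v: v + random.gauss(0, 0.5))
        print(best, val)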

  4. Associated petroleum gas utilization in Tomsk Oblast: energy efficiency and tax advantages

    NASA Astrophysics Data System (ADS)

    Vazim, A.; Romanyuk, V.; Ahmadeev, K.; Matveenko, I.

    2015-11-01

    This article deals with oil production companies' activities to increase the utilization volume of associated petroleum gas (APG) in Tomsk Oblast. A cost-effectiveness analysis of associated petroleum gas utilization was carried out using the example of the gas engine power station AGP-350 implemented at the Yuzhno-Cheremshanskoye field, Tomsk Oblast. The authors calculated the effectiveness taking into account the 2012 tax advantages. The implementation of this facility shows high profitability, the payback period being less than 2 years.

  5. The Advantages of Fixed Facilities in Characterizing TRU Wastes

    SciTech Connect

    FRENCH, M.S.

    2000-02-08

    In May 1998 the Hanford Site started developing a program for characterization of transuranic (TRU) waste for shipment to the Waste Isolation Pilot Plant (WIPP) in New Mexico. After less than two years, Hanford will have a program certified by the Carlsbad Area Office (CAO). By picking a simple waste stream, taking advantage of lessons learned at the other sites, as well as communicating effectively with the CAO, Hanford was able to achieve certification in record time. This effort was further simplified by having a centralized program centered on the Waste Receiving and Processing (WRAP) Facility that contains most of the equipment required to characterize TRU waste. The use of fixed facilities for the characterization of TRU waste at sites with a long-term clean-up mission can be cost effective for several reasons. These include the ability to control the environment in which sensitive instrumentation is required to operate and ensuring that calibrations and maintenance activities are scheduled and performed as an operating routine. Other factors contributing to cost effectiveness include providing approved procedures and facilities for handling hazardous materials and anticipated contingencies and performing essential evolutions, and regulating and smoothing the work load and environmental conditions to provide maximal efficiency and productivity. Another advantage is the ability to efficiently provide characterization services to other sites in the Department of Energy (DOE) Complex that do not have the same capabilities. The Waste Receiving and Processing (WRAP) Facility is a state-of-the-art facility designed to consolidate the operations necessary to inspect, process and ship waste to facilitate verification of contents for certification to established waste acceptance criteria. The WRAP facility inspects, characterizes, treats, and certifies transuranic (TRU), low-level and mixed waste at the Hanford Site in Washington state. Fluor Hanford operates the $89 …

  6. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out only the ROI that contains the cornea and pupil. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea …
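
    The row-slice idea can be illustrated on a thresholded toy frame: every row crossing the pupil yields a pair of run endpoints, and averaging the run midpoints and the row indices gives the circle centre. A hedged numpy sketch, unrelated to the actual flight code:

        import numpy as np

        def pupil_centre(binary):
            rows = []
            for r, row in enumerate(binary):
                cols = np.flatnonzero(row)
                if cols.size:                      # this slice crosses the pupil
                    rows.append((r, cols[0], cols[-1]))
            cy = np.mean([r for r, _, _ in rows])
            cx = np.mean([(left + right) / 2 for _, left, right in rows])
            return cx, cy

        # synthetic pupil: a filled circle centred at (col=25, row=30)
        yy, xx = np.mgrid[:64, :64]
        binary = (yy - 30) ** 2 + (xx - 25) ** 2 < 100
        print(pupil_centre(binary))                # approximately (25.0, 30.0)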

  7. A seed expanding cluster algorithm for deriving upwelling areas on sea surface temperature images

    NASA Astrophysics Data System (ADS)

    Nascimento, Susana; Casca, Sérgio; Mirkin, Boris

    2015-12-01

    In this paper a novel clustering algorithm is proposed as a version of the seeded region growing (SRG) approach for the automatic recognition of coastal upwelling from sea surface temperature (SST) images. The new algorithm, one seed expanding cluster (SEC), takes advantage of the concept of approximate clustering due to Mirkin (1996, 2013) to derive a homogeneity criterion in the format of a product rather than the conventional difference between a pixel value and the mean of values over the region of interest. It involves a boundary-oriented pixel labeling so that the cluster growing is performed by expanding its boundary iteratively. The starting point is a cluster consisting of just one seed, the pixel with the coldest temperature. The baseline version of the SEC algorithm uses Otsu's thresholding method to fine-tune the homogeneity threshold. Unfortunately, this method does not always lead to a satisfactory solution. Therefore, we introduce a self-tuning version of the algorithm in which the homogeneity threshold is locally derived from the approximation criterion over a window around the pixel under consideration. The window serves as a boundary regularizer. These two unsupervised versions of the algorithm have been applied to a set of 28 SST images of the western coast of mainland Portugal, and compared against a supervised version fine-tuned by maximizing the F-measure with respect to manually labeled ground-truth maps. The areas built by the unsupervised versions of the SEC algorithm are significantly coincident with the ground-truth regions in cases where the upwelling areas consist of a single continuous fragment of the SST map.
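
    A minimal seeded region-growing sketch in the spirit of SEC, with one loud assumption: the paper's product-form homogeneity criterion and Otsu/self-tuning thresholds are replaced by a plain absolute-difference test against the running region mean.

        import numpy as np
        from collections import deque

        def grow(sst, tau):
            seed = np.unravel_index(np.argmin(sst), sst.shape)   # coldest pixel
            region, total, queue = {seed}, float(sst[seed]), deque([seed])
            while queue:
                i, j = queue.popleft()
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if (0 <= ni < sst.shape[0] and 0 <= nj < sst.shape[1]
                            and (ni, nj) not in region
                            and abs(sst[ni, nj] - total / len(region)) <= tau):
                        region.add((ni, nj))          # expand the boundary
                        total += float(sst[ni, nj])
                        queue.append((ni, nj))
            return region

        sst = np.random.default_rng(3).normal(20, 2, (50, 50))
        sst[10:20, 10:20] -= 8                        # a cold upwelling patch
        print(len(grow(sst, tau=3.0)))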

  8. The Optimization of Trained and Untrained Image Classification Algorithms for Use on Large Spatial Datasets

    NASA Technical Reports Server (NTRS)

    Kocurek, Michael J.

    2005-01-01

    The HARVIST project seeks to automatically provide an accurate, interactive interface to predict crop yield over the entire United States. In order to accomplish this goal, large images must be quickly and automatically classified by crop type. Current trained and untrained classification algorithms, while accurate, are highly inefficient when operating on large datasets. This project sought to develop new variants of two standard trained and untrained classification algorithms that are optimized to take advantage of the spatial nature of image data. The first algorithm, harvist-cluster, utilizes divide-and-conquer techniques to precluster an image in the hopes of increasing overall clustering speed. The second algorithm, harvistSVM, utilizes support vector machines (SVMs), a type of trained classifier. It seeks to increase classification speed by applying a "meta-SVM" to a quick (but inaccurate) SVM to approximate a slower, yet more accurate, SVM. Speedups were achieved by tuning the algorithm to quickly identify when the quick SVM was incorrect, and then reclassifying low-confidence pixels as necessary. Comparing the classification speeds of both algorithms to known baselines showed a slight speedup for large values of k (the number of clusters) for harvist-cluster, and a significant speedup for harvistSVM. Future work aims to automate the parameter tuning process required for harvistSVM, and further improve classification accuracy and speed. Additionally, this research will move documents created in Canvas into ArcGIS. The launch of the Mars Reconnaissance Orbiter (MRO) will provide a wealth of image data such as global maps of Martian weather and high resolution global images of Mars. The ability to store this new data in a georeferenced format will support future Mars missions by providing data for landing site selection and the search for water on Mars.

  9. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive the required quantity of product at every visit, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011) [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well.
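
    The LP backbone of such a method is a set-covering relaxation: choose fractional route variables of minimum total cost so that every customer is covered at least once. A tiny scipy sketch with a fixed, hand-made pool of candidate routes standing in for the column generation:

        import numpy as np
        from scipy.optimize import linprog

        # rows = customers, columns = candidate routes; A[i, j] = 1 if route j visits i
        A = np.array([[1, 0, 1, 0],
                      [1, 1, 0, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 1]])
        cost = np.array([5.0, 4.0, 6.0, 3.0])

        # cover every customer at least once: A x >= 1, i.e. -A x <= -1, 0 <= x <= 1
        res = linprog(cost, A_ub=-A, b_ub=-np.ones(4), bounds=[(0, 1)] * 4)
        print(res.x, res.fun)     # fractional LP solution and its cost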

  10. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.

  11. Taking medicines to treat tuberculosis

    MedlinePlus

    ... drugs. This is called directly observed therapy. Side Effects and Other Problems: Women who may be pregnant, who are pregnant, or who are breastfeeding should talk to their provider before taking these ...

  12. Taking Action for Healthy Kids.

    ERIC Educational Resources Information Center

    Kidd, Jill E.

    2003-01-01

    Summarizes research on relationship between physical activity, good nutrition, and academic performance. Offers several recommendations for how schools can take action to improve the nutrition and fitness of students. (PKP)

  13. Brazilian physicists take centre stage

    NASA Astrophysics Data System (ADS)

    Curtis, Susan

    2014-06-01

    With the FIFA World Cup taking place in Brazil this month, Susan Curtis travels to South America's richest nation to find out how its physicists are exploiting recent big increases in science funding.

  14. LRO Takes the Moon's Temperature

    NASA Video Gallery

    During the December 2011 lunar eclipse, LRO's Diviner instrument will take the temperature on the lunar surface. Since different rock sizes cool at different rates, scientists will be able to infer...

  15. LRO Takes the Moon's Temperature

    NASA Video Gallery

    During the June 2011 lunar eclipse, scientists will be able to get a unique view of the moon. While the sun is blocked by the Earth, LRO's Diviner instrument will take the temperature on the lunar ...

  16. Taking America To New Heights

    NASA Video Gallery

    NASA's Commercial Crew Program (CCP) is taking America to new heights with its Commercial Crew Development Round 2 (CCDev2) partners. In 2011, NASA entered into funded Space Act Agreements (SAAs) w...

  17. Home advantage in speed skating: evidence from individual data.

    PubMed

    Koning, Ruud H

    2005-04-01

    Home advantage is a well-documented phenomenon in many sports. Home advantage has been shown to exist for team sports (soccer, hockey, football, baseball, basketball) and for countries organizing sports tournaments like the Olympics and World Cup Soccer. There is also some evidence for home advantage in some individual sports, but the literature is much more limited. This paper addresses the issue of home advantage in speed skating. From a methodological point of view, it is difficult to identify home advantage, because skaters vary in their abilities and the conditions of tournaments vary. Using a generalized linear mixed model with random effects for skaters and fixed effects for skating rinks and seasons, a small but significant home advantage is found. Even though the home advantage effect exists, it is very small compared with the variation in skating times due to differences between rinks and individual abilities.
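    The modelling idea can be sketched on synthetic data (a simplified stand-in for the paper's actual specification, with statsmodels standing in for whatever software the study used): a linear mixed model with a random intercept per skater and fixed effects for rink and a home indicator.

      # Hedged sketch: mixed model with random skater intercepts on made-up data.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 600
      df = pd.DataFrame({
          "skater": rng.integers(0, 40, n),
          "rink": rng.integers(0, 8, n),
          "home": rng.integers(0, 2, n),
      })
      skater_eff = rng.normal(0, 1.0, 40)
      rink_eff = rng.normal(0, 0.5, 8)
      # 500 m times in seconds: ability + rink + a tiny home advantage + noise
      df["time"] = (36 + skater_eff[df.skater] + rink_eff[df.rink]
                    - 0.05 * df.home + rng.normal(0, 0.3, n))

      res = smf.mixedlm("time ~ C(rink) + home", df, groups="skater").fit()
      print(res.summary())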

  18. Algorithm for obtaining angular fluxes in a cell for the LUCKY and LUCKY_C multiprocessor programs

    SciTech Connect

    Moryakov, A. V.

    2012-12-15

    Basic formulas for solving the transport equation in a cell are presented. The algorithm has been implemented in the LUCKY and LUCKY_C programs. The advantages of the proposed algorithm are described.

  19. [Conclusions. The precautionary principle: its advantages and risks].

    PubMed

    Tubiana, M

    2000-01-01

    The proposed extension to health of the precautionary principle is the reaction to two social demands: the desire for greater health safety and for more transparency in the decision making process by associating the public. In medical care, all decisions are based on the balance between cost (dangers induced by the treatment) and benefit (the therapeutic effect). It is as dangerous to overestimate the cost, in other words the risks, as it is to underestimate them. The same problem is encountered in public health. If a vaccination is to be prescribed, the beneficial effects must outweigh the risks; however, these risks are inevitable and have been known to exist since the 18th century, but they have been accepted for the public good. It takes courage to make a vaccination mandatory because those who benefit from it will never know, while those who suffer from its ill effects could take legal action. In order to counter accusations, an evaluation must be made beforehand of the risks and benefits, which underlines the important role of expert opinion. Within the framework of the precautionary principle, actions cannot be taken in ignorance and, at the very least, plausible estimations must be made. The analysis of several recent events (contaminated blood, BSE, growth hormone and Creutzfeldt-Jakob disease) shows that the precautionary principle would have had a very limited impact and that only once there was sufficient knowledge was action made possible. The same is true concerning current debates (the possible risks associated with electromagnetic fields, mobile phones and radon); in these three cases, no country in the world has invoked the precautionary principle, but rather the priority has been given to research. The public understands quite readily the cost/benefit relationship. In the case of oral contraceptives or hormone replacement therapy, the public was aware of their possible health risks but judged that the advantages outweighed the risks.

  1. Adaptive computation algorithm for RBF neural network.

    PubMed

    Han, Hong-Gui; Qiao, Jun-Fei

    2012-02-01

    A novel learning algorithm is proposed for nonlinear modelling and identification using radial basis function neural networks. The proposed method simplifies neural network training through the use of an adaptive computation algorithm (ACA). In addition, the convergence of the ACA is analyzed by the Lyapunov criterion. The proposed algorithm offers two important advantages. First, the model performance can be significantly improved through the ACA, and the modelling error is uniformly ultimately bounded. Secondly, the proposed ACA can reduce computational cost and accelerate the training speed. The proposed method is then employed to model a classical nonlinear system with a limit cycle and to identify a nonlinear dynamic system. Computational complexity analysis and simulation results demonstrate the effectiveness of the proposed algorithm.

  2. Alignment algorithms for planar optical waveguides

    NASA Astrophysics Data System (ADS)

    Zheng, Yu; Duan, Ji-an

    2012-10-01

    Planar optical waveguides are key elements in modern, high-speed optical networks. An important problem facing optical fiber communication systems is optical-axis alignment and coupling between waveguide chips and transmission fibers. The advantages and disadvantages of the various algorithms used for optical-axis alignment, namely hill-climbing, pattern search, and the genetic algorithm, are analyzed. A new optical-axis alignment method for planar optical waveguides is presented which is a composite of a genetic algorithm and a pattern search algorithm. Experiments have demonstrated the proposed method's feasibility; compared with hill climbing, the search process can reduce the number of movements by 88% and the search time by 83%. Moreover, the search success rate in the experiments reached 100%.
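    A toy version of the hybrid conveys the idea (the Gaussian coupling model is an assumption standing in for measured optical power): a small genetic algorithm performs the coarse global search over (x, y) offsets, and a compass-style pattern search refines the result.

      # Toy GA + pattern-search hybrid for a 2-D alignment problem.
      import numpy as np

      rng = np.random.default_rng(0)
      TRUE = np.array([12.3, -7.8])                 # unknown optimal offset (um)

      def power(p):                                 # simulated coupled power
          return np.exp(-np.sum((p - TRUE) ** 2) / 50.0)

      # stage 1: small genetic algorithm (elitism + mutated offspring)
      pop = rng.uniform(-50, 50, (30, 2))
      for _ in range(40):
          fit = np.array([power(p) for p in pop])
          parents = pop[np.argsort(fit)[-10:]]      # keep the 10 fittest
          children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 2.0, (20, 2))
          pop = np.vstack([parents, children])
      best = pop[np.argmax([power(p) for p in pop])]

      # stage 2: compass (pattern) search refinement
      step = 1.0
      while step > 1e-3:
          moved = False
          for d in np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]) * step:
              if power(best + d) > power(best):
                  best, moved = best + d, True
          if not moved:
              step /= 2.0
      print("estimated offset:", np.round(best, 3), " power:", round(power(best), 6))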

  3. Genetic algorithms and supernovae type Ia analysis

    SciTech Connect

    Bogdanos, Charalampos; Nesseris, Savvas E-mail: nesseris@nbi.dk

    2009-05-15

    We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model.

  4. Disconnects between popular discourse and home advantage research: what can fans and media tell us about the home advantage phenomenon?

    PubMed

    Smith, D Randall

    2005-04-01

    Many of the factors identified as influencing the home advantage have an underlying social basis, presumably through the influence exerted by the home crowd. Beliefs in the home advantage and the causes of that advantage also have a social basis: sports coverage and fan discourse focus on some aspects of the phenomenon at the expense of others. This paper compares home advantage research with the use of the concept in media narratives and fan Internet postings. While there are many similarities across sources, the findings suggest three major differences. Fans, and to a lesser extent the media, (1) focus almost exclusively on winning as the evidence for a home advantage, (2) see crowd noise as the main factor for the home advantage, and (3) treat the phenomenon as much more transient than is suggested by academic studies. I identify several features of the phenomenon that facilitate popular views of the home advantage and suggest how future research may benefit from incorporating those views.

  5. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
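    For reference, the serial Thomas algorithm that these pipelined variants distribute across processors is a forward elimination followed by a back substitution; a plain single-processor sketch:

      # Serial Thomas algorithm for a tridiagonal system, checked against numpy.
      import numpy as np

      def thomas(a, b, c, d):
          # a = sub-, b = main, c = super-diagonal, d = RHS; a[0], c[-1] unused
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):                       # forward elimination
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):              # back substitution
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      n = 6
      a, b, c = np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0)
      d = np.arange(1.0, n + 1)
      A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))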

  6. Combined string searching algorithm based on knuth-morris- pratt and boyer-moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or this task is solved unseen by the users while they are working with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed: the larger the shift of the pattern relative to the string in case of a mismatch between pattern and string characters, the higher the algorithm's running speed. This article offers a combined algorithm, developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: the Knuth-Morris-Pratt algorithm is based upon forward pattern matching, and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm acquires the larger shift in case of a character mismatch. The article provides an example which illustrates the results of the Boyer-Moore, Knuth-Morris-Pratt, and combined algorithms and shows the advantage of the latter in solving the string searching problem.
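    The combined shift rule itself is in the paper, but the two tables it draws on are easy to sketch: the Knuth-Morris-Pratt failure function (forward matching) and the Boyer-Moore bad-character table (backward matching).

      # The two building blocks the combined algorithm draws its shifts from.
      def kmp_failure(pat):
          # fail[i] = length of the longest proper prefix of pat[:i+1]
          # that is also a suffix of it
          fail = [0] * len(pat)
          k = 0
          for i in range(1, len(pat)):
              while k and pat[i] != pat[k]:
                  k = fail[k - 1]
              if pat[i] == pat[k]:
                  k += 1
              fail[i] = k
          return fail

      def bad_char(pat):
          # distance from each character's last occurrence to the pattern end
          m = len(pat)
          return {ch: m - 1 - i for i, ch in enumerate(pat[:-1])}

      pat = "ABABC"
      # after a mismatch at pattern position j (forward matching) KMP shifts by
      # j - fail[j-1]; on a mismatching text character ch (backward matching)
      # Boyer-Moore shifts by bad_char(pat).get(ch, len(pat))
      print("failure function:", kmp_failure(pat))
      print("bad-character shifts:", bad_char(pat))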

  7. Taking a History of Childhood Trauma in Psychotherapy

    PubMed Central

    SAPORTA, JOSÉ A.; GANS, JEROME S.

    1995-01-01

    The authors examine the process of taking an initial history of childhood abuse and trauma in psychodynamic psychotherapy. In exploring the advantages, complexities, and potential complications of this practice, they hope to heighten the sensitivities of clinicians taking trauma histories. Emphasis on the need to be active in eliciting important historical material is balanced with discussion of concepts that can help therapists avoid interpersonal dynamics that reenact and perpetuate the traumas the therapy seeks to treat. Ensuring optimal psychotherapeutic treatment for patients who have experienced childhood trauma requires attention to the following concepts: a safe holding environment, destabilization, compliance, the repetition compulsion, and projective identification. PMID:22700250

  8. Fever and Taking Your Child's Temperature

    MedlinePlus

    ... a mercury thermometer.) Tips for Taking Temperatures: As any parent knows, taking a squirming child's ...

  9. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  10. [Multispectral image compression algorithms for color reproduction].

    PubMed

    Liang, Wei; Zeng, Ping; Luo, Xue-mei; Wang, Yi-feng; Xie, Kun

    2015-01-01

    In order to improve the compression efficiency of multispectral images and further facilitate their storage and transmission in applications such as color reproduction, where high color accuracy is desired, WF serial methods are proposed and the APWS_RA algorithm is designed. The WF_APWS_RA algorithm, which has the advantages of low complexity, good illuminant stability, and support for consistent color reproduction across devices, is then presented. The conventional MSE-based wavelet embedded coding principle is first studied, and a color perception distortion criterion and visual characteristic matrix W are proposed. Meanwhile, the APWS_RA algorithm is formed by optimizing the rate allocation strategy of APWS. Finally, combining the above technologies, a new coding method named WF_APWS_RA is designed. A colorimetric error criterion is used in the algorithm and APWS_RA is applied to the visually weighted multispectral image. In WF_APWS_RA, affinity propagation clustering is utilized to exploit the spectral correlation of the weighted image. A two-dimensional wavelet transform is then used to remove the spatial redundancy. Subsequently, an error compensation mechanism and rate pre-allocation are combined to accomplish the embedded wavelet coding. Experimental results show that, at the same bit rate, WF serial algorithms retain color better than classical coding algorithms: APWS_RA preserves the least spectral error, and the WF_APWS_RA algorithm has an obvious superiority in color accuracy.

  11. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements among other causes, and is manifested as a change in the pattern recorded in the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method we obtain an equation with which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (such as THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.
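    An activity index in this spirit (an illustrative stand-in, not the authors' exact estimator) can be computed directly from frame-to-frame intensity changes, normalized by the mean intensity as the perturbative picture suggests:

      # Simple frame-differencing activity index on a synthetic speckle stack.
      import numpy as np

      def activity_index(frames):
          # frames: array of shape (T, H, W) of speckle images
          frames = np.asarray(frames, dtype=float)
          diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
          return diffs / frames.mean()      # per-step change, relative to mean level

      rng = np.random.default_rng(0)
      base = rng.random((64, 64))                                  # static pattern
      frames = base + 0.02 * rng.standard_normal((20, 64, 64))     # small activity
      print(np.round(activity_index(frames), 4))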

  12. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  13. A procedure and program to calculate shuttle mask advantage

    NASA Astrophysics Data System (ADS)

    Balasinski, A.; Cetin, J.; Kahng, A.; Xu, X.

    2006-10-01

    A well-known recipe for reducing the mask cost component in product development is to place non-redundant elements of layout databases related to multiple products on one reticle plate [1,2]. Such reticles are known as multi-product, multi-layer, or, in general, multi-IP masks. The composition of the mask set should minimize not only the layout placement cost, but also the cost of the manufacturing process, design flow setup, and product design and introduction to market. An important factor is the quality check, which should be expeditious and enable thorough visual verification to avoid costly modifications once the data is transferred to the mask shop. In this work, in order to enable the layer placement and quality check procedure, we proposed an algorithm where mask layers are first lined up according to the price and field tone [3]. Then, depending on the product die size, expected fab throughput, and scribeline requirements, the subsequent product layers are placed on the masks with different grades. The actual reduction of this concept to practice allowed us to understand the tradeoffs between the automation of layer placement and setup-related constraints. For example, the limited options for the number of layers per plate, dictated by the die size and other design feedback, made us consider layer pairing based not only on the final price of the mask set, but also on the cost of mask design and fab-friendliness. We showed that it may be advantageous to introduce manual layer pairing to ensure that, e.g., all interconnect layers would be placed on the same plate, allowing for easy and simultaneous design fixes. Another enhancement was to allow some flexibility in mixing and matching of the layers such that non-critical ones requiring a low mask grade would be placed in a less restrictive way, to reduce the count of orphan layers. In summary, we created a program to automatically propose and visualize shuttle mask architecture for design verification.

  14. Plant circadian clocks increase photosynthesis, growth, survival, and competitive advantage.

    PubMed

    Dodd, Antony N; Salathia, Neeraj; Hall, Anthony; Kévei, Eva; Tóth, Réka; Nagy, Ferenc; Hibberd, Julian M; Millar, Andrew J; Webb, Alex A R

    2005-07-22

    Circadian clocks are believed to confer an advantage to plants, but the nature of that advantage has been unknown. We show that a substantial photosynthetic advantage is conferred by correct matching of the circadian clock period with that of the external light-dark cycle. In wild type and in long- and short-circadian period mutants of Arabidopsis thaliana, plants with a clock period matched to the environment contain more chlorophyll, fix more carbon, grow faster, and survive better than plants with circadian periods differing from their environment. This explains why plants gain advantage from circadian control.

  15. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    NASA Astrophysics Data System (ADS)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    The application of GIS advances the ability to analyze remote sensing imagery, and the classification and extraction of remote sensing images are the primary information sources for GIS in LUCC applications. How to increase classification accuracy is therefore an important topic of remote sensing research; adding features and developing new classification methods are two ways to improve it. The ant colony algorithm, whose agents exemplify a uniform intelligent computation mode in the nature-inspired computation field, is a new preliminary swarm-intelligence method when applied to remote sensing image classification. Studying the applicability of the ant colony algorithm with additional features and exploring its advantages and performance are therefore of great significance. The study takes the outskirts of Fuzhou, an area with complicated land use in Fujian Province, as the study area. A multi-source database was built integrating spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, Slope, Aspect), and textural information (Mean, Variance, Homogeneity, Contrast, Dissimilarity, Entropy, Second Moment, Correlation). Classification rules based on different characteristics were discovered from the samples through the ant colony algorithm, and a classification test was performed based on these rules. At the same time, we compared the accuracies with traditional maximum likelihood, C4.5, and rough set classifications. The study showed that the accuracy of classification based on the ant colony algorithm is higher than that of the other methods. In addition, near-term land use and cover changes in Fuzhou were studied and mapped using remote sensing technology based on the ant colony algorithm.

  16. Algorithm For Hypersonic Flow In Chemical Equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.
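    The chemical step can be illustrated in miniature (made-up standard chemical potentials for a toy H2/H mixture; scipy stands in for the solver): minimize the dimensionless Gibbs free energy of an ideal mixture subject to conservation of hydrogen atoms.

      # Toy Gibbs-free-energy minimization with element conservation.
      # mu0 values are invented for illustration.
      import numpy as np
      from scipy.optimize import minimize

      mu0 = np.array([-20.0, -5.0])        # mu0/RT for [H2, H] (assumed numbers)
      atoms = np.array([2.0, 1.0])         # H atoms per molecule of each species
      b = 2.0                              # total moles of H atoms at this point

      def gibbs(n):                        # G/RT of an ideal mixture
          ntot = n.sum()
          return np.sum(n * (mu0 + np.log(n / ntot)))

      cons = {"type": "eq", "fun": lambda n: atoms @ n - b}
      res = minimize(gibbs, x0=np.array([0.9, 0.2]), constraints=[cons],
                     bounds=[(1e-10, None)] * 2, method="SLSQP")
      print("equilibrium moles [H2, H]:", np.round(res.x, 5))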

  17. Who Takes the Second Chance? Implementing Educational Equality in Adult Basic Education in a Swedish Context.

    ERIC Educational Resources Information Center

    Fransson, Anders; Larsson, Staffan

    This paper presents an overview of adult basic education in Sweden, focusing on what types of adults take advantage of basic educational opportunities and when they do so. It includes two case studies, one of auxiliary nurses and one of factory workers, that describe what motivates these employees to take literacy education or further education…

  18. Take Charge of Your Career

    ERIC Educational Resources Information Center

    Brown, Marshall A.

    2013-01-01

    Today's work world is full of uncertainty. Every day, people hear about another organization going out of business, downsizing, or rightsizing. To prepare for these uncertain times, one must take charge of their own career. This article presents some tips for surviving in today's world of work: (1) Be self-managing; (2) Know what you…

  19. Taking Stands for Social Justice

    ERIC Educational Resources Information Center

    Lindley, Lorinda; Rios, Francisco

    2004-01-01

    In this paper the authors describe efforts to help students take a stand for social justice in the College of Education at one predominantly White institution in the western Rocky Mountain region. The authors outline the theoretical frameworks that inform this work and the context of our work. The focus is on specific pedagogical strategies used…

  20. Four Takes on Tough Times

    ERIC Educational Resources Information Center

    Rebell, Michael A.; Odden, Allan; Rolle, Anthony; Guthrie, James W.

    2012-01-01

    Educational Leadership talks with four experts in the fields of education policy and finance about how schools can weather the current financial crisis. Michael A. Rebell focuses on the recession and students' rights; Allan Odden suggests five steps schools can take to improve in tough times; Anthony Rolle describes the tension between equity and…

  1. Experiencing discrimination increases risk taking.

    PubMed

    Jamieson, Jeremy P; Koslov, Katrina; Nock, Matthew K; Mendes, Wendy Berry

    2013-02-01

    Prior research has revealed racial disparities in health outcomes and health-compromising behaviors, such as smoking and drug abuse. It has been suggested that discrimination contributes to such disparities, but the mechanisms through which this might occur are not well understood. In the research reported here, we examined whether the experience of discrimination affects acute physiological stress responses and increases risk-taking behavior. Black and White participants each received rejecting feedback from partners who were either of their own race (in-group rejection) or of a different race (out-group rejection, which could be interpreted as discrimination). Physiological (cardiovascular and neuroendocrine) changes, cognition (memory and attentional bias), affect, and risk-taking behavior were assessed. Significant participant race × partner race interactions were observed. Cross-race rejection, compared with same-race rejection, was associated with lower levels of cortisol, increased cardiac output, decreased vascular resistance, greater anger, increased attentional bias, and more risk-taking behavior. These data suggest that perceived discrimination is associated with distinct profiles of physiological reactivity, affect, cognitive processing, and risk taking, implicating direct and indirect pathways to health disparities.

  2. Taking Stock and Standing down

    ERIC Educational Resources Information Center

    Peeler, Tom

    2009-01-01

    Standing down is an action the military takes to review, regroup, and reorganize. Unfortunately, it often comes after an accident or other tragic event. To stop losses, the military will "stand down" until they are confident they can resume safe operations. Standing down is good for everyone, not just the military. In today's fast-paced world,…

  3. College Presidents Take on 21

    ERIC Educational Resources Information Center

    Fain, Paul

    2008-01-01

    College presidents have long gotten flak for refusing to take controversial stands on national issues. A large group of presidents opened an emotionally charged national debate on the drinking age. In doing so, they triggered an avalanche of news-media coverage and a fierce backlash. While the criticism may sting, the prime-time fracas may help…

  4. Pair take top science posts

    NASA Astrophysics Data System (ADS)

    Pockley, Peter

    2008-11-01

    Australia's science minister Kim Carr has appointed physical scientists to key posts. Penny Sackett, an astronomer, takes over as the government's chief scientist this month, while in January geologist Megan Clark will become chief executive of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the country's largest research agency. Both five-year appointments have been welcomed by researchers.

  5. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.

  6. When perspective taking increases taking: reactive egoism in social interaction.

    PubMed

    Epley, Nicholas; Caruso, Eugene; Bazerman, Max H

    2006-11-01

    Group members often reason egocentrically, believing that they deserve more than their fair share of group resources. Leading people to consider other members' thoughts and perspectives can reduce these egocentric (self-centered) judgments such that people claim that it is fair for them to take less; however, the consideration of others' thoughts and perspectives actually increases egoistic (selfish) behavior such that people actually take more of available resources. A series of experiments demonstrates this pattern in competitive contexts in which considering others' perspectives activates egoistic theories of their likely behavior, leading people to counter by behaving more egoistically themselves. This reactive egoism is attenuated in cooperative contexts. Discussion focuses on the implications of reactive egoism in social interaction and on strategies for alleviating its potentially deleterious effects. PMID:17059307

  7. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  8. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  9. The Relationship between Puberty and Risk Taking in the Real World and in the Laboratory

    PubMed Central

    Collado-Rodriguez, A.; MacPherson, L.; Kurdziel, G.; Rosenberg, L. A.; Lejuez, C.W.

    2014-01-01

    Adolescence is marked by the emergence and escalation of risk taking. Puberty has been long-implicated as constituting vulnerability for risk behavior during this developmental period. Sole reliance on self-reports of risk taking however poses limitations to understanding this complex relationship. There exist potential advantages of complementing self-reports by using the BART-Y laboratory task, a well-validated measure of adolescent risk taking. Toward this end, we examined the association between self-reported puberty and both self-reported and BART-Y risk taking in 231 adolescents. Results showed that pubertal status predicted risk taking using both methodologies above and beyond relevant demographic characteristics. Advantages of a multimodal assessment toward understanding the effects of puberty in adolescent risk taking are discussed and future research directions offered. PMID:24999291

  10. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  11. A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT under a circular source trajectory.

    PubMed

    Tang, Xiangyang; Hsieh, Jiang; Hagiwara, Akira; Nilsen, Roy A; Thibault, Jean-Baptiste; Drapkin, Evgeny

    2005-08-21

    The proposed weighted CB-FBP algorithm can be implemented in either the native CB geometry or the so-called cone-parallel geometry. Taking the cone-parallel geometry as an example, the experimental evaluation shows that, up to a moderate cone angle corresponding to a detector dimension of 64 x 0.625 mm, the CB artefacts can be substantially suppressed by the proposed algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering and data manipulation efficiency, are maintained.

  12. Referee bias contributes to home advantage in English Premiership football.

    PubMed

    Boyko, Ryan H; Boyko, Adam R; Boyko, Mark G

    2007-09-01

    Officiating bias is thought to contribute to home advantage. Recent research has shown that sports with subjective officiating tend to experience greater home advantage and that referees' decisions can be influenced by crowd noise, but little work has been done to examine whether individual referees vary in their home bias or whether biased decisions contribute to overall home advantage. We develop an ordinal regression model to determine whether various measures of home advantage are affected by the official for the match and by crowd size while controlling for team ability. We examine 5244 English Premier League (EPL) match results involving 50 referees and find that home bias differs between referees. Individual referees give significantly different levels of home advantage, measured as goal differential between the home and away teams, although the significance of this result depends on one referee with a particularly high home advantage (an outlier). Referees vary significantly and robustly in their yellow card and penalty differentials even excluding the outlier. These results confirm that referees are responsible for some of the observed home advantage in the EPL and suggest that home advantage is dependent on the subjective decisions of referees that vary between individuals. We hypothesize that individual referees respond differently to factors such as crowd noise and suggest further research looking at referees' psychological and behavioural responses to biased crowds.
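    The modelling idea can be sketched on synthetic data (statsmodels' ordered logit standing in for the paper's fuller specification with referee and team-strength terms): regress an ordered match outcome on crowd size.

      # Hedged sketch of an ordinal regression of match outcome on attendance.
      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(0)
      n = 1000
      crowd = rng.uniform(10, 76, n)                  # attendance, thousands
      latent = 0.02 * crowd + rng.logistic(size=n)    # bigger crowds help home side
      outcome = pd.Series(pd.Categorical(
          np.where(latent < 0.3, "away", np.where(latent < 1.0, "draw", "home")),
          categories=["away", "draw", "home"], ordered=True))

      res = OrderedModel(outcome, crowd.reshape(-1, 1), distr="logit").fit(
          method="bfgs", disp=False)
      print(res.summary())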

  13. The Female Educational Advantage among Adolescent Children of Immigrants

    ERIC Educational Resources Information Center

    Feliciano, Cynthia

    2012-01-01

    The female advantage in educational achievement is especially puzzling in the case of children of immigrants because it departs from the pattern in most immigrants' home countries. Using data from the Children of Immigrants Longitudinal Study (CILS), this study explores the female advantage in grades and expectations among adolescents and finds…

  14. Information Technology, Core Competencies, and Sustained Competitive Advantage.

    ERIC Educational Resources Information Center

    Byrd, Terry Anthony

    2001-01-01

    Presents a model that depicts a possible connection between competitive advantage and information technology. Focuses on flexibility of the information technology infrastructure as an enabler of core competencies, especially mass customization and time-to-market, that have a relationship to sustained competitive advantage. (Contains 82…

  15. Comparative Advantage, Relative Wages, and the Accumulation of Human Capital.

    ERIC Educational Resources Information Center

    Teulings, Coen N.

    2005-01-01

    I apply Ricardo's principle of comparative advantage to a theory of factor substitutability in a model with a continuum of worker and job types. Highly skilled workers have a comparative advantage in complex jobs. The model satisfies the distance-dependent elasticity of substitution (DIDES) characteristic: substitutability between types declines…

  16. Polysemy Advantage with Abstract but Not Concrete Words

    ERIC Educational Resources Information Center

    Jager, Bernadet; Cleland, Alexandra A.

    2016-01-01

    It is a robust finding that ambiguous words are recognized faster than unambiguous words. More recent studies (e.g., Rodd et al. in "J Mem Lang" 46:245-266, 2002) now indicate that this "ambiguity advantage" may in reality be a "polysemy advantage": caused by related senses (polysemy) rather than unrelated meanings…

  17. Home advantage in the Winter Olympics (1908-1998).

    PubMed

    Balmer, N J; Nevill, A M; Williams, A M

    2001-02-01

    We obtained indices of home advantage, based on the medals won by competing nations, for each event held at the Winter Olympics from 1908 to 1998. These indices were designed to assess home advantage while controlling for nation strength, changes in the number of medals on offer and the performance of 'non-hosting' nations. Some evidence of home advantage was found in figure skating, freestyle skiing, ski jumping, alpine skiing and short track speed skating. In contrast, little or no home advantage was observed in ice hockey, Nordic combined, Nordic skiing, bobsled, luge, biathlon or speed skating. When all events were combined, a significant home advantage was observed (P = 0.029), although no significant differences in the extent of home advantage were found between events (P > 0.05). When events were grouped according to whether they were subjectively assessed by judges, significantly greater home advantage was observed in the subjectively assessed events (P = 0.037). This was a reflection of better home performances, suggesting that judges were scoring home competitors disproportionately higher than away competitors. Familiarity with local conditions was shown to have some effect, particularly in alpine skiing, although the bobsled and luge showed little or no advantage over other events. Regression analysis showed that the number of time zones and direction of travel produced no discernible trends or differences in performance.

  18. The Advantages of Using Planned Comparisons over Post Hoc Tests.

    ERIC Educational Resources Information Center

    Kuehne, Carolyn C.

    There are advantages to using a priori or planned comparisons rather than omnibus multivariate analysis of variance (MANOVA) tests followed by post hoc or a posteriori testing. A small heuristic data set is used to illustrate these advantages. An omnibus MANOVA test was performed on the data followed by a post hoc test (discriminant analysis). A…

  19. Reasoning about Other People's Beliefs: Bilinguals Have an Advantage

    ERIC Educational Resources Information Center

    Rubio-Fernandez, Paula; Glucksberg, Sam

    2012-01-01

    Bilingualism can have widespread cognitive effects. In this article we investigate whether bilingualism might have an effect on adults' abilities to reason about other people's beliefs. In particular, we tested whether bilingual adults might have an advantage over monolingual adults in false-belief reasoning analogous to the advantage that has…

  20. Aging and Text Comprehension: Interpretation and Domain Knowledge Advantage

    ERIC Educational Resources Information Center

    Jeong, Heisawn; Kim, Hyo Sik

    2009-01-01

    In this study, young, middle-aged, and elderly adults read two different history texts. In the "knowledge advantage" condition, readers read a history text about an event that was well-known to readers of all ages but most familiar to elderly adults. In the "no advantage" condition, readers read a history text about a political situation of a…

  1. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  2. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
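    As a generic illustration of what higher order buys (not the paper's schemes), compare 2nd- and 6th-order central differences for the derivative of sin(x) on a coarse grid:

      # Error of 2nd- vs 6th-order central differences for d/dx sin(x).
      import numpy as np

      h = 0.1
      x = np.arange(-0.5, 0.5 + h / 2, h)
      f = np.sin

      d2 = (f(x + h) - f(x - h)) / (2 * h)
      d6 = (45 * (f(x + h) - f(x - h)) - 9 * (f(x + 2 * h) - f(x - 2 * h))
            + (f(x + 3 * h) - f(x - 3 * h))) / (60 * h)

      print("2nd-order max error:", np.abs(d2 - np.cos(x)).max())
      print("6th-order max error:", np.abs(d6 - np.cos(x)).max())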

  3. Miniature lens design and optimization with liquid lens element via genetic algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Yi-Chin; Tsai, Chen-Mu

    2008-07-01

    This paper proposes a design and optimization method, via a genetic algorithm (GA), applied to a newly developed optical element: the liquid lens as a fast focus group. This design takes advantage of quick focusing, which works simultaneously with modern CMOS sensors in order to significantly improve image quality. Such improvement is important, especially for medical imaging technology such as laparoscopy. However, optical design with a liquid lens element has not yet achieved success; one of the major reasons is the limited availability of anomalous-dispersion glasses and suitable Abbe numbers, which complicates the correction of aberrations. From the point of view of aberration theory, most aberrations, particularly the axial chromatic and lateral color aberrations of an optical lens, depend strongly on the selection of optical glass. Therefore, in the present research, some optical layouts with a liquid lens are first discussed; next, genetic algorithms are used to replace traditional LDS (least damping square) to search for the best solution using a liquid lens and to find the best glass sets for the combination of anomalous-dispersion glass and the materials inside a liquid lens. During optimization, the 'geometric optics' theory and a 'multiple dynamic crossover and random gene mutation' technique are employed. Through implementation of the algorithms proposed in this paper, satisfactory elimination of axial and lateral color aberration can be achieved.

  4. A fully scalable online pre-processing algorithm for short oligonucleotide microarray atlases

    PubMed Central

    Lahti, Leo; Torrente, Aurora; Elo, Laura L.; Brazma, Alvis; Rung, Johan

    2013-01-01

    Rapid accumulation of large and standardized microarray data collections is opening up novel opportunities for holistic characterization of genome function. The limited scalability of current preprocessing techniques has, however, formed a bottleneck for full utilization of these data resources. Although short oligonucleotide arrays constitute a major source of genome-wide profiling data, scalable probe-level techniques have been available only for few platforms based on pre-calculated probe effects from restricted reference training sets. To overcome these key limitations, we introduce a fully scalable online-learning algorithm for probe-level analysis and pre-processing of large microarray atlases involving tens of thousands of arrays. In contrast to the alternatives, our algorithm scales up linearly with respect to sample size and is applicable to all short oligonucleotide platforms. The model can use the most comprehensive data collections available to date to pinpoint individual probes affected by noise and biases, providing tools to guide array design and quality control. This is the only available algorithm that can learn probe-level parameters based on sequential hyperparameter updates at small consecutive batches of data, thus circumventing the extensive memory requirements of the standard approaches and opening up novel opportunities to take full advantage of contemporary microarray collections. PMID:23563154

  5. [Non-contact heart rate estimation based on joint approximate diagonalization of eigenmatrices algorithm].

    PubMed

    Wang Yinazhi; Han Tailin

    2014-08-01

    Based on imaging photoplethysmography (iPPG) and blind source separation (BSS) theory, the authors put forward a method for non-contact heartbeat frequency estimation. Using video images of the human face recorded in ambient light with a webcam, we detected the face in software and separated the detected facial image into its three RGB channels. Preprocessing (normalization, whitening, etc.) was then carried out on the RGB data. After applying independent component analysis (ICA) theory and the joint approximate diagonalization of eigenmatrices (JADE) algorithm, we estimated the heart rate frequency through spectrum analysis. Using Bland-Altman analysis of the agreement with a commercial pulse oximetry sensor, the root mean square error of the algorithm's result was calculated as 2.06 beats/min. This indicates that the algorithm can realize non-contact measurement of heart rate and lays the foundation for remote, non-contact measurement of multiple physiological parameters. PMID:25464777
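    The processing chain can be sketched on synthetic data, with scikit-learn's FastICA standing in for JADE (for which there is no standard Python implementation): mix a 72 beats/min pulse into three "RGB" traces, unmix them, and read the heart rate off the spectral peak.

      # iPPG-style sketch: ICA unmixing of synthetic RGB traces, then FFT peak.
      import numpy as np
      from sklearn.decomposition import FastICA

      fs, t = 30.0, np.arange(0, 30, 1 / 30.0)        # 30 s of 30 fps video
      pulse = np.sin(2 * np.pi * 1.2 * t)             # 1.2 Hz = 72 beats/min
      rng = np.random.default_rng(0)
      noise = rng.standard_normal((2, t.size))
      rgb = np.array([[0.3, 0.5, 0.1],                # arbitrary mixing matrix
                      [0.6, 0.2, 0.3],
                      [0.2, 0.4, 0.6]]) @ np.vstack([pulse, noise])

      sources = FastICA(n_components=3, random_state=0).fit_transform(rgb.T).T
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      band = (freqs > 0.7) & (freqs < 4.0)            # plausible heart-rate band
      best = max(sources, key=lambda s: np.abs(np.fft.rfft(s))[band].max())
      bpm = 60 * freqs[band][np.argmax(np.abs(np.fft.rfft(best))[band])]
      print(f"estimated heart rate: {bpm:.1f} beats/min")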

  6. Public and expert collaborative evaluation model and algorithm for enterprise knowledge

    NASA Astrophysics Data System (ADS)

    Le, Chengyi; Gu, Xinjian; Pan, Kai; Dai, Feng; Qi, Guoning

    2013-08-01

    Knowledge is becoming the most important resource for more and more enterprises and increases exponentially, but there is no effective method to evaluate it cogently. Based on Web 2.0, this article first builds an enterprise knowledge sharing model. Synthetically taking advantage of the convenience and low cost of public evaluation and of the specialty of peer review, a public and expert collaborative evaluation (PECE) model and algorithm for enterprise knowledge are put forward. By analyzing the interaction between users' domain weights and the scores of knowledge points, the PECE model and algorithm serve to recognise valuable knowledge and domain experts efficiently and therefore improve the ordering and utilisation of knowledge. This article also studies malicious and casual evaluation by users, and a method is proposed to update users' domain weights. Finally, a case of a knowledge sharing system for a manufacturing enterprise is developed and realised. Users' behaviour of publishing and evaluating knowledge is simulated and then analysed to verify the feasibility of the PECE algorithm based on the system.

  7. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information; the spatial correlation of the background clutter is neglected, which leaves the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, jointly using both spectral and spatial information, three directional background subspaces are created, along the image height direction, the image width direction, and the spectral direction, respectively. Then the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is given through the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an expansion of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have proved the stability of the detection results.
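    The spectral-only baseline that 3D-LOSP extends can be sketched on a synthetic cube (leading eigenvectors serve as the background basis; the dimensions and the implanted anomaly are invented):

      # Spectral orthogonal subspace projection anomaly score on synthetic data.
      import numpy as np

      rng = np.random.default_rng(0)
      bands, npix = 50, 400
      endmembers = rng.standard_normal((3, bands))      # 3 background endmembers
      cube = (rng.random((npix, 3)) @ endmembers
              + 0.01 * rng.standard_normal((npix, bands)))
      cube[7] += 0.8 * rng.standard_normal(bands)       # implant one anomaly

      X = cube - cube.mean(0)
      _, _, vt = np.linalg.svd(X, full_matrices=False)  # leading eigenvectors
      B = vt[:3].T                                      # span the background
      P_perp = np.eye(bands) - B @ B.T                  # orthogonal projector

      score = np.einsum("ij,ij->i", X @ P_perp, X)      # residual energy per pixel
      print("most anomalous pixel:", int(np.argmax(score)))   # expect 7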

  8. The Effect of Peptide Identification Search Algorithms on MS2-Based Label-Free Protein Quantification

    PubMed Central

    Degroeve, Sven; Staes, An; De Bock, Pieter-Jan

    2012-01-01

    Several approaches exist for the quantification of proteins in complex samples processed by liquid chromatography-mass spectrometry followed by fragmentation analysis (MS2). One of these approaches is label-free MS2-based quantification, which takes advantage of the information computed from MS2 spectrum observations to estimate the abundance of a protein in a sample. As a first step in this approach, fragmentation spectra are typically matched to the peptides that generated them by a search algorithm. Because different search algorithms identify overlapping but non-identical sets of peptides, here we investigate whether these differences in peptide identification have an impact on the quantification of the proteins in the sample. We therefore evaluated the effect of using different search algorithms by examining the reproducibility of protein quantification in technical repeat measurements of the same sample. From our results, it is clear that a search engine effect does exist for MS2-based label-free protein quantification methods. As a general conclusion, it is recommended to address the overall possibility of search engine-induced bias in the protein quantification results of label-free MS2-based methods by performing the analysis with two or more distinct search engines. PMID:22804230

  10. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained, for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To speed up the computation, the algorithm was parallelized with the OpenMP standard, along with an implementation designed to take advantage of new accelerator technologies. Simulation results show a substantial speedup for the parallelized algorithm. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
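
    The paper solves the master equation for the full probability distribution; for contrast, individual realizations of the same finite-volume stochastic process can be sampled with a direct Gillespie-type simulation. The sketch below, using the constant kernel as an example, illustrates the underlying process and is not the authors' numerical algorithm.

      import random

      def gillespie_coalescence(masses, kernel, volume, t_end, rng=random.Random(0)):
          """Sample one trajectory of stochastic coalescence: each unordered
          pair (i, j) merges with rate kernel(x_i, x_j) / volume."""
          t = 0.0
          while t < t_end and len(masses) > 1:
              pairs = [(i, j) for i in range(len(masses))
                              for j in range(i + 1, len(masses))]
              rates = [kernel(masses[i], masses[j]) / volume for i, j in pairs]
              total = sum(rates)
              if total == 0.0:
                  break
              t += rng.expovariate(total)             # time to the next coalescence
              r, acc = rng.random() * total, 0.0
              for (i, j), rate in zip(pairs, rates):  # pick a pair by its rate
                  acc += rate
                  if acc >= r:
                      masses[i] += masses[j]
                      del masses[j]
                      break
          return masses

      # 50 unit-mass particles, constant kernel
      spectrum = gillespie_coalescence([1.0] * 50, lambda x, y: 1.0, 1.0, 0.05)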

  11. Optimal configuration algorithm of a satellite transponder

    NASA Astrophysics Data System (ADS)

    Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.

    2016-04-01

    This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is modelled as a weighted directed graph, represented as a plexus structure in the program. The paper considers an example application of the algorithm to a typical transparent repeater scheme, and the complexity of the algorithm is calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment, and input-output ports ranked in accordance with their priority. The described constraints significantly reduce the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions promptly.

  12. Home advantage in the Six Nations Rugby Union tournament.

    PubMed

    Thomas, Sion; Reeves, Colin; Bell, Andrew

    2008-02-01

    This study examined whether home advantage occurred in the Six Nations Rugby Union tournament. Data were gathered from the final championship standings from the tournament's inception in 2000 to the recently completed 2007 season. Home advantage for each championship season was defined as the number of points won by teams playing at home, expressed as a percentage of all points gained either at home or away. Home advantage across the eight seasons of competition ranged from 53% (2005) to 70% (2006). There was an overall statistically significant home advantage of 61% for the 120 matches played in the Six Nations tournament between 2000 and 2007. The percentage of points won at home by each country was also analysed. Again, the evidence supported home advantage amongst all competing nations, regardless of team quality.

  13. Social network analysis: foundations and frontiers on advantage.

    PubMed

    Burt, Ronald S; Kilduff, Martin; Tasselli, Stefano

    2013-01-01

    We provide an overview of social network analysis focusing on network advantage as a lens that touches on much of the area. For reasons of good data and abundant research, we draw heavily on studies of people in organizations. Advantage is traced to network structure as a proxy for the distribution of variably sticky information in a population. The network around a person indicates the person's access and control in the distribution. Advantage is a function of information breadth, timing, and arbitrage. Advantage is manifest in higher odds of proposing good ideas, more positive evaluations and recognition, higher compensation, and faster promotions. We discuss frontiers of advantage contingent on personality, cognition, embeddedness, and dynamics.

  14. Beyond the University of Racial Diversity: Some Remarks on Race, Diversity, (Dis)Advantage and Affirmative Action

    ERIC Educational Resources Information Center

    Waghid, Y.

    2010-01-01

    The compelling essays in this issue of the journal take on the often contentious and complex issue of racial affirmative action. I do not wish to repeat the arguments authors offer either in defence or against student admissions to a university on the grounds of race, (dis)advantage, class, gender, and so on. Rather, I wish to respond to a…

  15. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the sensation closest to that of a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides an optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and the simulated driver. One of the main limitations of classical washout filters is that they are tuned for the worst-case scenario. This tuning is based on trial and error, and is affected by the experience of drivers and programmers, making it the most significant obstacle to full motion platform utilisation. It leads to an inflexible structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues, and the impact of the different classical washout filter parameters on those cues, remain opaque to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching its physical limitations.
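
    As a hedged illustration of the tuning loop described above, the sketch below is a generic real-coded genetic algorithm minimising a user-supplied cost. In the paper the cost would combine the vestibular sensation error with platform-limit penalties; here the cost function, bounds and GA operators are all illustrative assumptions, not the authors' implementation.

      import random

      def genetic_tune(cost, bounds, pop_size=30, generations=100,
                       mutation=0.1, rng=random.Random(0)):
          """Minimise cost(params) over box-bounded parameters with a
          simple elitist GA (averaging crossover, Gaussian mutation)."""
          def random_individual():
              return [rng.uniform(lo, hi) for lo, hi in bounds]

          pop = [random_individual() for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=cost)                      # best first
              survivors = pop[:pop_size // 2]         # elitist selection
              children = []
              while len(survivors) + len(children) < pop_size:
                  a, b = rng.sample(survivors, 2)
                  child = [(x + y) / 2 for x, y in zip(a, b)]
                  for k, (lo, hi) in enumerate(bounds):
                      if rng.random() < mutation:
                          child[k] += rng.gauss(0, 0.1 * (hi - lo))
                          child[k] = min(hi, max(lo, child[k]))
                  children.append(child)
              pop = survivors + children
          return min(pop, key=cost)

      # e.g. tune three washout-filter gains in [0, 10] against some cost
      best = genetic_tune(lambda p: sum((x - 3.0) ** 2 for x in p), [(0, 10)] * 3)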

  16. IDP++: signal and image processing algorithms in C++ version 4.1

    SciTech Connect

    Lehman, S.K.

    1996-11-01

    IDP++ (Image and Data Processing in C++) is a collection of signal and image processing algorithms written in C++. It is a compiled signal processing environment which supports four data types of up to four dimensions. It is developed within Lawrence Livermore National Laboratory's Image and Data Processing group as a partial replacement for View. IDP++ takes advantage of the latest (implemented and actually working) object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is designed for real-time environments where interpreted processing packages are less efficient. IDP++ exists for both SUN and Silicon Graphics platforms using their most current compilers.

  17. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera, which is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  18. OPERA: Objective Prism Enhanced Reduction Algorithms

    NASA Astrophysics Data System (ADS)

    Universidad Complutense de Madrid Astrophysics Research Group

    2015-09-01

    OPERA (Objective Prism Enhanced Reduction Algorithms) automatically analyzes astronomical images taken with the objective-prism (OP) technique, which registers thousands of low-resolution spectra over large areas. It detects objects in an image, extracts one-dimensional spectra, and identifies emission-line features. The main advantages of this method are: 1) it avoids the subjectivity inherent in the visual inspection used in past studies; and 2) it can obtain physical parameters without follow-up spectroscopy.

  19. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods - k-fold Cross Validation, Error Bounds, and the Incremental Halting Test - for estimating the accuracy of a supervised learning algorithm. For each of the three methods we point out the problem it addresses and some of the important assumptions it is based on, and we illustrate it through an example. Finally, we discuss the relative advantages and disadvantages of each method.
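
    Of the three methods, k-fold cross validation is the most common and the easiest to state precisely. A minimal sketch, assuming a generic train/predict interface (the names are illustrative):

      import random

      def k_fold_accuracy(data, train, predict, k=10, rng=random.Random(0)):
          """Estimate a supervised learner's accuracy by k-fold cross
          validation: train on k-1 folds, test on the held-out fold."""
          data = data[:]
          rng.shuffle(data)
          folds = [data[i::k] for i in range(k)]
          scores = []
          for i in range(k):
              held_out = folds[i]
              training = [ex for j, fold in enumerate(folds) if j != i
                             for ex in fold]
              model = train(training)
              correct = sum(predict(model, x) == y for x, y in held_out)
              scores.append(correct / len(held_out))
          return sum(scores) / k                     # mean held-out accuracy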

  20. A Sparse Self-Consistent Field Algorithm and Its Parallel Implementation: Application to Density-Functional-Based Tight Binding.

    PubMed

    Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias

    2014-06-10

    We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to exploit efficiently modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion. PMID:26580754

  1. Sleep Deprivation and Advice Taking

    PubMed Central

    Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie

    2016-01-01

    Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by more or less qualified advisors. We asked whether sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep-deprived participants showed increased advice taking. An interaction of condition and competency of advisor, together with further post-hoc analyses, revealed that this effect was more pronounced for the medium-competency advisor than for the high-competency advisor. Furthermore, sleep-deprived participants benefited more from an advisor of high competency, in terms of stronger improvement in judgmental accuracy, than well-rested participants. PMID:27109507

  2. Sleep Deprivation and Advice Taking.

    PubMed

    Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie

    2016-01-01

    Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by more or less qualified advisors. We asked whether sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep-deprived participants showed increased advice taking. An interaction of condition and competency of advisor, together with further post-hoc analyses, revealed that this effect was more pronounced for the medium-competency advisor than for the high-competency advisor. Furthermore, sleep-deprived participants benefited more from an advisor of high competency, in terms of stronger improvement in judgmental accuracy, than well-rested participants. PMID:27109507

  3. Home advantage in southern hemisphere rugby union: national and international.

    PubMed

    Morton, R. Hugh

    2006-05-01

    This study evaluates home advantage both for national (Super 12) and international (Tri-nations) rugby union teams from South Africa, Australia and New Zealand over the five-year period 2000-2004, using linear modelling. These home advantages are examined for statistical and practical significance, for variability between teams, for stability over time and for inter-correlation. The data reveal that the overall home advantage in elite rugby union has a mean of +6.7 points, and that this changes little from year to year. Closer scrutiny nevertheless reveals a high degree of variability. Different teams can and do have different home advantages, ranging from a low of -0.7 to a high of +28.3 points in any one year. Furthermore, some teams' home advantages change up or down from one year to the next, by as much as -36.5 to +31.4 points at the extremes. There is no evidence that the stronger teams have the higher home advantages, or that a high home advantage leads to a superior finishing position in the competition.

  4. Does being female provide a neuroprotective advantage following spinal cord injury?

    PubMed Central

    Datto, Jeffrey P.; Yang, Jackie; Dietrich, W. Dalton; Pearse, Damien D.

    2015-01-01

    It has been controversial whether gender has any effect on recovery following spinal cord injury (SCI). Past experimental and clinical research aimed at addressing this subject has led to contrasting findings on whether females hold any advantage in locomotor recovery. Additionally, for studies supporting the notion of a female gender-related advantage, a definite cause has not been explained. In a recent study, using large sample sizes for comparative male and female spinal cord injury cohorts, we reported that a significant gender advantage favoring females existed in both tissue preservation and functional recovery after taking into consideration discrepancies in age and weight of the animals across sexes. Prior animal research frequently used sample sizes that were too small to determine significance with certainty and also did not account for two other factors that influence locomotor performance: age and weight. Our finding is important in light of controversy surrounding the effect of gender on outcome and the fact that SCI affects more than ten thousand new individuals annually, a population that is disproportionately male. By deepening our understanding of why a gender advantage exists, potential new therapeutics can be designed to improve recovery for the male population following the initial trauma or putatively augment the neuroprotective privilege in females for enhanced outcomes. PMID:26692831

  5. Does being female provide a neuroprotective advantage following spinal cord injury?

    PubMed

    Datto, Jeffrey P; Yang, Jackie; Dietrich, W Dalton; Pearse, Damien D

    2015-10-01

    It has been controversial whether gender has any effect on recovery following spinal cord injury (SCI). Past experimental and clinical research aimed at addressing this subject has led to contrasting findings on whether females hold any advantage in locomotor recovery. Additionally, for studies supporting the notion of a female gender-related advantage, a definite cause has not been explained. In a recent study, using large sample sizes for comparative male and female spinal cord injury cohorts, we reported that a significant gender advantage favoring females existed in both tissue preservation and functional recovery after taking into consideration discrepancies in age and weight of the animals across sexes. Prior animal research frequently used sample sizes that were too small to determine significance with certainty and also did not account for two other factors that influence locomotor performance: age and weight. Our finding is important in light of controversy surrounding the effect of gender on outcome and the fact that SCI affects more than ten thousand new individuals annually, a population that is disproportionately male. By deepening our understanding of why a gender advantage exists, potential new therapeutics can be designed to improve recovery for the male population following the initial trauma or putatively augment the neuroprotective privilege in females for enhanced outcomes.

  6. Does being female provide a neuroprotective advantage following spinal cord injury?

    PubMed

    Datto, Jeffrey P; Yang, Jackie; Dietrich, W Dalton; Pearse, Damien D

    2015-10-01

    It has been controversial whether gender has any effect on recovery following spinal cord injury (SCI). Past experimental and clinical research aimed at addressing this subject has led to contrasting findings on whether females hold any advantage in locomotor recovery. Additionally, for studies supporting the notion of a female gender-related advantage, a definite cause has not been explained. In a recent study, using large sample sizes for comparative male and female spinal cord injury cohorts, we reported that a significant gender advantage favoring females existed in both tissue preservation and functional recovery after taking into consideration discrepancies in age and weight of the animals across sexes. Prior animal research frequently used sample sizes that were too small to determine significance with certainty and also did not account for two other factors that influence locomotor performance: age and weight. Our finding is important in light of controversy surrounding the effect of gender on outcome and the fact that SCI affects more than ten thousand new individuals annually, a population that is disproportionately male. By deepening our understanding of why a gender advantage exists, potential new therapeutics can be designed to improve recovery for the male population following the initial trauma or putatively augment the neuroprotective privilege in females for enhanced outcomes. PMID:26692831

  7. A segmentation algorithm for automated tracking of fast swimming unlabelled cells in three dimensions.

    PubMed

    Pimentel, J A; Carneiro, J; Darszon, A; Corkidi, G

    2012-01-01

    Recent advances in microscopy and cytolabelling methods enable the real-time imaging of cells as they move and interact in their real physiological environment. Scenarios in which multiple cells move autonomously in all directions are not uncommon in biology. A remarkable example is the swimming of marine spermatozoa in search of the conspecific oocyte. Imaging cells in these scenarios, particularly when they move fast and are poorly labelled or even unlabelled, requires very fast three-dimensional time-lapse (3D+t) imaging. This 3D+t imaging poses challenges not only to the acquisition systems but also to the image analysis algorithms. It is in this context that this work describes an original automated multiparticle segmentation method to analyse motile translucent cells in 3D microscopical volumes. The proposed segmentation technique takes advantage of the way the cell's appearance changes with distance from the focal plane. The cells' translucent properties and their interaction with light produce a specific pattern: when the cell is within or close to the focal plane, its two-dimensional (2D) appearance matches a bright spot surrounded by a dark ring, whereas when it is farther from the focal plane the contrast is inverted and the cell looks like a dark spot surrounded by a bright ring. The proposed method analyses the acquired video sequence frame by frame, taking advantage of 2D image segmentation algorithms to identify and select candidate cellular sections. The crux of the method is the sequential filtering of the candidate sections, first by template matching against the in-focus and out-of-focus templates and second by considering adjacent candidate sections in 3D. These sequential filters effectively narrow down the number of segmented candidate sections, making the automatic tracking of cells in three dimensions a straightforward operation. PMID:21999166
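
    The in-focus/out-of-focus template filter at the crux of the method can be sketched with plain normalized cross-correlation. The threshold and template shapes below are assumptions for illustration; they stand in for whatever matching criterion the authors actually used.

      import numpy as np

      def ncc(patch, template):
          """Normalized cross-correlation between a patch and a template."""
          p = patch - patch.mean()
          t = template - template.mean()
          denom = np.sqrt((p * p).sum() * (t * t).sum())
          return float((p * t).sum() / denom) if denom > 0 else 0.0

      def keep_candidate(patch, infocus_tpl, outfocus_tpl, threshold=0.6):
          """Keep a candidate section only if it resembles either the
          bright-spot (in-focus) or dark-spot (out-of-focus) pattern."""
          return max(ncc(patch, infocus_tpl),
                     ncc(patch, outfocus_tpl)) >= threshold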

  8. A segmentation algorithm for automated tracking of fast swimming unlabelled cells in three dimensions.

    PubMed

    Pimentel, J A; Carneiro, J; Darszon, A; Corkidi, G

    2012-01-01

    Recent advances in microscopy and cytolabelling methods enable the real-time imaging of cells as they move and interact in their real physiological environment. Scenarios in which multiple cells move autonomously in all directions are not uncommon in biology. A remarkable example is the swimming of marine spermatozoa in search of the conspecific oocyte. Imaging cells in these scenarios, particularly when they move fast and are poorly labelled or even unlabelled, requires very fast three-dimensional time-lapse (3D+t) imaging. This 3D+t imaging poses challenges not only to the acquisition systems but also to the image analysis algorithms. It is in this context that this work describes an original automated multiparticle segmentation method to analyse motile translucent cells in 3D microscopical volumes. The proposed segmentation technique takes advantage of the way the cell's appearance changes with distance from the focal plane. The cells' translucent properties and their interaction with light produce a specific pattern: when the cell is within or close to the focal plane, its two-dimensional (2D) appearance matches a bright spot surrounded by a dark ring, whereas when it is farther from the focal plane the contrast is inverted and the cell looks like a dark spot surrounded by a bright ring. The proposed method analyses the acquired video sequence frame by frame, taking advantage of 2D image segmentation algorithms to identify and select candidate cellular sections. The crux of the method is the sequential filtering of the candidate sections, first by template matching against the in-focus and out-of-focus templates and second by considering adjacent candidate sections in 3D. These sequential filters effectively narrow down the number of segmented candidate sections, making the automatic tracking of cells in three dimensions a straightforward operation.

  9. An Overview of GPM At-Launch Level 2 Precipitation Algorithms (Invited)

    NASA Astrophysics Data System (ADS)

    Munchak, S. J.; Meneghini, R.; Kummerow, C. D.; Olson, W. S.

    2013-12-01

    The Global Precipitation Measurement core satellite will carry the most advanced array of precipitation-sensing instruments yet flown in space, the GPM Microwave Imager (GMI) and the Dual-Frequency Precipitation Radar (DPR). Algorithms to convert the measurements from these instruments into precipitation rates have been developed and tested with data from aircraft instruments, physical model simulations, and existing satellites. These algorithms build upon the heritage of the Tropical Rainfall Measuring Mission (TRMM) algorithms to take advantage of the additional frequencies probed by GMI and DPR. As with TRMM, three instrument-specific level 2 precipitation products will be available: radar-only, radiometer-only, and combined radar-radiometer. The radar-only product will be further subdivided into three subproducts: Ku-band-only (245 km swath), Ka-band-only (120 km swath with enhanced sensitivity), and Ku-Ka (120 km swath). The dual-frequency algorithm will provide enhanced estimation of rainfall rates and microphysical parameters such as mean raindrop size and phase identification relative to single-frequency products. The GMI precipitation product will be based upon a Bayesian algorithm that seeks to match observed brightness temperatures against those in a database. After launch, this database will be populated with observations from the GPM Core Observatory, but the at-launch database consists of profiles observed by TRMM, CloudSat, and ground radars, augmented by model data fields to facilitate the generation of databases at non-observed frequencies. Ancillary data are used to subset the database by surface temperature, column water vapor, and surface type. This algorithm has been tested with data from the Special Sensor Microwave Imager/Sounder, and comparisons with ground-based radar mosaic rainfall (NMQ) will be presented. The combined GMI-DPR algorithm uses an ensemble filtering approach to create and adjust many solutions (owing to different assumptions about the

  10. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    NASA Astrophysics Data System (ADS)

    Asebedo, Antonio Ray

    through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.

  11. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

    PubMed

    Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

    2013-02-01

    Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data, without the need to assume a noise model. These methods have previously been combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise-induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph-based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path-finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest-path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph-based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of

  12. Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance

    NASA Astrophysics Data System (ADS)

    Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.

    2015-04-01

    Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover

  13. Advantageous use of slush and gelled slush in space vehicles

    NASA Technical Reports Server (NTRS)

    Adamson, J. Z.

    1971-01-01

    The advantages of combining both slush and gel have been recognized. These advantages are: (1) a reduction in the gelling agent necessary; (2) the achievement of active positioning; and (3) a potential increase in impulse density. The need for extended mission capability as indicated by present mission planning is outlined and the expected schedules are presented. A condensed version of analytical and testing conclusions as related to storage systems and slush is given. The significant results of slush flow testing and its possible influence on vehicle propulsion systems are presented, and the characterization and preparation of gels are discussed in relation to future applications, advantages, and disadvantages.

  14. [On the concept of health in traditional Chinese medicine and its characteristics and advantages].

    PubMed

    Wang, Jie; Tang, Yan-li

    2010-01-01

    Traditional Chinese medicine (TCM) contains abundant systematic concepts of health and the wisdom of life-cultivation, which not only include the connotations of modern health but also have many characteristics of their own. The TCM concept of health includes the holistic view of unison between man and universe, the harmonious unity of shape and soul, a people-oriented view of values, and the balance of qi-blood-yin-yang in the human body. The characteristics and advantages of TCM for maintaining health consist of prevention, emotion regulation, great attention to Upright Qi, and obeying nature. Taking full advantage of the health concept and regulation methods of TCM, and further studying and spreading them, will make great contributions to human health.

  15. Advantages and challenges of optical coating production with indirect monochromatic monitoring.

    PubMed

    Zhang, Jinlong; Cao, Chong; Tikhonravov, Alexander V; Trubetskov, Michael K; Gorokh, Artur; Cheng, Xinbin; Wang, Zhanshan

    2015-04-10

    In this paper, we present our recent studies on raising the quality of optical coating production with an indirect monochromatic monitoring system. Preproduction error analysis and computational manufacturing are used to estimate potential advantages of application of indirect optical monitoring. It is then demonstrated that a key issue for realization of this advantage is accurate specification of tooling factors for layer thicknesses on test glasses. The tooling factors are precalibrated using single layer depositions and then are corrected using results of reverse engineering for the first production run. It is found that a gradual variation of tooling factors of low index layers is the main error factor in the first deposition run. Finally, we redeposit our coating with a modified monitoring strategy, taking into account this factor. The new experimental results show excellent correspondence with the theoretical spectral performance.

  16. Photoacoustic imaging taking into account thermodynamic attenuation

    NASA Astrophysics Data System (ADS)

    Acosta, Sebastián; Montalto, Carlos

    2016-11-01

    In this paper we consider a mathematical model for photoacoustic imaging which takes into account attenuation due to thermodynamic dissipation. The propagation of acoustic (compressional) waves is governed by a scalar wave equation coupled to the heat equation for the excess temperature. We seek to recover the initial acoustic profile from knowledge of acoustic measurements at the boundary. We recognize that this inverse problem is a special case of boundary observability for a thermoelastic system. This leads to the use of control/observability tools to prove the unique and stable recovery of the initial acoustic profile in the weak thermoelastic coupling regime. This approach is constructive, yielding a solvable equation for the unknown acoustic profile. Moreover, the solution to this reconstruction equation can be approximated numerically using the conjugate gradient method. If certain geometrical conditions for the wave speed are satisfied, this approach is well-suited for variable media and for measurements on a subset of the boundary. We also present a numerical implementation of the proposed reconstruction algorithm.

  17. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to their stochastic nature, assessing these algorithms is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance over a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic, even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
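
    The core resampling step is small enough to sketch. The following Python fragment bootstraps the sampling distribution of an arbitrary statistic over a set of runs of a stochastic optimizer; the statistic and sample values are illustrative.

      import random

      def bootstrap_statistic(sample, statistic, n_boot=2000, rng=random.Random(0)):
          """Return n_boot bootstrap replicates of statistic(sample):
          resample with replacement, recompute, repeat."""
          n = len(sample)
          return [statistic([sample[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot)]

      # sampling distribution of the median final error over 10 runs
      runs = [0.12, 0.09, 0.15, 0.11, 0.30, 0.10, 0.13, 0.08, 0.14, 0.12]
      reps = bootstrap_statistic(runs, lambda xs: sorted(xs)[len(xs) // 2])
      reps.sort()
      lo, hi = reps[int(0.025 * len(reps))], reps[int(0.975 * len(reps))]  # 95% interval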

  18. Concurrent algorithms for transient nonlinear FE analysis

    NASA Technical Reports Server (NTRS)

    Ortiz, M.

    1987-01-01

    A two-parameter class of time-stepping algorithms for nonlinear structural dynamics is investigated. What sets the present method apart from other concurrent algorithms is the fact that it can be used to some advantage in sequential machines as well. Thus, substantial speed-ups are obtained on a single processor as the number of subdomains is increased. An additional O(p) speed-up is obtained when p processors are utilized. The test case discussed is being repeated for a mesh comprising four times as many elements, in an effort to understand how the large-scale asymptotic speed-ups are attained. A three-dimensional example involving finite deformations and free body motions is also being pursued. A code optimized for concurrency on the Alliant FX8 computer is being finalized. This will provide the means for testing the performance of the algorithm in a multiprocessor environment.

  19. A danger-theory-based immune network optimization algorithm.

    PubMed

    Zhang, Ruirui; Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms suffer from a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated by changes in the environment guide different levels of immune response, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts each antibody's concentration through its own danger signals and then triggers self-regulating immune responses, so that population diversity can be maintained. Experimental results show that the algorithm has advantages in solution quality and population diversity. Compared with the influential optimization algorithms CLONALG, opt-aiNet, and dopt-aiNet, the algorithm achieves smaller error values and higher success rates and can find solutions meeting the required accuracies within the specified number of function evaluations.

  20. A novel clustering algorithm inspired by membrane computing.

    PubMed

    Peng, Hong; Luo, Xiaohui; Gao, Zhisheng; Wang, Jun; Pei, Zheng

    2015-01-01

    P systems are a class of distributed parallel computing models; this paper presents a novel clustering algorithm inspired by the mechanism of a tissue-like P system with a loop structure of cells, called the membrane clustering algorithm. The objects of the cells express the candidate centers of clusters and are evolved by the evolution rules. Based on the loop membrane structure, the communication rules realize a local neighborhood topology, which helps the coevolution of the objects and improves the diversity of objects in the system. The tissue-like P system can effectively search for the optimal partitioning with the help of its parallel computing advantage. The proposed clustering algorithm is evaluated on four artificial data sets and six real-life data sets. Experimental results show that the proposed clustering algorithm is superior or competitive to the k-means algorithm and several evolutionary clustering algorithms recently reported in the literature.

  1. A Novel Clustering Algorithm Inspired by Membrane Computing

    PubMed Central

    Luo, Xiaohui; Gao, Zhisheng; Wang, Jun; Pei, Zheng

    2015-01-01

    P systems are a class of distributed parallel computing models; this paper presents a novel clustering algorithm inspired by the mechanism of a tissue-like P system with a loop structure of cells, called the membrane clustering algorithm. The objects of the cells express the candidate centers of clusters and are evolved by the evolution rules. Based on the loop membrane structure, the communication rules realize a local neighborhood topology, which helps the coevolution of the objects and improves the diversity of objects in the system. The tissue-like P system can effectively search for the optimal partitioning with the help of its parallel computing advantage. The proposed clustering algorithm is evaluated on four artificial data sets and six real-life data sets. Experimental results show that the proposed clustering algorithm is superior or competitive to the k-means algorithm and several evolutionary clustering algorithms recently reported in the literature. PMID:25874264

  2. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing

    PubMed Central

    Cai, Li

    2014-01-01

    Lord and Wingersky’s (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications. PMID:25233839
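
    The original unidimensional recursion is compact enough to sketch: at a fixed quadrature point, fold items in one at a time, convolving the running summed-score distribution with each item's category probabilities. The sketch below is the textbook recursion, not the paper's dimension-reduced multidimensional version.

      def summed_score_likelihoods(item_probs):
          """Lord-Wingersky recursion at one latent-trait value.

          item_probs: one list per item, giving the probability of each
          item score 0..m_j at a fixed quadrature point.
          Returns f, where f[s] is the probability of summed score s.
          """
          f = [1.0]
          for p in item_probs:
              g = [0.0] * (len(f) + len(p) - 1)
              for s, fs in enumerate(f):
                  for x, px in enumerate(p):
                      g[s + x] += fs * px
              f = g
          return f

      # two dichotomous items with correct-response probabilities .7 and .4
      print(summed_score_likelihoods([[0.3, 0.7], [0.6, 0.4]]))
      # -> [0.18, 0.54, 0.28] for summed scores 0, 1, 2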

  3. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    PubMed

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.

  4. Online Planning Algorithms for POMDPs

    PubMed Central

    Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim

    2009-01-01

    Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080

  5. Review of ADHD Pharmacotherapies: Advantages, Disadvantages, and Clinical Pearls

    ERIC Educational Resources Information Center

    Daughton, Joan M.; Kratochvil, Christopher J.

    2009-01-01

    The advantages, disadvantages, as well as helpful hints on when to use several drug therapies against attention deficit hyperactivity disorder are discussed. The drugs discussed are methylphenidate, atomoxetine, clonidine, and bupropion.

  6. Cognitive advantage in bilingualism: an example of publication bias?

    PubMed

    de Bruin, Angela; Treccani, Barbara; Della Sala, Sergio

    2015-01-01

    It is a widely held belief that bilinguals have an advantage over monolinguals in executive-control tasks, but is this what all studies actually demonstrate? The idea of a bilingual advantage may result from a publication bias favoring studies with positive results over studies with null or negative effects. To test this hypothesis, we looked at conference abstracts from 1999 to 2012 on the topic of bilingualism and executive control. We then determined which of the studies they reported were subsequently published. Studies with results fully supporting the bilingual-advantage theory were most likely to be published, followed by studies with mixed results. Studies challenging the bilingual advantage were published the least. This discrepancy was not due to differences in sample size, tests used, or statistical power. A test for funnel-plot asymmetry provided further evidence for the existence of a publication bias. PMID:25475825

  7. Cognitive advantage in bilingualism: an example of publication bias?

    PubMed

    de Bruin, Angela; Treccani, Barbara; Della Sala, Sergio

    2015-01-01

    It is a widely held belief that bilinguals have an advantage over monolinguals in executive-control tasks, but is this what all studies actually demonstrate? The idea of a bilingual advantage may result from a publication bias favoring studies with positive results over studies with null or negative effects. To test this hypothesis, we looked at conference abstracts from 1999 to 2012 on the topic of bilingualism and executive control. We then determined which of the studies they reported were subsequently published. Studies with results fully supporting the bilingual-advantage theory were most likely to be published, followed by studies with mixed results. Studies challenging the bilingual advantage were published the least. This discrepancy was not due to differences in sample size, tests used, or statistical power. A test for funnel-plot asymmetry provided further evidence for the existence of a publication bias.

  8. Risk taking among diabetic clients.

    PubMed

    Joseph, D H; Schwartz-Barcott, D; Patterson, B

    1992-01-01

    Diabetic clients must make daily decisions about their health care needs. Observational and anecdotal evidence suggests that vast differences exist between the kinds of choices diabetic clients make and the kinds of chances they are willing to take. The purpose of this investigation was to develop a diabetic risk-assessment tool. This instrument, which is based on subjective expected utility theory, measures risk-prone and risk-averse behavior. Initial findings from a pilot study of 18 women clients who are on insulin indicate that patterns of risk behavior exist in the areas of exercise, skin care, and diet. PMID:1729123

  9. Rover Takes a Sunday Drive

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation, made with images from the Mars Exploration Rover Spirit hazard-identification camera, shows the rover's perspective of its first post-egress drive on Mars Sunday. Engineers drove Spirit approximately 3 meters (10 feet) toward its first rock target, a football-sized, mountain-shaped rock called Adirondack. The drive took approximately 30 minutes to complete, including time stopped to take images. Spirit first made a series of arcing turns totaling approximately 1 meter (3 feet). It then turned in place and made a series of short, straightforward movements totaling approximately 2 meters (6.5 feet).

  10. Semioptimal practicable algorithmic cooling

    NASA Astrophysics Data System (ADS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  11. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
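
    A locally adaptive dark-spot threshold of the kind described can be sketched as follows. The specific rule (local mean minus a wind-dependent multiple of the local standard deviation) and all constants are illustrative assumptions, not the published algorithm.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def dark_spot_mask(sigma0, wind_speed, window=51, k0=2.0, alpha=0.05):
          """Flag low-backscatter (candidate oil) pixels in a SAR image.

          sigma0:     2-D float array of radar backscatter intensity
          wind_speed: scalar or 2-D array of wind speed (m/s), used here
                      to modulate how aggressive the threshold is
          """
          mean = uniform_filter(sigma0, window)
          sq_mean = uniform_filter(sigma0 ** 2, window)
          std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
          k = k0 - alpha * np.asarray(wind_speed, dtype=float)  # wind-dependent factor
          return sigma0 < mean - k * std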

  12. Contrasting behavior between dispersive seismic velocity and attenuation: advantages in subsoil characterization.

    PubMed

    Zhubayev, Alimzhan; Ghose, Ranajit

    2012-02-01

    A careful look into the pertinent models of poroelasticity reveals that in water-saturated sediments or soils, the seismic (P and S wave) velocity dispersion and attenuation in the low field-seismic frequency band (20-200 Hz) have a contrasting behavior in the porosity-permeability domain. Taking advantage of this nearly orthogonal behavior, a new approach has been proposed, which leads to unique estimates of both porosity and permeability simultaneously. Through realistic numerical tests, the effect of maximum frequency content in data and the integration of P and S waves on the accuracy and robustness of the estimates are demonstrated. PMID:22352618

  13. Contrasting behavior between dispersive seismic velocity and attenuation: advantages in subsoil characterization.

    PubMed

    Zhubayev, Alimzhan; Ghose, Ranajit

    2012-02-01

    A careful look into the pertinent models of poroelasticity reveals that in water-saturated sediments or soils, the seismic (P and S wave) velocity dispersion and attenuation in the low field-seismic frequency band (20-200 Hz) have a contrasting behavior in the porosity-permeability domain. Taking advantage of this nearly orthogonal behavior, a new approach has been proposed, which leads to unique estimates of both porosity and permeability simultaneously. Through realistic numerical tests, the effect of maximum frequency content in data and the integration of P and S waves on the accuracy and robustness of the estimates are demonstrated.

  14. Rayleigh wave nonlinear inversion based on the Firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou

    2014-06-01

    Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for the inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, has the advantages of being robust and highly effective and of allowing global searching. The algorithm proves feasible and advantageous for Rayleigh wave inversion on both synthetic models and field data. The results show that the Firefly algorithm, which is a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
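
    The Firefly update rule is simple: each firefly moves toward every brighter (lower-misfit) firefly with an attractiveness that decays with squared distance, plus a small random walk. A generic minimisation sketch follows; the parameter values are illustrative, and in a real inversion the cost function would forward-model dispersion curves.

      import math, random

      def firefly_minimize(cost, bounds, n=25, iters=200, beta0=1.0,
                           gamma=1.0, alpha=0.2, rng=random.Random(0)):
          """Generic firefly algorithm over box-bounded parameters."""
          def clip(x):
              return [min(hi, max(lo, v)) for v, (lo, hi) in zip(x, bounds)]

          pop = [clip([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(n)]
          for _ in range(iters):
              costs = [cost(x) for x in pop]
              for i in range(n):
                  for j in range(n):
                      if costs[j] < costs[i]:           # j is brighter than i
                          r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                          beta = beta0 * math.exp(-gamma * r2)
                          pop[i] = clip([a + beta * (b - a)
                                         + alpha * rng.uniform(-0.5, 0.5)
                                         for a, b in zip(pop[i], pop[j])])
                          costs[i] = cost(pop[i])
              alpha *= 0.98                             # damp the random walk
          return min(pop, key=cost)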

  15. HOW MUCH FAVORABLE SELECTION IS LEFT IN MEDICARE ADVANTAGE?

    PubMed Central

    PRICE, MARY; MCWILLIAMS, J. MICHAEL; HSU, JOHN; MCGUIRE, THOMAS G.

    2015-01-01

    The health economics literature contains two models of selection, one with endogenous plan characteristics to attract good risks and one with fixed plan characteristics; neither model contains a regulator. Medicare Advantage, a principal example of selection in the literature, is, however, subject to anti-selection regulations. Because selection causes economic inefficiency and because the historically favorable selection into Medicare Advantage plans increased government cost, the effectiveness of the anti-selection regulations is an important policy question, especially since the Medicare Advantage program has grown to comprise 30 percent of Medicare beneficiaries. Moreover, similar anti-selection regulations are being used in health insurance exchanges for those under 65. Contrary to earlier work, we show that the strengthened anti-selection regulations that Medicare introduced starting in 2004 markedly reduced government overpayment attributable to favorable selection in Medicare Advantage. At least some of the remaining selection is plausibly related to fixed plan characteristics of Traditional Medicare versus Medicare Advantage rather than changed selection strategies by Medicare Advantage plans. PMID:26389127

  16. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    …takes into account the specific maturities of each system's (sensor and algorithm) technology to provide for a program of continuous improvement while retaining manageability.

  17. Advantaged group's emotional reactions to intergroup inequality: the dynamics of pride, guilt, and sympathy.

    PubMed

    Harth, Nicole Syringa; Kessler, Thomas; Leach, Colin Wayne

    2008-01-01

    Three studies establish intergroup inequality to investigate how it is emotionally experienced by the advantaged group. Studies 1 and 2 examine psychology students' emotional experience of their unequal job situation relative to worse-off pedagogy students. When inequality is ingroup focused and legitimate, participants experience more pride; when it is ingroup focused and illegitimate, they experience more guilt. Sympathy increases when inequality is outgroup focused and illegitimate. These emotions have distinct effects on behavioral tendencies. In Study 2, group-based pride predicts greater ingroup favoritism in a resource distribution task, whereas group-based sympathy predicts less ingroup favoritism. Study 3 replicates these findings in the context of students' willingness to let young immigrants take part in a university sport. Pride predicts less willingness to let immigrants take part, whereas sympathy predicts greater willingness. Guilt is a weak predictor of behavioral tendencies in all studies. This shows the specificity of emotions experienced about intergroup inequality. PMID:18162660

  18. Inhomogeneous phase shifting: an algorithm for nonconstant phase displacements

    SciTech Connect

    Tellez-Quinones, Alejandro; Malacara-Doblado, Daniel

    2010-11-10

    In this work, we develop an algorithm that differs from the classical ones of phase-shifting interferometry. Classical algorithms typically use constant or homogeneous phase displacements, and they can be quite accurate and insensitive to detuning when appropriate weight factors are taken in the formula that recovers the wrapped phase. However, such algorithms have not been formulated for variable or inhomogeneous displacements. We generalize these formulas and obtain expressions for an implementation with variable displacements, along with ways to make the algorithms partially insensitive to these arbitrary shift errors.
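    A common way to handle known but nonuniform phase steps is the generalized least-squares formulation sketched below. This is the standard three-unknown linear model, not the authors' specific weighted formulas.

        # Least-squares phase retrieval for arbitrary known phase steps delta_k:
        #   I_k = A + B*cos(phi + delta_k)
        #       = A + (B cos phi) cos(delta_k) + (-B sin phi) sin(delta_k),
        # which is linear in three unknowns per pixel.
        import numpy as np

        def recover_phase(frames, deltas):
            """frames: (K, H, W) intensities; deltas: (K,) known phase shifts."""
            K = len(deltas)
            M = np.column_stack([np.ones(K), np.cos(deltas), np.sin(deltas)])
            pix = frames.reshape(K, -1)
            coef, *_ = np.linalg.lstsq(M, pix, rcond=None)   # 3 x (H*W)
            phi = np.arctan2(-coef[2], coef[1])
            return phi.reshape(frames.shape[1:])

        # Synthetic test with non-constant (inhomogeneous) steps
        H = W = 64
        yy, xx = np.mgrid[0:H, 0:W]
        true_phi = 0.05 * (xx + yy) % (2 * np.pi) - np.pi
        deltas = np.array([0.0, 1.9, 3.1, 4.8])              # irregular shifts
        frames = np.stack([10 + 5 * np.cos(true_phi + d) for d in deltas])
        est = recover_phase(frames, deltas)
        err = np.angle(np.exp(1j * (est - true_phi)))
        print("max wrapped-phase error:", np.max(np.abs(err)))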

  19. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm that takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. It runs in linear time, with a constant that depends on the size of the LTL formula; the memory required is likewise constant, depending only on the size of the formula.
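    The flavor of the generated algorithms can be captured in a direct interpreter: one backward pass over the trace, storing only the truth values of each subformula at the current and next positions, so memory is proportional to the formula size. The finite-trace semantics assumed here (for example, 'next' is false at the last event) is one common choice, not necessarily the paper's exact convention.

        # Backward dynamic programming over a finite trace. Formulas are tuples:
        #   ('ap', 'p'), ('not', f), ('and', f, g), ('or', f, g),
        #   ('next', f), ('eventually', f), ('until', f, g)
        def subformulas(f):
            out, seen = [], set()
            def walk(g):
                for c in g[1:]:
                    if isinstance(c, tuple):
                        walk(c)
                if g not in seen:
                    seen.add(g)
                    out.append(g)
            walk(f)
            return out                       # children always precede parents

        def evaluate(formula, trace):
            """trace: list of sets of atomic propositions; truth at position 0."""
            subs = subformulas(formula)
            idx = {g: i for i, g in enumerate(subs)}
            nxt = [False] * len(subs)        # truth values just past the end
            for event in reversed(trace):    # single backward pass
                now = [False] * len(subs)
                for g in subs:
                    op = g[0]
                    if op == 'ap':
                        v = g[1] in event
                    elif op == 'not':
                        v = not now[idx[g[1]]]
                    elif op == 'and':
                        v = now[idx[g[1]]] and now[idx[g[2]]]
                    elif op == 'or':
                        v = now[idx[g[1]]] or now[idx[g[2]]]
                    elif op == 'next':
                        v = nxt[idx[g[1]]]
                    elif op == 'eventually':
                        v = now[idx[g[1]]] or nxt[idx[g]]
                    elif op == 'until':
                        v = now[idx[g[2]]] or (now[idx[g[1]]] and nxt[idx[g]])
                    now[idx[g]] = v
                nxt = now
            return nxt[idx[formula]]

        f = ('until', ('ap', 'p'), ('ap', 'q'))
        print(evaluate(f, [{'p'}, {'p'}, {'q'}]))   # True: p holds until q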

  20. The simplification of fuzzy control algorithm and hardware implementation

    NASA Technical Reports Server (NTRS)

    Wu, Z. Q.; Wang, P. Z.; Teh, H. H.

    1991-01-01

    The conventional inference composition algorithm of a fuzzy controller is very time- and memory-consuming. As a result, real-time fuzzy inference is difficult, and most fuzzy controllers are realized by look-up tables. Here, the researchers derive a simplified algorithm using mean-of-maximum defuzzification. This algorithm requires less computation time and memory, making it possible to compute the fuzzy inference in real time and easy to tune the control rules on line. A hardware implementation based on the simplified fuzzy inference algorithm is described.
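    A minimal sketch of mean-of-maximum (MOM) defuzzification with a toy two-rule base; the membership functions and firing strengths below are assumptions for illustration, not the paper's controller.

        # Mean-of-maximum defuzzification: clip consequents by rule strength,
        # aggregate with max, then average the output values at the peak.
        import numpy as np

        universe = np.linspace(-1.0, 1.0, 201)           # output universe

        def tri(x, a, b, c):
            """Triangular membership function with support [a, c], peak at b."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        consequents = [tri(universe, -1.0, -0.5, 0.0),   # toy rule consequents
                       tri(universe, 0.0, 0.5, 1.0)]
        strengths = [0.3, 0.7]       # firing strengths (min over input memberships)

        agg = np.zeros_like(universe)
        for w, c in zip(strengths, consequents):
            agg = np.maximum(agg, np.minimum(w, c))      # clipped, max-aggregated

        peak = agg.max()
        crisp = universe[np.isclose(agg, peak)].mean()   # mean of the maxima
        print("MOM output:", crisp)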

  1. Algorithm-dependent fault tolerance for distributed computing

    SciTech Connect

    P. D. Hough; M. E. Goldsby; E. J. Walsh

    2000-02-01

    Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to deal efficiently with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are composed of clusters of commodity computers such as CPlant and SMP clusters.

  2. Improved interpretation of satellite altimeter data using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Messa, Kenneth; Lybanon, Matthew

    1992-01-01

    Genetic algorithms (GAs) are optimization techniques based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations), the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GAs have been shown to succeed where noise significantly reduces the effectiveness of other search techniques. Satellite altimetry provides useful information about oceanographic phenomena: it provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors make interpretation significantly difficult. The GA approach to improved interpretation of satellite data represents the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual best suited to 'survive', that is, the fittest as measured by some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
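    A minimal real-coded GA sketch with tournament selection, blend crossover, and Gaussian mutation; the fitness function is a toy stand-in for the ocean-surface-model misfit against altimeter data.

        # Minimal real-coded genetic algorithm (toy fitness).
        import numpy as np

        rng = np.random.default_rng(7)

        def fitness(x):                      # lower is better (toy misfit)
            return np.sum((x - 0.5) ** 2)

        pop_size, dim, gens = 40, 6, 100
        pop = rng.uniform(-1, 1, size=(pop_size, dim))

        for _ in range(gens):
            scores = np.array([fitness(x) for x in pop])
            new_pop = [pop[np.argmin(scores)].copy()]        # elitism
            while len(new_pop) < pop_size:
                # tournament selection of two parents
                a = rng.integers(pop_size, size=2)
                b = rng.integers(pop_size, size=2)
                p1 = pop[a[np.argmin(scores[a])]]
                p2 = pop[b[np.argmin(scores[b])]]
                w = rng.random(dim)                          # blend crossover
                child = w * p1 + (1 - w) * p2
                child += rng.normal(0, 0.05, dim) * (rng.random(dim) < 0.2)
                new_pop.append(child)
            pop = np.array(new_pop)

        best = pop[np.argmin([fitness(x) for x in pop])]
        print("fittest organism:", best)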

  3. Simple algorithm for improved security in the FDDI protocol

    NASA Astrophysics Data System (ADS)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm that will improve confidential communication capability. The proposed modification provides a simple and reliable system that exploits some of the inherent security properties of a fiber-optic ring network. The method differs from conventional methods in that end-to-end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. It is based on a variation of the bit-stream cipher method: the transmitting station applies a simple modulo-2 addition operation between the intended confidential message and an initialization vector. The encrypted message is virtually unbreakable without the initialization vector, and no station on the ring other than the transmitting and receiving stations has access to both the encrypted message and the initialization vector. The initialization vector is generated uniquely for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military for LAN/MAN implementations; both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support real-time communications is a necessary consideration in implementing these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
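    The core operation is modulo-2 (XOR) addition of the message against keystream material tied to the initialization vector. The keystream construction below (SHA-256 over the IV plus a counter) is purely an assumption for illustration; the paper does not specify a generator.

        # XOR (modulo-2 addition) encryption keyed by an initialization vector.
        import hashlib

        def keystream(iv: bytes, n: int) -> bytes:
            """Hypothetical keystream: SHA-256 of IV + counter (an assumption)."""
            out, counter = b"", 0
            while len(out) < n:
                out += hashlib.sha256(iv + counter.to_bytes(8, "big")).digest()
                counter += 1
            return out[:n]

        def xor_with_iv(message: bytes, iv: bytes) -> bytes:
            ks = keystream(iv, len(message))
            return bytes(m ^ k for m, k in zip(message, ks))

        iv = b"unique-per-transmission-iv"
        ct = xor_with_iv(b"confidential traffic", iv)
        pt = xor_with_iv(ct, iv)             # XOR is its own inverse
        print(pt)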

  4. Self-adaptive algorithm for segmenting skin regions

    NASA Astrophysics Data System (ADS)

    Kawulok, Michal; Kawulok, Jolanta; Nalepa, Jakub; Smolka, Bogdan

    2014-12-01

    In this paper, we introduce a new self-adaptive algorithm for segmenting human skin regions in color images. Skin detection and segmentation is an active research topic, and many solutions have been proposed so far, especially concerning skin tone modeling in various color spaces. Such models are used for pixel-based classification, but their accuracy is limited by the high variance and low specificity of human skin color. In many works, skin model adaptation and spatial analysis were reported to improve the final segmentation outcome; however, little attention has been paid so far to the possibility of combining these two directions of improvement. Our contribution lies in learning a local skin color model on the fly, which is subsequently applied to the image to determine the seeds for the spatial analysis. Furthermore, we also take advantage of textural features for computing local propagation costs that are used in the distance transform. The results of an extensive experimental study confirmed that the new method is highly competitive, especially for extracting hand regions in color images.
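    The self-adaptation idea, stripped of the spatial stage, can be sketched as follows: a generic global skin rule yields seed pixels, a local Gaussian chroma model is fit to them on the fly, and the image is re-classified with that local model. The YCbCr box rule and thresholds below are common baselines, not the authors' exact model, and the distance-transform propagation is omitted.

        # Self-adaptive skin classification sketch (color model only).
        import numpy as np

        def global_skin_rule(ycbcr):
            """Generic baseline: a typical skin chroma box in YCbCr."""
            cb, cr = ycbcr[..., 1], ycbcr[..., 2]
            return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

        def self_adaptive_segment(ycbcr):
            seeds = global_skin_rule(ycbcr)              # coarse seed pixels
            if seeds.sum() < 10:
                return seeds
            chroma = ycbcr[..., 1:].reshape(-1, 2).astype(float)
            sample = chroma[seeds.ravel()]
            mu = sample.mean(axis=0)                     # local model, fit on the fly
            cov = np.cov(sample, rowvar=False) + 1e-6 * np.eye(2)
            inv = np.linalg.inv(cov)
            d = chroma - mu
            maha = np.einsum("ij,jk,ik->i", d, inv, d)   # squared Mahalanobis dist.
            return (maha < 9.0).reshape(seeds.shape)     # accept within radius 3

        img = np.random.randint(0, 256, size=(120, 160, 3))  # stand-in YCbCr image
        mask = self_adaptive_segment(img)
        print("skin pixels:", int(mask.sum()))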

  5. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.
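    For context, here is a minimal sketch of the method-of-moments log-Pearson Type 3 quantile computation that Bulletin 17B and EMA share, using the Wilson-Hilferty frequency-factor approximation. The EMA weighting of historical information and the paper's confidence intervals are not reproduced; the flow values are made up for illustration.

        # Log-Pearson Type 3 flood quantile by method of moments.
        import numpy as np
        from scipy.stats import norm

        def lp3_quantile(peak_flows, aep):
            """Flood quantile for annual exceedance probability aep (e.g. 0.01)."""
            x = np.log10(np.asarray(peak_flows, dtype=float))
            m, s, n = x.mean(), x.std(ddof=1), len(x)
            g = (n / ((n - 1) * (n - 2))) * np.sum(((x - m) / s) ** 3)  # skew
            z = norm.ppf(1.0 - aep)
            if abs(g) < 1e-6:
                k = z
            else:  # Wilson-Hilferty approximation of the Pearson III factor
                k = (2.0 / g) * ((1.0 + g * z / 6.0 - g**2 / 36.0) ** 3 - 1.0)
            return 10.0 ** (m + k * s)

        flows = [3200, 2100, 4800, 1500, 2900, 3700, 2600, 5400, 1900, 3100,
                 2800, 4100, 2300, 3500, 6100, 1700, 2500, 3900, 2200, 3300]
        print("estimated 100-year flood: %.0f" % lp3_quantile(flows, 0.01))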

  6. PSimScan: Algorithm and Utility for Fast Protein Similarity Search

    PubMed Central

    Kaznadzey, Anna; Alexandrova, Natalia; Novichkov, Vladimir; Kaznadzey, Denis

    2013-01-01

    In the era of metagenomics and diagnostic sequencing, the importance of protein comparison methods with boosted performance cannot be overstated. Here we present PSimScan (Protein Similarity Scanner), a flexible open source protein similarity search tool which provides a significant gain in speed compared to BLASTP at the price of controlled sensitivity loss. The PSimScan algorithm introduces a number of novel performance optimization methods that can be further used by the community to improve the speed and lower hardware requirements of bioinformatics software. The optimization starts at the lookup table construction; the initial lookup table–based hits are then passed through a pipeline of filtering and aggregation routines of increasing computational complexity. The first step in this pipeline is a novel algorithm that builds and selects 'similarity zones' aggregated from neighboring matches on small arrays of adjacent diagonals. PSimScan performs 5 to 100 times faster than the standard NCBI BLASTP, depending on chosen parameters, and runs on commodity hardware. Its sensitivity and selectivity at the slowest settings are comparable to the NCBI BLASTP's and decrease with the increase of speed, yet stay at levels reasonable for many tasks. PSimScan is most advantageous when used on large collections of query sequences. Comparing the entire proteome of Streptococcus pneumoniae (2,042 proteins) to the NCBI's non-redundant protein database of 16,971,855 records takes 6.5 hours on a moderately powerful PC, while the same task with the NCBI BLASTP takes over 66 hours. We describe the innovations in the PSimScan algorithm in considerable detail to encourage bioinformaticians to improve on the tool and to use the innovations in their own software development. PMID:23505522
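    The lookup-table seeding stage can be sketched as follows: index the k-mers of a database sequence, look up the query's k-mers, and aggregate hits by diagonal (query position minus database position). PSimScan's zone selection and filtering pipeline is far more elaborate; this shows only the first idea, with toy sequences.

        # k-mer lookup table and diagonal aggregation (seeding sketch).
        from collections import defaultdict

        def build_lookup(db_seq, k=3):
            table = defaultdict(list)
            for i in range(len(db_seq) - k + 1):
                table[db_seq[i:i + k]].append(i)
            return table

        def diagonal_hits(query, table, k=3):
            diag = defaultdict(int)          # diagonal -> number of k-mer hits
            for q in range(len(query) - k + 1):
                for d in table.get(query[q:q + k], ()):
                    diag[q - d] += 1
            return diag

        db = "MKVLAAGITGLVMSRSAAGITG"
        qry = "AAGITGLVMT"
        hits = diagonal_hits(qry, build_lookup(db))
        best = max(hits, key=hits.get)
        print("strongest diagonal:", best, "with", hits[best], "k-mer hits")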

  7. Internal quantum efficiency analysis of solar cell by genetic algorithm

    SciTech Connect

    Xiong, Kanglin; Yang, Hui; Lu, Shulong; Zhou, Taofei; Wang, Rongxin; Qiu, Kai; Dong, Jianrong; Jiang, Desheng

    2010-11-15

    To investigate the factors limiting the performance of a GaAs solar cell, a genetic algorithm is employed to fit the experimentally measured internal quantum efficiency (IQE) over the full spectral range. Device parameters such as diffusion lengths and surface recombination velocities are extracted. Electron beam induced current (EBIC) measurements performed in the base region of the cell yield a diffusion length that agrees with the fit result. The advantage of the genetic algorithm is illustrated. (author)
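    A sketch of evolutionary curve fitting in this spirit: extract model parameters by globally minimizing the misfit to a measured curve. The model below is a deliberately simple toy, not the paper's device physics, and SciPy's differential evolution stands in for the genetic algorithm.

        # Global parameter extraction by evolutionary fitting (toy model).
        import numpy as np
        from scipy.optimize import differential_evolution

        wavelengths = np.linspace(400, 900, 50)              # nm

        def toy_response(lam, amp, width):
            """Hypothetical stand-in response, NOT a physical IQE model."""
            return amp * np.exp(-((lam - 650.0) / width) ** 2)

        rng = np.random.default_rng(3)
        measured = toy_response(wavelengths, 0.92, 180.0) + rng.normal(0, 0.01, 50)

        def misfit(p):
            return np.sum((toy_response(wavelengths, *p) - measured) ** 2)

        result = differential_evolution(misfit, bounds=[(0.1, 1.0), (50.0, 400.0)])
        print("extracted parameters:", result.x)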

  8. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of the systolic algorithm as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  9. Does living donation have advantages over deceased donation in liver transplantation?

    PubMed

    Kaido, Toshimi; Uemoto, Shinji

    2010-10-01

    Liver transplantation (LT) is the best treatment option for patients with end-stage liver disease. Living donor LT (LDLT) has developed as an alternative to deceased donor LT (DDLT) in order to overcome the critical shortage of deceased organ donations, particularly in Asia. LDLT offers several advantages over DDLT. The major advantage of LDLT is the reduction in waiting time mortality. Especially among patients with hepatocellular carcinoma (HCC), LDLT can shorten the waiting time and lower the dropout rate. The Hong Kong group reported that median waiting time was significantly shorter for LDLT than for DDLT. Intention-to-treat survival rates of HCC patients with voluntary live donors were significantly higher than those of patients without voluntary live donors. In contrast, a multicenter adult-to-adult LDLT retrospective cohort study reported that LDLT recipients displayed a significantly higher rate of HCC recurrence than DDLT recipients, although LDLT recipients had shorter waiting times than DDLT recipients. The advantage of LDLT involves the more liberal criteria for HCC compared with those for DDLT. Various preoperative interventions including nutritional treatment can also be planned for both the donor and recipient in LDLT. Conversely, LDLT has marked unfavorable characteristics in terms of donor risks. Donor morbidity is not infrequent and the donor mortality rate is estimated at around 0.1-0.3%. In conclusion, living donation is not necessarily advantageous over deceased donation in LT. Taking the advantages and disadvantages of each option into consideration, LDLT and DDLT should both be used to facilitate effective LT for patients requiring transplant. PMID:20880167

  10. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects, and metal chelation, modeling the dissociation equilibrium constants with a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding, and metal-chelation interactions between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets were examined to evaluate the method's ability to predict experimentally measured binding affinities. Comparison with other well-known scoring functions shows that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An artificial neural network (ANN) was also used to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101
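    The linear-model core can be sketched as an ordinary least-squares fit of affinities to interaction-term descriptors. The descriptor values and training data below are synthetic placeholders; LISA's atom-typed terms are much richer.

        # Fitting a linear empirical scoring function (synthetic data sketch).
        import numpy as np

        rng = np.random.default_rng(11)
        n_complexes = 200
        # Columns: [vdw, hbond, desolvation, metal] descriptors per complex
        X = rng.uniform(0, 1, size=(n_complexes, 4))
        true_w = np.array([1.2, 0.8, -0.5, 2.0])             # hidden "physics"
        pK = X @ true_w + rng.normal(0, 0.1, n_complexes)    # observed affinities

        # Least-squares fit of the linear model, with an intercept term
        A = np.column_stack([X, np.ones(n_complexes)])
        w, *_ = np.linalg.lstsq(A, pK, rcond=None)
        print("fitted term weights:", w[:4], "intercept:", w[4])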

  11. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
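    A minimal sketch of the basic veto algorithm: propose steps from a constant overestimate c ≥ f(t) and accept with probability f(t)/c, so accepted points follow the density f(t)·exp(-∫f). With f(t) = t this yields (approximately, up to the cutoff) the Rayleigh density, which the sample mean checks. The competition and second-variable variants of the paper are not shown.

        # Basic Sudakov veto algorithm with a constant overestimate.
        import numpy as np

        rng = np.random.default_rng(5)

        def veto_sample(f, c, t_max):
            """One emission scale, or None if no emission below t_max."""
            t = 0.0
            while True:
                t -= np.log(rng.random()) / c    # step from overestimate's Sudakov
                if t > t_max:
                    return None
                if rng.random() < f(t) / c:      # accept with ratio f/c
                    return t

        f = lambda t: t                          # gives density t*exp(-t^2/2)
        samples = [veto_sample(f, c=3.0, t_max=3.0) for _ in range(20000)]
        samples = np.array([s for s in samples if s is not None])
        print("sample mean %.3f (Rayleigh mean approx. 1.253)" % samples.mean())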

  12. Information filtering via weighted heat conduction algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the MovieLens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, is improved by 39.7% and 56.1%, respectively, in the optimal case, and the diversity reaches 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may explain why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
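    For reference, a sketch of the plain (unweighted) heat-conduction step on a user-object bipartite network: heat passes from objects to users and back through degree-normalized averages. The paper's edge weighting modifies these averages and is not reproduced here; the adjacency matrix is a toy example.

        # Plain heat-conduction recommendation step on a bipartite network.
        import numpy as np

        # Toy adjacency: rows = users, columns = objects (1 = collected)
        A = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1],
                      [0, 0, 0, 1]], dtype=float)

        def hc_scores(A, user):
            k_obj = A.sum(axis=0)            # object degrees
            k_usr = A.sum(axis=1)            # user degrees
            f = A[user].copy()               # initial heat on collected objects
            h_users = (A @ f) / k_usr        # average over each user's objects
            return (A.T @ h_users) / k_obj   # average over each object's users

        scores = hc_scores(A, user=0)
        unseen = np.where(A[0] == 0)[0]
        print("recommended object:", unseen[np.argmax(scores[unseen])])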

  13. Taking charge: a personal responsibility.

    PubMed Central

    Newman, D M

    1987-01-01

    Women can adopt health practices that will help them to maintain good health throughout their various life stages. Women can take charge of their health by maintaining a nutritionally balanced diet, exercising, and using common sense. Women can also employ known preventive measures against osteoporosis, stroke, lung and breast cancer and accidents. Because women experience increased longevity and may require long-term care with age, the need for restructuring the nation's care system for the elderly becomes an important women's health concern. Adult day care centers, home health aides, and preventive education will be necessary, along with sufficient insurance to maintain quality care and self-esteem without depleting a person's resources. PMID:3120224

  14. Dynamic load balance scheme for the DSMC algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jin; Geng, Xiangren; Jiang, Dingwu; Chen, Jianqiang

    2014-12-01

    The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used for a wide range of rarefied flow problems over the past 40 years. While the DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started, with an initial distribution of particles quite different from the steady state, the total number of simulator particles changes dramatically, and a load balance based on the initial distribution of particles breaks down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as weight information, a repartitioning based on the principle that each processor handles approximately the same total number of simulator particles has been achieved. The computation pauses several times to renew the particle totals in each processor and repartition the whole domain, so the load balance across the processor array holds for the duration of the computation and the parallel efficiency is improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time can be reduced by about 50%.
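    The load-balancing objective, separated from graph partitioning, can be sketched with a greedy heuristic: assign cells to processors so that per-processor particle totals stay roughly equal. METIS additionally respects cell adjacency to limit communication cost; this sketch ignores adjacency and shows only the particle-count weighting.

        # Greedy longest-processing-time assignment of cells to processors.
        import heapq

        def rebalance(particles_per_cell, n_procs):
            heap = [(0, p) for p in range(n_procs)]     # (load, processor)
            heapq.heapify(heap)
            assignment = {}
            for cell, n in sorted(enumerate(particles_per_cell),
                                  key=lambda kv: -kv[1]):
                load, proc = heapq.heappop(heap)        # least-loaded processor
                assignment[cell] = proc
                heapq.heappush(heap, (load + n, proc))
            return assignment

        cells = [120, 5, 300, 80, 220, 40, 10, 260]     # particle counts per cell
        assign = rebalance(cells, n_procs=3)
        loads = [sum(cells[c] for c, p in assign.items() if p == q)
                 for q in range(3)]
        print("per-processor particle totals:", loads)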

  15. Dynamic load balance scheme for the DSMC algorithm

    SciTech Connect

    Li, Jin; Geng, Xiangren; Jiang, Dingwu; Chen, Jianqiang

    2014-12-09

    The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used for a wide range of rarefied flow problems over the past 40 years. While the DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started, with an initial distribution of particles quite different from the steady state, the total number of simulator particles changes dramatically, and a load balance based on the initial distribution of particles breaks down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as weight information, a repartitioning based on the principle that each processor handles approximately the same total number of simulator particles has been achieved. The computation pauses several times to renew the particle totals in each processor and repartition the whole domain, so the load balance across the processor array holds for the duration of the computation and the parallel efficiency is improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time can be reduced by about 50%.

  16. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has adaptive and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the learning of the weights from the hidden layer to the output layer is weak; these deficiencies easily degrade learning ability and recognition precision. To address this, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize both the weights and the structure of the RBF network through a hybrid encoding: binary encoding for the number of hidden-layer neurons and real encoding for the connection weights. The hidden-layer size and the connection weights are optimized simultaneously in the new algorithm. Because the genetic optimization does not fully converge the connection weights, a least mean square (LMS) algorithm is used for further learning, yielding the final model. Tests on two UCI standard data sets show that the new algorithm improves operating efficiency on complex problems and also improves recognition precision, which demonstrates that the new algorithm is valid.
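    The structure-search idea can be sketched with a binary chromosome that switches candidate RBF centers on or off. As a simplification of the paper's scheme, which also evolves real-coded weights and refines them with LMS, the sketch below obtains output weights by least squares at each evaluation.

        # GA over a binary mask of candidate RBF centers (simplified sketch).
        import numpy as np

        rng = np.random.default_rng(9)
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 200)

        candidates = np.linspace(-3, 3, 20).reshape(-1, 1)   # candidate centers
        width = 0.8

        def design(mask):
            centers = candidates[mask.astype(bool)]
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * width ** 2))            # Gaussian RBF features

        def fitness(mask):
            if mask.sum() == 0:
                return 1e9
            Phi = design(mask)
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # output weights
            err = np.mean((Phi @ w - y) ** 2)
            return err + 0.001 * mask.sum()                  # penalize extra neurons

        pop = rng.integers(0, 2, size=(30, len(candidates)))
        for _ in range(60):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[:10]]           # truncation selection
            children = []
            while len(children) < len(pop):
                p1, p2 = parents[rng.integers(10, size=2)]
                cut = rng.integers(1, len(candidates))       # one-point crossover
                child = np.concatenate([p1[:cut], p2[cut:]])
                flip = rng.random(len(child)) < 0.05         # bit-flip mutation
                children.append(np.where(flip, 1 - child, child))
            pop = np.array(children)

        best = pop[np.argmin([fitness(m) for m in pop])]
        print("hidden neurons selected:", int(best.sum()))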

  17. The POP learning algorithms: reducing work in identifying fuzzy rules.

    PubMed

    Quek, C; Zhou, R W

    2001-12-01

    A novel fuzzy neural network, the Pseudo Outer-Product based Fuzzy Neural Network (POPFNN), and its two fuzzy-rule-identification algorithms are proposed in this paper: the Pseudo Outer-Product (POP) learning and the Lazy Pseudo Outer-Product (LazyPOP) learning algorithms. These two learning algorithms are used in POPFNN to identify relevant fuzzy rules. In contrast with other rule-learning algorithms, the proposed algorithms have many advantages: they are fast, reliable, efficient, and easy to understand. POP learning is a simple one-pass learning algorithm that essentially performs rule selection; hence, it suffers from the shortcoming of having to consider all possible rules. The second algorithm, the LazyPOP learning algorithm, truly identifies the fuzzy rules that are relevant, rather than using a rule-selection method whereby irrelevant fuzzy rules are eliminated from an initial rule set. In addition, it is able to adjust the structure of the fuzzy neural network and can delete invalid feature inputs according to the fuzzy rules that have been identified. Extensive experimental results and discussions are presented in a detailed analysis of the proposed algorithms.

  18. A New Optimized GA-RBF Neural Network Algorithm

    PubMed Central

    Zhao, Dean; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has adaptive and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the learning of the weights from the hidden layer to the output layer is weak; these deficiencies easily degrade learning ability and recognition precision. To address this, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize both the weights and the structure of the RBF network through a hybrid encoding: binary encoding for the number of hidden-layer neurons and real encoding for the connection weights. The hidden-layer size and the connection weights are optimized simultaneously in the new algorithm. Because the genetic optimization does not fully converge the connection weights, a least mean square (LMS) algorithm is used for further learning, yielding the final model. Tests on two UCI standard data sets show that the new algorithm improves operating efficiency on complex problems and also improves recognition precision, which demonstrates that the new algorithm is valid. PMID:25371666

  19. Competitive Advantage and its Sources in an Evolving Market

    NASA Astrophysics Data System (ADS)

    Zaridis, Apostolos D.

    2009-08-01

    In a continuously changing and evolving market, such as the food manufacturing market, the firm's main and long-lasting objective, the maximization of its wealth and consequently its continued profitability, appears achievable through the acquisition and maintenance of a long-term competitive advantage, which renders the firm unique, or a leading force, in the relentless competition of a continuously expanding, globalized market. Various definitions of, and perspectives on, competitive advantage have been developed, concerning the way a firm that acquires it can lead the market in which it operates. Above-average performance is the result of a sustainable competitive advantage. The resource-based-view literature proposes an abundance of resources and competences as sources of competitive advantage, and new ones are continually added on the basis of empirical studies. In any case, there appears to be a hierarchy of sources of competitive advantage with respect to their sustainability.

  20. The extensive use of a take-off assist for a rocket plane

    NASA Astrophysics Data System (ADS)

    Tomita, N.; Ohkami, Y.

    In a previous paper, the authors proposed the use of a take-off assist for launching a space plane with rocket engines (a rocket plane). In this paper, the advantages and disadvantages of employing a take-off assist are discussed in more detail. The velocity at peak altitude of a rocket plane launched horizontally with a take-off assist is compared with that of a rocket plane launched vertically. A take-off assist system that provides initial velocity to the rocket plane is proposed.