Sample records for problem size increases

  1. The more the merrier? Increasing group size may be detrimental to decision-making performance in nominal groups.

    PubMed

    Amir, Ofra; Amir, Dor; Shahar, Yuval; Hart, Yuval; Gal, Kobi

    2018-01-01

    Demonstrability-the extent to which group members can recognize a correct solution to a problem-has a significant effect on group performance. However, the interplay between group size, demonstrability and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors-the difficulty of solving a problem and the difficulty of verifying the correctness of a solution-on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-Complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.
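
    A rough way to see why group size can hurt when demonstrability is low is a simple majority-vote calculation (an illustrative simplification, not the paper's analytical boundary): if each member independently recognizes the correct solution with probability p, adding members helps only when p is above one half.

    ```python
    from math import comb

    def majority_correct_prob(n, p):
        """Probability that a strict majority of n independent members
        picks the correct solution, given each does so with probability p."""
        k_min = n // 2 + 1
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

    # Illustrative values only: p stands in for demonstrability.
    for p in (0.6, 0.4):
        print(f"p={p}:", [round(majority_correct_prob(n, p), 3) for n in (1, 3, 7, 15)])
    # p=0.6 -> accuracy rises with group size; p=0.4 -> it falls.
    ```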

  2. Impact of ageing on problem size and proactive interference in arithmetic facts solving.

    PubMed

    Archambeau, Kim; De Visscher, Alice; Noël, Marie-Pascale; Gevers, Wim

    2018-02-01

    Arithmetic facts (AFs) are required when solving problems such as "3 × 4" and refer to calculations for which the correct answer is retrieved from memory. Currently, two important effects that modulate the performance in AFs have been highlighted: the problem size effect and the proactive interference effect. The aim of this study is to investigate possible age-related changes of the problem size effect and the proactive interference effect in AF solving. To this end, the performance of young and older adults was compared in a multiplication production task. Furthermore, an independent measure of proactive interference was assessed to further define the architecture underlying this effect in multiplication solving. The results indicate that both young and older adults were sensitive to the effects of interference and of the problem size. That is, both interference and problem size affected performance negatively: the time needed to solve a multiplication problem increases as the level of interference and the size of the problem increase. Regarding the effect of ageing, the problem size effect remains constant with age, indicating a preserved AF network in older adults. Interestingly, sensitivity to proactive interference in multiplication solving was less pronounced in older than in younger adults suggesting that part of the proactive interference has been overcome with age.

  3. Portion Size: What We Know and What We Need to Know

    PubMed Central

    Benton, David

    2015-01-01

    There is increasing evidence that the portion sizes of many foods have increased and in a laboratory at least this increases the amount eaten. The conclusions are, however, limited by the complexity of the phenomenon. There is a need to consider meals freely chosen over a prolonged period when a range of foods of different energy densities are available. A range of factors will influence the size of the portion size chosen: amongst others packaging, labeling, advertising, and the unit size rather than portion size of the food item. The way portion size interacts with the multitude of factors that determine food intake needs to be established. In particular, the role of portion size on energy intake should be examined as many confounding variables exist and we must be clear that it is portion size that is the major problem. If the approach is to make a practical contribution, then methods of changing portion sizes will need to be developed. This may prove to be a problem in a free market, as it is to be expected that customers will resist the introduction of smaller portion sizes, given that value for money is an important motivator. PMID:24915353

  4. On the Problem-Size Effect in Small Additions: Can We Really Discard Any Counting-Based Account?

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Thevenot, Catherine

    2013-01-01

    The problem-size effect in simple additions, that is the increase in response times (RTs) and error rates with the size of the operands, is one of the most robust effects in cognitive arithmetic. Current accounts focus on factors that could affect speed of retrieval of the answers from long-term memory such as the occurrence of interference in a…

  5. Management practices used by white-tailed deer farms in Pennsylvania and herd health problems.

    PubMed

    Brooks, Jason W; Jayarao, Bhushan M

    2008-01-01

    To determine current management practices used by white-tailed deer farms in Pennsylvania and identify animal health problems that exist in these herds. Cross-sectional study. Owners and managers of 233 farms in Pennsylvania that raised white-tailed deer. A self-administered questionnaire was mailed to participants. Herds ranged in size from 1 to 350 deer. Land holdings ranged from 0.07 to 607 hectares (0.17 to 1,500 acres). Stocking density ranged from 0.1 to 118.6 deer/hectare (0.04 to 48 deer/acre). Most (84%) respondents raised deer for breeding or hunting stock; 13% raised deer exclusively as pets or for hobby purposes, and purpose varied by herd size. Multiple associations were identified between management or disease factors and herd size. The use of vaccines, use of veterinary and diagnostic services, use of pasture, and use of artificial insemination increased as herd size increased. The most common conditions in herds of all sizes were respiratory tract disease, diarrhea, parasitism, and sudden death. The prevalence of respiratory tract disease increased as herd size increased. Results suggested that many aspects of herd management for white-tailed deer farms in Pennsylvania were associated with herd size, but that regardless of herd size, many preventive medicine practices were improperly used or underused in many herds.

  6. Visualizing Phylogenetic Treespace Using Cartographic Projections

    NASA Astrophysics Data System (ADS)

    Sundberg, Kenneth; Clement, Mark; Snell, Quinn

    Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger datasets.
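
    The "exponential growth in the number of trees" refers to the standard count of unrooted binary tree topologies, (2n-5)!! for n taxa (an assumption about which count is meant); the short sketch below shows how quickly the search space outgrows any exhaustive method.

    ```python
    def num_unrooted_binary_trees(n_taxa):
        """(2n-5)!!: the number of unrooted binary tree topologies for n_taxa >= 3."""
        count = 1
        for k in range(3, n_taxa + 1):
            count *= 2 * k - 5
        return count

    for n in (5, 10, 20, 50):
        print(f"{n:2d} taxa -> {num_unrooted_binary_trees(n):.3e} trees")
    # 5 -> 1.5e+01, 10 -> 2.0e+06, 20 -> 2.2e+20, 50 -> 2.8e+74
    ```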

  7. Siblings of the Handicapped: A Literature Review for School Psychologists.

    ERIC Educational Resources Information Center

    Hannah, Mary Elizabeth; Midlarsky, Elizabeth

    1985-01-01

    Siblings of handicapped children may have adjustment problems associated with increased family responsibilities, increased parental expectations, and perceived parental neglect in favor of the disabled sibling. Problems may be related to socioeconomic status; family size; age, sex, and birth order of the sibling; and severity of the handicap. (GDC)

  8. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2; Morphing Aircraft Sizing via Multi-level Optimization

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach for sizing of a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30° or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - these are common aircraft sizing variables, along with a set of "morphing limit" variables - these describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes fuel consumed during each mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.

  9. Modelling the contribution of changes in family life to time trends in adolescent conduct problems.

    PubMed

    Collishaw, Stephan; Goodman, Robert; Pickles, Andrew; Maughan, Barbara

    2007-12-01

    The past half-century has seen significant changes in family life, including an increase in parental divorce, increases in the numbers of lone parent and stepfamilies, changes in socioeconomic well being, and a decrease in family size. Evidence also shows substantial time trends in adolescent mental health, including a marked increase in conduct problems over the last 25 years of the 20th Century in the UK. The aim of this study was to examine how these two sets of trends may be related. To illustrate the complexity of the issues involved, we focused on three well-established family risks for conduct problems: family type, income and family size. Three community samples of adolescents from England, Scotland and Wales were compared: 10,348 16-year olds assessed in 1974 as part of the National Child Development Study, 7234 16-year olds assessed in 1986 as part of the British Cohort Study, and 860 15-year olds assessed in the 1999 British Child and Adolescent Mental Health Survey. Parents completed comparable ratings of conduct problems in each survey and provided information on family type, income and size. Findings highlight important variations in both the prevalence of these family variables and their associations with conduct problems over time, underscoring the complex conceptual issues involved in testing causes of trends in mental health.

  10. Solving lot-sizing problem with quantity discount and transportation cost

    NASA Astrophysics Data System (ADS)

    Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei

    2013-04-01

    Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve. The incorporation of heuristics methods has become a new trend to tackle the complex problem in the past decade. This article considers a lot-sizing problem, and the objective is to minimise total costs, where the costs include ordering, holding, purchase and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases in a touch panel manufacturer is used to illustrate the practicality of these models, and a sensitivity analysis is applied to understand the impact of the changes in parameters to the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining the replenishment for touch panel manufacturing for multi-periods with quantity discount and batch transportation. The contributions of this article are to construct an MIP model to obtain an optimal solution when the problem is not too complicated itself and to present a GA model to find a near-optimal solution efficiently when the problem is complicated.
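
    A minimal sketch of the kind of MIP formulation described above, reduced to a single item with ordering and holding costs only (no quantity discounts or transportation tiers); the demand data and the PuLP modelling layer are assumptions used purely for illustration.

    ```python
    # Minimal single-item lot-sizing MIP sketched with PuLP; values are invented.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    demand = [40, 60, 30, 80]                 # demand per period (assumed)
    order_cost, hold_cost, big_m = 100.0, 2.0, sum(demand)
    T = range(len(demand))

    q = [LpVariable(f"q{t}", lowBound=0) for t in T]      # order quantity
    s = [LpVariable(f"s{t}", lowBound=0) for t in T]      # ending inventory
    y = [LpVariable(f"y{t}", cat=LpBinary) for t in T]    # order placed?

    prob = LpProblem("lot_sizing", LpMinimize)
    prob += lpSum(order_cost * y[t] + hold_cost * s[t] for t in T)
    for t in T:
        prev = s[t - 1] if t > 0 else 0
        prob += prev + q[t] - demand[t] == s[t]   # inventory balance, no shortage
        prob += q[t] <= big_m * y[t]              # link quantity to order decision

    prob.solve()
    print([v.value() for v in q])
    ```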

  11. On the use of cartographic projections in visualizing phylo-genetic tree space

    PubMed Central

    2010-01-01

    Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger data sets. PMID:20529355

  12. Specific Features in Measuring Particle Size Distributions in Highly Disperse Aerosol Systems

    NASA Astrophysics Data System (ADS)

    Zagaynov, V. A.; Vasyanovich, M. E.; Maksimenko, V. V.; Lushnikov, A. A.; Biryukov, Yu. G.; Agranovskii, I. E.

    2018-06-01

    The distribution of highly dispersed aerosols is studied. Particular attention is given to the diffusion dynamic approach, as it is the best way to determine particle size distribution. It is shown that the problem can be divided into two steps: directly measuring particle penetration through diffusion batteries and solving the inverse problem (obtaining a size distribution from the measured penetrations). No reliable way of solving the so-called inverse problem is found, but it can be done by introducing a parametrized size distribution (i.e., a gamma distribution). The integral equation is therefore reduced to a system of nonlinear equations that can be solved by elementary mathematical means. Further development of the method requires an increase in sensitivity (i.e., measuring the dimensions of molecular clusters with radioactive sources, along with the activity of diffusion battery screens).

  13. Human problem solving performance in a fault diagnosis task

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1978-01-01

    It is proposed that humans in automated systems will be asked to assume the role of troubleshooter or problem solver and that the problems which they will be asked to solve in such systems will not be amenable to rote solution. The design of visual displays for problem solving in such situations is considered, and the results of two experimental investigations of human problem solving performance in the diagnosis of faults in graphically displayed network problems are discussed. The effects of problem size, forced-pacing, computer aiding, and training are considered. Results indicate that human performance deviates from optimality as problem size increases. Forced-pacing appears to cause the human to adopt fairly brute force strategies, as compared to those adopted in self-paced situations. Computer aiding substantially lessens the number of mistaken diagnoses by performing the bookkeeping portions of the task.

  14. Size effects on magnetoelectric response of multiferroic composite with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Yue, Y. M.; Xu, K. Y.; Chen, T.; Aifantis, E. C.

    2015-12-01

    This paper investigates the influence of size effects on the magnetoelectric performance of multiferroic composites with inhomogeneities. Based on a simple model of gradient elasticity for multiferroic materials, the governing equations and boundary conditions are obtained from an energy variational principle. The general formulation is applied to an anti-plane problem of multiferroic composites with inhomogeneities. This problem is solved analytically and the effective magnetoelectric coefficient is obtained. The influence of the internal length (grain size or particle size) on the effective magnetoelectric coefficients of a piezoelectric/piezomagnetic nanoscale fibrous composite is numerically evaluated and analyzed. The results suggest that as the internal length of the piezoelectric matrix (PZT and BaTiO3) increases, the magnetoelectric coefficient increases, but the rate of increase diminishes. If the internal length of the piezoelectric matrix remains unchanged, the magnetoelectric coefficient decreases with increasing internal length scale of the piezomagnetic nanofiber (CoFe2O3). In a composite consisting of a piezomagnetic matrix (CoFe2O3) reinforced with piezoelectric nanofibers (BaTiO3), an increase of the internal length in the piezomagnetic matrix results in a decrease of the magnetoelectric coefficient, with the rate of decrease diminishing.

  15. Class Size: A Battle between Accountability and Quality Instruction

    ERIC Educational Resources Information Center

    Januszka, Cynthia; Dixon-Krauss, Lisbeth

    2008-01-01

    A substantial amount of controversy surrounds the issue of class size in public schools. Parents and teachers are on one side, touting the benefits of smaller class sizes (e.g., increased academic achievement, greater student-teacher interaction, utilization of more innovative teaching strategies, and a decrease in discipline problems). On the…

  16. Spectrum-to-Spectrum Searching Using a Proteome-wide Spectral Library*

    PubMed Central

    Yen, Chia-Yu; Houel, Stephane; Ahn, Natalie G.; Old, William M.

    2011-01-01

    The unambiguous assignment of tandem mass spectra (MS/MS) to peptide sequences remains a key unsolved problem in proteomics. Spectral library search strategies have emerged as a promising alternative for peptide identification, in which MS/MS spectra are directly compared against a reference library of confidently assigned spectra. Two problems relate to library size. First, reference spectral libraries are limited to rediscovery of previously identified peptides and are not applicable to new peptides, because of their incomplete coverage of the human proteome. Second, problems arise when searching a spectral library the size of the entire human proteome. We observed that traditional dot product scoring methods do not scale well with spectral library size, showing reduction in sensitivity when library size is increased. We show that this problem can be addressed by optimizing scoring metrics for spectrum-to-spectrum searches with large spectral libraries. MS/MS spectra for the 1.3 million predicted tryptic peptides in the human proteome are simulated using a kinetic fragmentation model (MassAnalyzer version2.1) to create a proteome-wide simulated spectral library. Searches of the simulated library increase MS/MS assignments by 24% compared with Mascot, when using probabilistic and rank based scoring methods. The proteome-wide coverage of the simulated library leads to 11% increase in unique peptide assignments, compared with parallel searches of a reference spectral library. Further improvement is attained when reference spectra and simulated spectra are combined into a hybrid spectral library, yielding 52% increased MS/MS assignments compared with Mascot searches. Our study demonstrates the advantages of using probabilistic and rank based scores to improve performance of spectrum-to-spectrum search strategies. PMID:21532008
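
    For reference, the traditional library-search score that the authors report scales poorly is essentially a normalized dot product between binned spectra; the sketch below is a generic version with an assumed bin width and invented peak lists, not the paper's optimized probabilistic or rank-based scores.

    ```python
    import numpy as np

    def bin_spectrum(mz, intensity, bin_width=1.0, max_mz=2000.0):
        """Accumulate peak intensities into fixed-width m/z bins."""
        bins = np.zeros(int(max_mz / bin_width))
        idx = np.minimum((np.asarray(mz) / bin_width).astype(int), len(bins) - 1)
        np.add.at(bins, idx, intensity)
        return bins

    def dot_score(query, reference):
        """Normalized dot-product (cosine) similarity between two binned spectra."""
        q, r = bin_spectrum(*query), bin_spectrum(*reference)
        return float(q @ r / (np.linalg.norm(q) * np.linalg.norm(r) + 1e-12))

    query = ([175.1, 359.2, 472.3], [30.0, 100.0, 55.0])       # invented peaks
    reference = ([175.1, 359.3, 472.3], [25.0, 90.0, 60.0])
    print(round(dot_score(query, reference), 3))
    ```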

  17. A modified Wright-Fisher model that incorporates Ne: A variant of the standard model with increased biological realism and reduced computational complexity.

    PubMed

    Zhao, Lei; Gossmann, Toni I; Waxman, David

    2016-03-21

    The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model, there is no distinction between the effective and census population sizes. Equivalently, we can say that in this model, Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne ≪ N. For complex problems, it may be hard or impossible to numerically analyse the most commonly used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that is different to the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulations, when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two allele problem and since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation. We have achieved good accuracy in all cases considered. In summary, the present work extends the realism and tractability of an important model of evolutionary biology and population genetics. Copyright © 2016 Elsevier Ltd. All rights reserved.
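
    To make the matrix-size point concrete, the sketch below builds the textbook Wright-Fisher transition matrix by binomial resampling, with the matrix dimension set by an effective size Ne rather than the census size N; it illustrates the standard model only, not the authors' specific modification, and the value of Ne is assumed.

    ```python
    import numpy as np
    from scipy.stats import binom

    def wf_transition_matrix(Ne):
        """Standard Wright-Fisher transition matrix for 2*Ne allele copies."""
        counts = np.arange(2 * Ne + 1)          # allele copies 0..2Ne
        freqs = counts / (2 * Ne)
        # P[i, j] = probability of j copies next generation given i copies now
        return binom.pmf(counts[None, :], 2 * Ne, freqs[:, None])

    P = wf_transition_matrix(Ne=50)             # 101 x 101 instead of (2N+1)^2
    print(P.shape, P.sum(axis=1)[:3])           # rows sum to 1
    ```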

  18. Performance comparison analysis library communication cluster system using merge sort

    NASA Astrophysics Data System (ADS)

    Wulandari, D. A. R.; Ramadhan, M. E.

    2018-04-01

    Computing began with single processors; to increase computation speed, multi-processor systems were introduced. This second paradigm is known as parallel computing, an example of which is the cluster. A cluster needs a communication protocol for processing; one such protocol is the Message Passing Interface (MPI), which has several library implementations, among them OpenMPI and MPICH2. The performance of a cluster machine depends on how well the performance characteristics of the communication library match the characteristics of the problem, so this study aims to compare the performance of these libraries in handling a parallel computing process. The case studies in this research are MPICH2 and OpenMPI, which are executed on a sorting problem to assess the performance of the cluster system. The sorting problem uses the merge sort method. The research method implements OpenMPI and MPICH2 on a Linux-based cluster of five virtual computers and then analyzes the performance of the system under different test scenarios using three parameters: execution time, speedup and efficiency. The results of this study showed that with each increase in data size, the average speedup and efficiency of OpenMPI and MPICH2 tend to increase but fall off at large data sizes; an increased data size does not necessarily increase speedup and efficiency, only execution time, as seen for example at a data size of 100,000. At a data size of 1,000, the average execution time was 0.009721 with MPICH2 and 0.003895 with OpenMPI; OpenMPI can customize communication needs.
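
    A minimal sketch of the kind of benchmark described above, distributing a merge sort over MPI ranks with mpi4py (scatter, local sort, gather, final merge); the data size and script name are assumptions, and this is not the authors' exact test code.

    ```python
    # Run with e.g.:  mpiexec -n 4 python mergesort_mpi.py   (script name assumed)
    from mpi4py import MPI
    import heapq, random, time

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    N = 100_000
    if rank == 0:
        data = [random.random() for _ in range(N)]
        chunks = [data[i::size] for i in range(size)]   # one chunk per rank
    else:
        chunks = None

    start = time.time()
    local = comm.scatter(chunks, root=0)
    local.sort()                                   # local sort on each rank
    parts = comm.gather(local, root=0)
    if rank == 0:
        merged = list(heapq.merge(*parts))         # final k-way merge
        print(f"sorted {len(merged)} items in {time.time() - start:.4f} s")
    ```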

  19. Lowering sample size in comparative analyses can indicate a correlation where there is none: example from Rensch's rule in primates.

    PubMed

    Lindenfors, P; Tullberg, B S

    2006-07-01

    The fact that characters may co-vary in organism groups because of shared ancestry and not always because of functional correlations was the initial rationale for developing phylogenetic comparative methods. Here we point out a case where similarity due to shared ancestry can produce an undesired effect when conducting an independent contrasts analysis. Under special circumstances, using a low sample size will produce results indicating an evolutionary correlation between characters where an analysis of the same pattern utilizing a larger sample size will show that this correlation does not exist. This is the opposite effect of increased sample size to that expected; normally an increased sample size increases the chance of finding a correlation. The situation where the problem occurs is when co-variation between the two continuous characters analysed is clumped in clades; e.g. when some phylogenetically conservative factors affect both characters simultaneously. In such a case, the correlation between the two characters becomes contingent on the number of clades sharing this conservative factor that are included in the analysis, in relation to the number of species contained within these clades. Removing species scattered evenly over the phylogeny will in this case remove the exact variation that diffuses the evolutionary correlation between the two characters - the variation contained within the clades sharing the conservative factor. We exemplify this problem by discussing a parallel in nature where the described problem may be of importance. This concerns the question of the presence or absence of Rensch's rule in primates.

  20. Herd size and bovine tuberculosis persistence in cattle farms in Great Britain.

    PubMed

    Brooks-Pollock, Ellen; Keeling, Matt

    2009-12-01

    Bovine tuberculosis (bTB) infection in cattle is one of the most complex and persistent problems faced by the cattle industry in Great Britain today. While a number of factors have been identified as increasing the risk of infection, there has been little analysis on the causes of persistent infection within farms. In this article, we use the Cattle Tracing System to examine changes in herd size and VetNet data to correlate herd size with clearance of bTB. We find that the number of active farms fell by 16.3% between 2002 and 2007. The average farm size increased by 17.9% between 2002 and 2005. Using a measure similar to the Critical Community Size, the VetNet data reveal that herd size is positively correlated with disease persistence. Since economic policy and subsidies have been shown to influence farm size, we used a simple financial model for ideal farm size which includes disease burden to conclude that increasing herd size for efficiency gains may contribute to increased disease incidence.

  1. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and the increase in calculation time caused by the growing size of continuous optimization problems remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables that approach their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be viewed as an algorithm-level integration of the active-set solution strategy into the interior point method framework. We describe numerical results on the commonly used benchmark problems “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.

  2. Actuator Placement Via Genetic Algorithm for Aircraft Morphing

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Cook, Andrea M.

    2001-01-01

    This research continued work that began under the support of NASA Grant NAG1-2119. The focus of this effort was to continue investigations of Genetic Algorithm (GA) approaches that could be used to solve an actuator placement problem by treating this as a discrete optimization problem. In these efforts, the actuators are assumed to be "smart" devices that change the aerodynamic shape of an aircraft wing to alter the flow past the wing, and, as a result, provide aerodynamic moments that could provide flight control. The earlier work investigated issues for the problem statement, developed the appropriate actuator modeling, recognized the importance of symmetry for this problem, modified the aerodynamic analysis routine for more efficient use with the genetic algorithm, and began a problem size study to measure the impact of increasing problem complexity. The research discussed in this final summary further investigated the problem statement to provide a "combined moment" problem statement to simultaneously address roll, pitch and yaw. Investigations of problem size using this new problem statement provided insight into performance of the GA as the number of possible actuator locations increased. Where previous investigations utilized a simple wing model to develop the GA approach for actuator placement, this research culminated with application of the GA approach to a high-altitude unmanned aerial vehicle concept to demonstrate that the approach is valid for an aircraft configuration.

  3. Potential for bed-material entrainment in selected streams of the Edwards Plateau - Edwards, Kimble, and Real Counties, Texas, and vicinity

    USGS Publications Warehouse

    Heitmuller, Franklin T.; Asquith, William H.

    2008-01-01

    The Texas Department of Transportation spends considerable money for maintenance and replacement of low-water crossings of streams in the Edwards Plateau in Central Texas as a result of damages caused in part by the transport of cobble- and gravel-sized bed material. An investigation of the problem at low-water crossings was made by the U.S. Geological Survey in cooperation with the Texas Department of Transportation, and in collaboration with Texas Tech University, Lamar University, and the University of Houston. The bed-material entrainment problem for low-water crossings occurs at two spatial scales - watershed scale and channel-reach scale. First, the relative abundance and activity of cobble- and gravel-sized bed material along a given channel reach becomes greater with increasingly steeper watershed slopes. Second, the stresses required to mobilize bed material at a location can be attributed to reach-scale hydraulic factors, including channel geometry and particle size. The frequency of entrainment generally increases with downstream distance, as a result of decreasing particle size and increased flood magnitudes. An average of 1 year occurs between flows that initially entrain bed material as large as the median particle size, and an average of 1.5 years occurs between flows that completely entrain bed material as large as the median particle size. The Froude numbers associated with initial and complete entrainment of bed material up to the median particle size approximately are 0.40 and 0.45, respectively.
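
    For context, the Froude number quoted above is the ratio of flow velocity to the shallow-water wave speed, Fr = v / sqrt(g·d); the sketch below compares an assumed velocity and depth against the reported ~0.40 and ~0.45 entrainment thresholds, using invented values rather than data from the study.

    ```python
    from math import sqrt

    G = 9.81  # gravitational acceleration, m/s^2

    def froude(velocity_m_s, depth_m):
        """Froude number for a flow of given mean velocity and depth."""
        return velocity_m_s / sqrt(G * depth_m)

    fr = froude(velocity_m_s=1.5, depth_m=1.2)   # assumed example values
    print(f"Fr = {fr:.2f}",
          "-> complete entrainment likely" if fr >= 0.45
          else "-> initial entrainment likely" if fr >= 0.40
          else "-> below entrainment thresholds")
    ```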

  4. Rescuing Computerized Testing by Breaking Zipf's Law.

    ERIC Educational Resources Information Center

    Wainer, Howard

    2000-01-01

    Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…

  5. Birthweight-discordance and differences in early parenting relate to monozygotic twin differences in behaviour problems and academic achievement at age 7.

    PubMed

    Asbury, Kathryn; Dunn, Judith F; Plomin, Robert

    2006-03-01

    This longitudinal monozygotic (MZ) twin differences study explored associations between birthweight and early family environment and teacher-rated behaviour problems and academic achievement at age 7. MZ differences in anxiety, hyperactivity, conduct problems, peer problems and academic achievement correlated significantly with MZ differences in birthweight and early family environment, showing effect sizes of up to 2%. As predicted by earlier research, associations increased at the extremes of discordance, even in a longitudinal, cross-rater design, with effect sizes reaching as high as 12%. As with previous research some of these non-shared environmental (NSE) relationships appeared to operate partly as a function of SES, family chaos and maternal depression. Higher-risk families generally showed stronger negative associations.

  6. What big size you have! Using effect sizes to determine the impact of public health nursing interventions.

    PubMed

    Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A

    2013-01-01

    The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
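
    For readers unfamiliar with the metric, Cohen's d is the mean difference divided by the pooled standard deviation; the sketch below computes it for two invented admission/discharge score vectors purely to demonstrate the calculation, not to reproduce the study's data.

    ```python
    import numpy as np

    def cohens_d(x, y):
        """Cohen's d: mean difference divided by the pooled standard deviation."""
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
        return (np.mean(y) - np.mean(x)) / np.sqrt(pooled_var)

    admission = np.array([2, 3, 3, 2, 3, 2, 3, 3])   # invented 1-5 scale ratings
    discharge = np.array([3, 3, 4, 3, 4, 3, 3, 4])
    print(round(cohens_d(admission, discharge), 2))  # > 0.80 counts as "large"
    ```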

  7. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.

  8. Extending ALE3D, an Arbitrarily Connected hexahedral 3D Code, to Very Large Problem Size (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, A L

    2010-12-15

    As the number of compute units increases on the ASC computers, the prospect of running previously unimaginably large problems is becoming a reality. In an arbitrarily connected 3D finite element code, like ALE3D, one must provide a unique identification number for every node, element, face, and edge. This is required for a number of reasons, including defining the global connectivity array required for domain decomposition, identifying appropriate communication patterns after domain decomposition, and determining the appropriate load locations for implicit solvers, for example. In most codes, the unique identification number is defined as a 32-bit integer. Thus the maximum value available is 2^31, or roughly 2.1 billion. For a 3D geometry consisting of arbitrarily connected hexahedral elements, there are approximately 3 faces for every element, and 3 edges for every node. Since the nodes and faces need id numbers, using 32-bit integers puts a hard limit on the number of elements in a problem at roughly 700 million. The first solution to this problem would be to replace 32-bit signed integers with 32-bit unsigned integers. This would increase the maximum size of a problem by a factor of 2. This provides some head room, but almost certainly not one that will last long. Another solution would be to replace all 32-bit int declarations with 64-bit long long declarations (long is either a 32-bit or a 64-bit integer, depending on the OS). The problem with this approach is that there are only a few arrays that actually need the extended size, and thus this would increase the size of the problem unnecessarily. In a future computing environment where CPUs are abundant but memory relatively scarce, this is probably the wrong approach. Based on these considerations, we have chosen to replace only the global identifiers with the appropriate 64-bit integer. The problem with this approach is finding all the places where data specified as a 32-bit integer needs to be replaced with the 64-bit integer. In the rest of this paper we describe the techniques used to facilitate this transformation, issues raised, and issues still to be addressed. This poster will describe the reasons, methods, and issues associated with extending the ALE3D code to run problems larger than 700 million elements.
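
    The ~700 million element ceiling follows directly from the ID arithmetic in the abstract; the sketch below reproduces that back-of-the-envelope calculation for 32-bit signed, 32-bit unsigned, and 64-bit signed identifiers (the 3-faces-per-element ratio is taken from the text above).

    ```python
    # Back-of-the-envelope check of the element limits discussed above.
    FACES_PER_ELEMENT = 3   # approximate ratio for arbitrarily connected hex meshes

    for name, max_id in [("int32 (signed)",    2**31 - 1),
                         ("uint32 (unsigned)", 2**32 - 1),
                         ("int64 (signed)",    2**63 - 1)]:
        max_elements = max_id // FACES_PER_ELEMENT
        print(f"{name:18s} max id = {max_id:>22,d}  ->  ~{max_elements:,d} elements")
    # int32 gives ~715 million elements, matching the ~700 million limit above.
    ```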

  9. Size Matters: Increased Grey Matter in Boys with Conduct Problems and Callous-Unemotional Traits

    ERIC Educational Resources Information Center

    De Brito, Stephane A.; Mechelli, Andrea; Wilke, Marko; Laurens, Kristin R.; Jones, Alice P.; Barker, Gareth J.; Hodgins, Sheilagh; Viding, Essi

    2009-01-01

    Brain imaging studies of adults with psychopathy have identified structural and functional abnormalities in limbic and prefrontal regions that are involved in emotion recognition, decision-making, morality and empathy. Among children with conduct problems, a small subgroup presents callous-unemotional traits thought to be antecedents of…

  10. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem. The Bicriteria Network Optimization Problem is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach using a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the maximum possible flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which uses information from the current population to readjust weights and obtain a search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.

  11. Magnetically modified biocells in constant magnetic field

    NASA Astrophysics Data System (ADS)

    Abramov, E. G.; Panina, L. K.; Kolikov, V. A.; Bogomolova, E. V.; Snetov, V. N.; Cherepkova, I. A.; Kiselev, A. A.

    2017-02-01

    The paper addresses the inverse problem of determining the area where an external constant magnetic field captures biological cells modified by magnetic nanoparticles. Zero-velocity isolines, bounding the area where the modified cells are captured by the magnetic field, were determined numerically for two locations of the magnet. The problem was solved taking into account the gravitational field, magnetic induction, density of the medium, concentration and size of the cells, and size and magnetization of the nanoparticles attached to the cell. An increase in the number of nanoparticles attached to a cell and a decrease in the cell's size enlarge the area where the modified cells are captured and concentrated by the magnet. The solution is confirmed by the visible pattern formation of the modified Saccharomyces cerevisiae cells.

  12. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739

  13. Macrodystrophia lipomatosa of foot involving great toe.

    PubMed

    Gaur, A K; Mhambre, A S; Popalwar, H; Sharma, R

    2014-06-01

    Macrodystrophia lipomatosa is a rare form of congenital disorder in which there is localized gigantism characterized by progressive overgrowth of all mesenchymal elements with a disproportionate increase in the fibroadipose tissues. The adipose tissue infiltration involves subcutaneous tissue, periosteum, nerves and bone marrow. Most of the cases reported have hand or foot involvement. Patient seeks medical help for improving cosmesis or to get the size of the involved part reduced in order to reduce mechanical problems. We report a case of macrodystrophia lipomatosa involving medial side of foot with significant enlargement of great toe causing concern for cosmesis and inconvenience due to mechanical problems. The X-rays showed increased soft tissue with more of adipose tissue and increased size of involved digits with widening of ends. Since the patient's mother did not want any surgical intervention he was educated about foot care and proper footwear design was suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Heights of selected ponderosa pine seedlings during 20 years

    Treesearch

    R. Z. Callaham; J. W. Duffield

    1963-01-01

    Many silviculturists and geneticists, concerned with the problem of increasing the rate of production of forest plantations, advocate or practice the selection of the larger seedlings in the nursery bed. Such selection implies a hypothesis that size of seedlings is positively correlated with size of the same plants at some more advanced age. Two tests were established...

  15. Formulation of poorly water-soluble Gemfibrozil applying power ultrasound.

    PubMed

    Ambrus, R; Naghipour Amirzadi, N; Aigner, Z; Szabó-Révész, P

    2012-03-01

    The dissolution properties of a drug and its release from the dosage form have a basic impact on its bioavailability. Solubility problems are a major challenge for the pharmaceutical industry as concerns the development of new pharmaceutical products. Formulation problems may possibly be overcome by modification of particle size and morphology. The application of power ultrasound is a novel possibility in drug formulation. This article reports on solvent diffusion and melt emulsification, as new methods supplemented with drying in the field of sonocrystallization of poorly water-soluble Gemfibrozil. During thermoanalytical characterization, a modified structure was detected. The specific surface area of the drug was increased following particle size reduction and the poor wettability properties could also be improved. The dissolution rate was therefore significantly increased. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Wiki Activities in Blended Learning for Health Professional Students: Enhancing Critical Thinking and Clinical Reasoning Skills

    ERIC Educational Resources Information Center

    Snodgrass, Suzanne

    2011-01-01

    Health professionals use critical thinking, a key problem solving skill, for clinical reasoning which is defined as the use of knowledge and reflective inquiry to diagnose a clinical problem. Teaching these skills in traditional settings with growing class sizes is challenging, and students increasingly expect learning that is flexible and…

  17. Predator-driven brain size evolution in natural populations of Trinidadian killifish (Rivulus hartii)

    PubMed Central

    Walsh, Matthew R.; Broyles, Whitnee; Beston, Shannon M.; Munch, Stephan B.

    2016-01-01

    Vertebrates exhibit extensive variation in relative brain size. It has long been assumed that this variation is the product of ecologically driven natural selection. Yet, despite more than 100 years of research, the ecological conditions that select for changes in brain size are unclear. Recent laboratory selection experiments showed that selection for larger brains is associated with increased survival in risky environments. Such results lead to the prediction that increased predation should favour increased brain size. Work on natural populations, however, foreshadows the opposite trajectory of evolution; increased predation favours increased boldness, slower learning, and may thereby select for a smaller brain. We tested the influence of predator-induced mortality on brain size evolution by quantifying brain size variation in a Trinidadian killifish, Rivulus hartii, from communities that differ in predation intensity. We observed strong genetic differences in male (but not female) brain size between fish communities; second generation laboratory-reared males from sites with predators exhibited smaller brains than Rivulus from sites in which they are the only fish present. Such trends oppose the results of recent laboratory selection experiments and are not explained by trade-offs with other components of fitness. Our results suggest that increased male brain size is favoured in less risky environments because of the fitness benefits associated with faster rates of learning and problem-solving behaviour. PMID:27412278

  18. [Survival strategy of photosynthetic organisms. 1. Variability of the extent of light-harvesting pigment aggregation as a structural factor optimizing the function of oligomeric photosynthetic antenna. Model calculations].

    PubMed

    Fetisova, Z G

    2004-01-01

    In accordance with our concept of rigorous optimization of photosynthetic machinery by a functional criterion, this series of papers continues purposeful search in natural photosynthetic units (PSU) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of light-harvesting antenna of variable size controlled in vivo by the light intensity during the growth of organisms, which accentuates the problem of antenna structure optimization because optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling for the functioning of natural PSUs, we have shown that the aggregation of pigments of model light-harvesting antenna, being one of universal optimizing factors, furthermore allows controlling the antenna efficiency if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of antenna increases with the size of the elementary antenna aggregate, thus ensuring the high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of light-harvesting antenna is biologically expedient.

  19. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    NASA Astrophysics Data System (ADS)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time should be minimized, in another version of the problem, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small, and therefore, they are not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem in energy efficient processors scheduling is considered.

  20. The Relationship Between Resilience and Internet Addiction: A Multiple Mediation Model Through Peer Relationship and Depression.

    PubMed

    Zhou, Pingyan; Zhang, Cai; Liu, Jian; Wang, Zhe

    2017-10-01

    Heavy use of the Internet may lead to profound academic problems in elementary students, such as poor grades, academic probation, and even expulsion from school. It is of great concern that Internet addiction problems in elementary school students have increased sharply in recent years. In this study, 58,756 elementary school students from the Henan province of China completed four questionnaires to explore the mechanisms of Internet addiction. The results showed that resilience was negatively correlated with Internet addiction. There were three mediational paths in the model: (a) the mediational path through peer relationship with an effect size of 50.0 percent, (b) the mediational path through depression with an effect size of 15.6 percent, (c) the mediational path through peer relationship and depression with an effect size of 13.7 percent. The total mediational effect size was 79.27 percent. The effect size through peer relationship was the strongest among the three mediation paths. The current findings suggest that resilience is a predictor of Internet addiction. Improving children's resilience (such as toughness, emotional control, and problem solving) can be an effective way to reduce Internet addiction behavior. The current findings provide useful information for early detection and intervention for Internet addiction.

  1. Cost effective campaigning in social networks

    NASA Astrophysics Data System (ADS)

    Kotnis, Bhushan; Kuri, Joy

    2016-05-01

    Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize the idea vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals for ensuring the highest possible outreach size. We aim to do the same by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals that are provided incentives for minimizing the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized for maximizing the outreach size for a given cost budget. The optimization problem turns out to be nontrivial; it involves quantities that need to be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.
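
    As a point of contrast with the paper's percolation-theory formulation, the outreach of a given set of incentivized seeds can also be estimated by brute-force simulation; the sketch below does this on an assumed random graph with networkx and is purely illustrative of the cost/outreach trade-off, not the authors' linear program.

    ```python
    import random
    import networkx as nx

    def expected_outreach(G, k, trials=200):
        """Average number of nodes reachable from k randomly chosen seeds."""
        nodes = list(G.nodes)
        total = 0
        for _ in range(trials):
            seeds = random.sample(nodes, k)
            reached = set()
            for s in seeds:
                reached |= nx.descendants(G, s) | {s}
            total += len(reached)
        return total / trials

    G = nx.erdos_renyi_graph(n=500, p=0.004, directed=True, seed=1)  # assumed graph
    for k in (1, 5, 20):        # k plays the role of the incentive budget
        print(k, round(expected_outreach(G, k), 1))
    ```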

  2. Study on the temperature field of large-sized sapphire single crystal furnace

    NASA Astrophysics Data System (ADS)

    Zhai, J. P.; Jiang, J. W.; Liu, K. G.; Peng, X. B.; Jian, D. L.; Li, I. L.

    2018-01-01

    In this paper, the temperature field of large-sized (120kg, 200kg and 300kg grade) sapphire single crystal furnace was simulated. By keeping the crucible diameter ratio and the insulation system unchanged, the power consumption, axial and radial temperature gradient, solid-liquid surface shape, stress distribution and melt flow were studied. The simulation results showed that with the increase of the single crystal furnace size, the power consumption increased, the temperature field insulation effect became worse, the growth stress value increased and the stress concentration phenomenon occurred. To solve these problems, the middle and bottom insulation system should be enhanced during designing the large-sized sapphire single crystal furnace. The appropriate radial and axial temperature gradient was favorable to reduce the crystal stress and prevent the occurrence of cracking. Expanding the interface between the seed and crystal was propitious to avoid the stress accumulation phenomenon.

  3. Introducing the MCHF/OVRP/SDMP: Multicapacitated/Heterogeneous Fleet/Open Vehicle Routing Problems with Split Deliveries and Multiproducts

    PubMed Central

    Yilmaz Eroglu, Duygu; Caglar Gencosman, Burcu; Cavdur, Fatih; Ozmutlu, H. Cenk

    2014-01-01

    In this paper, we analyze a real-world OVRP problem for a production company. Considering real-world constrains, we classify our problem as multicapacitated/heterogeneous fleet/open vehicle routing problem with split deliveries and multiproduct (MCHF/OVRP/SDMP) which is a novel classification of an OVRP. We have developed a mixed integer programming (MIP) model for the problem and generated test problems in different size (10–90 customers) considering real-world parameters. Although MIP is able to find optimal solutions of small size (10 customers) problems, when the number of customers increases, the problem gets harder to solve, and thus MIP could not find optimal solutions for problems that contain more than 10 customers. Moreover, MIP fails to find any feasible solution of large-scale problems (50–90 customers) within time limits (7200 seconds). Therefore, we have developed a genetic algorithm (GA) based solution approach for large-scale problems. The experimental results show that the GA based approach reaches successful solutions with 9.66% gap in 392.8 s on average instead of 7200 s for the problems that contain 10–50 customers. For large-scale problems (50–90 customers), GA reaches feasible solutions of problems within time limits. In conclusion, for the real-world applications, GA is preferable rather than MIP to reach feasible solutions in short time periods. PMID:25045735

  4. Solution of a large hydrodynamic problem using the STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Howser, L. M.

    1976-01-01

    A representative hydrodynamics problem, the shock-initiated flow over a flat plate, was used for exploring data organizations and program structures needed to exploit the STAR-100 vector processing computer. A brief description of the problem is followed by a discussion of how each portion of the computational process was vectorized. Finally, timings of different portions of the program are compared with equivalent operations on serial machines. The speedup of the STAR-100 over the CDC 6600 is shown to increase as the problem size increases. All computations were carried out on a CDC 6600 and a CDC STAR 100, with code written in FORTRAN for the 6600 and in STAR FORTRAN for the STAR 100.

  5. Aircraft of the future

    NASA Technical Reports Server (NTRS)

    Yeger, S.

    1985-01-01

    Some basic problems connected with attempts to increase the size and capacity of transport aircraft are discussed. According to the square-cube law, when geometric similarity is maintained, the structural weight increases with the third power of the increase in the linear dimensions of the aircraft, while the surface area increases with the second power. A consequence is that the fraction of useful weight will decrease as aircraft increase in size. However, in flying-wing designs, in which the load is distributed over the wing in proportion to the lifting forces, the total bending moment on the wing is sharply reduced, enabling lighter construction. Flying wings may have an ultimate capacity of 3000 passengers.
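
    In symbols, the square-cube argument runs as follows (a standard statement of the law, not an equation taken from the article itself): scaling every linear dimension by a factor λ gives

        W_{\text{struct}} \propto \lambda^{3}, \qquad S \propto \lambda^{2}
        \quad\Longrightarrow\quad
        \frac{W_{\text{struct}}}{S} \propto \lambda ,

    so the structural weight grows faster than the lifting surface, and the useful-load fraction falls as the aircraft is scaled up.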

  6. Autologous Skin Cell Spray for Massive Soft Tissue War Injuries: A Prospective, Case-Control, Multicenter Trial

    DTIC Science & Technology

    2014-04-01

    randomization design, after all patients are treated with dermal matrix, patients will be randomized to Arm 1 (control group; standard skin grafting with... grafts are often “meshed” or flattened and spread out to increase the size of the skin graft to better cover a large wound. Standard “meshing” increases...the size of the donor graft by 1.5 times (1:1.5). Problems with healing and skin irritation remain with such skin grafts when the injured areas are

  7. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding and computing statistical power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
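
    For readers without access to those programs, a power calculation of the kind discussed can be approximated by simulation. The sketch below is a generic illustration (not the authors' SAS code) for a two-group comparison with an assumed true difference of 0.5 standard deviations and 20 animals per group.

        # Monte Carlo estimate of power for a two-sample t-test; illustrative only.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_per_group, effect_size, alpha, n_sims = 20, 0.5, 0.05, 10000
        rejections = 0
        for _ in range(n_sims):
            a = rng.normal(0.0, 1.0, n_per_group)
            b = rng.normal(effect_size, 1.0, n_per_group)
            if stats.ttest_ind(a, b).pvalue < alpha:
                rejections += 1
        print(f"estimated power ~ {rejections / n_sims:.2f}")   # roughly one-third here

    A power of only about one-third for this design illustrates why sample sizes often need to be larger than intuition suggests.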

  8. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems became more pronounced with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and the different sums of squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
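
    The criteria under study follow the textbook definitions below (standard formulas, not code from the study); the closing comment notes why the pairwise nature of MRM inflates the apparent sample size.

        # Standard information-criterion formulas for a model with log-likelihood
        # logL, k parameters and n observations.
        import math

        def aic(log_lik, k):
            return 2 * k - 2 * log_lik

        def aicc(log_lik, k, n):
            return aic(log_lik, k) + (2 * k * (k + 1)) / (n - k - 1)

        def bic(log_lik, k, n):
            return k * math.log(n) - 2 * log_lik

        # With MRM the "observations" are the n*(n-1)/2 pairwise distances rather than
        # the n sampled units, which is one way the effective sample size, and hence
        # these criteria, can mislead.
        print(aic(-120.0, 4), aicc(-120.0, 4, 45), bic(-120.0, 4, 45))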

  9. Do small swarms have an advantage when house hunting? The effect of swarm size on nest-site selection by Apis mellifera.

    PubMed

    Schaerf, T M; Makinson, J C; Myerscough, M R; Beekman, M

    2013-10-06

    Reproductive swarms of honeybees are faced with the problem of finding a good site to establish a new colony. We examined the potential effects of swarm size on the quality of nest-site choice through a combination of modelling and field experiments. We used an individual-based model to examine the effects of swarm size on decision accuracy under the assumption that the number of bees actively involved in the decision-making process (scouts) is an increasing function of swarm size. We found that the ability of a swarm to choose the best of two nest sites decreases as swarm size increases when there is some time-lag between discovering the sites, consistent with Janson & Beekman (Janson & Beekman 2007 Proceedings of European Conference on Complex Systems, pp. 204-211.). However, when simulated swarms were faced with a realistic problem of choosing between many nest sites discoverable at all times, larger swarms were more accurate in their decisions than smaller swarms owing to their ability to discover nest sites more rapidly. Our experimental fieldwork showed that large swarms invest a larger number of scouts into the decision-making process than smaller swarms. Preliminary analysis of waggle dances from experimental swarms also suggested that large swarms could indeed discover and advertise nest sites at a faster rate than small swarms.

  11. Correlation between Academic and Skills-Based Tests in Computer Networks

    ERIC Educational Resources Information Center

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and a growing demand from students for more hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  12. Neural Classifiers for Learning Higher-Order Correlations

    NASA Astrophysics Data System (ADS)

    Güler, Marifi

    1999-01-01

    Studies by various authors suggest that higher-order networks can be more powerful, and are biologically more plausible, than the more traditional multilayer networks. These architectures make explicit use of nonlinear interactions between input variables in the form of higher-order units or product units. If it is known a priori that the problem to be implemented possesses a given set of invariances, as in translation-, rotation-, and scale-invariant pattern recognition, those invariances can be encoded, thus eliminating all higher-order terms that are incompatible with them. In general, however, it is a serious setback that the complexity of learning increases exponentially with the size of the inputs. This paper reviews higher-order networks and introduces an implicit representation in which the learning complexity is mainly determined by the number of higher-order terms to be learned and increases only linearly with the input size.
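
    The exponential blow-up of the explicit representation is easy to see by counting product terms. The snippet below is a toy count only (the paper's implicit representation is not reproduced here): over n inputs there are 2^n - 1 possible product terms of order 1 to n.

        # Count the distinct product terms (order 1..n) that an explicit
        # higher-order unit could in principle use over n inputs.
        from math import comb

        for n in (4, 8, 16, 32):
            total_terms = sum(comb(n, k) for k in range(1, n + 1))   # equals 2**n - 1
            print(n, total_terms)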

  13. Interlaced coarse-graining for the dynamical cluster approximation

    NASA Astrophysics Data System (ADS)

    Haehner, Urs; Staar, Peter; Jiang, Mi; Maier, Thomas; Schulthess, Thomas

    The negative sign problem remains a challenging limiting factor in quantum Monte Carlo simulations of strongly correlated fermionic many-body systems. The dynamical cluster approximation (DCA) makes this problem less severe by coarse-graining the momentum space to map the bulk lattice to a cluster embedded in a dynamical mean-field host. Here, we introduce a new, interlaced form of coarse-graining and compare it with the traditional coarse-graining. We show that it leads to more controlled results, with a weaker cluster-shape dependence and a smoother cluster-size dependence, which converge with increasing cluster size to the results obtained using the standard coarse-graining. In addition, the new coarse-graining reduces the severity of the fermionic sign problem. Therefore, it enables calculations on much larger clusters and can allow the evaluation of the exact infinite-cluster-size result via finite-size scaling. To demonstrate this, we study the hole-doped two-dimensional Hubbard model and show that the interlaced coarse-graining in combination with the DCA+ algorithm permits the determination of the superconducting Tc on cluster sizes for which the results can be fitted with the Kosterlitz-Thouless scaling law. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  14. The neural bases of the multiplication problem-size effect across countries

    PubMed Central

    Prado, Jérôme; Lu, Jiayan; Liu, Li; Dong, Qi; Zhou, Xinlin; Booth, James R.

    2013-01-01

    Multiplication problems involving large numbers (e.g., 9 × 8) are more difficult to solve than problems involving small numbers (e.g., 2 × 3). Behavioral research indicates that this problem-size effect might be due to different factors across countries and educational systems. However, there is no neuroimaging evidence supporting this hypothesis. Here, we compared the neural correlates of the multiplication problem-size effect in adults educated in China and the United States. We found a greater neural problem-size effect in Chinese than American participants in bilateral superior temporal regions associated with phonological processing. However, we found a greater neural problem-size effect in American than Chinese participants in right intra-parietal sulcus (IPS) associated with calculation procedures. Therefore, while the multiplication problem-size effect might be a verbal retrieval effect in Chinese as compared to American participants, it may instead stem from the use of calculation procedures in American as compared to Chinese participants. Our results indicate that differences in educational practices might affect the neural bases of symbolic arithmetic. PMID:23717274

  15. An Artificial Immune System with Feedback Mechanisms for Effective Handling of Population Size

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Wang, Rong-Long; Ishii, Masahiro; Tang, Zheng

    This paper presents a feedback artificial immune system (FAIS). Inspired by the feedback mechanisms in the biological immune system, the proposed algorithm effectively manipulates the population size by adding and removing B cells according to the diversity of the current population. Two kinds of assessments are used to evaluate the diversity, aiming to capture the characteristics of the problem at hand. Furthermore, procedures for increasing and decreasing the population size are designed. The validity of the proposed algorithm is tested on several traveling salesman benchmark problems. Simulation results demonstrate the efficiency of the proposed algorithm compared with the traditional genetic algorithm and an improved clonal selection algorithm.

  16. Continuous Tamper-proof Logging using TPM2.0

    DTIC Science & Technology

    2014-06-16

    process each log entry. Additional hardware support could mitigate this problem. Tradeoffs between performance and security guarantees Disk write...becomes weaker as the block size increases. This problem is mitigated in protocol B by allowing offline recovery from a power failure and detection of...M.K., Isozaki, H.: Flicker : An execution infrastructure for TCB minimization. ACM SIGOPS Operating Systems Review 42(4) (2008) 315–328 24. Parno, B

  17. Cerium Oxide Nanoparticle Nose-Only Inhalation Exposures ...

    EPA Pesticide Factsheets

    There is a critical need to assess the health effects associated with exposure to commercially produced NPs across the size ranges detected in the industrial sectors that generate NPs as well as incorporate them into products. Generating stable, low concentrations of size-fractionated nanoscale aerosols in nose-only chambers can be difficult, and when the aerosol agglomerates during generation the problems increase significantly. One problem is that many nanoscale aerosol generators have higher aerosol output and/or airflow than a nose-only inhalation chamber can accommodate, requiring much of the generated aerosol to be diverted to exhaust. Another problem is that the mixing vessels used to modulate the fluctuating output from aerosol generators can cause substantial wall losses, consuming much of the generated aerosol. Other available aerosol generation systems can produce nanoscale aerosols from nanoparticles (NPs); however, these NPs are generated in real time and do not approximate the physical and chemical characteristics of the commercially produced NPs to which workers and the public are exposed. Characterizing the health effects associated with exposure to commercially produced NPs, which are more heterogeneous in morphology and size, is required for risk assessment. To overcome these problems, a low-consumption dry-particulate nanoscale aerosol generator was developed to deliver stable concentrations in the range of 10–5000 µg

  18. Application of gradient elasticity to benchmark problems of beam vibrations

    NASA Astrophysics Data System (ADS)

    Kateb, K. M.; Almitani, K. H.; Alnefaie, K. A.; Abu-Hamdeh, N. H.; Papadopoulos, P.; Askes, H.; Aifantis, E. C.

    2016-04-01

    The gradient approach, specifically gradient elasticity theory, is adopted to revisit certain typical configurations on mechanical vibrations. New results on size effects and scale-dependent behavior not captured by classical elasticity are derived, aiming at illustrating the usefulness of this approach to applications in advanced technologies. In particular, elastic prismatic straight beams in bending are discussed using two different governing equations: the gradient elasticity bending moment equation (fourth order) and the gradient elasticity deflection equation (sixth order). Different boundary/support conditions are examined. One problem considers the free vibrations of a cantilever beam loaded by an end force. A second problem is concerned with a simply supported beam disturbed by a concentrated force in the middle of the beam. Both problems are solved analytically. Exact free vibration frequencies and mode shapes are derived and presented. The difference between the gradient elasticity solution and its classical counterpart is revealed. The size ratio c/L (c denotes internal length and L is the length of the beam) induces significant effects on vibration frequencies. For both beam configurations, it turns out that as the ratio c/L increases, the vibration frequencies decrease, a fact which implies lower beam stiffness. Numerical examples show this behavior explicitly and recover the classical vibration behavior for vanishing size ratio c/L.

  19. Full self-consistency in the Fermi-orbital self-interaction correction

    NASA Astrophysics Data System (ADS)

    Yang, Zeng-hui; Pederson, Mark R.; Perdew, John P.

    2017-05-01

    The Perdew-Zunger self-interaction correction cures many common problems associated with semilocal density functionals, but suffers from a size-extensivity problem when Kohn-Sham orbitals are used in the correction. Fermi-Löwdin-orbital self-interaction correction (FLOSIC) solves the size-extensivity problem, allowing its use in periodic systems and resulting in better accuracy in finite systems. Although the previously published FLOSIC algorithm [Pederson et al., J. Chem. Phys. 140, 121103 (2014); doi:10.1063/1.4869581] appears to work well in many cases, it is not fully self-consistent. This would be particularly problematic for systems where the occupied manifold is strongly changed by the correction. In this paper, we demonstrate a different algorithm for FLOSIC to achieve full self-consistency with only a marginal increase of computational cost. The resulting total energies are found to be lower than previously reported non-self-consistent results.

  20. Enhancing Student Motivation as Evidenced by Improved Academic Growth and Increased Work Completion.

    ERIC Educational Resources Information Center

    Belcher, Gay; Macari, Nancy

    This project evaluated a program for enhancing student motivation as evidenced by improved academic growth and increased work completion. The targeted population consisted of fifth graders in a small school in a medium-sized rural community in the Midwest. The problem of lack of achievement motivation and lack of student concern about academic…

  1. New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084

    NASA Technical Reports Server (NTRS)

    McKay, D.S.; Cooper, B.L.; Riofrio, L.M.

    2009-01-01

    We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators, including our own group, performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. It is very labor intensive and requires hours to days to perform properly; even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of grain loss and because smaller grains stick to coarser grains; it is completely impractical below about 5–10 microns. Consequently, sieving gives no information on the size distribution below approximately 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution that would be revealed by other methods that produce many smaller size bins.

  2. Evidence of market-driven size-selective fishing and the mediating effects of biological and institutional factors.

    PubMed

    Reddy, Sheila M W; Wentz, Allison; Aburto-Oropeza, Octavio; Maxey, Martin; Nagavarapu, Sriniketh; Leslie, Heather M

    2013-06-01

    Market demand is often ignored or assumed to lead uniformly to the decline of resources. Yet little is known about how market demand influences natural resources in particular contexts, or the mediating effects of biological or institutional factors. Here, we investigate this problem by examining the Pacific red snapper (Lutjanus peru) fishery around La Paz, Mexico, where medium or "plate-sized" fish are sold to restaurants at a premium price. If higher demand for plate-sized fish increases the relative abundance of the smallest (recruit size class) and largest (most fecund) fish, this may be a market mechanism to increase stocks and fishermen's revenues. We tested this hypothesis by estimating the effect of prices on the distribution of catch across size classes using daily records of prices and catch. We linked predictions from this economic choice model to a staged-based model of the fishery to estimate the effects on the stock and revenues from harvest. We found that the supply of plate-sized fish increased by 6%, while the supply of large fish decreased by 4% as a result of a 13% price premium for plate-sized fish. This market-driven size selection increased revenues (14%) but decreased total fish biomass (-3%). However, when market-driven size selection was combined with limited institutional constraints, both fish biomass (28%) and fishermen's revenue (22%) increased. These results show that the direction and magnitude of the effects of market demand on biological populations and human behavior can depend on both biological attributes and institutional constraints. Fisheries management may capitalize on these conditional effects by implementing size-based regulations when economic and institutional incentives will enhance compliance, as in the case we describe here, or by creating compliance enhancing conditions for existing regulations.

  3. Risk Factors for Behavioral and Emotional Difficulties in Siblings of Children With Autism Spectrum Disorder.

    PubMed

    Walton, Katherine M

    2016-11-01

    This study examined risk factors for behavioral and emotional problems in 1,973 siblings of children with autism spectrum disorders (ASD). Results revealed six correlates of sibling internalizing and externalizing problems: male gender, smaller family size, older age of the child with ASD, lower family income, child with ASD behavior problems, and sibling Broader Autism Phenotype. Siblings with few risk factors were at low risk for behavioral and emotional problems. However, siblings with many risk factors were at increased risk for both internalizing and externalizing problems. These results highlight the need to assess risk for individual siblings to best identify a sub-population of siblings who may be in need of additional support.

  4. Programmed Instruction Revisited.

    ERIC Educational Resources Information Center

    Skinner, B. F.

    1986-01-01

    Discusses the history and development of teaching machines, invented to restore the important features of personalized instruction as public school class size increased. Examines teaching and learning problems over the past 50 years, including motivation, attention, appreciation, discovery, and creativity in relation to programmed instruction.…

  5. On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems

    DOE PAGES

    Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...

    2015-10-30

    In some applications, the solutions of combinatorial optimization problems on graphs must satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem, and we prove that they are NP-complete; thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c, and two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ) for some δ < 1/3.

  6. Offspring Size and Reproductive Allocation in Harvester Ants.

    PubMed

    Wiernasz, Diane C; Cole, Blaine J

    2018-01-01

    A fundamental decision that an organism must make is how to allocate resources to offspring, with respect to both size and number. The two major theoretical approaches to this problem, optimal offspring size and optimistic brood size models, make different predictions that may be reconciled by including how offspring fitness is related to size. We extended the reasoning of Trivers and Willard (1973) to derive a general model of how parents should allocate additional resources with respect to the number of males and females produced, and among individuals of each sex, based on the fitness payoffs of each. We then predicted how harvester ant colonies should invest additional resources and tested three hypotheses derived from our model, using data from 3 years of food supplementation bracketed by 6 years without food addition. All major results were predicted by our model: food supplementation increased the number of reproductives produced. Male, but not female, size increased with food addition; the greatest increases in male size occurred in colonies that made small females. We discuss how use of a fitness landscape improves quantitative predictions about allocation decisions. When parents can invest differentially in offspring of different types, the best strategy will depend on parental state as well as the effect of investment on offspring fitness.

  7. New method of extrapolation of the resistance of a model planing boat to full size

    NASA Technical Reports Server (NTRS)

    Sottorf, W

    1942-01-01

    The previously employed method of extrapolating the total resistance to full size with λ³ (where λ is the model scale), thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But, faced with the ever-increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is obtained analytically. The friction coefficients are read from Prandtl's curve for a turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.

  8. Deep and surface learning in problem-based learning: a review of the literature.

    PubMed

    Dolmans, Diana H J M; Loyens, Sofie M M; Marcq, Hélène; Gijbels, David

    2016-12-01

    In problem-based learning (PBL), implemented worldwide, students learn by discussing professionally relevant problems enhancing application and integration of knowledge, which is assumed to encourage students towards a deep learning approach in which students are intrinsically interested and try to understand what is being studied. This review investigates: (1) the effects of PBL on students' deep and surface approaches to learning, (2) whether and why these effects do differ across (a) the context of the learning environment (single vs. curriculum wide implementation), and (b) study quality. Studies were searched dealing with PBL and students' approaches to learning. Twenty-one studies were included. The results indicate that PBL does enhance deep learning with a small positive average effect size of .11 and a positive effect in eleven of the 21 studies. Four studies show a decrease in deep learning and six studies show no effect. PBL does not seem to have an effect on surface learning as indicated by a very small average effect size (.08) and eleven studies showing no increase in the surface approach. Six studies demonstrate a decrease and four an increase in surface learning. It is concluded that PBL does seem to enhance deep learning and has little effect on surface learning, although more longitudinal research using high quality measurement instruments is needed to support this conclusion with stronger evidence. Differences cannot be explained by the study quality but a curriculum wide implementation of PBL has a more positive impact on the deep approach (effect size .18) compared to an implementation within a single course (effect size of -.05). PBL is assumed to enhance active learning and students' intrinsic motivation, which enhances deep learning. A high perceived workload and assessment that is perceived as not rewarding deep learning are assumed to enhance surface learning.

  9. Performance and state-space analyses of systems using Petri nets

    NASA Technical Reports Server (NTRS)

    Watson, James Francis, III

    1992-01-01

    The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state-space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison with results from entropy theory shows that the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.

  10. Impact of fiber source and feed particle size on swine manure properties related to spontaneous foam formation during anaerobic decomposition.

    PubMed

    Van Weelden, M B; Andersen, D S; Kerr, B J; Trabue, S L; Pepple, L M

    2016-02-01

    Foam accumulation in deep-pit manure storage facilities is of concern for swine producers because of the logistical and safety-related problems it creates. A feeding trial was performed to evaluate the impact of feed grind size, fiber source, and manure inoculation on foaming characteristics. Animals were fed: (1) C-SBM (corn-soybean meal); (2) C-DDGS (corn-dried distiller grains with solubles); and (3) C-Soybean Hull (corn-soybean meal with soybean hulls), with each diet ground to either fine (374 μm) or coarse (631 μm) particle size. Two sets of 24 pigs were fed and their manure collected. Factors that decreased feed digestibility (larger grind size and increased fiber content) resulted in increased solids loading to the manure, greater foaming characteristics, more particles in the critical particle size range (2-25 μm), and a greater biological activity/potential. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Hug, Gabriela; Li, Xin

    Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumptions that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, in which the optimal usage of the storage is determined simultaneously with the optimal generation outputs and the size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows the stochastic optimization problem to be solved directly, without sampling-based approaches, and the storage to be sized to account not only for a wide range of potential scenarios but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to make optimal use of the available storage.
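
    One standard way to give a probabilistic constraint an analytical deterministic form, assuming Gaussian forecast errors, is sketched below. It mirrors the kind of reformulation the abstract alludes to, with made-up numbers; the authors' exact constraint is not reproduced here.

        # Chance constraint  P(generation + wind_forecast + error >= load) >= 1 - eps
        # with error ~ N(0, sigma^2) becomes the deterministic constraint
        #   generation >= load - wind_forecast + z_{1-eps} * sigma .
        from scipy.stats import norm

        load, wind_forecast, sigma, eps = 120.0, 35.0, 6.0, 0.05
        z = norm.ppf(1.0 - eps)                     # about 1.645 for eps = 0.05
        min_generation = load - wind_forecast + z * sigma
        print(round(min_generation, 2))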

  12. Microcomputers in the Anesthesia Library.

    ERIC Educational Resources Information Center

    Wright, A. J.

    The combination of computer technology and library operation is helping to alleviate such library problems as escalating costs, increasing collection size, deteriorating materials, unwieldy arrangement schemes, poor subject control, and the acquisition and processing of large numbers of rarely used documents. Small special libraries such as…

  13. The relation between statistical power and inference in fMRI

    PubMed Central

    Wager, Tor D.; Yarkoni, Tal

    2017-01-01

    Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial, especially with regard to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resemble the weak diffuse scenario much more than the strong localized scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region of interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
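
    The scale of the problem is easy to reproduce by simulation. The sketch below is an illustration only (not the authors' simulation code): it estimates the probability of detecting an assumed true brain-behavior correlation of r = 0.2 with n = 25 subjects at a stringent voxelwise threshold of p < .001.

        # Monte Carlo power estimate for a small-sample correlation test.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, true_r, alpha, sims = 25, 0.2, 0.001, 20000
        hits = 0
        for _ in range(sims):
            x = rng.standard_normal(n)
            y = true_r * x + np.sqrt(1 - true_r ** 2) * rng.standard_normal(n)
            if stats.pearsonr(x, y)[1] < alpha:
                hits += 1
        print(f"power ~ {hits / sims:.3f}")     # on the order of 1% in this setting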

  14. Dangers of collapsible ventricular drainage systems. Technical note.

    PubMed

    Kaye, A H; Wallace, D

    1982-02-01

    Ventricular drainage systems employing a collapsible plastic bag for fluid collection were postulated to cause an increasing back-pressure produced in part by the elasticity of the bag. This postulate was shown to be correct in an experimental situation. There was a logarithmic rise in cerebrospinal fluid pressure as the bag filled. By increasing the size of the bag, the problem was overcome.

  15. Effect of sulfate and carbonate minerals on particle-size distributions in arid soils

    USGS Publications Warehouse

    Goossens, Dirk; Buck, Brenda J.; Teng, Yuazxin; Robins, Colin; Goldstein, Harland L.

    2014-01-01

    Arid soils pose unique problems during measurement and interpretation of particle-size distributions (PSDs) because they often contain high concentrations of water-soluble salts. This study investigates the effects of sulfate and carbonate minerals on grain-size analysis by comparing analyses in water, in which the minerals dissolve, and isopropanol (IPA), in which they do not. The presence of gypsum, in particular, substantially affects particle-size analysis once the concentration of gypsum in the sample exceeds the mineral’s solubility threshold. For smaller concentrations particle-size results are unaffected. This is because at concentrations above the solubility threshold fine particles cement together or bind to coarser particles or aggregates already present in the sample, or soluble mineral coatings enlarge grains. Formation of discrete crystallites exacerbates the problem. When soluble minerals are dissolved the original, insoluble grains will become partly or entirely liberated. Thus, removing soluble minerals will result in an increase in measured fine particles. Distortion of particle-size analysis is larger for sulfate minerals than for carbonate minerals because of the much higher solubility in water of the former. When possible, arid soils should be analyzed using a liquid in which the mineral grains do not dissolve, such as IPA, because the results will more accurately reflect the PSD under most arid soil field conditions. This is especially important when interpreting soil and environmental processes affected by particle size.

  16. PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James

    We study the prediction of solar flare size and time-to-flare using 38 features describing the magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a geostationary operational environmental satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths of a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates, of some 40 hr, in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.
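
    The regression setup can be sketched generically as below. This is an illustration on synthetic stand-in data, not the authors' pipeline: random numbers replace the 38 magnetic-complexity features, and the "flare size" label is fabricated for the example.

        # Minimal support-vector-regression sketch: 38 features -> continuous label.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(42)
        X = rng.standard_normal((200, 38))                 # 200 regions x 38 features
        y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.2, 200)   # fake "flare size"

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X[:150], y[:150])
        print(np.mean(np.abs(model.predict(X[150:]) - y[150:])))      # mean absolute error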

  17. Simplifications for hydronic system models in modelica

    DOE PAGES

    Jorissen, F.; Wetter, M.; Helsen, L.

    2018-01-12

    Building systems and their heating, ventilation and air conditioning flow networks are becoming increasingly complex. Some building energy simulation tools simulate these flow networks using pressure drop equations. These flow network models typically generate coupled algebraic nonlinear systems of equations, which become increasingly difficult to solve as their size increases. This leads to longer computation times and can cause the solver to fail. These problems also arise when using the equation-based modelling language Modelica and Annex 60-based libraries, which may limit the applicability of the library to relatively small problems unless the problems are restructured. This paper discusses two algebraic loop types and presents an approach that decouples algebraic loops into smaller parts, or removes them completely. The approach is applied to a case study model in which an algebraic loop of 86 iteration variables is decoupled into smaller parts with a maximum of five iteration variables.

  18. High-rise housing construction as a way of solving the problem of providing people with comfortable habitation

    NASA Astrophysics Data System (ADS)

    Misailovov, Andrey

    2018-03-01

    The article analyzes the role of high-rise construction in solving the problem of providing people with comfortable habitation. High-rise construction is considered as a part of urban environment of big cities, a way of effective land use and development of entrepreneurship, including small and medium-sized enterprises. The economic efficiency of high-rise construction, an increase in budgetary financing and the number of introduced innovations are discussed.

  19. New approach for pattern collapse problem by increasing contact area at sub-100nm patterning

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Koo; Jung, Jae Chang; Lee, Min Suk; Lee, Sung K.; Kim, Sam Young; Hwang, Young-Sun; Bok, Cheol K.; Moon, Seung-Chan; Shin, Ki S.; Kim, Sang-Jung

    2003-06-01

    To shrink feature sizes below 100 nm, new light sources for photolithography are emerging, such as ArF (193 nm), F2 (157 nm), and EUV (13 nm). However, as pattern sizes decrease below 100 nm, a new obstacle, the pattern collapse problem, becomes the most serious bottleneck on the road to sub-100 nm lithography. The main cause of pattern collapse is the capillary force, which increases as the pattern size decreases. Accordingly, some efforts have tried to decrease this capillary force by changing to developer or rinse materials with low surface tension, while others have tried to increase the adhesion between the resist and the underlying material (organic BARC). In this study, we propose a novel approach to the pattern collapse problem that increases the contact area between the underlying material (organic BARC) and the resist pattern. The basic concept is that creating nanoscale topology on the underlying material increases the contact area between it and the resist. The process scheme is as follows. First, after coating and baking the organic BARC, nanoscale topology (3-10 nm) was formed by etching the BARC. Resist was then coated on this nanoscale topology and exposed. Finally, after development, the contact area between the organic BARC and the resist was increased. Although the nanoscale topology was produced by etching, the resulting 20 nm topology variation induced a large substrate reflectivity of 4.2%, and as a result the pattern fidelity was poor for a 100 nm 1:1 island pattern, so a new method was needed to improve pattern fidelity. This pattern fidelity problem was solved by introducing a sacrificial BARC layer. In this scheme, an organic BARC with a k value of about 0.64 was coated first, and a sacrificial BARC layer with a k value of about 0.18 was coated on top of it. Nanoscale topology (1-4 nm) was formed by etching the sacrificial BARC layer and, as in the method described above, the contact area between the sacrificial layer and the resist was increased. With this sacrificial layer, the substrate reflectivity decreased enormously to 0.2% even with the 20 nm topology variation of the sacrificial BARC layer, and a 100 nm 1:1 L/S pattern was obtained. With the conventional process, the minimum CD at which no collapse occurred was 96.5 nm; by applying the sacrificial BARC layer, it was reduced to 65.7 nm. In conclusion, with nanoscale topology and a sacrificial BARC layer, very small patterns that are robust against pattern collapse could be obtained.

  20. Stimulant Medication and the Hyperactive Adolescent: Myths and Facts.

    ERIC Educational Resources Information Center

    Clampit, M. K.; Pirkle, Jane B.

    1983-01-01

    Reviews literature that describes the rational and nonrational factors sustaining the myth that stimulant medication is ineffective for hyperactive adolescents. Discusses methodological problems and factors--such as increasing size, misbehavior and misattribution, and perceived relationship to drug abuse--that influence treatment decisions. (JAC)

  1. AIRBORNE PARTICLE SIZES AND SOURCES FOUND IN INDOOR AIR

    EPA Science Inventory

    As concern about indoor air quality (IAQ) has grown in recent years, understanding indoor aerosols has become increasingly important so that control techniques may be implemented to reduce damaging health effects and soiling problems. This paper begins with a brief look at the me...

  2. Practice makes proficient: pigeons (Columba livia) learn efficient routes on full-circuit navigational traveling salesperson problems.

    PubMed

    Baron, Danielle M; Ramirez, Alejandro J; Bulitko, Vadim; Madan, Christopher R; Greiner, Ariel; Hurd, Peter L; Spetch, Marcia L

    2015-01-01

    Visiting multiple locations and returning to the start via the shortest route, referred to as the traveling salesman (or salesperson) problem (TSP), is a valuable skill for both humans and non-humans. In the current study, pigeons were trained with increasing set sizes of up to six goals, with each set size presented in three distinct configurations, until consistency in route selection emerged. After training at each set size, the pigeons were tested with two novel configurations. All pigeons acquired routes that were significantly more efficient (i.e., shorter in length) than expected by chance selection of the goals. On average, the pigeons also selected routes that were more efficient than expected based on a local nearest-neighbor strategy and were as efficient as the average route generated by a crossing-avoidance strategy. Analysis of the routes taken indicated that they conformed to both a nearest-neighbor and a crossing-avoidance strategy significantly more often than expected by chance. Both the time taken to visit all goals and the actual distance traveled decreased from the first to the last trials of training in each set size. On the first trial with novel configurations, average efficiency was higher than chance, but was not higher than expected from a nearest-neighbor or crossing-avoidance strategy. These results indicate that pigeons can learn to select efficient routes on a TSP problem.
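
    The nearest-neighbour strategy that the pigeons' routes were compared against is simple to state in code: from the current location, always move to the closest unvisited goal, then return to the start. The sketch below uses made-up coordinates for illustration.

        # Nearest-neighbour heuristic for a full-circuit TSP-style route.
        import math

        start = (0.0, 0.0)
        goals = [(2.0, 1.0), (5.0, 4.0), (1.0, 6.0), (6.0, 0.5), (3.0, 3.0), (0.5, 4.5)]

        def nearest_neighbour_route(start, goals):
            remaining, route, here = list(goals), [start], start
            while remaining:
                here = min(remaining, key=lambda g: math.dist(here, g))
                remaining.remove(here)
                route.append(here)
            return route + [start]                  # full circuit back to the start

        route = nearest_neighbour_route(start, goals)
        length = sum(math.dist(route[i], route[i + 1]) for i in range(len(route) - 1))
        print(route, round(length, 2))

    Because the heuristic is greedy, it can be beaten by routes that also avoid crossings, which is the comparison the study draws.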

  3. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
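
    Wiener deconvolution itself has a standard frequency-domain form, sketched below on synthetic signals; this is a textbook version of the technique, not the authors' processing chain, and the impulse response and SNR are assumptions for the example.

        # FFT-based Wiener deconvolution of a synthetic received signal.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 1024
        x = np.zeros(n); x[100] = 1.0; x[400] = 0.6          # "true" reflections
        h = np.exp(-np.arange(n) / 20.0)                     # assumed impulse response
        y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))   # received signal
        y += 0.01 * rng.standard_normal(n)                   # measurement noise

        H = np.fft.fft(h)
        snr = 100.0                                          # assumed signal-to-noise ratio
        wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter, frequency domain
        x_hat = np.real(np.fft.ifft(np.fft.fft(y) * wiener))
        print(np.argmax(x_hat))                              # should recover a peak near 100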

  4. Cognitive Models for Learning to Control Dynamic Systems

    DTIC Science & Technology

    2008-05-30

    2N + 3NM + NM + NMK + NK + M constraints, including KN + M equality constraints, 7NM + 2M inequality non-timing constraints, and the rest inequality timing constraints. The size of the MILP model grows rapidly with the increase of problem size, so it is a big challenge to deal with more... task requirements, as studied in this section. An assumption is made in advance that the time of attack delay and the flight time to the sink node are

  5. Optimization Issues with Complex Rotorcraft Comprehensive Analysis

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.

    1998-01-01

    This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
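
    The trade-off between exact, step-size-independent AD derivatives and step-size-sensitive finite differences can be shown with any modern AD tool. The toy below uses JAX purely as a readily available illustration; ADIFOR itself operates on Fortran source and is not involved here.

        # Contrast automatic differentiation with central finite differences.
        import jax
        import jax.numpy as jnp

        def f(x):
            return jnp.sin(x) * jnp.exp(-0.1 * x ** 2)

        x0 = 1.3
        exact = jax.grad(f)(x0)                      # AD: exact to machine precision
        for h in (1e-2, 1e-6, 1e-12):                # finite differences: step-size dependent
            fd = (f(x0 + h) - f(x0 - h)) / (2 * h)
            print(h, float(fd - exact))

    The error first shrinks with h and then grows again as rounding dominates, which is exactly the step-size problem the abstract mentions.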

  6. Friction Freeform Fabrication of Superalloy Inconel 718: Prospects and Problems

    NASA Astrophysics Data System (ADS)

    Dilip, J. J. S.; Janaki Ram, G. D.

    2014-01-01

    Friction Freeform Fabrication is a new solid-state additive manufacturing process. The present investigation reports a detailed study on the prospects of this process for additive part fabrication in superalloy Inconel 718. Using a rotary friction welding machine and employing alloy 718 consumable rods in solution treated condition, cylindrical-shaped multi-layer friction deposits (10 mm diameter) were successfully produced. In the as-deposited condition, the deposits showed very fine grain size with no grain boundary δ phase. The deposits responded well to direct aging and showed satisfactory room-temperature tensile properties. However, their stress rupture performance was unsatisfactory because of their layered microstructure with very fine grain size and no grain boundary δ phase. The problem was overcome by heat treating the deposits first at 1353 K (1080 °C) (for increasing the grain size) and then at 1223 K (950 °C) (for precipitating the δ phase). Overall, the current study shows that Friction Freeform Fabrication is a very useful process for additive part fabrication in alloy 718.

  7. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    USDA-ARS?s Scientific Manuscript database

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  8. 24-Month-Old Children with Larger Oral Vocabularies Display Greater Academic and Behavioral Functioning at Kindergarten Entry

    PubMed Central

    Morgan, Paul L.; Farkas, George; Hillemeier, Marianne M.; Hammer, Carol Scheffner; Maczuga, Steve

    2015-01-01

    Data were analyzed from a population-based, longitudinal sample of 8,650 U.S. children to (a) identify factors associated with or predictive of oral vocabulary size at 24 months of age and (b) evaluate whether oral vocabulary size is uniquely predictive of academic and behavioral functioning at kindergarten entry. Children from higher socioeconomic status households, females, and those experiencing higher-quality parenting had larger oral vocabularies. Children born with very low birth weight or from households where the mother had health problems had smaller oral vocabularies. Even after extensive covariate adjustment, 24-month-old children with larger oral vocabularies displayed greater reading and mathematics achievement, increased behavioral self-regulation, and fewer externalizing and internalizing problem behaviors at kindergarten entry. PMID:26283023

  9. OCCUPATIONAL EDUCATION AND TRAINING FOR TOMORROW'S WORLD OF WORK. NUMBER 1, SQUARE PEGS AND ROUND HOLES.

    ERIC Educational Resources Information Center

    HORNER, JAMES T.; PETERSON, EVERETT E.

    A MAJOR PROBLEM OF AMERICAN YOUTH TODAY IS THAT OF QUALIFYING FOR AND HOLDING A JOB. GENERAL EDUCATION IS NOT ENOUGH FOR THE GREAT MAJORITY OF PEOPLE WHO MUST OPERATE FARMS, MACHINES, SHOPS, AND OFFICES AND PROVIDE SERVICES. YOUTH FACE INCREASED JOB COMPETITION BECAUSE OF THE INCREASED SIZE OF THE 14- TO 24-YEAR AGE GROUP. UNEMPLOYMENT AMONG YOUNG…

  10. Improved Fiber-Optic-Coupled Pressure And Vibration Sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J.; Cuomo, Frank W.

    1994-01-01

    Improved fiber-optic coupler enables use of single optical fiber to carry light to and from sensor head. Eliminates problem of alignment of multiple fibers in sensor head and simplifies calibration by making performance both more predictable and more stable. Sensitivities increased, sizes reduced. Provides increased margin for design of compact sensor heads not required to contain amplifier circuits and withstand high operating temperatures.

  11. Family size and effective population size in a hatchery stock of coho salmon (Oncorhynchus kisutch)

    USGS Publications Warehouse

    Simon, R.C.; McIntyre, J.D.; Hemmingsen, A.R.

    1986-01-01

    Means and variances of family size measured in five year-classes of wire-tagged coho salmon (Oncorhynchus kisutch) were linearly related. Population effective size was calculated by using estimated means and variances of family size in a 25-yr data set. Although numbers of age 3 adults returning to the hatchery appeared to be large enough to avoid inbreeding problems (the 25-yr mean exceeded 4500), the numbers actually contributing to the hatchery production may be too low. Several strategies are proposed to correct the problem perceived. Argument is given to support the contention that the problem of effective size is fairly general and is not confined to the present study population.

  12. Spatial, socio-economic, and ecological implications of incorporating minimum size constraints in marine protected area network design.

    PubMed

    Metcalfe, Kristian; Vaughan, Gregory; Vaz, Sandrine; Smith, Robert J

    2015-12-01

    Marine protected areas (MPAs) are the cornerstone of most marine conservation strategies, but the effectiveness of each one partly depends on its size and distance to other MPAs in a network. Despite this, current recommendations on ideal MPA size and spacing vary widely, and data are lacking on how these constraints might influence the overall spatial characteristics, socio-economic impacts, and connectivity of the resultant MPA networks. To address this problem, we tested the impact of applying different MPA size constraints in English waters. We used the Marxan spatial prioritization software to identify a network of MPAs that met conservation feature targets, whilst minimizing impacts on fisheries; modified the Marxan outputs with the MinPatch software to ensure each MPA met a minimum size; and used existing data on the dispersal distances of a range of species found in English waters to investigate the likely impacts of such spatial constraints on the region's biodiversity. Increasing MPA size had little effect on total network area or the location of priority areas, but as MPA size increased, fishing opportunity cost to stakeholders increased. In addition, as MPA size increased, the number of closely connected sets of MPAs in networks and the average distance between neighboring MPAs decreased, which consequently increased the proportion of the planning region that was isolated from all MPAs. These results suggest networks containing large MPAs would be more viable for the majority of the region's species that have small dispersal distances, but dispersal between MPA sets and spill-over of individuals into unprotected areas would be reduced. These findings highlight the importance of testing the impact of applying different MPA size constraints because there are clear trade-offs that result from the interaction of size, number, and distribution of MPAs in a network. © 2015 Society for Conservation Biology.

  13. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; Mcmanus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
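
    The parallel-efficiency argument above rests on the locality of fully explicit updates. The sketch below (a generic 1-D explicit heat-conduction step, not the paper's coupled thermoelastic/porous-flow solver) shows why: each cell's new value depends only on values from the previous time level, so all cells can be updated independently across processors.

      # Sketch of a fully explicit update (1-D heat conduction): every cell's new
      # value depends only on the previous time level, so all cells can be updated
      # independently (trivially parallel across threads, MPI ranks, or GPU lanes).
      import numpy as np

      def explicit_step(T, alpha, dx, dt):
          Tn = T.copy()
          # interior cells: each update reads only old values of its two neighbours
          Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          return Tn

      n, dx, alpha = 100, 0.01, 1e-4
      dt = 0.4 * dx**2 / alpha            # respect the explicit stability limit
      T = np.zeros(n); T[n // 2] = 100.0  # hot spot in the middle
      for _ in range(500):
          T = explicit_step(T, alpha, dx, dt)
      print(f"peak temperature after 500 steps: {T.max():.3f}")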

  14. A Kohonen-like decomposition method for the Euclidean traveling salesman problem-KNIES_DECOMPOSE.

    PubMed

    Aras, N; Altinel, I K; Oommen, J

    2003-01-01

    In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
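
    To make the decompose-and-patch idea concrete, the toy sketch below (not KNIES itself) partitions cities into angular sectors around their centroid, builds a nearest-neighbour sub-tour in each sector, and patches the sub-tours together in sector order; per the abstract, KNIES_DECOMPOSE instead uses self-organizing-map clustering and Hamiltonian-path solutions for the subproblems.

      # Toy decomposition heuristic (not KNIES): partition cities into angular
      # sectors around the centroid, solve each sector with nearest-neighbour,
      # and patch the sub-tours together in sector order.
      import math, random

      def nearest_neighbour(cities, start):
          tour, rest = [start], set(cities) - {start}
          while rest:
              last = tour[-1]
              nxt = min(rest, key=lambda c: math.dist(last, c))
              tour.append(nxt)
              rest.remove(nxt)
          return tour

      random.seed(0)
      cities = [(random.random(), random.random()) for _ in range(200)]
      cx = sum(x for x, _ in cities) / len(cities)
      cy = sum(y for _, y in cities) / len(cities)

      k = 8                                   # number of sub-problems (sectors)
      sectors = [[] for _ in range(k)]
      for c in cities:
          ang = math.atan2(c[1] - cy, c[0] - cx) % (2 * math.pi)
          sectors[min(int(ang / (2 * math.pi) * k), k - 1)].append(c)

      tour = []
      for sector in sectors:                  # solve each sub-problem, then patch
          if not sector:
              continue
          anchor = tour[-1] if tour else sector[0]
          first = min(sector, key=lambda c: math.dist(anchor, c))
          tour.extend(nearest_neighbour(sector, first))

      length = sum(math.dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))
      print(f"patched tour over {len(tour)} cities, length {length:.2f}")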

  15. An Energy-Efficient Mobile Sink-Based Unequal Clustering Mechanism for WSNs.

    PubMed

    Gharaei, Niayesh; Abu Bakar, Kamalrulnizam; Mohd Hashim, Siti Zaiton; Hosseingholi Pourasl, Ali; Siraj, Mohammad; Darwish, Tasneem

    2017-08-11

    Network lifetime and energy efficiency are crucial performance metrics used to evaluate wireless sensor networks (WSNs). Decreasing and balancing the energy consumption of nodes can be employed to increase network lifetime. In cluster-based WSNs, one objective of applying clustering is to decrease the energy consumption of the network. In fact, the clustering technique will be considered effective if the energy consumed by sensor nodes decreases after applying clustering, however, this aim will not be achieved if the cluster size is not properly chosen. Therefore, in this paper, the energy consumption of nodes, before clustering, is considered to determine the optimal cluster size. A two-stage Genetic Algorithm (GA) is employed to determine the optimal interval of cluster size and derive the exact value from the interval. Furthermore, the energy hole is an inherent problem which leads to a remarkable decrease in the network's lifespan. This problem stems from the asynchronous energy depletion of nodes located in different layers of the network. For this reason, we propose Circular Motion of Mobile-Sink with Varied Velocity Algorithm (CM2SV2) to balance the energy consumption ratio of cluster heads (CH). According to the results, these strategies could largely increase the network's lifetime by decreasing the energy consumption of sensors and balancing the energy consumption among CHs.

  16. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.

  17. Botanical trash mixtures analyzed with near-infrared and attenuated total reflectance fourier transform spectroscopy and thermogravimetric analysis

    USDA-ARS?s Scientific Manuscript database

    Botanical cotton trash mixed with lint reduces cotton’s marketability and appearance. During cotton harvesting, ginning, and processing, trash size reduction occurs, thus complicating its removal and identification. This trash causes problems by increasing ends down in yarn formation and thus proce...

  18. Off-flavor characterization and depuration in Atlantic salmon cultured to food-size within closed-containment systems

    USDA-ARS?s Scientific Manuscript database

    Atlantic salmon are typically cultured in marine net pens. However, technological advancements in recirculating aquaculture systems have increased the feasibility of culturing Atlantic salmon in land-based systems. One problem encountered when fish are harvested from recirculating systems is the pre...

  19. Study on improving the turbidity measurement of the absolute coagulation rate constant.

    PubMed

    Sun, Zhiwei; Liu, Jie; Xu, Shenghua

    2006-05-23

    The existing theories for evaluating the absolute coagulation rate constant by turbidity measurement were experimentally tested for suspensions of different particle sizes (radius a) at incident wavelengths (λ) ranging from near-infrared to ultraviolet light. When the size parameter α = 2πa/λ exceeds 3, the rate-constant data from previous theories for fixed-size particles show significant inconsistencies at different light wavelengths. We attribute this problem to the imperfection of these theories in describing the light scattering from doublets through their evaluation of the extinction cross section. The rate-constant evaluations of all previous theories become untenable as the size parameter increases, which limits the applicable range of the turbidity measurement. Using the T-matrix method, we present a robust solution for evaluating the extinction cross section of doublets formed in the aggregation. Our experiments show that this new approach is effective in extending the applicability range of the turbidity methodology and increasing measurement accuracy.
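
    A quick worked evaluation of the size parameter from the abstract, α = 2πa/λ, shows how a fixed particle radius (400 nm is an assumed, illustrative value) crosses the α > 3 regime as the incident wavelength moves from the near-infrared toward the ultraviolet:

      # Worked evaluation of the size parameter alpha = 2*pi*a/lambda from the
      # abstract: a fixed particle radius (400 nm, an assumed example value)
      # crosses the alpha > 3 regime as the incident wavelength shortens.
      import math

      a = 400e-9                          # particle radius [m] (illustrative choice)
      for lam_nm in (900, 600, 400, 300): # near-infrared down to ultraviolet
          alpha = 2 * math.pi * a / (lam_nm * 1e-9)
          regime = "alpha > 3: previous theories inconsistent" if alpha > 3 else "alpha <= 3"
          print(f"lambda = {lam_nm} nm  ->  alpha = {alpha:.2f}  ({regime})")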

  20. Association of Physical Activity and Sedentary Behavior With Psychological Well-Being Among Japanese Children: A Two-Year Longitudinal Study.

    PubMed

    Ishii, Kaori; Shibata, Ai; Adachi, Minoru; Oka, Koichiro

    2016-10-01

    Data on the effect of increased or decreased physical activity on children's psychological status are scarce, and effect sizes are small. This study conducted two-year longitudinal research to identify associations between physical activity, sedentary behavior, and psychological well-being in Japanese school children through a mail survey completed by 292 children aged 6-12 years. Data on sociodemographics, physical activity, sedentary behavior on weekdays and the weekend, and psychometrics (self-efficacy, anxiety, and behavioral/emotional problems) were collected using a self-reported questionnaire. A logistic regression analysis was performed, calculating odds ratios for physical activity, psychometrics, and baseline age and physical activity and sedentary behavior changes. For boys, a negative association was found between increased physical activity outside school and maintained or improved self-efficacy as opposed to a positive association in girls. Increased sedentary behavior on weekdays and long periods of sedentary behavior on weekends were associated with maintained or improved behavioral/emotional problems in girls only. This two-year longitudinal study is the first of its kind conducted in Japan. Although effect sizes were small, these results may nevertheless assist in intervention development to promote psychological well-being. © The Author(s) 2016.

  1. A simple encoding method for Sigma-Delta ADC based biopotential acquisition systems.

    PubMed

    Guerrero, Federico N; Spinelli, Enrique M

    2017-10-01

    Sigma-Delta analogue-to-digital converters allow acquiring the full dynamic range of biomedical signals at the electrodes, resulting in less complex hardware and increased measurement robustness. However, the increased data size per sample (typically 24 bits) demands the transmission of extremely large volumes of data across the isolation barrier, thus increasing power consumption on the patient side. This problem is accentuated when a large number of channels is used, as in current 128-256-electrode biopotential acquisition systems, which usually opt for an optical fibre link to the computer. An analogous problem occurs for simpler low-power acquisition platforms that transmit data through a wireless link to a computing platform. In this paper, a low-complexity encoding method is presented that decreases sample data size without losses while preserving the full DC-coupled signal. The method achieved an average compression ratio of 2.3, evaluated over an ECG and EMG signal bank acquired with equipment based on Sigma-Delta converters. It demands a very low processing load: a C-language implementation is presented that executed in an average of 110 clock cycles on an 8-bit microcontroller.
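
    The abstract does not spell out the encoding itself, so the sketch below is only a generic illustration of how lossless shrinking of 24-bit Sigma-Delta samples can work: first-difference the DC-coupled samples, zigzag-map the signed deltas, and pack them with a 7-bits-per-byte variable-length code, so that slowly varying biopotentials spend far fewer than 24 bits per sample.

      # Illustrative lossless scheme (the paper's exact method is not given here):
      # first-difference the 24-bit samples, zigzag-map signed deltas to unsigned,
      # and pack them with a variable-length (7-bits-per-byte) code. Slowly varying
      # DC-coupled biopotentials yield small deltas, so most samples fit in 1 byte.
      def zigzag(n):                        # signed -> unsigned, small |n| stays small
          return (n << 1) ^ (n >> 31)

      def varint(n):                        # 7 bits per byte, MSB = continuation flag
          out = bytearray()
          while True:
              b = n & 0x7F
              n >>= 7
              out.append(b | (0x80 if n else 0))
              if not n:
                  return bytes(out)

      def encode(samples):                  # samples: signed 24-bit integers
          prev, out = 0, bytearray()
          for s in samples:
              out += varint(zigzag(s - prev))
              prev = s
          return bytes(out)

      # synthetic slowly drifting signal standing in for an ECG record
      samples = [200000 + 5 * i + (i % 7) * 3 for i in range(10000)]
      raw_bytes = 3 * len(samples)          # 24 bits per sample
      print(f"compression ratio ~ {raw_bytes / len(encode(samples)):.2f}")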

  2. Meta-analysis of the predictive factors of postpartum fatigue.

    PubMed

    Badr, Hanan A; Zauszniewski, Jaclene A

    2017-08-01

    Nearly 64% of new mothers are affected by fatigue during the postpartum period, making it the most common problem that a woman faces as she adapts to motherhood. Postpartum fatigue can lead to serious negative effects on the mother's health and the newborn's development and interfere with mother-infant interaction. The aim of this meta-analysis was to identify predictive factors of postpartum fatigue and to document the magnitude of their effects using effect sizes. We used two search engines, PubMed and Google Scholar, to identify studies that met three inclusion criteria: (a) the article was written in English, (b) the article studied the predictive factors of postpartum fatigue, and (c) the article included information about the validity and reliability of the instruments used in the research. Nine articles met these inclusion criteria. The direction and strength of correlation coefficients between predictive factors and postpartum fatigue were examined across the studies to determine their effect sizes. Measurement of predictor variables occurred from 3 days to 6 months postpartum. Correlations reported between predictive factors and postpartum fatigue were as follows: small effect size (r range = 0.10 to 0.29) for education level, age, postpartum hemorrhage, infection, and child care difficulties; medium effect size (r range = 0.30 to 0.49) for physiological illness, low ferritin level, low hemoglobin level, sleeping problems, stress and anxiety, and breastfeeding problems; and large effect size (r range = 0.50+) for depression. Postpartum fatigue is a common condition that can lead to serious health problems for a new mother and her newborn. Therefore, increased knowledge concerning factors that influence the onset of postpartum fatigue is needed for early identification of new mothers who may be at risk. Appropriate treatments, interventions, information, and support can then be initiated to prevent or minimize the postpartum fatigue. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Reduced size first-order subsonic and supersonic aeroelastic modeling

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Various aeroelastic, aeroservoelastic, dynamic-response, and sensitivity analyses are based on a time-domain first-order (state-space) formulation of the equations of motion. The formulation of this paper is based on the minimum-state (MS) aerodynamic approximation method, which yields a low number of aerodynamic augmenting states. Modifications of the MS and the physical weighting procedures make the modeling method even more attractive. The flexibility of constraint selection is increased without increasing the approximation problem size; the accuracy of dynamic residualization of high-frequency modes is improved; and the resulting model is less sensitive to parametric changes in subsequent analyses. Applications to subsonic and supersonic cases demonstrate the generality, flexibility, accuracy, and efficiency of the method.

  4. Universality of fragment shapes.

    PubMed

    Domokos, Gábor; Kun, Ferenc; Sipos, András Árpád; Szabó, Tímea

    2015-03-16

    The shape of fragments generated by the breakup of solids is central to a wide variety of problems ranging from the geomorphic evolution of boulders to the accumulation of space debris orbiting Earth. Although the statistics of the mass of fragments has been found to show a universal scaling behavior, the comprehensive characterization of fragment shapes still remained a fundamental challenge. We performed a thorough experimental study of the problem fragmenting various types of materials by slowly proceeding weathering and by rapid breakup due to explosion and hammering. We demonstrate that the shape of fragments obeys an astonishing universality having the same generic evolution with the fragment size irrespective of materials details and loading conditions. There exists a cutoff size below which fragments have an isotropic shape, however, as the size increases an exponential convergence is obtained to a unique elongated form. We show that a discrete stochastic model of fragmentation reproduces both the size and shape of fragments tuning only a single parameter which strengthens the general validity of the scaling laws. The dependence of the probability of the crack plane orientation on the linear extension of fragments proved to be essential for the shape selection mechanism.

  5. Universality of fragment shapes

    PubMed Central

    Domokos, Gábor; Kun, Ferenc; Sipos, András Árpád; Szabó, Tímea

    2015-01-01

    The shape of fragments generated by the breakup of solids is central to a wide variety of problems ranging from the geomorphic evolution of boulders to the accumulation of space debris orbiting Earth. Although the statistics of the mass of fragments has been found to show a universal scaling behavior, the comprehensive characterization of fragment shapes still remained a fundamental challenge. We performed a thorough experimental study of the problem fragmenting various types of materials by slowly proceeding weathering and by rapid breakup due to explosion and hammering. We demonstrate that the shape of fragments obeys an astonishing universality having the same generic evolution with the fragment size irrespective of materials details and loading conditions. There exists a cutoff size below which fragments have an isotropic shape, however, as the size increases an exponential convergence is obtained to a unique elongated form. We show that a discrete stochastic model of fragmentation reproduces both the size and shape of fragments tuning only a single parameter which strengthens the general validity of the scaling laws. The dependence of the probability of the crack plane orientation on the linear extension of fragments proved to be essential for the shape selection mechanism. PMID:25772300

  6. Statistical challenges in a regulatory review of cardiovascular and CNS clinical trials.

    PubMed

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling; Jin, Kun; Lawrence, John; Kordzakhia, George; Massie, Tristan

    2016-01-01

    There are several challenging statistical problems identified in the regulatory review of large cardiovascular (CV) clinical outcome trials and central nervous system (CNS) trials. The problems can be common or distinct due to disease characteristics and the differences in trial design elements such as endpoints, trial duration, and trial size. In schizophrenia trials, heavy missing data is a big problem. In Alzheimer trials, the endpoints for assessing symptoms and the endpoints for assessing disease progression are essentially the same; it is difficult to construct a good trial design to evaluate a test drug for its ability to slow the disease progression. In CV trials, reliance on a composite endpoint with low event rate makes the trial size so large that it is infeasible to study multiple doses necessary to find the right dose for study patients. These are just a few typical problems. In the past decade, adaptive designs were increasingly used in these disease areas and some challenges occur with respect to that use. Based on our review experiences, group sequential designs (GSDs) have borne many successful stories in CV trials and are also increasingly used for developing treatments targeting CNS diseases. There is also a growing trend of using more advanced unblinded adaptive designs for producing efficacy evidence. Many statistical challenges with these kinds of adaptive designs have been identified through our experiences with the review of regulatory applications and are shared in this article.

  7. Self-reported behaviour problems and sibling relationship quality by siblings of children with autism spectrum disorder.

    PubMed

    Hastings, R P; Petalas, M A

    2014-11-01

    There are few published research studies in which siblings of children with autism spectrum disorder (ASD) provide self-reports about their own behavioural and emotional problems and their sibling relationships. Reliance on parent reports may lead to incomplete conclusions about the experiences of siblings themselves. Siblings 7-17 years and their mothers from 94 families of children with ASD were recruited. Mothers reported on family demographics, the behavioural and emotional problems of their child with ASD, and on their own symptoms of depression. Siblings reported on their relationship with their brother or sister with ASD, and siblings 11+ years of age also self-reported on their behavioural and emotional problems. Compared with normative British data, siblings reported very slightly elevated levels of behavioural and emotional problems. However, none of the mean differences were statistically significant and all group differences were associated with small or very small effect sizes - the largest being for peer problems (effect size = 0.31). Regression analysis was used to explore family systems relationships, with sibling self-reports predicted by the behaviour problems scores for the child with ASD and by maternal depression. Maternal depression did not emerge as a predictor of siblings' self-reported sibling relationships or their behavioural and emotional problems. Higher levels of behaviour problems in the child with ASD predicted decreased warmth/closeness and increased conflict in the sibling relationship. These data support the general findings of recent research in that there was little indication of clinically meaningful elevations in behavioural and emotional problems in siblings of children with ASD. Although further research replication is required, there was some indication that sibling relationships may be at risk where the child with ASD has significant behaviour problems. © 2014 John Wiley & Sons Ltd.

  8. Evidence of market-driven size-selective fishing and the mediating effects of biological and institutional factors

    PubMed Central

    Reddy, Sheila M. W.; Wentz, Allison; Aburto-Oropeza, Octavio; Maxey, Martin; Nagavarapu, Sriniketh; Leslie, Heather M.

    2014-01-01

    Market demand is often ignored or assumed to lead uniformly to the decline of resources. Yet little is known about how market demand influences natural resources in particular contexts, or the mediating effects of biological or institutional factors. Here, we investigate this problem by examining the Pacific red snapper (Lutjanus peru) fishery around La Paz, Mexico, where medium or “plate-sized” fish are sold to restaurants at a premium price. If higher demand for plate-sized fish increases the relative abundance of the smallest (recruit size class) and largest (most fecund) fish, this may be a market mechanism to increase stocks and fishermen’s revenues. We tested this hypothesis by estimating the effect of prices on the distribution of catch across size classes using daily records of prices and catch. We linked predictions from this economic choice model to a staged-based model of the fishery to estimate the effects on the stock and revenues from harvest. We found that the supply of plate-sized fish increased by 6%, while the supply of large fish decreased by 4% as a result of a 13% price premium for plate-sized fish. This market-driven size selection increased revenues (14%) but decreased total fish biomass (−3%). However, when market-driven size selection was combined with limited institutional constraints, both fish biomass (28%) and fishermen’s revenue (22%) increased. These results show that the direction and magnitude of the effects of market demand on biological populations and human behavior can depend on both biological attributes and institutional constraints. Fisheries management may capitalize on these conditional effects by implementing size-based regulations when economic and institutional incentives will enhance compliance, as in the case we describe here, or by creating compliance enhancing conditions for existing regulations. PMID:23865225

  9. An electromagnetism-like metaheuristic for open-shop problems with no buffer

    NASA Astrophysics Data System (ADS)

    Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi

    2012-12-01

    This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness. This problem occurs in many production settings, in the plastic molding, chemical, and food processing industries. The paper mathematically formulates the problem by a mixed integer linear program. The problem can be optimally solved by the model. The paper also develops a novel metaheuristic based on an electromagnetism algorithm to solve the large-sized problems. The paper conducts two computational experiments. The first includes small-sized instances by which the mathematical model and general performance of the proposed metaheuristic are evaluated. The second evaluates the metaheuristic for its performance to solve some large-sized instances. The results show that the model and algorithm are effective to deal with the problem.

  10. Reduction of the discretization stencil of direct forcing immersed boundary methods on rectangular cells: The ghost node shifting method

    NASA Astrophysics Data System (ADS)

    Picot, Joris; Glockner, Stéphane

    2018-07-01

    We present an analytical study of discretization stencils for the Poisson problem and the incompressible Navier-Stokes problem when used with some direct forcing immersed boundary methods. This study uses, but is not limited to, second-order discretization and Ghost-Cell Finite-Difference methods. We show that the stencil size increases with the aspect ratio of rectangular cells, which is undesirable as it breaks assumptions of some linear system solvers. To circumvent this drawback, a modification of the Ghost-Cell Finite-Difference methods is proposed to reduce the size of the discretization stencil to the one observed for square cells, i.e. with an aspect ratio equal to one. Numerical results validate this proposed method in terms of accuracy and convergence, for the Poisson problem and both Dirichlet and Neumann boundary conditions. An improvement on error levels is also observed. In addition, we show that the application of the chosen Ghost-Cell Finite-Difference methods to the Navier-Stokes problem, discretized by a pressure-correction method, requires an additional interpolation step. This extra step is implemented and validated through well known test cases of the Navier-Stokes equations.

  11. Distribution of the concentration of heavy metals associated with the sediment particles accumulated on road surfaces.

    PubMed

    Zafra, C A; Temprano, J; Tejero, I

    2011-07-01

    The heavy metal pollution caused by road run-off water constitutes a problem in urban areas. The metallic load associated with road sediment must be determined in order to study its impact in drainage systems and receiving waters, and to perfect the design of prevention systems. This paper presents data regarding the sediment collected on road surfaces in the city of Torrelavega (northern Spain) during a period of 65 days (132 samples). Two sample types were collected: vacuum-dried samples and those swept up following vacuuming. The sediment loading (g m(-2)), particle size distribution (63-2800 µm) and heavy metal concentrations were determined. The data showed that the concentration of heavy metals tends to increase with the reduction in the particle diameter (exponential tendency). The concentrations of Pb, Zn, Cu, Cr, Ni, Cd, Fe, Mn and Co in the size fraction <63 µm were 350, 630, 124, 57, 56, 38, 3231, 374 and 51 mg kg(-1), respectively (average traffic density: 3800 vehicles day(-1)). By increasing the residence time of the sediment, the concentration increases, whereas the ratio of the concentration between the different size fractions decreases. The concentration across the road diminishes when the distance between the roadway and the sampling site increases; when the distance increases, the ratio between size fractions for heavy metal concentrations increases. Finally, the main sources of heavy metals are the particles detached by braking (brake pads) and tyre wear (rubber), and are associated with particle sizes <125 µm.
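
    The "exponential tendency" of concentration versus particle diameter reported above can be captured with a log-linear least-squares fit; the diameters and concentrations in the sketch below are illustrative stand-ins, not the paper's measurements.

      # Sketch of fitting an exponential tendency c(d) = A * exp(k * d) by
      # log-linear least squares; the values below are assumed, illustrative data.
      import numpy as np

      d = np.array([63, 125, 250, 500, 1000, 2800], dtype=float)   # size fraction [µm]
      c = np.array([660, 620, 545, 425, 260, 43], dtype=float)     # e.g. Zn [mg/kg], assumed

      k, lnA = np.polyfit(d, np.log(c), 1)   # ln c = ln A + k*d  (k expected negative)
      A = np.exp(lnA)
      print(f"fitted model: c(d) ~ {A:.0f} * exp({k:.4f} * d)")
      print("predicted c at d = 63 µm:", round(A * np.exp(k * 63), 1), "mg/kg")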

  12. Do Patients’ Symptoms and Interpersonal Problems Improve in Psychotherapeutic Hospital Treatment in Germany? - A Systematic Review and Meta-Analysis

    PubMed Central

    Liebherz, Sarah; Rabung, Sven

    2014-01-01

    Background: In Germany, inpatient psychotherapy plays a unique role in the treatment of patients with common mental disorders of higher severity. In addition to psychiatric inpatient services, psychotherapeutic hospital treatment and psychosomatic rehabilitation are offered as independent inpatient treatment options. This meta-analysis aims to provide systematic evidence for psychotherapeutic hospital treatment in Germany regarding its effects on symptomatic and interpersonal impairment. Methodology: Relevant papers were identified by electronic database search and hand search. Randomized controlled trials as well as naturalistic prospective studies (including post-therapy and follow-up assessments) evaluating psychotherapeutic hospital treatment of mentally ill adults in Germany were included. Outcomes were required to be quantified by either the Symptom-Checklist (SCL-90-R or short versions) or the Inventory of Interpersonal Problems (IIP-64 or short versions). Effect sizes (Hedges' g) were combined using random effect models. Principal Findings: Sixty-seven papers representing 59 studies fulfilled inclusion criteria. Meta-analysis yielded a medium within-group effect size for symptom change at discharge (g = 0.72; 95% CI 0.68–0.76), with a small reduction to follow-up (g = 0.61; 95% CI 0.55–0.68). Regarding interpersonal problems, a small effect size was found at discharge (g = 0.35; 95% CI 0.29–0.41), which increased to follow-up (g = 0.48; 95% CI 0.36–0.60). While higher impairment at intake was associated with a larger effect size in both measures, longer treatment duration was related to lower effect sizes in SCL GSI and to larger effect sizes in IIP Total. Conclusions: Psychotherapeutic hospital treatment may be considered an effective treatment. In accordance with Howard's phase model of psychotherapy outcome, the present study demonstrated that symptom distress changes more quickly and strongly than interpersonal problems. Preliminary analyses show impairment at intake and treatment duration to be the strongest outcome predictors. Further analyses regarding this relationship are required. PMID:25141289
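
    For readers unfamiliar with the two quantities driving these results, the sketch below computes a single-study Hedges' g (a standardized pre/post difference with a small-sample correction) and a DerSimonian-Laird random-effects pooling across studies; the numbers are illustrative, not the review's data.

      # Sketch of the two ingredients behind the pooled effects reported above:
      # Hedges' g for one study and a DerSimonian-Laird random-effects combination
      # across studies. The numbers are illustrative, not the review's data.
      import math

      def hedges_g(m_pre, m_post, sd_pooled, n):
          d = (m_pre - m_post) / sd_pooled            # standardized pre/post change
          j = 1.0 - 3.0 / (4.0 * (n - 1) - 1.0)       # small-sample correction
          return j * d

      def random_effects(gs, vs):
          """DerSimonian-Laird pooling of effect sizes gs with variances vs."""
          w = [1.0 / v for v in vs]
          fixed = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)
          q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, gs))
          c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
          tau2 = max(0.0, (q - (len(gs) - 1)) / c)    # between-study variance
          ws = [1.0 / (v + tau2) for v in vs]
          pooled = sum(wi * gi for wi, gi in zip(ws, gs)) / sum(ws)
          se = math.sqrt(1.0 / sum(ws))
          return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

      # three hypothetical studies: (g, within-study variance)
      gs, vs = [0.65, 0.80, 0.70], [0.010, 0.020, 0.015]
      pooled, ci = random_effects(gs, vs)
      print("single-study g example:", round(hedges_g(1.40, 0.95, 0.62, 120), 2))
      print(f"pooled g = {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")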

  13. Processing power limits social group size: computational evidence for the cognitive costs of sociality

    PubMed Central

    Dávid-Barrett, T.; Dunbar, R. I. M.

    2013-01-01

    Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses. PMID:23804623

  14. A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.

    PubMed

    Lee, I; Sikora, R; Shaw, M J

    1997-01-01

    Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation that GAs use to solve the problem. We present a novel approach for solving two related problems, lot sizing and sequencing, concurrently using GAs. The essence of our approach lies in the concept of using a unified representation for the information about both the lot sizes and the sequence and enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of applying the above approach to flexible flow line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow line scheduling.
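
    The following toy sketch illustrates the unified-representation idea in a hedged way (it is not the paper's exact encoding or GA): each gene is a (job, lot size) pair, so a single chromosome carries both decisions, and a mutation can either reorder genes (sequencing) or shift quantity between lots of the same job (lot sizing). A plain hill-climb over such mutations already drives the toy makespan down by merging and grouping lots.

      # Toy unified chromosome (not the paper's encoding): each gene is a
      # (job, lot_size) pair, so one individual carries both the lot-sizing and
      # the sequencing decision, and mutation can change either aspect.
      import random

      DEMAND = {"A": 60, "B": 40, "C": 50}   # units to produce per job (assumed)
      RATE = 2.0                             # units processed per time unit
      SETUP = 5.0                            # setup time when the job changes

      def makespan(chrom):
          t, prev = 0.0, None
          for job, qty in chrom:
              if qty == 0:
                  continue
              if job != prev:
                  t += SETUP
              t += qty / RATE
              prev = job
          return t

      def mutate(chrom):
          chrom = list(chrom)
          if random.random() < 0.5:                       # sequencing move: swap two genes
              i, j = random.sample(range(len(chrom)), 2)
              chrom[i], chrom[j] = chrom[j], chrom[i]
          else:                                           # lot-sizing move: shift quantity
              same = {}
              for idx, (job, _) in enumerate(chrom):
                  same.setdefault(job, []).append(idx)
              job = random.choice([j for j, idxs in same.items() if len(idxs) > 1])
              i, j = random.sample(same[job], 2)
              move = random.randint(0, chrom[i][1])
              chrom[i] = (job, chrom[i][1] - move)
              chrom[j] = (job, chrom[j][1] + move)
          return chrom

      # start with every job split into two equal lots, then hill-climb with mutations
      best = [(j, d // 2) for j, d in DEMAND.items()] + [(j, d - d // 2) for j, d in DEMAND.items()]
      random.seed(1)
      for _ in range(2000):
          cand = mutate(best)
          if makespan(cand) <= makespan(best):
              best = cand
      print("best makespan found:", makespan(best), "schedule:", [g for g in best if g[1] > 0])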

  15. A hybrid, auto-adaptive and rule-based multi-agent approach using evolutionary algorithms for improved searching

    NASA Astrophysics Data System (ADS)

    Izquierdo, Joaquín; Montalvo, Idel; Campbell, Enrique; Pérez-García, Rafael

    2016-08-01

    Selecting the most appropriate heuristic for solving a specific problem is not easy, for many reasons. This article focuses on one of these reasons: traditionally, the solution search process has operated in a given manner regardless of the specific problem being solved, and the process has been the same regardless of the size, complexity and domain of the problem. To cope with this situation, search processes should mould the search into areas of the search space that are meaningful for the problem. This article builds on previous work in the development of a multi-agent paradigm using techniques derived from knowledge discovery (data-mining techniques) on databases of so-far visited solutions. The aim is to improve the search mechanisms, increase computational efficiency and use rules to enrich the formulation of optimization problems, while reducing the search space and catering to realistic problems.

  16. Latino Immigrant Children and Inequality in Access to Early Schooling Programs

    ERIC Educational Resources Information Center

    Zambrana, Ruth Enid; Morant, Tamyka

    2009-01-01

    Latino children in immigrant families are less likely than their peers to participate in early schooling programs, which puts them at increased risk for learning problems and school failure. Factors such as family structure and size, parental education, and income are strongly associated with early learning experiences, participation in early…

  17. Quantitative Methods for Administrative Decision Making in Junior Colleges.

    ERIC Educational Resources Information Center

    Gold, Benjamin Knox

    With the rapid increase in number and size of junior colleges, administrators must take advantage of the decision-making tools already used in business and industry. This study investigated how these quantitative techniques could be applied to junior college problems. A survey of 195 California junior college administrators found that the problems…

  18. An Integer Programming-Based Generalized Vehicle Routing Approach for Printed Circuit Board Assembly Optimization

    ERIC Educational Resources Information Center

    Seth, Anupam

    2009-01-01

    Production planning and scheduling for printed circuit board assembly has so far defied standard operations research approaches due to the size and complexity of the underlying problems, resulting in unexploited automation flexibility. In this thesis, the increasingly popular collect-and-place machine configuration is studied and the assembly…

  19. On the Use of Software Metrics as a Predictor of Software Security Problems

    DTIC Science & Technology

    2013-01-01

    models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the...vulnerabilities reported by testing and those found in the field. Summary of Most Important Results: We evaluated our model on three commercial telecommunications…

  20. On a Heat Exchange Problem under Sharply Changing External Conditions

    NASA Astrophysics Data System (ADS)

    Khishchenko, K. V.; Charakhch'yan, A. A.; Shurshalov, L. V.

    2018-02-01

    The heat exchange problem between carbon particles and an external environment (water) is stated and investigated based on the equations of a heat-conducting compressible fluid. The environment parameters are assumed to undergo large and fast variations. Over a time of about 100 μs, the temperature of the environment first increases from the normal value to 2400 K, is held at this level for about 60 μs, and then decreases to 300 K during approximately 50 μs. Over the same period, the pressure of the external environment increases from the normal value to 67 GPa, is held at this level, and then decreases to zero. Under such external conditions, the heating of graphite particles of various sizes, their transition to the diamond phase, and the subsequent unloading and cooling almost to the initial pressure and temperature without the reverse transition from diamond to graphite are investigated. Conclusions are drawn about the maximum size of diamond particles that can be obtained in experiments on the shock compression of a mixture of graphite with water.

  1. Teaching Medium-Sized ERP Systems - A Problem-Based Learning Approach

    NASA Astrophysics Data System (ADS)

    Winkelmann, Axel; Matzner, Martin

    In order to increase the diversity in IS education, we discuss an approach for teaching medium-sized ERP systems in master courses. Many of today's IS curricula are biased toward large ERP packages. Nevertheless, these ERP systems are only a part of the ERP market. Hence, this chapter describes a course outline for a course on medium-sized ERP systems. Students had to study, analyze, and compare five different ERP systems during a semester. The chapter introduces a procedure model and scenario for setting up similar courses at other universities. Furthermore, it describes some of the students' outcomes and evaluates the contribution of the course with regard to a practical but also academic IS education.

  2. Multicast backup reprovisioning problem for Hamiltonian cycle-based protection on WDM networks

    NASA Astrophysics Data System (ADS)

    Din, Der-Rong; Huang, Jen-Shen

    2014-03-01

    As networks grow in size and complexity, the chance and the impact of failures increase dramatically. The pre-allocated backup resources cannot provide 100% protection guarantee when continuous failures occur in a network. In this paper, the multicast backup re-provisioning problem (MBRP) for Hamiltonian cycle (HC)-based protection on WDM networks for the link-failure case is studied. We focus on how to recover the protecting capabilities of Hamiltonian cycle against the subsequent link-failures on WDM networks for multicast transmissions, after recovering the multicast trees affected by the previous link-failure. Since this problem is a hard problem, an algorithm, which consists of several heuristics and a genetic algorithm (GA), is proposed to solve it. The simulation results of the proposed method are also given. Experimental results indicate that the proposed algorithm can solve this problem efficiently.

  3. A fast least-squares algorithm for population inference

    PubMed Central

    2013-01-01

    Background: Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual’s genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results: We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions: The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408

  4. A fast least-squares algorithm for population inference.

    PubMed

    Parry, R Mitchell; Wang, May D

    2013-01-23

    Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.
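
    A minimal sketch of the least-squares idea, assuming the genotype matrix G (individuals × SNPs, values 0/1/2) is modeled as G ≈ 2QP with Q rows on the probability simplex: alternate projected least-squares updates for P and Q. This is an illustration of the approach, not the authors' implementation.

      # Sketch of the least-squares idea (not the paper's exact algorithm): model
      # G (individuals x SNPs, values 0/1/2) as G ~ 2*Q*P, with Q rows summing to 1
      # and P in [0,1], and alternate projected least-squares updates for Q and P.
      import numpy as np

      rng = np.random.default_rng(0)
      n_ind, n_snp, k = 200, 500, 3

      # simulate admixed genotypes from known Q_true, P_true
      Q_true = rng.dirichlet(np.ones(k) * 0.5, size=n_ind)
      P_true = rng.uniform(0.05, 0.95, size=(k, n_snp))
      G = rng.binomial(2, Q_true @ P_true)

      def project_simplex(Q):                 # crude projection: clip, then renormalize rows
          Q = np.clip(Q, 1e-6, None)
          return Q / Q.sum(axis=1, keepdims=True)

      Q = project_simplex(rng.uniform(size=(n_ind, k)))
      P = rng.uniform(0.2, 0.8, size=(k, n_snp))
      for _ in range(50):
          # update P for fixed Q, then Q for fixed P, clipping back to valid ranges
          P = np.clip(np.linalg.lstsq(2 * Q, G, rcond=None)[0], 1e-6, 1 - 1e-6)
          Q = project_simplex(np.linalg.lstsq(2 * P.T, G.T, rcond=None)[0].T)

      rmse = np.sqrt(np.mean((Q_true @ P_true - Q @ P) ** 2))
      print(f"RMSE of fitted allele-dosage surface: {rmse:.4f}")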

  5. A survey of Australian dairy farmers to investigate animal welfare risks associated with increasing scale of production.

    PubMed

    Beggs, D S; Fisher, A D; Jongman, E C; Hemsworth, P E

    2015-08-01

    Although large herds (more than 500 cows) only represent 13% of Australian dairy farms, they represent more than 35% of the cows milked. A survey of Australian dairy farmers was conducted to assess relationships between herd size and known or proposed risk factors for adverse animal welfare outcomes in Australian dairy herds in relation to increasing scale of production. Responses from 863 Australian dairy farms (13% of Australian dairy farms) were received. Increasing herd size was associated with increases in stocking density, stock per labor unit, and grain fed per day-all of which could reasonably be hypothesized to increase the risk of adverse welfare outcomes unless carefully managed. However, increasing herd size was also associated with an increased likelihood of staff with formal and industry-based training qualifications. Herd size was not associated with reported increases in mastitis or lameness treatments. Some disease conditions, such as milk fever, gut problems, and down cows, were reported less in larger herds. Larger herds were more likely to have routine veterinary herd health visits, separate milking of the main herd and the sick herd, transition diets before calving, and written protocols for disease treatment. They were more likely to use monitoring systems such as electronic identification in the dairy, computerized records, daily milk yield or cell count monitoring, and pedometers or activity meters. Euthanasia methods were consistent between herds of varying sizes, and it was noted that less than 3% of farms make use of captive-bolt devices despite their effectiveness and ready availability. Increasing herd size was related to increased herd milking time, increased time away from the paddock, and increased distance walked. If the milking order of cows is consistent, this may result in reduced feed access for late-milking-order cows because of a difference in time away from the paddock. More than 95% of farmers believed that their cows were content most of the time, and cows were reported as well behaved on more than 90% of farms. Although the potential animal welfare issues appear to be different between herd sizes, no evidence existed for a relationship between herd size and adverse welfare outcomes in terms of reported disease or cow contentment levels. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Innovative Double Bypass Engine for Increased Performance

    NASA Astrophysics Data System (ADS)

    Manoharan, Sanjivan

    Engines continue to grow in size to meet the current thrust requirements of the civil aerospace industry. Large engines pose significant transportation problems and must be split in order to be shipped. Thus, large amounts of time have been spent researching methods to increase thrust capabilities while maintaining a reasonable engine size. Unfortunately, much of this research has focused on increasing the performance and efficiencies of individual components, while limited research has been done on innovative engine configurations. This thesis focuses on an innovative engine configuration, the High Double Bypass Engine, aimed at increasing fuel efficiency and thrust while maintaining a competitive fan diameter and engine length. The 1-D analysis was done in Excel, compared to results from the Numerical Propulsion Simulation System (NPSS) software, and found to agree within 4% error. Flow performance characteristics were also determined and validated against their criteria.

  7. Does group size have an impact on welfare indicators in fattening pigs?

    PubMed

    Meyer-Hamme, S E K; Lambertz, C; Gauly, M

    2016-01-01

    Production systems for fattening pigs have been characterized over the last 2 decades by rising farm sizes coupled with increasing group sizes. These developments resulted in a serious public discussion regarding animal welfare and health in these intensive production systems. Even though large farm and group sizes came under severe criticism, it is still unknown whether these factors indeed negatively affect animal welfare. Therefore, the aim of this study was to assess the effect of group size (30 pigs/pen) on various animal-based measures of the Welfare Quality® protocol for growing pigs under conventional fattening conditions. A total of 60 conventional pig fattening farms with different group sizes in Germany were included. Moderate bursitis (35%) was found as the most prevalent indicator of welfare-related problems, while its prevalence increased with age during the fattening period. However, differences between group sizes were not detected (P>0.05). The prevalence of moderately soiled bodies increased from 9.7% at the start to 14.2% at the end of the fattening period, whereas large pens showed a higher prevalence (15.8%) than small pens (10.4%; P<0.05). The incidence of moderate wounds was lower (P<0.05) in small- and medium-sized pens (8.5% and 11.3%, respectively) than in large-sized ones (16.3%). Contrary to bursitis and dirtiness, its prevalence decreased during the fattening period. Moderate manure was less often found in pigs fed by a dry feeder than in those fed by a liquid feeding system (P<0.05). The human-animal relationship was improved in large groups in comparison to small ones. On the contrary, negative social behaviour was found more often in large groups. Exploration of enrichment material decreased with increasing live weight. Given that all animals were tail-docked, tail biting was observed at a very low rate of 1.9%. In conclusion, the results indicate that body weight and feeding system are determining factors for the welfare status, while group size was not proved to affect the welfare level under the studied conditions of pig fattening.

  8. Fluid mechanics of additive manufacturing of metal objects by accretion of droplets - a survey

    NASA Astrophysics Data System (ADS)

    Tesař, Václav

    2016-03-01

    Paper presents a survey of principles of additive manufacturing of metal objects by accretion of molten metal droplets, focusing on fluid-mechanical problems that deserve being investigated. The main problem is slowness of manufacturing due to necessarily small size of added droplets. Increase of droplet repetition rate calls for basic research of the phenomena that take place inside and around the droplets: ballistics of their flight, internal flowfield with heat and mass transfer, oscillation of surfaces, and the ways to elimination of satellite droplets.

  9. Computerized adaptive testing: the capitalization on chance problem.

    PubMed

    Olea, Julio; Barrada, Juan Ramón; Abad, Francisco J; Ponsoda, Vicente; Cuevas, Lara

    2012-03-01

    This paper describes several simulation studies that examine the effects of capitalization on chance in the selection of items and the ability estimation in CAT, employing the 3-parameter logistic model. In order to generate different estimation errors for the item parameters, the calibration sample size was manipulated (N = 500, 1000 and 2000 subjects) as was the ratio of item bank size to test length (banks of 197 and 788 items, test lengths of 20 and 40 items), both in a CAT and in a random test. Results show that capitalization on chance is particularly serious in CAT, as revealed by the large positive bias found in the small sample calibration conditions. For broad ranges of theta, the overestimation of the precision (asymptotic Se) reaches levels of 40%, something that does not occur with the RMSE (theta). The problem is greater as the item bank size to test length ratio increases. Potential solutions were tested in a second study, where two exposure control methods were incorporated into the item selection algorithm. Some alternative solutions are discussed.
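
    The mechanism behind capitalization on chance can be sketched directly from the 3-parameter logistic model: items are selected by maximum Fisher information at the current ability estimate, so items whose discrimination parameter happened to be overestimated in a small calibration sample look spuriously informative and are chosen preferentially. The toy code below (illustrative parameters, not the study's item banks) shows the selection step.

      # Sketch of the 3PL model and maximum-information item selection used in CAT:
      # items whose discrimination a was overestimated during calibration look more
      # informative than they really are, so they are chosen preferentially
      # (the capitalization-on-chance effect). Parameters are illustrative.
      import math, random

      def p3pl(theta, a, b, c):
          return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

      def info(theta, a, b, c):
          p = p3pl(theta, a, b, c)
          return a ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

      random.seed(3)
      # "true" bank and a noisy calibration of it (small-sample estimation error)
      bank_true = [(random.uniform(0.8, 2.0), random.uniform(-2, 2), 0.2) for _ in range(200)]
      bank_est = [(a * math.exp(random.gauss(0, 0.25)), b, c) for a, b, c in bank_true]

      theta = 0.5
      chosen = max(range(len(bank_est)), key=lambda i: info(theta, *bank_est[i]))
      print("estimated a of selected item:", round(bank_est[chosen][0], 2),
            "| its true a:", round(bank_true[chosen][0], 2))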

  10. Characterization of skin reactions and pain reported by patients receiving radiation therapy for cancer at different sites.

    PubMed

    Gewandter, Jennifer S; Walker, Joanna; Heckler, Charles E; Morrow, Gary R; Ryan, Julie L

    2013-12-01

    Skin reactions and pain are commonly reported side effects of radiation therapy (RT). The aim was to characterize RT-induced symptoms according to treatment site subgroups and identify skin symptoms that correlate with pain. A self-report survey, adapted from the MD Anderson Symptom Inventory and the McGill Pain Questionnaire, assessed RT-induced skin problems, pain, and specific skin symptoms. Wilcoxon signed-rank tests compared the mean severity of pre- and post-RT pain and skin problems within each RT-site subgroup. Multiple linear regression (MLR) investigated associations between skin symptoms and pain. Survey respondents (N = 106) were 58% female and on average 64 years old. RT sites included lung, breast, lower abdomen, head/neck/brain, and upper abdomen. Only patients receiving breast RT reported significant increases in treatment site pain and skin problems (P ≤ .007). Patients receiving head/neck/brain RT reported increased skin problems (P < .0009). MLR showed that post-RT skin tenderness and tightness were most strongly associated with post-RT pain (P = .066 and P = .122, respectively). Limitations include the small sample size, exploratory analyses, and a nonvalidated measure. Only patients receiving breast RT reported significant increases in pain and skin problems at the RT site, while patients receiving head/neck/brain RT had increased skin problems but not pain. These findings suggest that the severity of skin problems is not the only factor that contributes to pain and that interventions should be tailored to specifically target pain at the RT site, possibly by targeting tenderness and tightness. These findings should be confirmed in a larger sample of RT patients.

  11. GPU-accelerated computation of electron transfer.

    PubMed

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
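
    As a minimal sketch of the pattern described (assumes a CUDA-capable GPU and the CuPy library; this is not the authors' electron-transfer code), a dense linear-algebra bottleneck identified by profiling can be offloaded to GPU BLAS like this:

        import numpy as np
        import cupy as cp

        n = 4096
        h_cpu = np.random.rand(n, n)       # stand-in for a dense coupling matrix
        h_gpu = cp.asarray(h_cpu)          # one host-to-device transfer
        prod = h_gpu @ h_gpu               # executed by the GPU BLAS (cuBLAS)
        cp.cuda.Stream.null.synchronize()  # wait for the kernel to finish
        result = cp.asnumpy(prod)          # copy back only when the result is needed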

  12. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    PubMed

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
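
    A toy version of the optimization loop (synthetic leak-sensitivity data and a plain GA, not the authors' formulation or their networks) that minimizes the number of non-isolable leak pairs could be sketched as:

        import itertools
        import random
        import numpy as np

        # Hypothetical binarized sensitivity matrix: S[i, j] = 1 if leak i is "seen"
        # by a sensor at candidate node j (a real study derives this from hydraulic
        # simulation of the network).
        rng = np.random.default_rng(1)
        n_leaks, n_nodes, n_sensors = 40, 25, 5
        S = rng.integers(0, 2, size=(n_leaks, n_nodes))

        def non_isolable_pairs(sensors):
            """Leak pairs whose signatures on the chosen sensors are identical."""
            sig = S[:, sensors]
            return sum(np.array_equal(sig[i], sig[j])
                       for i, j in itertools.combinations(range(n_leaks), 2))

        def crossover(a, b):
            return sorted(random.sample(list(set(a) | set(b)), n_sensors))

        def mutate(ind):
            ind = list(ind)
            ind[random.randrange(n_sensors)] = random.choice(
                [n for n in range(n_nodes) if n not in ind])
            return sorted(ind)

        random.seed(1)
        pop = [sorted(random.sample(range(n_nodes), n_sensors)) for _ in range(30)]
        for _ in range(50):
            pop.sort(key=non_isolable_pairs)
            parents = pop[:10]
            pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                             for _ in range(20)]
        best = min(pop, key=non_isolable_pairs)
        print("sensor nodes:", best, "non-isolable pairs:", non_isolable_pairs(best))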

  13. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  14. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.
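
    A deliberately simplified sketch of the sampling idea (a two-endmember linear mixture and a plain Metropolis sampler; the paper uses the nonlinear Hapke model with real optical constants and grain sizes, and every number below is hypothetical) shows how a posterior over abundances, rather than one best fit, is obtained:

        import numpy as np

        rng = np.random.default_rng(4)
        wav = np.linspace(1.0, 2.5, 60)                  # wavelength grid (microns)
        end_a = 0.6 + 0.1 * np.sin(3 * wav)              # made-up endmember spectra
        end_b = 0.4 + 0.2 * np.cos(2 * wav)
        true_f = 0.7
        data = true_f * end_a + (1 - true_f) * end_b + rng.normal(0, 0.01, wav.size)

        def log_like(f):
            if not 0.0 <= f <= 1.0:
                return -np.inf
            resid = data - (f * end_a + (1 - f) * end_b)
            return -0.5 * np.sum(resid ** 2) / 0.01 ** 2

        f, ll, samples = 0.5, log_like(0.5), []
        for _ in range(20000):
            prop = f + rng.normal(0, 0.05)
            ll_prop = log_like(prop)
            if np.log(rng.random()) < ll_prop - ll:      # Metropolis accept/reject
                f, ll = prop, ll_prop
            samples.append(f)
        post = np.array(samples[5000:])
        print("abundance of endmember A: %.3f +/- %.3f" % (post.mean(), post.std()))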

  15. Reducing child conduct problems and promoting social skills in a middle-income country: cluster randomised controlled trial.

    PubMed

    Baker-Henningham, Helen; Scott, Stephen; Jones, Kelvyn; Walker, Susan

    2012-08-01

    There is an urgent need for effective, affordable interventions to prevent child mental health problems in low- and middle-income countries. To determine the effects of a universal pre-school-based intervention on child conduct problems and social skills at school and at home. In a cluster randomised design, 24 community pre-schools in inner-city areas of Kingston, Jamaica, were randomly assigned to receive the Incredible Years Teacher Training intervention (n = 12) or to a control group (n = 12). Three children from each class with the highest levels of teacher-reported conduct problems were selected for evaluation, giving 225 children aged 3-6 years. The primary outcome was observed child behaviour at school. Secondary outcomes were child behaviour by parent and teacher report, child attendance and parents' attitude to school. The study is registered as ISRCTN35476268. Children in intervention schools showed significantly reduced conduct problems (effect size (ES) = 0.42) and increased friendship skills (ES = 0.74) through observation, significant reductions to teacher-reported (ES = 0.47) and parent-reported (ES = 0.22) behaviour difficulties and increases in teacher-reported social skills (ES = 0.59) and child attendance (ES = 0.30). Benefits to parents' attitude to school were not significant. A low-cost, school-based intervention in a middle-income country substantially reduces child conduct problems and increases child social skills at home and at school.

  16. A fast time-difference inverse solver for 3D EIT with application to lung imaging.

    PubMed

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2016-08-01

    A class of sparse optimization techniques that require solely matrix-vector products, rather than an explicit access to the forward matrix and its transpose, has been paid much attention in the recent decade for dealing with large-scale inverse problems. This study tailors application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of the GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
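
    A closely related iteration that also needs only matrix-vector products (ISTA with soft thresholding, shown here in place of the authors' GPSR implementation; all sizes are toy values) recovers a sparse conductivity change from underdetermined data:

        import numpy as np

        rng = np.random.default_rng(2)
        m, n = 200, 1000                        # far fewer measurements than voxels
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the EIT Jacobian
        x_true = np.zeros(n)
        x_true[rng.choice(n, 15, replace=False)] = rng.standard_normal(15)
        y = A @ x_true                          # noise-free time-difference data (toy)

        lam = 0.05
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
        x = np.zeros(n)
        for _ in range(300):
            grad = A.T @ (A @ x - y)            # only matrix-vector products needed
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        print("nonzeros:", np.count_nonzero(x),
              "relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))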

  17. Bio-inspired group modeling and analysis for intruder detection in mobile sensor/robotic networks.

    PubMed

    Fu, Bo; Xiao, Yang; Liang, Xiannuan; Philip Chen, C L

    2015-01-01

    Although previous bio-inspired models have concentrated on invertebrates (such as ants), mammals such as primates with higher cognitive function are valuable for modeling the increasingly complex problems in engineering. Understanding primates' social and communication systems, and applying what is learned from them to engineering domains is likely to inspire solutions to a number of problems. This paper presents a novel bio-inspired approach to determine group size by researching and simulating primate society. Group size does matter for both primate society and digital entities. It is difficult to determine how to group mobile sensors/robots that patrol in a large area when many factors are considered such as patrol efficiency, wireless interference, coverage, inter/intragroup communications, etc. This paper presents a simulation-based theoretical study on patrolling strategies for robot groups with the comparison of large and small groups through simulations and theoretical results.

  18. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  19. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  20. A Coalitional Game for Distributed Inference in Sensor Networks With Dependent Observations

    NASA Astrophysics Data System (ADS)

    He, Hao; Varshney, Pramod K.

    2016-04-01

    We consider the problem of collaborative inference in a sensor network with heterogeneous and statistically dependent sensor observations. Each sensor aims to maximize its inference performance by forming a coalition with other sensors and sharing information within the coalition. It is proved that the inference performance is a nondecreasing function of the coalition size. However, in an energy constrained network, the energy consumption of inter-sensor communication also increases with increasing coalition size, which discourages the formation of the grand coalition (the set of all sensors). In this paper, the formation of non-overlapping coalitions with statistically dependent sensors is investigated under a specific communication constraint. We apply a game theoretical approach to fully explore and utilize the information contained in the spatial dependence among sensors to maximize individual sensor performance. Before formulating the distributed inference problem as a coalition formation game, we first quantify the gain and loss in forming a coalition by introducing the concepts of diversity gain and redundancy loss for both estimation and detection problems. These definitions, enabled by the statistical theory of copulas, allow us to characterize the influence of statistical dependence among sensor observations on inference performance. An iterative algorithm based on merge-and-split operations is proposed for the solution and the stability of the proposed algorithm is analyzed. Numerical results are provided to demonstrate the superiority of our proposed game theoretical approach.

  1. Effect of display size on utilization of traffic situation display for self-spacing task. [transport aircraft

    NASA Technical Reports Server (NTRS)

    Abbott, T. S.; Moen, G. C.

    1981-01-01

    The weather radar cathode ray tube (CRT) is the prime candidate for presenting cockpit display of traffic information (CDTI) in current, conventionally equipped transport aircraft. Problems may result from this, since the CRT size is not optimized for CDTI applications and the CRT is not in the pilot's primary visual scan area. The impact of display size on the ability of pilots to utilize the traffic information to maintain a specified spacing interval behind a lead aircraft during an approach task was studied. The five display sizes considered are representative of the display hardware configurations of airborne weather radar systems. From a pilot's subjective workload viewpoint, even the smallest display size was usable for performing the self-spacing task. From a performance viewpoint, the mean spacing values, which are indicative of how well the pilots were able to perform the task, exhibit the same trends, irrespective of display size; however, the standard deviation of the spacing intervals decreased (performance improves) as the display size increased. Display size, therefore, does have a significant effect on pilot performance.

  2. Scalability problems of simple genetic algorithms.

    PubMed

    Thierens, D

    1999-01-01

    Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight in the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.

  3. Stepped Care: A Promising Treatment Strategy for Mandated Students

    ERIC Educational Resources Information Center

    Borsari, Brian; Tevyaw, Tracy O'Leary

    2005-01-01

    Over the past decade, there has been a steady increase in the number of mandated students who have been referred to campus alcohol programs for violating campus alcohol policies. However, the severity of alcohol use and problems varies widely in mandated students, indicating that a "one size fits all" delivery of treatment may be inappropriate.…

  4. Addressing the STEM Workforce Challenge: Missouri. BHEF Research Brief

    ERIC Educational Resources Information Center

    Business-Higher Education Forum (NJ1), 2012

    2012-01-01

    While states and the federal government have put efforts in place to increase the size of the workforce trained in science, technology, engineering, and math (STEM) to meet innovation demands, there continues to be a nationwide shortage of students who are interested in and prepared for such careers. Missouri is no exception to this problem, one…

  5. The Problem of Scale in the Interpretation of Pictorial Representations of Cell Structure

    ERIC Educational Resources Information Center

    Vlaardingerbroek, Barend; Taylor, Neil; Bale, Colin

    2014-01-01

    Diagrams feature prominently in science education, and there has been an increase in research focusing on students' use of them in knowledge construction. This paper reports on an investigation into first year university students' perceptions of scale and size at the cellular level. It was found that many students appeared to tacitly assume that…

  6. Technical Vocational Education and Training for Micro-Enterprise Development in Ethiopia: A Solution or Part of the Problem?

    ERIC Educational Resources Information Center

    Gondo, Tendayi; Dafuleya, Gift

    2010-01-01

    Technical vocational education and training (TVET) programmes have recently received increased attention as an area of priority for stimulating growth in developed and developing countries. This paper considers the situation in Ethiopia where the promotion of micro and small-sized enterprises (MSEs) has been central to the development and…

  7. Research on Interventions for Adolescents with Learning Disabilities: A Meta-Analysis of Outcomes Related to Higher-Order Processing.

    ERIC Educational Resources Information Center

    Swanson, H. Lee

    2001-01-01

    Details meta-analysis of 58 intervention studies related to higher-order processing (i.e., problem solving) for adolescents with learning disabilities. Discusses factors that increased effect sizes: (1) measures of metacognition and text understanding; (2) instruction including advanced organizers, new skills, and extended practice; and (3)…

  8. Looking at Training in a Business Context. The Role of Organizational Performance Assessments. Business Assistance Note #5.

    ERIC Educational Resources Information Center

    Snyder, Phyllis; Bergman, Terri

    Organizations that provide training to small- and mid-sized companies must take a broad look at companies' performance needs and offer a package of services that will address their performance problems. Providers must also help the company to see the connection between investments in human capital and increased productivity. Organizational…

  9. Strategies for Sustaining Quality in PBL Facilitation for Large Student Cohorts

    ERIC Educational Resources Information Center

    Young, Louise; Papinczak, Tracey

    2013-01-01

    Problem-based learning (PBL) has been used to scaffold and support student learning in many Australian medical programs, with the role of the facilitator in the process considered crucial to the overall educational experience of students. With the increasing size of student cohorts and in an environment of financial constraint, it is important to…

  10. Three-dimensional Finite Element Formulation and Scalable Domain Decomposition for High Fidelity Rotor Dynamic Analysis

    NASA Technical Reports Server (NTRS)

    Datta, Anubhav; Johnson, Wayne R.

    2009-01-01

    This paper has two objectives. The first objective is to formulate a 3-dimensional Finite Element Model for the dynamic analysis of helicopter rotor blades. The second objective is to implement and analyze a dual-primal iterative substructuring based Krylov solver, that is parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.

  11. New mathematical modeling for a location-routing-inventory problem in a multi-period closed-loop supply chain in a car industry

    NASA Astrophysics Data System (ADS)

    Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.

    2017-11-01

    This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery, and recycling centers. In this supply chain, the centers are arranged in multiple levels; a price increase factor is considered for operational costs at the centers; inventory and shortage (including lost sales and backlog) are allowed at production centers; and the arrival times of each plant's vehicles at its dedicated distribution centers, as well as their departure times, are considered, such that the sum of system costs and the sum of the maximum times at each level are minimized. The aforementioned problem is formulated in the form of a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely, non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large sizes. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely, the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.

  12. Social interaction as a heuristic for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Fontanari, José F.

    2010-11-01

    We investigate the performance of a variant of Axelrod’s model for dissemination of culture—the Adaptive Culture Heuristic (ACH)—on solving an NP-Complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean Binary Perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents’ strings (or cultures) become more similar to the low-cost strings of their neighbors resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N ∝ F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
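
    Plugging in the reported scaling laws directly (the constant x_star below is a hypothetical fixed value of the reduced variable F/N^(1/4), not a number from the paper) makes the cost growth explicit:

        def agents_needed(F, x_star=1.0):
            """Agents needed to hold F / N**0.25 fixed, i.e. N proportional to F**4."""
            return (F / x_star) ** 4

        for F in (16, 32, 64, 128):
            # relaxation time (overall computational cost) is reported to scale as F**6
            print(F, int(agents_needed(F)), F ** 6)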

  13. Transformations of inorganic coal constituents in combustion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helble, J.J.; Srinivasachar, S.; Wilemski, G.

    1992-11-01

    The inorganic constituents or ash contained in pulverized coal significantly increase the environmental and economic costs of coal utilization. For example, ash particles produced during combustion may deposit on heat transfer surfaces, decreasing heat transfer rates and increasing maintenance costs. The minimization of particulate emissions often requires the installation of cleanup devices such as electrostatic precipitators, also adding to the expense of coal utilization. Despite these costly problems, a comprehensive assessment of ash formation had never been attempted. At the start of this program, it was hypothesized that ash deposition and ash particle emissions both depended upon the size and chemical composition of individual ash particles. Questions remained to be answered: What determines the size of individual ash particles? What determines their composition? Whether or not particles deposit? How do combustion conditions, including reactor size, affect these processes? In this 6-year multidisciplinary study, these issues were addressed in detail. The ambitious overall goal was the development of a comprehensive model to predict the size and chemical composition distributions of ash produced during pulverized coal combustion. Results are described.

  14. Two phase genetic algorithm for vehicle routing and scheduling problem with cross-docking and time windows considering customer satisfaction

    NASA Astrophysics Data System (ADS)

    Baniamerian, Ali; Bashiri, Mahdi; Zabihi, Fahime

    2018-03-01

    Cross-docking is a relatively new warehousing policy in logistics that is widely used all over the world and has attracted much research attention in the last decade. The literature has often focused on economic aspects, whereas one of the most significant factors for success in the competitive global market is improving the quality of customer service and focusing on customer satisfaction. In this paper, we introduce a vehicle routing and scheduling problem with cross-docking and time windows in a three-echelon supply chain that considers customer satisfaction. A set of homogeneous vehicles collect products from suppliers and, after a consolidation process in the cross-dock, immediately deliver them to customers. A mixed integer linear programming model is presented for this problem to minimize transportation cost and early/tardy deliveries, with scheduling of inbound and outbound vehicles to increase customer satisfaction. A two-phase genetic algorithm (GA) is developed for the problem. To investigate the performance of the algorithm, it was compared with exact solutions in small-size instances and with lower bounds in large-size instances. Results show that the proposed method achieves at least 86.6% customer satisfaction, whereas customer satisfaction in the classical model is at most 33.3%. Numerical results show that the proposed two-phase algorithm achieves optimal solutions in small-size instances. In large-size instances, it also achieves better solutions, with a smaller gap from the lower bound and less computational time, than the classic GA.

  15. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot-sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels are considered together with capacity restrictions, the lot-sizing problem becomes NP-hard. Many heuristics developed in the past have failed because of problem size, computational complexity, and time. The authors have developed a PSO-based technique, namely an iterative-improvement binary particle swarm optimization, to address the very large capacitated multi-item multi-level lot-sizing (CMIMLLS) problem. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time, and then an iterative-improvement local search mechanism is employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time, and the IIBPSO method thus shows excellent results.

  16. Parameterized Complexity of k-Anonymity: Hardness and Tractability

    NASA Astrophysics Data System (ADS)

    Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri

    The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has been recently proposed is the k-anonymity, where the rows of a table are partitioned in clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
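
    A naive illustration of the suppression notion (a tiny hand-made table and a greedy grouping chosen only for readability; the paper is concerned with the hardness of minimizing the suppressed entries, not with this heuristic) is:

        # Group rows into clusters of size >= k and suppress ('*') every column on
        # which a cluster disagrees, so all rows in a cluster become the same tuple.
        rows = [
            ("red", "small", "A"), ("red", "small", "B"), ("red", "large", "A"),
            ("blue", "large", "A"), ("blue", "large", "B"), ("blue", "small", "B"),
        ]
        k = 2
        clusters = [sorted(rows)[i:i + k] for i in range(0, len(rows), k)]

        suppressed, table = 0, []
        for cluster in clusters:
            merged = []
            for col in range(len(cluster[0])):
                values = {r[col] for r in cluster}
                merged.append(cluster[0][col] if len(values) == 1 else "*")
                suppressed += 0 if len(values) == 1 else len(cluster)
            table.extend([tuple(merged)] * len(cluster))
        print(table)
        print("suppressed entries:", suppressed)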

  17. Solving a supply chain scheduling problem with non-identical job sizes and release times by applying a novel effective heuristic algorithm

    NASA Astrophysics Data System (ADS)

    Pei, Jun; Liu, Xinbao; Pardalos, Panos M.; Fan, Wenjuan; Wang, Ling; Yang, Shanlin

    2016-03-01

    Motivated by applications in manufacturing industry, we consider a supply chain scheduling problem, where each job is characterised by non-identical sizes, different release times and unequal processing times. The objective is to minimise the makespan by making batching and sequencing decisions. The problem is formalised as a mixed integer programming model and proved to be strongly NP-hard. Some structural properties are presented for both the general case and a special case. Based on these properties, a lower bound is derived, and a novel two-phase heuristic (TP-H) is developed to solve the problem, which guarantees to obtain a worst case performance ratio of ?. Computational experiments with a set of different sizes of random instances are conducted to evaluate the proposed approach TP-H, which is superior to another two heuristics proposed in the literature. Furthermore, the experimental results indicate that TP-H can effectively and efficiently solve large-size problems in a reasonable time.

  18. The evolution of complex life cycles when parasite mortality is size- or time-dependent.

    PubMed

    Ball, M A; Parker, G A; Chubb, J C

    2008-07-07

    In complex cycles, helminth larvae in their intermediate hosts typically grow to a fixed size. We define this cessation of growth before transmission to the next host as growth arrest at larval maturity (GALM). Where the larval parasite controls its own growth in the intermediate host, in order that growth eventually arrests, some form of size- or time-dependent increase in its death rate must apply. In contrast, the switch from growth to sexual reproduction in the definitive host can be regulated by constant (time-independent) mortality as in standard life history theory. We here develop a step-wise model for the evolution of complex helminth life cycles through trophic transmission, based on the approach of Parker et al. [2003a. Evolution of complex life cycles in helminth parasites. Nature London 425, 480-484], but which includes size- or time-dependent increase in mortality rate. We assume that the growing larval parasite has two components to its death rate: (i) a constant, size- or time-independent component, and (ii) a component that increases with size or time in the intermediate host. When growth stops at larval maturity, there is a discontinuous change in mortality to a constant (time-independent) rate. This model generates the same optimal size for the parasite larva at GALM in the intermediate host whether the evolutionary approach to the complex life cycle is by adding a new host above the original definitive host (upward incorporation), or below the original definitive host (downward incorporation). We discuss some unexplored problems for cases where complex life cycles evolve through trophic transmission.

  19. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Recently, much research has been done on choosing an appropriate step size in order to reduce the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method with respect to its step size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure as a C++ program. We applied it to an unconstrained optimization test problem with two variables and then compared the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each problem case.
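
    A compact analogue of such an experiment (written in Python rather than the paper's C++, with a common two-variable test function and Armijo backtracking as the step size procedure; parameter values are illustrative, not the authors') is:

        import numpy as np

        def f(x):        # Rosenbrock function, a standard two-variable test problem
            return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

        def grad_f(x):
            return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                             200.0 * (x[1] - x[0] ** 2)])

        def armijo_step(x, g, alpha=1.0, beta=0.5, c=1e-4):
            """Backtracking (Armijo) step size rule."""
            while f(x - alpha * g) > f(x) - c * alpha * g.dot(g):
                alpha *= beta
            return alpha

        x = np.array([-1.2, 1.0])
        for it in range(5000):
            g = grad_f(x)
            if np.linalg.norm(g) < 1e-6:       # stop when the gradient is small
                break
            x = x - armijo_step(x, g) * g
        print(it, x, f(x))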

  20. Nature's crucible: Manufacturing optical nonlinearities for high resolution, high sensitivity encoding in the compound eye of the fly, Musca domestica

    NASA Technical Reports Server (NTRS)

    Wilcox, Mike

    1993-01-01

    The number of pixels per unit area sampling an image determines Nyquist resolution. Therefore, the highest pixel density is the goal. Unfortunately, as reduction in pixel size approaches the wavelength of light, sensitivity is lost and noise increases. Animals face the same problems and have achieved novel solutions. Emulating these solutions offers potentially unlimited sensitivity with detector size approaching the diffraction limit. Once an image is 'captured', cellular preprocessing of information allows extraction of high resolution information from the scene. Computer simulation of this system promises hyperacuity for machine vision.

  1. The Cost of Class Size Reduction: Advice for Policymakers. RAND Graduate School Dissertation.

    ERIC Educational Resources Information Center

    Reichardt, Robert E.

    This dissertation provides information to state-level policymakers that will help them avoid two implementation problems seen in the past in California's class-size-reduction (CSR) reform. The first problem was that flat, per student reimbursement did not adequately cover costs in districts with larger pre-CSR class-sizes or smaller schools. The…

  2. Rear-End Crashes: Problem Size Assessment And Statistical Description

    DOT National Transportation Integrated Search

    1993-05-01

    KEYWORDS : RESEARCH AND DEVELOPMENT OR R&D, ADVANCED VEHICLE CONTROL & SAFETY SYSTEMS OR AVCSS, INTELLIGENT VEHICLE INITIATIVE OR IVI : THIS DOCUMENT PRESENTS PROBLEM SIZE ASSESSMENTS AND STATISTICAL CRASH DESCRIPTION FOR REAR-END CRASHES, INC...

  3. Reciprocal Relations Between Student-Teacher Relationship and Children's Behavioral Problems: Moderation by Child-Care Group Size.

    PubMed

    Skalická, Věra; Belsky, Jay; Stenseng, Frode; Wichstrøm, Lars

    2015-01-01

    In this Norwegian study, bidirectional relations between children's behavior problems and child-teacher conflict and closeness were examined, and the possibility of moderation of these associations by child-care group size was tested. Eight hundred and nineteen 4-year-old children were followed up in first grade. Results revealed reciprocal effects linking child-teacher conflict and behavior problems. Effects of child-teacher closeness on later behavior problems were moderated by group size: For children in small groups only (i.e., ≤ 15 children), greater closeness predicted reduced behavior problems in first grade. In consequence, stability of behavior problems was greater in larger than in smaller groups. Results are discussed in light of regulatory mechanisms and social learning theory, with possible implications for organization of child care. © 2015 The Authors. Child Development © 2015 Society for Research in Child Development, Inc.

  4. Characterization of skin reactions and pain reported by patients receiving radiation therapy for cancer at different sites

    PubMed Central

    Gewandter, Jennifer S.; Walker, Joanna; Heckler, Charles E.; Morrow, Gary R.; Ryan, Julie L.

    2015-01-01

    Background Skin reactions and pain are commonly reported side effects of radiation therapy (RT). Objective To characterize RT-induced symptoms according to treatment site subgroups and identify skin symptoms that correlate with pain. Methods A self-report survey, adapted from the MD Anderson Symptom Inventory and the McGill Pain Questionnaire, assessed RT-induced skin problems, pain, and specific skin symptoms. Wilcoxon signed-rank tests compared mean severity of pre- and post-RT pain and skin problems within each RT-site subgroup. Multiple linear regression (MLR) investigated associations between skin symptoms and pain. Results Survey respondents (n=106) were 58% female and on average 64 years old. RT sites included lung, breast, lower abdomen, head/neck/brain, and upper abdomen. Only patients receiving breast RT reported significant increases in treatment site pain and skin problems (p≤0.007). Patients receiving head/neck/brain RT reported increased skin problems (p<0.0009). MLR showed that post-RT skin tenderness and tightness were most strongly associated with post-RT pain (p=0.066 and p=0.122, respectively). Limitations Small sample size, exploratory analyses, and non-validated measure. Conclusions Only patients receiving breast RT reported significant increases in pain and skin problems at the RT site, while patients receiving head/neck/brain RT had increased skin problems, but not pain. These findings suggest that the severity of skin problems is not the only factor that contributes to pain, and interventions should be tailored to specifically target pain at the RT site, possibly by targeting tenderness and tightness. These findings should be confirmed in a larger sampling of RT patients. PMID:24645338

  5. A traveling-salesman-based approach to aircraft scheduling in the terminal area

    NASA Technical Reports Server (NTRS)

    Luenberger, Robert A.

    1988-01-01

    An efficient algorithm is presented, based on the well-known algorithm for the traveling salesman problem, for scheduling aircraft arrivals into major terminal areas. The algorithm permits, but strictly limits, reassigning an aircraft from its initial position in the landing order. This limitation is needed so that no aircraft or aircraft category is unduly penalized. Results indicate, for the mix of arrivals investigated, a potential increase in capacity in the 3 to 5 percent range. Furthermore, it is shown that the computation time for the algorithm grows only linearly with problem size.
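
    A brute-force toy version of the idea (hypothetical wake-separation times and a six-aircraft arrival stream; the paper's algorithm is far more efficient and handles realistic traffic) makes the limited-reassignment constraint concrete:

        from itertools import permutations

        # Separation (seconds) required behind a Heavy (H) or Small (S) leader.
        sep = {("H", "H"): 96, ("H", "S"): 120, ("S", "H"): 72, ("S", "S"): 82}
        fcfs = ["H", "S", "H", "S", "S", "H"]   # first-come-first-served order
        max_shift = 2                           # each aircraft moves at most 2 slots

        def total_separation(order):
            classes = [fcfs[i] for i in order]
            return sum(sep[(classes[i], classes[i + 1])]
                       for i in range(len(classes) - 1))

        feasible = (p for p in permutations(range(len(fcfs)))
                    if all(abs(slot - i) <= max_shift for slot, i in enumerate(p)))
        best = min(feasible, key=total_separation)
        print("landing order:", best, "total separation:", total_separation(best), "s")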

  6. Estimating Premium Sensitivity for Children's Public Health Insurance Coverage: Selection but No Death Spiral

    PubMed Central

    Marton, James; Ketsche, Patricia G; Snyder, Angela; Adams, E Kathleen; Zhou, Mei

    2015-01-01

    Objective To estimate the effect of premium increases on the probability that near-poor and moderate-income children disenroll from public coverage. Data Sources Enrollment, eligibility, and claims data for Georgia's PeachCare for Kids™ (CHIP) program for multiple years. Study Design We exploited policy-induced variation in premiums generated by cross-sectional differences and changes over time in enrollee age, family size, and income to estimate the duration of enrollment as a function of the effective (per child) premium. We classify children as being of low, medium, or high illness severity. Principal Findings A dollar increase in the per-child premium is associated with a slight increase in a typical child's monthly probability of exiting coverage from 7.70 to 7.83 percent. Children with low illness severity have a significantly higher monthly baseline probability of exiting than children with medium or high illness severity, but the enrollment response to premium increases is similar across all three groups. Conclusions Success in achieving coverage gains through public programs is tempered by persistent problems in maintaining enrollment, which is modestly affected by premium increases. Retention is subject to adverse selection problems, but premium increases do not appear to significantly magnify the selection problem in this case. PMID:25130764

  7. Unresolved Problems by Shock Capturing: Taming the Overheating Problem

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    2012-01-01

    The overheating problem, first observed by von Neumann [1] and later studied extensively by Noh [2] using both Eulerian and Lagrangian formulations, remains one of the unsolved problems in shock capturing. It is historically well known to occur when a flow is under compression, such as when a shock wave hits and reflects from a wall or when two streams collide with each other. The overheating phenomenon is also found numerically in a smooth flow undergoing rarefaction created by two streams receding from each other. This is contrary to one's intuition, which expects a decrease in internal energy. The excessive temperature increase is not reduced by refining the mesh size or increasing the order of accuracy. This study finds that the overheating in the receding flow correlates with the entropy generation. By requiring entropy preservation, the overheating is eliminated and the solution is grid convergent. The shock-capturing scheme, as practiced today, gives rise to the entropy generation, which in turn causes the overheating. This assertion stands up to the convergence test.

  8. Task Oriented Evaluation of Module Extraction Techniques

    NASA Astrophysics Data System (ADS)

    Palmisano, Ignazio; Tamma, Valentina; Payne, Terry; Doran, Paul

    Ontology Modularization techniques identify coherent and often reusable regions within an ontology. The ability to identify such modules, thus potentially reducing the size or complexity of an ontology for a given task or set of concepts is increasingly important in the Semantic Web as domain ontologies increase in terms of size, complexity and expressivity. To date, many techniques have been developed, but evaluation of the results of these techniques is sketchy and somewhat ad hoc. Theoretical properties of modularization algorithms have only been studied in a small number of cases. This paper presents an empirical analysis of a number of modularization techniques, and the modules they identify over a number of diverse ontologies, by utilizing objective, task-oriented measures to evaluate the fitness of the modules for a number of statistical classification problems.

  9. Physical therapy workforce shortage for aging and aged societies in Thailand.

    PubMed

    Kraiwong, Ratchanok; Vongsirinavarat, Mantana; Soonthorndhada, Kusol

    2014-07-01

    According to demographic changes, the size of the aging population has rapidly increased. Thailand has been facing an "aging society" since 2005, and an "aged society" has been projected to appear by the year 2025. Increased life expectancy is associated with health problems and risks, specifically chronic diseases and disability. Aging and aged societies, and related specific conditions such as stroke, require the provision of services from health professionals. A shortage of the physical therapy workforce in Thailand has been reported. This study investigated the size of the physical therapy workforce required for the approaching aging society of Thailand and estimated the number of physical therapists needed, specifically regarding stroke. Evidently, the issue of the physical therapy workforce to serve aging and aged societies in Thailand requires advocacy and careful planning.

  10. Trends in increasing gas-turbine units efficiency

    NASA Astrophysics Data System (ADS)

    Lebedev, A. S.; Kostennikov, S. V.

    2008-06-01

    A review of the latest models of gas-turbine units (GTUs) manufactured by leading firms of the world is given. Using the example of units made by General Electric, Siemens, and Alstom, modern approaches to the problem of increasing the efficiency of gas-turbine units are discussed. Basic principles of the design of moderate-capacity gas-turbine units are discussed, and a comparison is made between the characteristics of foreign-made GTUs of this class and the advanced domestic GTE-65 unit.

  11. Routine human-competitive machine intelligence by means of genetic programming

    NASA Astrophysics Data System (ADS)

    Koza, John R.; Streeter, Matthew J.; Keane, Martin

    2004-01-01

    Genetic programming is a systematic method for getting computers to automatically solve a problem. Genetic programming starts from a high-level statement of what needs to be done and automatically creates a computer program to solve the problem. The paper demonstrates that genetic programming (1) now routinely delivers high-return human-competitive machine intelligence; (2) is an automated invention machine; (3) can automatically create a general solution to a problem in the form of a parameterized topology; and (4) has delivered a progression of qualitatively more substantial results in synchrony with five approximately order-of-magnitude increases in the expenditure of computer time. Recent results involving the automatic synthesis of the topology and sizing of analog electrical circuits and controllers demonstrate these points.

  12. An Airplane Design having a Wing with Fuselage Attached to Each Tip

    NASA Technical Reports Server (NTRS)

    Spearman, Leroy M.

    2001-01-01

    This paper describes the conceptual design of an airplane having a low aspect ratio wing with fuselages that are attached to each wing tip. The concept is proposed for a high-capacity transport as an alternate to progressively increasing the size of a conventional transport design having a single fuselage with cantilevered wing panels attached to the sides and tail surfaces attached at the rear. Progressively increasing the size of conventional single body designs may lead to problems in some areas such as manufacturing, ground handling, and aerodynamic behavior. A limited review is presented of some past work related to means of relieving size constraints through the use of multiple bodies. Recent low-speed wind-tunnel tests have been made of models representative of the inboard-wing concept. These models have a low aspect ratio wing with a fuselage attached to each tip. Results from these tests, which included force measurements, surface pressure measurements, and wake surveys, are presented herein.

  13. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
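
    A toy numerical version of the expected-net-benefit idea (all costs, prior parameters, and the frequentist success criterion below are hypothetical, chosen only to make the trade-off concrete) can be written as:

        import numpy as np
        from scipy.stats import norm

        # The trial "succeeds" if a one-sided z-test rejects at level 0.025; the
        # success probability is averaged over a normal prior on the true effect.
        prior_mean, prior_sd, sigma = 0.3, 0.2, 1.0
        benefit, cost_per_patient = 1e6, 500.0
        z_alpha = norm.ppf(0.975)

        def expected_net_benefit(n):
            effects = np.random.default_rng(3).normal(prior_mean, prior_sd, 20000)
            power = norm.cdf(effects * np.sqrt(n) / sigma - z_alpha)
            return benefit * power.mean() - cost_per_patient * n

        best_n = max(range(10, 2000, 10), key=expected_net_benefit)
        print("sample size maximizing expected net benefit:", best_n)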

  14. Opportunities for making wood products from small diameter trees in Colorado

    Treesearch

    Dennis L. Lynch; Kurt H. Mackes

    2002-01-01

    Colorado's forests are at risk to forest health problems and catastrophic fire. Forest areas at high risk to catastrophic fire, commonly referred to as Red Zones, contain 2.4 million acres in the Colorado Front Range and 6.3 million acres Statewide. The increasing frequency, size, and intensity of recent forest fires have prompted large appropriations of Federal...

  15. Teaching Case: Introduction to NoSQL in a Traditional Database Course

    ERIC Educational Resources Information Center

    Fowler, Brad; Godin, Joy; Geddy, Margaret

    2016-01-01

    Many organizations are dealing with the increasing demands of big data, so they are turning to NoSQL databases as their preferred system for handling the unique problems of capturing and storing massive amounts of data. Therefore, it is likely that employees in all sizes of organizations will encounter NoSQL databases. Thus, to be more job-ready,…

  16. Technical Training Requirements of Middle Management in the Greek Textile and Clothing Industries.

    ERIC Educational Resources Information Center

    Fotinopoulou, K.; Manolopoulos, N.

    A case study of 16 companies in the Greek textile and clothing industry elicited the training needs of the industry's middle managers. The study concentrated on large and medium-sized work units, using a lengthy questionnaire. The study found that middle managers increasingly need to solve problems and ensure the reliability of new equipment and…

  17. Teething Problems in the Academy: Negotiating the Transition to Large-Class Teaching in the Discipline of History

    ERIC Educational Resources Information Center

    Keirle, Philip A.; Morgan, Ruth A.

    2011-01-01

    In this paper we provide a template for transitioning from tutorial to larger-class teaching environments in the discipline of history. We commence by recognising a number of recent trends in tertiary education in Australian universities that have made this transition to larger class sizes an imperative for many academics: increased student…

  18. What Is the Problem? The Challenge of Providing Effective Teachers for All Children

    ERIC Educational Resources Information Center

    Murnane, Richard J.; Steele, Jennifer L.

    2007-01-01

    Richard Murnane and Jennifer Steele argue that if the United States is to equip its young people with the skills essential in the new economy, high-quality teachers are more important than ever. In recent years, the demand for effective teachers has increased as enrollments have risen, class sizes have fallen, and a large share of the teacher…

  19. A Study of the U.S. Capacity to Address Tropical Disease Problems

    DTIC Science & Technology

    1986-04-01

    [Contents headings recovered from a garbled excerpt: Tropical Disease Specialists; Demography; Size of the Work Force; Age.] ...tropical disease research experience. This is an increasingly important talent pool. Data on the number of tropical disease specialists...lines pioneered in the agricultural field in the institutions supported by the Consultative Group for International Agricultural Research. That model

  20. Cost analysis of a mini-facet heliostat

    NASA Astrophysics Data System (ADS)

    Hall, Colin; Pratt, Rodney; Farrant, David; Corsi, Clotilde; Pye, John; Coventry, Joe

    2017-06-01

    A significant problem with conventional heliostats is off-axis astigmatism, which increases the spot size at the central receiver, limiting the temperature and efficiency of solar thermal systems. Inspired by low-cost mini-actuators used for car wing mirrors, we examine the economic feasibility of a heliostat with individually adjustable mini-facets to correct astigmatic effects, and we compare three alternative tracking configurations.

  1. Solutions to an advanced functional partial differential equation of the pantograph type

    PubMed Central

    Zaidi, Ali A.; Van Brunt, B.; Wake, G. C.

    2015-01-01

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained. PMID:26345391
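
    For concreteness, one commonly studied constant-coefficient special case of such a size-structured growth-and-division model (an assumed simplification for illustration; the paper treats the problem in greater generality) can be written as

        \frac{\partial n}{\partial t}(x,t) + g\,\frac{\partial n}{\partial x}(x,t)
            + (b + \mu)\, n(x,t) = \alpha^{2} b\, n(\alpha x, t), \qquad \alpha > 1,

    where n(x,t) is the number density of cells of size x at time t, g is the growth rate, b the division rate, and μ the death rate; the advanced argument αx (with α = 2 for symmetric binary division) is the functional, pantograph-type term, and the steady size distribution corresponds to a separable solution of the form n(x,t) = e^{λt} y(x).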

  2. Solutions to an advanced functional partial differential equation of the pantograph type.

    PubMed

    Zaidi, Ali A; Van Brunt, B; Wake, G C

    2015-07-08

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained.

  3. Table-sized matrix model in fractional learning

    NASA Astrophysics Data System (ADS)

    Soebagyo, J.; Wahyudin; Mulyaning, E. C.

    2018-05-01

    This article provides an explanation of a fractional learning model, the Table-Sized Matrix model, in which fractional representation and its operations are symbolized by a matrix. The Table-Sized Matrix is employed to develop problem-solving capabilities, as is the area model. The Table-Sized Matrix model referred to in this article is used to develop an understanding of the fraction concept in elementary school students, which can then be generalized into procedural fluency (algorithms) for solving fractional problems and their operations.

  4. Eddy Current Testing and Sizing of Deep Cracks in a Thick Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, H.; Endo, H.; Uchimoto, T.

    2004-02-26

    Due to the skin effect, eddy current testing (ECT) is usually restricted to thin structures such as steam generator tubes of 1.27 mm wall thickness. Detecting and sizing a deep crack in a thick structure therefore remains a problem. In this paper, an ECT probe designed with the help of numerical analysis is presented to address this problem, and parameters such as frequency and coil size are discussed. The inverse problem of crack sizing is solved by applying a fast ECT simulator based on an edge-based finite element method together with a steepest descent method, and reconstructed results for 5, 10 and 15 mm deep cracks obtained from experimental signals are shown.

  5. Impact of induced magnetic field on synovial fluid with peristaltic flow in an asymmetric channel

    NASA Astrophysics Data System (ADS)

    Afsar Khan, Ambreen; Farooq, Arfa; Vafai, Kambiz

    2018-01-01

    In this paper, we examine the impact of an induced magnetic field on the peristaltic motion of a non-Newtonian, incompressible synovial fluid in an asymmetric channel. The problem is solved for two models, Model-1, which behaves as a shear-thinning fluid, and Model-2, which behaves as a shear-thickening fluid, using a modified Adomian decomposition method. The two models are seen to behave in quite opposite ways for some parameters. The impact of various parameters on u, dp/dx, Δp and the induced magnetic field bx is studied graphically. A significant finding of this study is that the size of the trapped bolus and the pressure gradient both increase with increasing M for both models.

  6. Size-Dependent Couple-Stress Fluid Mechanics and Application to the Lid-Driven Square Cavity Flow

    NASA Astrophysics Data System (ADS)

    Hajesfandiari, Arezoo; Dargush, Gary; Hadjesfandiari, Ali

    2012-11-01

    We consider a size-dependent fluid that possesses a characteristic material length l, which becomes increasingly important as the characteristic geometric dimension of the problem decreases. The term involving l in the modified Navier-Stokes equations ρ Dv/Dt = -∇p + μ∇²v - μl²∇²(∇²v) generates a new mechanism for energy dissipation in the flow, which has stabilizing effects at high Reynolds numbers. Interestingly, the idea of adding a fourth-order term was introduced long ago in the form of an artificial dissipation term to stabilize numerical results in CFD methods. However, this additional dissipation has no physical basis for inclusion in the differential equations of motion and is never considered at the boundary nodes of the domain. On the other hand, our couple stress-related dissipation is physically motivated, resulting from the consistent application of energy principles, kinematics and boundary conditions. We should note, in particular, that the boundary conditions in the size-dependent theory must be modified from the classical case to include specification of either rotations or moment-tractions. In order to validate the approach, we focus on the lid-driven cavity problem.
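
    A rough numerical illustration of why the μl²∇²(∇²v) term stabilizes the flow (not taken from the paper; the viscosity, material length and wavenumbers below are invented for the example): for a single Fourier mode of wavenumber k, classical viscosity damps at rate νk², while the couple-stress model damps at νk² + νl²k⁴, so the extra term mainly removes energy from high-wavenumber modes.

        import numpy as np

        # Toy comparison of damping rates for a Fourier mode exp(i*k*x) under
        # classical viscosity versus the size-dependent (couple-stress) model.
        # Illustrative values only; not taken from the abstract.
        nu = 1.0e-3          # kinematic viscosity
        l = 1.0e-2           # characteristic material length
        k = np.array([1.0, 10.0, 100.0, 1000.0])   # wavenumbers

        classical_rate = nu * k**2                            # from the viscous term
        couple_stress_rate = nu * k**2 + nu * l**2 * k**4     # extra biharmonic term

        for ki, rc, rcs in zip(k, classical_rate, couple_stress_rate):
            print(f"k={ki:7.1f}  classical={rc:10.3e}  couple-stress={rcs:10.3e}")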

  7. Concerted control of Escherichia coli cell division

    PubMed Central

    Osella, Matteo; Nugent, Eileen; Cosentino Lagomarsino, Marco

    2014-01-01

    The coordination of cell growth and division is a long-standing problem in biology. Focusing on Escherichia coli in steady growth, we quantify cell division control using a stochastic model, by inferring the division rate as a function of the observable parameters from large empirical datasets of dividing cells. We find that (i) cells have mechanisms to control their size, (ii) size control is effected by changes in the doubling time, rather than in the single-cell elongation rate, (iii) the division rate increases steeply with cell size for small cells, and saturates for larger cells. Importantly, (iv) the current size is not the only variable controlling cell division, but the time spent in the cell cycle appears to play a role, and (v) common tests of cell size control may fail when such concerted control is in place. Our analysis illustrates the mechanisms of cell division control in E. coli. The phenomenological framework presented is sufficiently general to be widely applicable and opens the way for rigorous tests of molecular cell-cycle models. PMID:24550446

  8. Sizing protein-templated gold nanoclusters by time resolved fluorescence anisotropy decay measurements

    NASA Astrophysics Data System (ADS)

    Soleilhac, Antonin; Bertorelle, Franck; Antoine, Rodolphe

    2018-03-01

    Protein-templated gold nanoclusters (AuNCs) are very attractive due to their unique fluorescence properties. A major problem, however, may arise for any future use as in vivo probes, for instance, due to protein structure changes upon the nucleation of an AuNC within the protein. In this work, we propose a simple and reliable fluorescence-based technique for measuring the hydrodynamic size of protein-templated gold nanoclusters. This technique uses the relation between the time-resolved fluorescence anisotropy decay and the hydrodynamic volume, through the rotational correlation time. We determine the molecular size of protein-directed AuNCs with protein templates of increasing sizes, e.g. insulin, lysozyme, and bovine serum albumin (BSA). The comparison of sizes obtained by other techniques (e.g. dynamic light scattering and small-angle X-ray scattering) between bare proteins and proteins containing gold clusters allows us to address the volume changes induced either by conformational changes (for BSA) or by the formation of protein dimers (for insulin and lysozyme) during cluster formation and incorporation.
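
    The abstract's link between anisotropy decay and size rests on the Stokes-Einstein-Debye relation τ = ηV_h/(k_B T). A minimal sketch of that conversion is given below, assuming water viscosity at room temperature and illustrative correlation times; it is not the authors' data-analysis pipeline.

        import numpy as np

        # Stokes-Einstein-Debye estimate: tau = eta * V_h / (k_B * T), so a measured
        # rotational correlation time gives the hydrodynamic volume and an
        # equivalent-sphere radius.  Assumed room-temperature water viscosity.
        k_B = 1.380649e-23      # J/K
        T = 298.15              # K
        eta = 0.89e-3           # Pa*s

        for tau_ns in (2.0, 10.0, 40.0):          # illustrative correlation times
            V_h = tau_ns * 1e-9 * k_B * T / eta   # hydrodynamic volume in m^3
            r_nm = (3.0 * V_h / (4.0 * np.pi)) ** (1.0 / 3.0) * 1e9
            print(f"tau = {tau_ns:5.1f} ns  ->  V_h = {V_h:.3e} m^3, radius ~ {r_nm:.2f} nm")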

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotula, Paul Gabriel; Brozik, Susan Marie; Achyuthan, Komandoor E.

    Engineered nanomaterials (ENMs) are increasingly being used in commercial products, particularly in the biomedical, cosmetic, and clothing industries. For example, pants and shirts are routinely manufactured with silver nanoparticles to render them 'wrinkle-free.' Despite the growing applications, the associated environmental health and safety (EHS) impacts are completely unknown. The significance of this problem became pervasive within the general public when Prince Charles authored an article in 2004 warning of the potential social, ethical, health, and environmental issues connected to nanotechnology. The EHS concerns, however, continued to receive relatively little consideration from federal agencies as compared with large investments in basic nanoscience R&D. The mounting literature regarding the toxicology of ENMs (e.g., the ability of inhaled nanoparticles to cross the blood-brain barrier; Kwon et al., 2008, J. Occup. Health 50, 1) has spurred a recent realization within the NNI and other federal agencies that the EHS impacts related to nanotechnology must be addressed now. In our study we proposed to address critical aspects of this problem by developing primary correlations between nanoparticle properties and their effects on cell health and toxicity. A critical challenge embodied within this problem arises from the ability to synthesize nanoparticles with a wide array of physical properties (e.g., size, shape, composition, surface chemistry, etc.), which in turn creates an immense, multidimensional problem in assessing toxicological effects. In this work we first investigated varying sizes of quantum dots (Qdots) and their ability to cross cell membranes based on their aspect ratio utilizing hyperspectral confocal fluorescence microscopy. We then studied toxicity of epithelial cell lines that were exposed to different sized gold and silver nanoparticles using advanced imaging techniques, biochemical analyses, and optical and mass spectrometry methods. Finally we evaluated a new assay to measure transglutaminase (TG) activity, a potential marker for cell toxicity.

  10. The effect of font size and type on reading performance with Arabic words in normally sighted and simulated cataract subjects.

    PubMed

    Alotaibi, Abdullah Z

    2007-05-01

    Previous investigations have shown that reading is the most common functional problem reported by patients at a low vision practice. While there have been studies investigating the effect of fonts in normal and low vision patients in English, no study has been carried out in Arabic. Additionally, there has been no investigation into the optimum print sizes or fonts that should be used in Arabic books and leaflets for low vision patients. Arabic sentences were read by 100 normally sighted volunteers with and without simulated cataract. Subjects read two font types (Times New Roman and Courier) in three different sizes (N8, N10 and N12). The subjects were asked to read the sentences aloud. The reading speed was calculated as the number of words read divided by the time taken, while the reading rate was calculated as the number of words read correctly divided by the time taken. There was an improvement in reading performance of normally sighted and simulated visually impaired subjects when the print size increased. There was no significant difference in reading performance between the two types of font used at small print sizes; however, the reading rate improved as print size increased with Times New Roman. The results suggest that the use of N12 print in Times New Roman enhanced reading performance in normally sighted and simulated cataract subjects.

  11. Evaluation of emerging factors blocking filtration of high-adjunct-ratio wort.

    PubMed

    Ma, Ting; Zhu, Linjiang; Zheng, Feiyun; Li, Yongxian; Li, Qi

    2014-08-20

    Corn starch has become a common adjunct for beer brewing in Chinese breweries. However, with an increasing ratio of corn starch, problems like poor wort filtration performance arise, which decreases the production capacity of breweries. To solve this problem, factors affecting wort filtration were evaluated, such as the size of corn starch particles, special yellow floats formed during liquefaction of corn starch, and the residual substance after liquefaction. The effects of different enzyme preparations, including β-amylase and β-glucanase, on filtration rate were also evaluated. The results indicate that the emerging yellow floats do not severely block filtration, while the fine, uniformly shaped corn starch particles and their incompletely hydrolyzed residue after liquefaction are responsible for filtration blocking. Application of a β-amylase preparation increased the filtration rate of liquefied corn starch. This study is useful for our insight into the filtration blocking problem arising in the process of high-adjunct-ratio beer brewing and also provides a feasible solution using enzyme preparations.

  12. Modelling the variation in skin-test tuberculin reactions, post-mortem lesion counts and case pathology in tuberculosis-exposed cattle: Effects of animal characteristics, histories and co-infection.

    PubMed

    Byrne, A W; Graham, J; Brown, C; Donaghy, A; Guelbenzu-Gonzalo, M; McNair, J; Skuce, R A; Allen, A; McDowell, S W

    2018-06-01

    Correctly identifying bovine tuberculosis (bTB) in cattle remains a significant problem in endemic countries. We hypothesized that animal characteristics (sex, age, breed), histories (herd effects, testing, movement) and potential exposure to other pathogens (co-infection; BVDV, liver fluke and Mycobacterium avium reactors) could significantly impact the immune responsiveness detected at skin testing and the variation in post-mortem pathology (confirmation) in bTB-exposed cattle. Three model suites were developed using a retrospective observational data set of 5,698 cattle culled during herd breakdowns in Northern Ireland. A linear regression model suggested that antemortem tuberculin reaction size (difference in purified protein derivative avium [PPDa] and bovine [PPDb] reactions) was significantly positively associated with post-mortem maximum lesion size and the number of lesions found. This indicated that reaction size could be considered a predictor of both the extent (number of lesions/tissues) and the pathological progression of infection (maximum lesion size). Tuberculin reaction size was related to age class, and younger animals (<2.85 years) displayed larger reaction sizes than older animals. Tuberculin reaction size was also associated with breed and animal movement and increased with the time between the penultimate and disclosing tests. A negative binomial random-effects model indicated a significant increase in lesion counts for animals with M. avium reactions (PPDb-PPDa < 0) relative to non-reactors (PPDb-PPDa = 0). Lesion counts were significantly increased in animals with previous positive severe interpretation skin-test results. Animals with increased movement histories, young animals and non-dairy breed animals also had significantly increased lesion counts. Animals from herds that had BVDV-positive cattle had significantly lower lesion counts than animals from herds without evidence of BVDV infection. Restricting the data set to only animals with a bTB visible lesion at slaughter (n = 2471), an ordinal regression model indicated that liver fluke-infected animals disclosed smaller lesions, relative to liver fluke-negative animals, and larger lesions were disclosed in animals with increased movement histories. © 2018 Blackwell Verlag GmbH.

  13. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations

    PubMed Central

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor increased in an attempt to compensate for slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, GPU performance is only superior beyond a certain problem size due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094

  14. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations.

    PubMed

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor increased in an attempt to compensate for slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, GPU performance is only superior beyond a certain problem size due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures.
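
    As a small, hedged illustration of the kind of manual parallelization the paper compares compiler-based approaches against, the sketch below counts k-mers in chunks of a DNA string with Python's multiprocessing module and merges the partial counts. It is not one of the four evaluated optimizers (OpenMP, PluTo-SICA, PPCG, OpenACC), and the k-mer length, chunking scheme and toy sequence are invented for the example.

        from collections import Counter
        from multiprocessing import Pool

        K = 4  # k-mer length (illustrative choice)

        def count_kmers(chunk):
            # Count all overlapping k-mers in one chunk of sequence.
            return Counter(chunk[i:i + K] for i in range(len(chunk) - K + 1))

        def split_with_overlap(seq, n_chunks):
            # Split seq into pieces that overlap by K-1 so no boundary k-mer is lost
            # and none is counted twice.
            step = max(1, len(seq) // n_chunks)
            return [seq[i:i + step + K - 1] for i in range(0, len(seq), step)]

        if __name__ == "__main__":
            seq = "ACGT" * 25000  # toy sequence standing in for real genomic data
            with Pool(processes=4) as pool:
                partial_counts = pool.map(count_kmers, split_with_overlap(seq, 4))
            total = sum(partial_counts, Counter())
            print(total.most_common(3))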

  15. Effect of mean diameter and polydispersity of PLG microspheres on drug release: experiment and theory.

    PubMed

    Berchane, N S; Carson, K H; Rice-Ficht, A C; Andrews, M J

    2007-06-07

    The need to tailor release rate profiles from polymeric microspheres is a significant problem. Microsphere size, which has a significant effect on drug release rate, can potentially be varied to design a controlled drug delivery system with a desired release profile. In this work the effects of microsphere mean diameter, polydispersity, and polymer degradation on drug release rate from poly(lactide-co-glycolide) (PLG) microspheres are described. Piroxicam-containing PLG microspheres were fabricated at 20% loading and at three different impeller speeds. A portion of the microspheres was then sieved, giving five different size distributions. In vitro release kinetics were determined for each preparation. Based on these experimental results, a suitable mathematical theory has been developed that incorporates the effect of microsphere size distribution and polymer degradation on drug release. We show from in vitro release experiments that microsphere size has a significant effect on drug release rate. The initial release rate decreased with an increase in microsphere size. In addition, the release profile changed from first order to concave-upward (sigmoidal) as the microsphere size was increased. The mathematical model gave a good fit to the experimental release data. For highly polydisperse populations (polydispersity parameter b<3), incorporating the microsphere size distribution into the mathematical model gave a better fit to the experimental results than using the representative mean diameter. The validated mathematical model can be used to predict small-molecule drug release from PLG microsphere populations.
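
    The modelling point of the abstract, that a polydisperse population's release curve is better described by weighting per-size release profiles by the size distribution than by using a single mean diameter, can be sketched as below. The first-order, size-dependent rate constant and the log-normal size distribution are illustrative assumptions, not the authors' model.

        import numpy as np

        # Illustrative assumptions: first-order release with a rate constant that
        # scales as 1/d^2 (diffusion-limited), and a log-normal diameter distribution.
        rng = np.random.default_rng(0)
        diameters = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=5000)  # microns
        k = 0.5 * (30.0 / diameters) ** 2          # per-day rate, normalized at 30 um
        t = np.linspace(0.0, 30.0, 121)            # days

        # Mass-weighted release: larger spheres carry mass proportional to d^3.
        mass = diameters ** 3
        mass /= mass.sum()
        release_poly = (mass[:, None] * (1.0 - np.exp(-k[:, None] * t))).sum(axis=0)

        # Monodisperse approximation using only the mean diameter.
        k_mean = 0.5 * (30.0 / diameters.mean()) ** 2
        release_mean = 1.0 - np.exp(-k_mean * t)

        print("fraction released at day 10:", round(release_poly[40], 3),
              "vs mean-diameter approximation:", round(release_mean[40], 3))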

  16. What can volumes reveal about human brain evolution? A framework for bridging behavioral, histometric, and volumetric perspectives

    PubMed Central

    de Sousa, Alexandra A.; Proulx, Michael J.

    2014-01-01

    An overall relationship between brain size and cognitive ability exists across primates. Can more specific information about neural function be gleaned from cortical area volumes? Numerous studies have found significant relationships between brain structures and behaviors. However, few studies have speculated about brain structure-function relationships from the microanatomical to the macroanatomical level. Here we address this problem in comparative neuroanatomy, where the functional relevance of overall brain size and the sizes of cortical regions have been poorly understood, by considering comparative psychology, with measures of visual acuity and the perception of visual illusions. We outline a model where the macroscopic size (volume or surface area) of a cortical region (such as the primary visual cortex, V1) is related to the microstructure of discrete brain regions. The hypothesis developed here is that an absolutely larger V1 can process more information with greater fidelity due to having more neurons to represent a field of space. This is the first time that the necessary comparative neuroanatomical research at the microstructural level has been brought to bear on the issue. The evidence suggests that as the size of V1 increases: the number of neurons increases, the neuron density decreases, and the density of neuronal connections increases. Thus, we describe how information about gross neuromorphology, using V1 as a model for the study of other cortical areas, may permit interpretations of cortical function. PMID:25009469

  17. Toxic Picoplanktonic Cyanobacteria—Review

    PubMed Central

    Jakubowska, Natalia; Szeląg-Wasielewska, Elżbieta

    2015-01-01

    Cyanobacteria of a picoplanktonic cell size (0.2 to 2.0 µm) are common organisms of both freshwater and marine ecosystems. However, due to their small size and relatively short study history, picoplanktonic cyanobacteria, in contrast to the microplanktonic cyanobacteria, still remain a poorly studied fraction of plankton. So far, only limited information on picocyanobacteria toxicity has been reported, while the number of reports concerning their presence in ecosystems is increasing. Thus, the issue of picocyanobacteria toxicity needs more researchers’ attention and interest. In this report, we present information on the current knowledge concerning picocyanobacteria toxicity, as well as their harmfulness and the problems they can cause. PMID:25793428

  18. Transformations of inorganic coal constituents in combustion systems. Volume 1, sections 1--5: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helble, J.J.; Srinivasachar, S.; Wilemski, G.

    1992-11-01

    The inorganic constituents or ash contained in pulverized coal significantly increase the environmental and economic costs of coal utilization. For example, ash particles produced during combustion may deposit on heat transfer surfaces, decreasing heat transfer rates and increasing maintenance costs. The minimization of particulate emissions often requires the installation of cleanup devices such as electrostatic precipitators, also adding to the expense of coal utilization. Despite these costly problems, a comprehensive assessment of ash formation had never been attempted. At the start of this program, it was hypothesized that ash deposition and ash particle emissions both depended upon the size and chemical composition of individual ash particles. Questions such as what determines the size of individual ash particles, what determines their composition, whether or not particles deposit, and how combustion conditions, including reactor size, affect these processes remained to be answered. In this 6-year multidisciplinary study, these issues were addressed in detail. The ambitious overall goal was the development of a comprehensive model to predict the size and chemical composition distributions of ash produced during pulverized coal combustion. Results are described.

  19. Multigrid contact detection method

    NASA Astrophysics Data System (ADS)

    He, Kejing; Dong, Shoubin; Zhou, Zhaoyao

    2007-03-01

    Contact detection is a general problem in many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD). The multigrid idea is integrated with contact detection problems. Both the time complexity and memory consumption of the MGCD are O(N). Unlike other methods, whose efficiencies are influenced strongly by the object size distribution, the performance of the MGCD is insensitive to the object size distribution. We compare the MGCD with the no binary search (NBS) method and the multilevel boxing method in three dimensions for both time complexity and memory consumption. For objects of similar size, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method regarding memory consumption. For objects of diverse sizes, the MGCD outperforms both the NBS method and the multilevel boxing method. We use the MGCD to solve the contact detection problem for a granular simulation system based on the discrete element method. From this granular simulation, we obtain the packing density of monosize packing and of binary packing with a size ratio equal to 10. The packing density for monosize particles is 0.636. For binary packing with a size ratio equal to 10, when the number of small particles is 300 times the number of big particles, a maximal packing density of 0.824 is achieved.
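
    For readers unfamiliar with grid-based broad-phase contact detection, the sketch below bins particle centres into a single uniform grid and only tests pairs in neighbouring cells. This is a one-level simplification for illustration, not the multigrid MGCD scheme itself, and the particle data are synthetic.

        import numpy as np
        from collections import defaultdict
        from itertools import product

        rng = np.random.default_rng(1)
        n = 2000
        radius = 0.01
        centers = rng.random((n, 2))             # synthetic particles in the unit square

        cell = 2 * radius                        # cell size >= largest diameter
        grid = defaultdict(list)
        for i, (x, y) in enumerate(centers):
            grid[(int(x // cell), int(y // cell))].append(i)

        contacts = []
        for (cx, cy), members in grid.items():
            # Gather candidates from this cell and its 8 neighbours.
            candidates = []
            for dx, dy in product((-1, 0, 1), repeat=2):
                candidates.extend(grid.get((cx + dx, cy + dy), []))
            for i in members:
                for j in candidates:
                    if j > i and np.linalg.norm(centers[i] - centers[j]) < 2 * radius:
                        contacts.append((i, j))

        print(len(contacts), "contacts found")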

  20. Description and use of LSODE, the Livermore Solver for Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Hindmarsh, Alan C.

    1993-01-01

    LSODE, the Livermore Solver for Ordinary Differential Equations, is a package of FORTRAN subroutines designed for the numerical solution of the initial value problem for a system of ordinary differential equations. It is particularly well suited for 'stiff' differential systems, for which the backward differentiation formula method of orders 1 to 5 is provided. The code includes the Adams-Moulton method of orders 1 to 12, so it can be used for nonstiff problems as well. In addition, the user can easily switch methods to increase computational efficiency for problems that change character. For both methods a variety of corrector iteration techniques is included in the code. Also, to minimize computational work, both the step size and method order are varied dynamically. This report presents complete descriptions of the code and integration methods, including their implementation. It also provides a detailed guide to the use of the code, as well as an illustrative example problem.
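
    LSODE itself is a FORTRAN package; as a hedged, modern illustration of the same variable-order, variable-step stiff-solver idea, the snippet below integrates a stiff linear test system with SciPy's LSODA wrapper (a descendant of the same ODEPACK family). The test problem and tolerances are chosen arbitrarily for the example.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y):
            # A classically stiff linear system: one fast (-1000) and one slow (-1) mode.
            return [-1000.0 * y[0] + 999.0 * y[1], -y[1]]

        sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[2.0, 1.0],
                        method="LSODA", rtol=1e-8, atol=1e-10)

        # Compare against the exact solution y1 = exp(-1000 t) + exp(-t), y2 = exp(-t).
        exact = np.exp(-1000.0 * sol.t) + np.exp(-sol.t)
        print("steps taken:", sol.t.size,
              " max error in y1:", np.max(np.abs(sol.y[0] - exact)))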

  1. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
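
    As a hedged illustration of the sample average approximation (SAA) idea mentioned above, applied to a deliberately tiny surrogate rather than the terminal-airspace scheduler, the sketch below optimizes a single quantity decision against sampled uncertainty and shows how the approximate optimum stabilizes as the sample size grows; the cost model and demand distribution are invented for the example.

        import numpy as np

        rng = np.random.default_rng(42)
        holding, shortage = 1.0, 4.0          # illustrative unit costs

        def saa_optimum(n_scenarios):
            # Minimise the sample-average cost over a grid of candidate decisions.
            demand = rng.normal(loc=100.0, scale=20.0, size=n_scenarios)  # sampled uncertainty
            candidates = np.linspace(50.0, 200.0, 301)
            avg_cost = [np.mean(holding * np.clip(q - demand, 0, None) +
                                shortage * np.clip(demand - q, 0, None))
                        for q in candidates]
            best = int(np.argmin(avg_cost))
            return candidates[best], avg_cost[best]

        for n in (10, 100, 10000):
            q_star, cost = saa_optimum(n)
            print(f"scenarios={n:6d}  decision={q_star:7.2f}  estimated cost={cost:7.2f}")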

  2. Locomotor problems among rural elderly population in a District of Aligarh, North India.

    PubMed

    Maroof, Mohd; Ahmad, Anees; Khalique, Najam; Ansari, M Athar

    2017-01-01

    Locomotor functions decline with the age along with other physiological changes. This results in deterioration of the quality of life with decreased social and economic role in the society, as well as increased dependency, for the health care and other basic services. The demographic transition resulting in increased proportion of elderly may pose a burden to the health system. To find the prevalence of locomotor problems among the elderly population, and related sociodemographic factors. The study was a community-based cross-sectional study done at field practice area of Rural Health Training Centre, JN Medical College, AMU, Aligarh, Uttar Pradesh, India. A sample of 225 was drawn from 1018 elderly population aged 60 years and above using systematic random sampling with probability proportionate to size. Sociodemographic characteristics were obtained using pretested and predesigned questionnaire. Locomotor problems were assessed using the criteria used by National Sample Survey Organization. Data were analyzed using SPSS version 20. Chi-square test was used to test relationship of locomotor problems with sociodemographic factors. P <0.05 was considered statistically significant. The prevalence of locomotor problems among the elderly population was 25.8%. Locomotor problems were significantly associated with age, gender, and working status whereas no significant association with literacy status and marital status was observed. The study concluded that approximately one-fourth of the elderly population suffered from locomotor problems. The sociodemographic factors related to locomotor problems needs to be addressed properly to help them lead an independent and economically productive life.

  3. Complex multidisciplinary system composition for aerospace vehicle conceptual design

    NASA Astrophysics Data System (ADS)

    Gonzalez, Lex

    Although there exists a vast amount of work concerning the analysis, design, and integration of aerospace vehicle systems, there is no standard for how this data and knowledge should be combined in order to create a synthesis system. Each institution creating a synthesis system has in-house vehicle and hardware components it is attempting to model and proprietary methods with which to model them. This means that synthesis systems begin as one-off creations meant to answer a specific problem. As the scope of the synthesis system grows to encompass more and more problems, so do its size and complexity; in order for a single synthesis system to answer multiple questions, the number of methods and method interfaces must increase. As a means to curtail the requirement that an increase in an aircraft synthesis system's capability leads to an increase in its size and complexity, this research effort focuses on the idea that each problem in aerospace requires its own analysis framework. By focusing on the creation of a methodology that centers on matching an analysis framework to the problem being solved, the complexity of the analysis framework is decoupled from the complexity of the system that creates it. The derived methodology allows for the composition of complex multi-disciplinary systems (CMDS) through the automatic creation and implementation of system and disciplinary method interfaces. The CMDS Composition process follows a four-step methodology meant to take a problem definition and progress towards the creation of an analysis framework meant to answer said problem. The unique implementation of the CMDS Composition process takes user-selected disciplinary analysis methods and automatically integrates them in order to create a syntactically composable analysis framework. As a means of assessing the validity of the CMDS Composition process, a prototype system (AVD DBMS) has been developed. AVD DBMS has been used to model the Generic Hypersonic Vehicle (GHV), an open-source family of hypersonic vehicles originating from the Air Force Research Laboratory. AVD DBMS has been applied in three different ways in order to assess its validity: verification using GHV disciplinary data, validation using selected disciplinary analysis methods, and application of the CMDS Composition process to assess the design solution space for the GHV hardware. The research demonstrates the holistic effect that the selection of individual disciplinary analysis methods has on the structure and integration of the analysis framework.

  4. Compression-RSA technique: A more efficient encryption-decryption procedure

    NASA Astrophysics Data System (ADS)

    Mandangan, Arif; Mei, Loh Chai; Hung, Chang Ee; Che Hussin, Che Haziqah

    2014-06-01

    The efficiency of encryption-decryption procedures has become a major problem in asymmetric cryptography. The Compression-RSA technique was developed to overcome this efficiency problem by compressing k plaintexts, where k∈Z+ and k > 2, into only 2 plaintexts. That means, no matter how many plaintexts there are, they will be compressed to only 2 plaintexts. The encryption-decryption procedures are expected to be more efficient since they only receive 2 inputs to be processed instead of k inputs. However, it is observed that as the number of original plaintexts increases, the size of the new plaintexts becomes larger. As a consequence, this will probably affect the efficiency of the encryption-decryption procedures, especially for the RSA cryptosystem, since both of its encryption and decryption procedures involve exponentiation operations. In this paper, we evaluated the relationship between the number of original plaintexts and the size of the new plaintexts. In addition, we conducted several experiments to show that the RSA cryptosystem with the embedded Compression-RSA technique is more efficient than the ordinary RSA cryptosystem.
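
    The size trade-off described above can be made concrete with textbook RSA and toy parameters (insecure, and not the paper's Compression-RSA scheme); the naive byte-packing below merely illustrates why combining k plaintexts into fewer integers makes the values to be exponentiated larger.

        # Textbook RSA with toy parameters (insecure; for illustration only).
        # The "packing" below is a naive byte-concatenation stand-in, not the
        # Compression-RSA technique evaluated in the paper.
        p, q = 61, 53
        n, e = p * q, 17
        phi = (p - 1) * (q - 1)
        d = pow(e, -1, phi)                 # modular inverse (Python 3.8+)

        def encrypt(m): return pow(m, e, n)
        def decrypt(c): return pow(c, d, n)

        plaintexts = [65, 66, 67, 68]       # k separate small messages, each < n

        # Option A: encrypt each of the k messages separately (k exponentiations).
        separate = [encrypt(m) for m in plaintexts]
        assert [decrypt(c) for c in separate] == plaintexts

        # Option B: pack the messages into one larger integer (fewer exponentiations,
        # but the packed value grows with k; here it already exceeds the toy modulus,
        # which is exactly the size-growth issue the abstract points out).
        packed = int.from_bytes(bytes(plaintexts), "big")
        print("packed value:", packed, "bits:", packed.bit_length(),
              "modulus bits:", n.bit_length())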

  5. When is bigger better? The effects of group size on the evolution of helping behaviours.

    PubMed

    Powers, Simon T; Lehmann, Laurent

    2017-05-01

    Understanding the evolution of sociality in humans and other species requires understanding how selection on social behaviour varies with group size. However, the effects of group size are frequently obscured in the theoretical literature, which often makes assumptions that are at odds with empirical findings. In particular, mechanisms are suggested as supporting large-scale cooperation when they would in fact rapidly become ineffective with increasing group size. Here we review the literature on the evolution of helping behaviours (cooperation and altruism), and frame it using a simple synthetic model that allows us to delineate how the three main components of the selection pressure on helping must vary with increasing group size. The first component is the marginal benefit of helping to group members, which determines both direct fitness benefits to the actor and indirect fitness benefits to recipients. While this is often assumed to be independent of group size, marginal benefits are in practice likely to be maximal at intermediate group sizes for many types of collective action problems, and will eventually become very small in large groups due to the law of decreasing marginal returns. The second component is the response of social partners on the past play of an actor, which underlies conditional behaviour under repeated social interactions. We argue that under realistic conditions on the transmission of information in a population, this response on past play decreases rapidly with increasing group size so that reciprocity alone (whether direct, indirect, or generalised) cannot sustain cooperation in very large groups. The final component is the relatedness between actor and recipient, which, according to the rules of inheritance, again decreases rapidly with increasing group size. These results explain why helping behaviours in very large social groups are limited to cases where the number of reproducing individuals is small, as in social insects, or where there are social institutions that can promote (possibly through sanctioning) large-scale cooperation, as in human societies. Finally, we discuss how individually devised institutions can foster the transition from small-scale to large-scale cooperative groups in human evolution. © 2016 Cambridge Philosophical Society.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stroemqvist, Martin H., E-mail: stromqv@kth.se

    We study the problem of optimally controlling the solution of the obstacle problem in a domain perforated by small periodically distributed holes. The solution is controlled by the choice of a perforated obstacle, which is to be chosen in such a fashion that the solution is close to a given profile and the obstacle is not too irregular. We prove existence, uniqueness and stability of an optimal obstacle and derive necessary and sufficient conditions for optimality. When the number of holes increases indefinitely, we determine the limit of the sequence of optimal obstacles and solutions. This limit depends strongly on the rate at which the size of the holes shrinks.

  7. Oceanic protection of prebiotic organic compounds from UV radiation

    NASA Technical Reports Server (NTRS)

    Cleaves, H. J.; Miller, S. L.; Bada, J. L. (Principal Investigator)

    1998-01-01

    It is frequently stated that UV light would cause massive destruction of prebiotic organic compounds because of the absence of an ozone layer. The elevated UV flux of the early sun compounds this problem. This applies to organic compounds of both terrestrial and extraterrestrial origin. Attempts to deal with this problem generally involve atmospheric absorbers. We show here that prebiotic organic polymers as well as several inorganic compounds are sufficient to protect oceanic organic molecules from UV degradation. This aqueous protection is in addition to any atmospheric UV absorbers and should be a ubiquitous planetary phenomenon serving to increase the size of planetary habitable zones.

  8. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
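
    To make the classical side of the comparison concrete, the short sketch below solves a small random number-partitioning instance by exhaustive search; its 2^n cost is the exponential scaling against which the adiabatic algorithm's minimal-gap behaviour is being measured. The instance is randomly generated for illustration.

        import itertools
        import random

        random.seed(0)
        numbers = [random.randint(1, 10**6) for _ in range(16)]  # small random instance
        total = sum(numbers)

        # Exhaustive search over all 2^n subsets: feasible only for tiny n,
        # which is exactly the exponential wall the abstract refers to.
        best_residue, best_subset = total, ()
        for r in range(len(numbers) + 1):
            for subset in itertools.combinations(range(len(numbers)), r):
                s = sum(numbers[i] for i in subset)
                residue = abs(total - 2 * s)
                if residue < best_residue:
                    best_residue, best_subset = residue, subset

        print("minimal residue:", best_residue, "achieved by indices:", best_subset)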

  9. Hierarchical modeling of cluster size in wildlife surveys

    USGS Publications Warehouse

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
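
    A hedged, self-contained simulation of the cluster-size bias described above: cluster sizes are drawn from a known distribution, detection probability is made to increase with cluster size, and the mean size of detected clusters overstates the population mean. The size distribution and detection curve are invented for the illustration, not taken from the paper's hierarchical model.

        import numpy as np

        rng = np.random.default_rng(7)

        # Population of clusters: sizes drawn from a (shifted) Poisson distribution.
        sizes = rng.poisson(lam=2.0, size=100_000) + 1

        # Detection probability increases with cluster size and saturates at 1
        # (here: each member independently seen with probability 0.3).
        p_detect = 1.0 - (1.0 - 0.3) ** sizes
        detected = rng.random(sizes.size) < p_detect

        print("population mean cluster size:", sizes.mean().round(3))
        print("mean size among detected clusters:", sizes[detected].mean().round(3))
        print("clusters detected:", detected.sum(), "of", sizes.size)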

  10. Task-specific modulation of adult humans' tool preferences: number of choices and size of the problem.

    PubMed

    Silva, Kathleen M; Gross, Thomas J; Silva, Francisco J

    2015-03-01

    In two experiments, we examined the effect of modifications to the features of a stick-and-tube problem on the stick lengths that adult humans used to solve the problem. In Experiment 1, we examined whether people's tool preferences for retrieving an out-of-reach object in a tube might more closely resemble those reported with laboratory crows if people could modify a single stick to an ideal length to solve the problem. Unlike when adult humans selected a tool from a set of ten sticks, asking people to modify a single stick to retrieve an object did not generally result in a stick whose length was related to the object's distance. Consistent with the prior research, though, the working length of the stick was related to the object's distance. In Experiment 2, we examined the effect of increasing the scale of the stick-and-tube problem on people's tool preferences. Increasing the scale of the task influenced people to select relatively shorter tools than participants had selected in previous studies. Although the causal structures of the tasks used in the two experiments were identical, their results were not. This underscores the necessity of studying physical cognition in relation to a particular causal structure by using a variety of tasks and methods.

  11. Investigating Created Properties of Nanoparticles Based Drilling Mud

    NASA Astrophysics Data System (ADS)

    Ghasemi, Nahid; Mirzaee, Mojtaba; Aghayari, Reza; Maddah, Heydar

    2018-05-01

    The success of drilling operations is heavily dependent on the drilling fluid. Drilling fluids cool down and lubricate the drill bit, remove cuttings, prevent formation damage, suspend cuttings and also cake off the permeable formation, thus retarding the passage of fluid into the formation. Typical micro- or macro-sized loss circulation materials (LCM) show limited success, especially in formations dominated by micropores, due to their relatively large sizes. Due to their unique characteristics, such as small size and high surface-area-to-volume ratio, nanoparticles play an effective role in solving problems associated with the drilling fluid. In this study, we investigate the effect of adding Al2O3 and TiO2 nanoparticles to the drilling mud. Al2O3 and TiO2 nanoparticles were used at sizes of 20 and 60 nm and a concentration of 0.05 wt%. Investigating the effects of temperature and pressure has shown that an increase in temperature can reduce drilling mud rheological properties such as plastic viscosity, while an increase in pressure can enhance these properties. Also, the effects of pressure at high temperatures were less than those at low temperatures. Studying the effects of adding nanoparticles has shown that they can reduce the drilling mud rheological properties. Moreover, they can increase gel strength, reduce capillary suction time and decrease formation damage.

  12. The lumbosacral segment as a vulnerable region in various postures

    NASA Technical Reports Server (NTRS)

    Rosemeyer, B.

    1978-01-01

    The lumbosacral region in man is exposed to special static and dynamic load. In a supine position, the disc size increases because of the absence of axial load. In a standing position, with physiological posture of the spine, strain discomfort occurs which is increased even more in the sitting position due to the curvature of the lumbar region of the spine and the irregular distribution of pressure in the discs as a result of this. This special problem of sitting posture can be confirmed by examinations.

  13. A comparison of quality and utilization problems in large and small group practices.

    PubMed

    Gleason, S C; Richards, M J; Quinnell, J E

    1995-12-01

    Physicians practicing in large, multispecialty medical groups share an organizational culture that differs from that of physicians in small or independent practices. Since 1980, there has been a sharp increase in the size of multispecialty group practice organizations, in part because of increased efficiencies of large group practices. The greater number of physicians and support personnel in a large group practice also requires a relatively more sophisticated management structure. The efficiencies, conveniences, and management structure of a large group practice provide an optimal environment to practice medicine. However, a search of the literature found no data linking a large group practice environment to practice outcomes. The purpose of the study reported in this article was to determine if physicians in large practices have fewer quality and utilization problems than physicians in small or independent practices.

  14. Influence of Stress and Antibiotic Resistance on Cell-Length Distribution in Mycobacterium tuberculosis Clinical Isolates

    PubMed Central

    Vijay, Srinivasan; Vinh, Dao N.; Hai, Hoang T.; Ha, Vu T. N.; Dung, Vu T. M.; Dinh, Tran D.; Nhung, Hoang N.; Tram, Trinh T. B.; Aldridge, Bree B.; Hanh, Nguyen T.; Thu, Do D. A.; Phu, Nguyen H.; Thwaites, Guy E.; Thuong, Nguyen T. T.

    2017-01-01

    Mycobacterial cellular variations in growth and division increase heterogeneity in cell length, possibly contributing to cell-to-cell variation in host and antibiotic stress tolerance. This may be one of the factors influencing Mycobacterium tuberculosis persistence to antibiotics. Tuberculosis (TB) is a major public health problem in developing countries; antibiotic persistence and the emergence of antibiotic resistance further complicate this problem. We wanted to investigate the factors influencing cell-length distribution in clinical M. tuberculosis strains. In parallel we examined M. tuberculosis cell-length distribution in a large set of clinical strains (n = 158) from ex vivo sputum samples, in vitro macrophage models, and in vitro cultures. Our aim was to understand the influence of clinically relevant factors such as host stresses, M. tuberculosis lineages, antibiotic resistance, antibiotic concentrations, and disease severity on the cell size distribution in clinical M. tuberculosis strains. Increased cell size and cell-to-cell variation in cell length were associated with bacteria in sputum and infected macrophages rather than liquid culture. Multidrug-resistant (MDR) strains displayed increased cell length heterogeneity compared to sensitive strains in infected macrophages and also during growth under rifampicin (RIF) treatment. Importantly, increased cell length was also associated with pulmonary TB disease severity. Supporting these findings, individual host stresses, such as oxidative stress and iron deficiency, increased cell-length heterogeneity of M. tuberculosis strains. In addition, we observed synergism between host stress and RIF treatment in increasing cell length in MDR-TB strains. This study has identified some clinical factors contributing to cell-length heterogeneity in clinical M. tuberculosis strains. The role of these cellular adaptations to host and antibiotic tolerance needs further investigation. PMID:29209302

  15. Bats adjust their pulse emission rates with swarm size in the field.

    PubMed

    Lin, Yuan; Abaid, Nicole; Müller, Rolf

    2016-12-01

    Flying in swarms, e.g., when exiting a cave, could pose a problem to bats that use an active biosonar system because the animals could risk jamming each other's biosonar signals. Studies from current literature have found different results with regard to whether bats reduce or increase emission rate in the presence of jamming ultrasound. In the present work, the number of Eastern bent-wing bats (Miniopterus fuliginosus) that were flying inside a cave during emergence was estimated along with the number of signal pulses recorded. Over the range of average bat numbers present in the recording (0 to 14 bats), the average number of detected pulses per bat increased with the average number of bats. The result was interpreted as an indication that the Eastern bent-wing bats increased their emission rate and/or pulse amplitude with swarm size on average. This finding could be explained by the hypothesis that the bats might not suffer from substantial jamming probabilities under the observed density regimes, so jamming might not have been a limiting factor for their emissions. When jamming did occur, the bats could avoid it through changing the pulse amplitude and other pulse properties such as duration or frequency, which has been suggested by other studies. More importantly, the increased biosonar activities may have addressed a collision-avoidance challenge that was posed by the increased swarm size.

  16. Possibility of using waste tire rubber and fly ash with Portland cement as construction materials.

    PubMed

    Yilmaz, Arin; Degirmenci, Nurhayat

    2009-05-01

    The growing amount of waste rubber produced from used tires has resulted in an environmental problem. Recycling waste tires has been widely studied for the last 20 years in applications such as asphalt pavement, waterproofing systems and membrane liners. The aim of this study is to evaluate the feasibility of utilizing fly ash and rubber waste with Portland cement as a composite material for masonry applications. Class C fly ash and waste automobile tires in three different sizes were used with Portland cement. Compressive and flexural strength, dry unit weight and water absorption tests were performed on the composite specimens containing waste tire rubber. The compressive strength decreased with increasing rubber content, while it increased with increasing fly ash content, for all curing periods. This trend is slightly influenced by particle size. For flexural strength, the specimens with waste tire rubber showed higher values than the control mix, probably due to the effect of rubber fibers. The dry unit weight of all specimens decreased with increasing rubber content, which can be explained by the low specific gravity of rubber particles. Water absorption decreased slightly with increasing rubber particle size. These composite materials containing 10% Portland cement, 70% and 60% fly ash and 20% and 30% tire rubber particles have sufficient strength for masonry applications.

  17. The problem and promise of scale dependency in community phylogenetics.

    PubMed

    Swenson, Nathan G; Enquist, Brian J; Pither, Jason; Thompson, Jill; Zimmerman, Jess K

    2006-10-01

    The problem of scale dependency is widespread in investigations of ecological communities. Null model investigations of community assembly exemplify the challenges involved because they typically include subjectively defined "regional species pools." The burgeoning field of community phylogenetics appears poised to face similar challenges. Our objective is to quantify the scope of the problem of scale dependency by comparing the phylogenetic structure of assemblages across contrasting geographic and taxonomic scales. We conduct phylogenetic analyses on communities within three tropical forests, and perform a sensitivity analysis with respect to two scaleable inputs: taxonomy and species pool size. We show that (1) estimates of phylogenetic overdispersion within local assemblages depend strongly on the taxonomic makeup of the local assemblage and (2) comparing the phylogenetic structure of a local assemblage to a species pool drawn from increasingly larger geographic scales results in an increased signal of phylogenetic clustering. We argue that, rather than posing a problem, "scale sensitivities" are likely to reveal general patterns of diversity that could help identify critical scales at which local or regional influences gain primacy for the structuring of communities. In this way, community phylogenetics promises to fill an important gap in community ecology and biogeography research.

  18. Interference and problem size effect in multiplication fact solving: Individual differences in brain activations and arithmetic performance.

    PubMed

    De Visscher, Alice; Vogel, Stephan E; Reishofer, Gernot; Hassler, Eva; Koschutnig, Karl; De Smedt, Bert; Grabner, Roland H

    2018-05-15

    In the development of math ability, a large variability of performance in solving simple arithmetic problems is observed and has not found a compelling explanation yet. One robust effect in simple multiplication facts is the problem size effect, indicating better performance for small problems compared to large ones. Recently, behavioral studies brought to light another effect in multiplication facts, the interference effect. That is, high interfering problems (receiving more proactive interference from previously learned problems) are more difficult to retrieve than low interfering problems (in terms of physical feature overlap, namely the digits, De Visscher and Noël, 2014). At the behavioral level, the sensitivity to the interference effect is shown to explain individual differences in the performance of solving multiplications in children as well as in adults. The aim of the present study was to investigate the individual differences in multiplication ability in relation to the neural interference effect and the neural problem size effect. To that end, we used a paradigm developed by De Visscher, Berens, et al. (2015) that contrasts the interference effect and the problem size effect in a multiplication verification task, during functional magnetic resonance imaging (fMRI) acquisition. Forty-two healthy adults, who showed high variability in an arithmetic fluency test, participated in our fMRI study. In order to control for the general reasoning level, the IQ was taken into account in the individual differences analyses. Our findings revealed a neural interference effect linked to individual differences in multiplication in the left inferior frontal gyrus, while controlling for the IQ. This interference effect in the left inferior frontal gyrus showed a negative relation with individual differences in arithmetic fluency, indicating a higher interference effect for low performers compared to high performers. This region is suggested in the literature to be involved in resolution of proactive interference. Besides, no correlation between the neural problem size effect and multiplication performance was found. This study supports the idea that the interference due to similarities/overlap of physical traits (the digits) is crucial in memorizing arithmetic facts and in determining individual differences in arithmetic. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. The nonequilibrium quantum many-body problem as a paradigm for extreme data science

    NASA Astrophysics Data System (ADS)

    Freericks, J. K.; Nikolić, B. K.; Frieder, O.

    2014-12-01

    Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve more accurately and for longer times. We review a number of these different ideas here.
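
    To put a number on the 'effectively infinite' growth the abstract describes, the toy calculation below tabulates the Hilbert-space dimension 2^N for a lattice of N spin-1/2 sites and the memory a single complex double-precision state vector would need; the spin-1/2 choice and the 16-bytes-per-amplitude memory model are illustrative assumptions.

        # Memory needed to store one state vector of N spin-1/2 sites:
        # dimension 2**N complex amplitudes at 16 bytes each.
        for n_sites in (20, 30, 40, 50, 60):
            dim = 2 ** n_sites
            bytes_needed = dim * 16
            print(f"N={n_sites:3d}  dim=2^{n_sites} = {dim:.3e}  "
                  f"memory ~ {bytes_needed / 2**30:.3e} GiB")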

  20. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
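
    As a rough sketch of the classical exterior penalty idea the abstract refers to (this is not the BIGDOT implementation; the helper name, test problem, and parameter choices below are invented for illustration), one can repeatedly minimize the objective plus an increasingly weighted sum of squared constraint violations:

        # Sketch of a classical exterior penalty method (not the BIGDOT code):
        # approach the constrained minimum of f subject to g_i(x) <= 0 by minimizing
        # f(x) + r * sum(max(0, g_i(x))**2) for an increasing penalty parameter r.
        import numpy as np
        from scipy.optimize import minimize

        def exterior_penalty(f, constraints, x0, r0=1.0, growth=10.0, outer_iters=6):
            x, r = np.asarray(x0, dtype=float), r0
            for _ in range(outer_iters):
                def penalized(z, r=r):
                    violation = sum(max(0.0, g(z)) ** 2 for g in constraints)
                    return f(z) + r * violation
                x = minimize(penalized, x).x   # unconstrained subproblem
                r *= growth                    # tighten the penalty
            return x

        # Toy example: minimize (x - 2)^2 subject to x <= 1; iterates approach x = 1
        print(exterior_penalty(lambda z: (z[0] - 2.0) ** 2, [lambda z: z[0] - 1.0], [0.0]))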

  1. A reduced successive quadratic programming strategy for errors-in-variables estimation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.

    Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.

  2. The Potential Improvement of Team-Working Skills in Biomedical and Natural Science Students Using a Problem-Based Learning Approach

    ERIC Educational Resources Information Center

    Nowrouzian, Forough L.; Farewell, Anne

    2013-01-01

    Teamwork has become an integral part of most organisations today, and it is clearly important in Science and other disciplines. In Science, research teams increase in size while the number of single-authored papers and patents declines. Teamwork in laboratory sciences permits tackling projects that are too big or complex for one individual.…

  3. [The impact of malnutrition on brain development, intelligence and school work performance].

    PubMed

    Leiva Plaza, B; Inzunza Brito, N; Pérez Torrejón, H; Castro Gloor, V; Jansana Medina, J M; Toro Díaz, T; Almagiá Flores, A; Navarro Díaz, A; Urrutia Cáceres, M S; Cervilla Oltremari, J; Ivanovic Marincovich, D

    2001-03-01

    The findings from several authors confirm that undernutrition at an early age affects brain growth and intellectual quotient. Most of the students with the lowest scholastic achievement scores present a suboptimal head circumference (an anthropometric indicator of past nutrition and brain development) and brain size. On the other hand, intellectual quotient measured through intelligence tests (Wechsler-R, or the Raven Progressive Matrices Test) has been described as positively and significantly correlated with brain size measured by magnetic resonance imaging (MRI); in this respect, intellectual ability has been recognized as one of the best predictors of scholastic achievement. Considering that education is the change lever for the improvement of the quality of life and that the absolute numbers of undernourished children have been increasing in the world, it is of major relevance to analyse the long-term effects of undernutrition at an early age. The investigations related to the interrelationships between nutritional status, brain development, intelligence and scholastic achievement are of greatest importance, since nutritional problems affect the lowest socioeconomic stratum, with negative consequences manifested at school age in higher levels of school dropout, learning problems and a low percentage of students enrolling into higher education. This limits people's development; government policies that prevent childhood malnutrition might therefore yield a clear economic benefit by increasing adult productivity.

  4. An analysis of spectral envelope-reduction via quadratic assignment problems

    NASA Technical Reports Server (NTRS)

    George, Alan; Pothen, Alex

    1994-01-01

    A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate these two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds for the envelope size for certain classes of meshes.
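
    The core of the spectral reordering step can be sketched in a few lines (a simplified dense-matrix illustration under my own assumptions, not the authors' code): build the Laplacian of the matrix's adjacency graph and sort rows and columns by the components of the eigenvector for the second-smallest eigenvalue (the Fiedler vector).

        # Minimal sketch of spectral reordering (assumed details, not the authors' code).
        import numpy as np

        def spectral_order(A):
            """A: symmetric sparsity pattern as a dense numpy array; returns a permutation."""
            adj = (A != 0).astype(float)
            np.fill_diagonal(adj, 0.0)
            L = np.diag(adj.sum(axis=1)) - adj      # graph Laplacian of the pattern
            eigvals, eigvecs = np.linalg.eigh(L)    # symmetric eigendecomposition
            fiedler = eigvecs[:, 1]                 # eigenvector of 2nd-smallest eigenvalue
            return np.argsort(fiedler)              # ordering that reduces the envelope

        # Pattern of a path graph 0-2-3-1 stored in a scrambled order; the spectral
        # ordering recovers the path order 0, 2, 3, 1 (up to reversal).
        A = np.array([[1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 1],
                      [0, 1, 1, 1]], dtype=float)
        print(spectral_order(A))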

  5. Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul

    2015-12-01

    In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is only governed by the converged second harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm is dependent on the mesh size, but not on the total problem size. This is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations.

  6. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results for a 30-bit, order-three deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  7. Recycled wind turbine blades as a feedstock for second generation composites.

    PubMed

    Mamanpush, Seyed Hossein; Li, Hui; Englund, Karl; Tabatabaei, Azadeh Tavousi

    2018-06-01

    With an increase in renewable wind energy via turbines, an underlying problem of turbine blade disposal is looming in many areas of the world. These wind turbine blades are predominately a mixture of glass fiber composites (GFCs) and wood and currently have not found an economically viable recycling pathway. This work investigates a series of second generation composites fabricated using recycled wind turbine material and a polyurethane adhesive. The recycled material was first comminuted via a hammer-mill through a range of varying screen sizes, resinated and compressed to a final thickness. The refined particle size, moisture content and resin content were assessed for their influence on the properties of the recycled composites. Static bending, internal bond and water sorption properties were obtained for all composite panels. Overall, improvement of mechanical properties correlated with increases in resin content, moisture content, and particle size. The current investigation demonstrates that it is feasible and promising to recycle wind turbine blades to fabricate value-added, high-performance composites. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in its governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.

  9. The crack-inclusion interaction problem

    NASA Technical Reports Server (NTRS)

    Liu, X.-H.; Erdogan, F.

    1986-01-01

    The general plane elastostatic problem of interaction between a crack and an inclusion is considered. The Green's functions for a pair of dislocations and a pair of concentrated body forces are used to generate the crack and the inclusion. Integral equations are obtained for a line crack and an elastic line inclusion having an arbitrary relative orientation and size. The nature of stress singularity around the end points of rigid and elastic inclusions is described and three special cases of this intersection problem are studied. The problem is solved for an arbitrary uniform stress state away from the crack-inclusion region. The nonintersecting crack-inclusion problem is considered for various relative size, orientation, and stiffness parameters, and the stress intensity factors at the ends of the inclusion and the crack are calculated. For the crack-inclusion intersection case, special stress intensity factors are defined and are calculated for various values of the parameters defining the relative size and orientation of the crack and the inclusion and the stiffness of the inclusion.

  10. The crack-inclusion interaction problem

    NASA Technical Reports Server (NTRS)

    Xue-Hui, L.; Erdogan, F.

    1984-01-01

    The general plane elastostatic problem of interaction between a crack and an inclusion is considered. The Green's functions for a pair of dislocations and a pair of concentrated body forces are used to generate the crack and the inclusion. Integral equations are obtained for a line crack and an elastic line inclusion having an arbitrary relative orientation and size. The nature of stress singularity around the end points of rigid and elastic inclusions is described and three special cases of this intersection problem are studied. The problem is solved for an arbitrary uniform stress state away from the crack-inclusion region. The nonintersecting crack-inclusion problem is considered for various relative size, orientation, and stiffness parameters, and the stress intensity factors at the ends of the inclusion and the crack are calculated. For the crack-inclusion intersection case, special stress intensity factors are defined and are calculated for various values of the parameters defining the relative size and orientation of the crack and the inclusion and the stiffness of the inclusion.

  11. Life-history tactics: a review of the ideas.

    PubMed

    Stearns, S C

    1976-03-01

    This review organizes ideas on the evolution of life histories. The key life-history traits are brood size, size of young, the age distribution of reproductive effort, the interaction of reproductive effort with adult mortality, and the variation in these traits among an individual's progeny. The general theoretical problem is to predict which combinations of traits will evolve in organisms living in specified circumstances. First consider single traits. Theorists have made the following predictions: (1) Where adult exceeds juvenile mortality, the organism should reproduce only once in its lifetime. Where juvenile exceeds adult mortality, the organism should reproduce several times. (2) Brood size should maximize the number of young surviving to maturity, summed over the lifetime of the parent. But when the optimum brood size varies unpredictably in time, smaller broods should be favored because they decrease the chances of total failure on a given attempt. (3) In expanding populations, selection should minimize age at maturity. In stable populations, when reproductive success depends on size, age, or social status, or when adult exceeds juvenile mortality, then maturation should be delayed, as it should be in declining populations. (4) Young should increase in size at birth with increased predation risk, and decrease in size with increased resource availability. Theorists have also predicted that only particular combinations of traits should occur in specified circumstances. (5) In growing populations, age at maturity should be minimized, reproductive effort concentrated early in life, and brood size increased. (6) One view holds that in stable environments, late maturity, small broods of a few large young, parental care, and small reproductive efforts should be favored (K-selection). In fluctuating environments, early maturity, many small young, reduced parental care, and large reproductive efforts should be favored (r-selection). (7) But another view holds that when juvenile mortality fluctuates more than adult mortality, the traits associated with stable and fluctuating environments should be reversed. We need experiments that test the assumptions and predictions reviewed here, more comprehensive theory that makes more readily falsifiable predictions, and examination of different definitions of fitness.

  12. Weighted mining of massive collections of p-values by convex optimization.

    PubMed

    Dobriban, Edgar

    2018-06-01

    Researchers in data-rich disciplines (think of computational genomics and observational cosmology) often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
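
    For orientation only, the sketch below shows the standard weighted Benjamini-Hochberg step-up that consumes such weights (the Princessp optimization that produces the weights is not reproduced here); the toy p-values and weights are invented, and the weights are assumed to average to one.

        # Sketch of weighted Benjamini-Hochberg: order hypotheses by p_i / w_i and
        # apply the usual step-up thresholds alpha * i / m.
        import numpy as np

        def weighted_bh(pvals, weights, alpha=0.05):
            p, w = np.asarray(pvals, float), np.asarray(weights, float)
            q = p / w                                   # weighted p-values
            order = np.argsort(q)
            m = len(p)
            thresholds = alpha * np.arange(1, m + 1) / m
            passed = q[order] <= thresholds
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            rejected = np.zeros(m, dtype=bool)
            rejected[order[:k]] = True                  # reject the k smallest weighted p-values
            return rejected

        print(weighted_bh([0.001, 0.02, 0.04, 0.3], [2.0, 1.0, 0.5, 0.5]))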

  13. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems, using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size could be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.

  14. Phase Transitions in Planning Problems: Design and Analysis of Parameterized Families of Hard Planning Problems

    NASA Technical Reports Server (NTRS)

    Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide

    2014-01-01

    There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.

  15. Microstructural abnormalities of the brain white matter in attention-deficit/hyperactivity disorder

    PubMed Central

    Chen, Lizhou; Huang, Xiaoqi; Lei, Du; He, Ning; Hu, Xinyu; Chen, Ying; Li, Yuanyuan; Zhou, Jinbo; Guo, Lanting; Kemp, Graham J.; Gong, Qiyong

    2015-01-01

    Background Attention-deficit/hyperactivity disorder (ADHD) is an early-onset neurodevelopmental disorder with multiple behavioural problems and executive dysfunctions for which neuroimaging studies have reported a variety of abnormalities, with inconsistencies partly owing to confounding by medication and concurrent psychiatric disease. We aimed to investigate the microstructural abnormalities of white matter in unmedicated children and adolescents with pure ADHD and to explore the association between these abnormalities and behavioural symptoms and executive functions. Methods We assessed children and adolescents with ADHD and healthy controls using psychiatric interviews. Behavioural problems were rated using the revised Conners’ Parent Rating Scale, and executive functions were measured using the Stroop Colour-Word Test and the Wisconsin Card Sorting test. We acquired diffusion tensor imaging data using a 3 T MRI system, and we compared diffusion parameters, including fractional anisotropy (FA) and mean, axial and radial diffusivities, between the 2 groups. Results Thirty-three children and adolescents with ADHD and 35 healthy controls were included in our study. In patients compared with controls, FA was increased in the left posterior cingulum bundle as a result of both increased axial diffusivity and decreased radial diffusivity. In addition, the averaged FA of the cluster in this region correlated with behavioural measures as well as executive function in patients with ADHD. Limitations This study was limited by its cross-sectional design and small sample size. The cluster size of the significant result was small. Conclusion Our findings suggest that white matter abnormalities within the limbic network could be part of the neural underpinning of behavioural problems and executive dysfunction in patients with ADHD. PMID:25853285

  16. Numerical algorithms for scatter-to-attenuation reconstruction in PET: empirical comparison of convergence, acceleration, and the effect of subsets.

    PubMed

    Berker, Yannick; Karp, Joel S; Schulz, Volkmar

    2017-09-01

    The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a two-branch back-projection (2BP), which was previously proposed, were evaluated. MLGA was combined with the Armijo step size rule and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step size allowed 10-fold increased step sizes compared to native MLGA. Conjugate gradients and Nesterov momentum led to slightly faster, yet non-uniform convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, 2BP computational burden increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in high computational burden. Possibilities of improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, results suggest that data subsets are a promising way of improving MLGA performance.
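
    The Armijo step-size rule named here is a generic backtracking line search; a minimal sketch for gradient ascent (illustrative only, with a toy objective, not the PET-specific MLGA code) is:

        # Gradient ascent with an Armijo backtracking step-size rule: shrink the
        # step until the sufficient-increase condition holds.
        import numpy as np

        def armijo_ascent(f, grad, x0, step0=1.0, beta=0.5, c=1e-4, iters=100):
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                g = grad(x)
                t = step0
                while f(x + t * g) < f(x) + c * t * np.dot(g, g):
                    t *= beta                   # backtrack
                    if t < 1e-12:
                        return x
                x = x + t * g
            return x

        # Toy example: maximize f(x) = -(x - 3)^2, optimum at x = 3
        f = lambda x: -(x[0] - 3.0) ** 2
        grad = lambda x: np.array([-2.0 * (x[0] - 3.0)])
        print(armijo_ascent(f, grad, [0.0]))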

  17. Optimizing the selection of small-town wastewater treatment processes

    NASA Astrophysics Data System (ADS)

    Huang, Jianping; Zhang, Siqi

    2018-04-01

    Municipal wastewater treatment is energy-intensive. This high energy consumption causes high sewage treatment plant operating costs and increases the energy burden. To mitigate the adverse impacts of China’s development, sewage treatment plants should adopt effective energy-saving technologies. Artificial fortified natural water treatment and use of activated sludge and biofilm are all suitable technologies for small-town sewage treatment. This study features an analysis of the characteristics of small and medium-sized township sewage, an overview of current technologies, and a discussion of recent progress in sewage treatment. Based on this, an analysis of existing problems in municipal wastewater treatment is presented, and countermeasures to improve sewage treatment in small and medium-sized towns are proposed.

  18. ROC curves in clinical chemistry: uses, misuses, and possible solutions.

    PubMed

    Obuchowski, Nancy A; Lieber, Michael L; Wians, Frank H

    2004-07-01

    ROC curves have become the standard for describing and comparing the accuracy of diagnostic tests. Not surprisingly, ROC curves are used often by clinical chemists. Our aims were to observe how the accuracy of clinical laboratory diagnostic tests is assessed, compared, and reported in the literature; to identify common problems with the use of ROC curves; and to offer some possible solutions. We reviewed every original work using ROC curves and published in Clinical Chemistry in 2001 or 2002. For each article we recorded phase of the research, prospective or retrospective design, sample size, presence/absence of confidence intervals (CIs), nature of the statistical analysis, and major analysis problems. Of 58 articles, 31% were phase I (exploratory), 50% were phase II (challenge), and 19% were phase III (advanced) studies. The studies increased in sample size from phase I to III and showed a progression in the use of prospective designs. Most phase I studies were powered to assess diagnostic tests with ROC areas ≥0.70. Thirty-eight percent of studies failed to include CIs for diagnostic test accuracy or the CIs were constructed inappropriately. Thirty-three percent of studies provided insufficient analysis for comparing diagnostic tests. Other problems included dichotomization of the gold standard scale and inappropriate analysis of the equivalence of two diagnostic tests. We identify available software and make some suggestions for sample size determination, testing for equivalence in diagnostic accuracy, and alternatives to a dichotomous classification of a continuous-scale gold standard. More methodologic research is needed in areas specific to clinical chemistry.

  19. Efficient, Multi-Scale Designs Take Flight

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Engineers can solve aerospace design problems faster and more efficiently with a versatile software product that performs automated structural analysis and sizing optimization. Collier Research Corporation's HyperSizer Structural Sizing Software is a design, analysis, and documentation tool that increases productivity and standardization for a design team. Based on established aerospace structural methods for strength, stability, and stiffness, HyperSizer can be used all the way from conceptual design to in-service support. The software originated from NASA's efforts to automate its capability to perform aircraft strength analyses, structural sizing, and weight prediction and reduction. With a strategy to combine finite element analysis with an automated design procedure, NASA's Langley Research Center led the development of a software code known as ST-SIZE from 1988 to 1995. Collier Research employees were principal developers of the code along with Langley researchers. The code evolved into one that could analyze the strength and stability of stiffened panels constructed of any material, including light-weight, fiber-reinforced composites.

  20. Geometrical separation method for lipoproteins using bioformulated-fiber matrix electrophoresis: size of high-density lipoprotein does not reflect its density.

    PubMed

    Tabuchi, Mari; Seo, Makoto; Inoue, Takayuki; Ikeda, Takeshi; Kogure, Akinori; Inoue, Ikuo; Katayama, Shigehiro; Matsunaga, Toshiyuki; Hara, Akira; Komoda, Tsugikazu

    2011-02-01

    The increasing number of patients with metabolic syndrome is a critical global problem. In this study, we describe a novel geometrical electrophoretic separation method using a bioformulated-fiber matrix to analyze high-density lipoprotein (HDL) particles. HDL particles are generally considered to be a beneficial component of the cholesterol fraction. Conventional electrophoresis is widely used but is not necessarily suitable for analyzing HDL particles. Furthermore, a higher HDL density is generally believed to correlate with a smaller particle size. Here, we use a novel geometrical separation technique incorporating recently developed nanotechnology (Nata de Coco) to contradict this belief. A dyslipidemia patient given a 1-month treatment of fenofibrate showed an inverse relationship between HDL density and size. Direct microscopic observation and morphological observation of fractionated HDL particles confirmed a lack of relationship between particle density and size. This new technique may improve diagnostic accuracy and medical treatment for lipid related diseases.

  1. Atmospheric particulate matter size distribution and concentration in West Virginia coal mining and non-mining areas.

    PubMed

    Kurth, Laura M; McCawley, Michael; Hendryx, Michael; Lusk, Stephanie

    2014-07-01

    People who live in Appalachian areas where coal mining is prominent have increased health problems compared with people in non-mining areas of Appalachia. Coal mines and related mining activities result in the production of atmospheric particulate matter (PM) that is associated with human health effects. There is a gap in research regarding particle size concentration and distribution to determine respiratory dose around coal mining and non-mining areas. Mass- and number-based size distributions were determined with an Aerodynamic Particle Sizer and a Scanning Mobility Particle Sizer to calculate lung deposition around mining and non-mining areas of West Virginia. Particle number concentrations and deposited lung dose were significantly greater around mining areas compared with non-mining areas, demonstrating elevated risks to humans. The greater dose was correlated with elevated disease rates in the West Virginia mining areas. Number concentrations in the mining areas were comparable to a previously documented urban area where number concentration was associated with respiratory and cardiovascular disease.

  2. Sizing protein-templated gold nanoclusters by time resolved fluorescence anisotropy decay measurements.

    PubMed

    Soleilhac, Antonin; Bertorelle, Franck; Antoine, Rodolphe

    2018-03-15

    Protein-templated gold nanoclusters (AuNCs) are very attractive due to their unique fluorescence properties. A major problem, however, may arise for any future use as in vivo probes, for instance, due to protein structure changes upon the nucleation of an AuNC within the protein. In this work, we propose a simple and reliable fluorescence-based technique measuring the hydrodynamic size of protein-templated gold nanoclusters. This technique uses the relation between the time-resolved fluorescence anisotropy decay and the hydrodynamic volume, through the rotational correlation time. We determine the molecular size of protein-directed AuNCs, with protein templates of increasing sizes, e.g. insulin, lysozyme, and bovine serum albumin (BSA). The comparison of sizes obtained by other techniques (e.g. dynamic light scattering and small-angle X-ray scattering) between bare and gold-cluster-containing proteins allows us to address the volume changes induced either by conformational changes (for BSA) or the formation of protein dimers (for insulin and lysozyme) during cluster formation and incorporation. Copyright © 2017 Elsevier B.V. All rights reserved.
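
    The relation referred to is, presumably, the Stokes-Einstein-Debye equation, theta = eta * V_h / (k_B * T); under that assumption (and with purely illustrative numbers), a measured rotational correlation time converts to a hydrodynamic volume and an equivalent-sphere radius as sketched below.

        # Stokes-Einstein-Debye sketch (assumed relation, illustrative numbers):
        # V_h = theta * k_B * T / eta, then an equivalent-sphere radius from V_h.
        import math

        K_B = 1.380649e-23        # Boltzmann constant, J/K

        def hydrodynamic_radius_nm(theta_ns, temperature_k=298.0, viscosity_pa_s=0.00089):
            theta = theta_ns * 1e-9                                   # s
            volume = theta * K_B * temperature_k / viscosity_pa_s     # m^3
            radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)  # m
            return radius * 1e9                                       # nm

        # e.g. a ~40 ns correlation time in water at 25 °C gives a few nanometres
        print(f"{hydrodynamic_radius_nm(40.0):.2f} nm")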

  3. Influence of nitrogen as grain refiner in low carbon and microalloyed steels

    NASA Astrophysics Data System (ADS)

    Hasan, B. M.; Sathyamurthy, P.

    2018-02-01

    Microalloyed steel is replacing low-alloy steel in the automotive industry. Microalloying elements like vanadium, niobium and titanium are used to enhance steel properties. The current work is focused on using nitrogen as a strengthening element in an existing steel grade. Nitrogen in free form acts as a solid solution strengthener, and in combined form, as precipitates, it acts as a grain refiner for enhancing strength. The problem of grain coarsening at high temperature in case-carburizing steel was avoided by increasing the nitrogen level from 60 ppm to 200 ppm. A grain size of ASTM No. 10 is obtained at a carburizing temperature of 950 °C with the increased nitrogen content, compared with grain size No. 6 at the lower nitrogen level. Crankshafts are mostly made from Cr-Mo alloyed steel. At JSW, nitrogen at a level of 130-200 ppm is added to medium-carbon steel to meet the property requirements for crankshaft applications.

  4. When smoke gets in our eyes: the multiple impacts of atmospheric black carbon on climate, air quality and health.

    PubMed

    Highwood, Eleanor J; Kinnersley, Robert P

    2006-05-01

    With both climate change and air quality on political and social agendas from local to global scale, the links between these hitherto separate fields are becoming more apparent. Black carbon, largely from combustion processes, scatters and absorbs incoming solar radiation, contributes to poor air quality and induces respiratory and cardiovascular problems. Uncertainties in the amount, location, size and shape of atmospheric black carbon cause large uncertainty in both climate change estimates and toxicology studies alike. Increased research has led to new effects and areas of uncertainty being uncovered. Here we draw together recent results and explore the increasing opportunities for synergistic research that will lead to improved confidence in the impact of black carbon on climate change, air quality and human health. Topics of mutual interest include better information on spatial distribution, size, mixing state and measuring and monitoring.

  5. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.

  6. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Hoerger, J.

    1984-01-01

    Users of ADABAS, a relational-like data base management system with its data base programming language (NATURAL), are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy-to-use and flexible means for transferring logical data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration and possible solutions are discussed.

  7. Radiative corrections to quantum sticking on graphene

    NASA Astrophysics Data System (ADS)

    Sengupta, Sanghita; Clougherty, Dennis P.

    2017-07-01

    We study the sticking rate of atomic hydrogen to suspended graphene using four different methods that include contributions from processes with multiphonon emission. We compare the numerical results of the sticking rate obtained by: (i) the loop expansion of the atom self-energy; (ii) the noncrossing approximation (NCA); (iii) the independent boson model approximation (IBMA); and (iv) a leading-order soft-phonon resummation method (SPR). The loop expansion reveals an infrared problem, analogous to the infamous infrared problem in QED. The two-loop contribution to the sticking rate gives a result that tends to diverge for large membranes. The latter three methods remedy this infrared problem and give results that are finite in the limit of an infinite membrane. We find that for micromembranes (sizes ranging from 100 nm to 10 μm), the latter three methods give results that are in good agreement with each other and yield sticking rates that are mildly suppressed relative to the lowest-order golden rule rate. Lastly, we find that the SPR sticking rate decreases slowly to zero with increasing membrane size, while both the NCA and IBMA rates tend to a nonzero constant in this limit. Thus, approximations to the sticking rate can be sensitive to the effects of soft-phonon emission for large membranes.

  8. Sources of spurious force oscillations from an immersed boundary method for moving-body problems

    NASA Astrophysics Data System (ADS)

    Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo

    2011-04-01

    When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is from the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes that of fluid with a body motion. The addition of mass source/sink together with momentum forcing proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150] reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is from the temporal discontinuity in the velocity at the grid points where fluid becomes solid with a body motion. The magnitude of velocity discontinuity decreases with decreasing the grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing the grid spacing and increasing the computational time step size, but they depend more on the grid spacing than on the computational time step size.

  9. Magnetic Suspension and Balance Systems: A Selected, Annotated Bibliography

    NASA Technical Reports Server (NTRS)

    Tuttle, Marie H.; Kilgore, Robert A.; Boyden, Richmond P.

    1983-01-01

    This publication, containing 206 entries, supersedes an earlier bibliography, NASA TM-80225 (April 1980). Citations for 18 documents have been added in this updated version. Most of the additions report results of recent studies aimed at increasing the research capabilities of magnetic suspension and balance systems, e.g., increasing force and torque capability, increasing angle of attack capability, and increasing overall system reliability. Some of the additions address the problem of scaling from the relatively small size of existing systems to much larger sizes. The purpose of this bibliography is to provide an up-to-date list of publications that might be helpful to persons interested in magnetic suspension and balance systems for use in wind tunnels. The arrangement is generally chronological by date of publication. However, papers presented at conferences or meetings are placed under dates of presentation. The numbers assigned to many of the citations have been changed from those used in the previous bibliography. This has been done in order to allow outdated citations to be removed and some recently discovered older works to be included in their proper chronological order.

  10. Fitzmaurice Voicework Pilot Study.

    PubMed

    Watson, Lynn; Nayak, Sadhana

    2015-11-01

    A repeated-measures pilot study was used to investigate acoustic changes in the voices of participants in a Fitzmaurice Voicework (FV) teacher certification program. Maximum phonation time (MPT) was also measured. Eleven participants with no reported voice problems were studied. Pretraining and posttraining recordings were made of each participant. Measures of MPT were made, and the recordings were analyzed for jitter, shimmer, and noise-to-harmonics ratio (NHR). The measure of effect size for MPT was moderate, and there was an overall increase in MPT from pretraining to posttraining, with 70% of participants showing an increase in MPT. The measures of effect size for jitter, shimmer, and NHR were small, with measurements showing no significant changes from pretraining to posttraining. There were indications that FV training may have positive outcomes for actors and professional voice users, particularly in increasing MPT. Further studies with larger subject groups are needed to investigate the significance of the increase in MPT noted in this study and to test whether FV training can help to lower rates of shimmer and jitter. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  11. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We present results of experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
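
    One common QUBO encoding of the weighted k-clique problem (an assumed textbook-style mapping, not necessarily the authors' exact formulation; the penalty weights A and B and the toy graph are illustrative) rewards selected vertex weight while penalizing both selecting a number of vertices other than k and selecting any non-adjacent pair:

        # Build a QUBO matrix Q so that minimizing x^T Q x over x in {0,1}^n favours
        # a maximum-weight clique of exactly k vertices (constant A*k^2 dropped).
        import itertools
        import numpy as np

        def k_clique_qubo(weights, edges, k, A=10.0, B=10.0):
            n = len(weights)
            edge_set = {frozenset(e) for e in edges}
            Q = np.zeros((n, n))
            for i in range(n):
                Q[i, i] -= weights[i]                 # reward selected vertex weight
                Q[i, i] += A * (1 - 2 * k)            # linear part of A*(sum x - k)^2
                for j in range(i + 1, n):
                    Q[i, j] += 2 * A                  # quadratic part of the size constraint
            for i, j in itertools.combinations(range(n), 2):
                if frozenset((i, j)) not in edge_set:
                    Q[i, j] += B                      # forbid selecting non-adjacent pairs
            return Q

        print(k_clique_qubo([1.0, 2.0, 3.0], edges=[(0, 1), (1, 2)], k=2))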

  12. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion

    PubMed Central

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-01-01

    Introduction Crowdsourcing has become an increasingly important tool to address many problems – from government elections in democracies, stock market prices, to modern online tools such as TripAdvisor or Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as the major component, which it uses to generate, assess and prioritize between many competing health research ideas. Methods We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. Results The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14–16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank correlation coefficient = 0.95; IQR 0.94–0.96). Conclusions Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In the exercise we found that a high degree of reproducibility of the identified research priorities was achieved with as few as 45–55 experts. PMID:27350874

  13. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion.

    PubMed

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-06-01

    Crowdsourcing has become an increasingly important tool to address many problems - from government elections in democracies, stock market prices, to modern online tools such as TripAdvisor or Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as the major component, which it uses to generate, assess and prioritize between many competing health research ideas. We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14-16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank correlation coefficient = 0.95; IQR 0.94-0.96). Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In the exercise we found that a high degree of reproducibility of the identified research priorities was achieved with as few as 45-55 experts.
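
    The resampling procedure described here is straightforward to sketch; the example below uses randomly generated stand-in scores (91 scorers by 205 ideas) rather than the CHNRI data, and reports the median overlap between each resampled panel's top 20 and the full panel's top 20.

        # Sampling scorers with replacement and measuring top-20 concordance
        # against the full panel (stand-in data, not the CHNRI scores).
        import numpy as np

        rng = np.random.default_rng(0)
        scores = rng.random((91, 205))                  # 91 scorers x 205 research ideas
        full_top20 = set(np.argsort(scores.mean(axis=0))[-20:])

        def top20_concordance(sample_size, n_draws=200):
            overlaps = []
            for _ in range(n_draws):
                idx = rng.integers(0, scores.shape[0], size=sample_size)   # with replacement
                top = set(np.argsort(scores[idx].mean(axis=0))[-20:])
                overlaps.append(len(top & full_top20))
            return np.median(overlaps)

        for n in (15, 30, 45, 55, 90):
            print(n, top20_concordance(n))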

  14. Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns

    NASA Technical Reports Server (NTRS)

    Shaeffer, John

    2008-01-01

    Matrix methods for solving integral equations via direct-solve LU factorization are presently limited to weeks to months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes to one million unknowns with thousands of right hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by utilizing the numerical low rank nature of spatially blocked unknowns using the Adaptive Cross Approximation for compressing the rank deficient blocks of the system Z matrix, the L and U factors, the right hand side forcing function and the final current solution. This compressed matrix solution is applied to a frequency domain EM solution of Maxwell's equations using a standard Method of Moments approach. Compressed matrix storage and operations count leads to orders of magnitude reduction in memory and run time.
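
    A minimal, fully pivoted Adaptive Cross Approximation of a single matrix block can be sketched as below (production EM codes use partially pivoted ACA so the full block is never formed; the kernel and point sets in the example are invented simply to produce a numerically low-rank block).

        # Fully pivoted ACA: peel off rank-1 cross terms until the residual is small,
        # giving block ~= U @ V with rank far below min(block.shape).
        import numpy as np

        def aca_full(block, tol=1e-6, max_rank=None):
            R = block.astype(float).copy()
            norm_A = np.linalg.norm(block)
            U_cols, V_rows = [], []
            max_rank = max_rank or min(block.shape)
            for _ in range(max_rank):
                if np.linalg.norm(R) <= tol * norm_A:
                    break
                i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)   # pivot entry
                if R[i, j] == 0.0:
                    break
                U_cols.append(R[:, j] / R[i, j])
                V_rows.append(R[i, :].copy())
                R -= np.outer(U_cols[-1], V_rows[-1])                    # remove rank-1 term
            return np.column_stack(U_cols), np.vstack(V_rows)

        # A smooth kernel evaluated on well-separated point sets is numerically low rank
        x, y = np.linspace(0, 1, 200), np.linspace(5, 6, 200)
        block = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))
        U, V = aca_full(block)
        print(U.shape[1], np.linalg.norm(block - U @ V) / np.linalg.norm(block))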

  15. Hamstring autograft size importance in anterior cruciate ligament repair surgery.

    PubMed

    Figueroa, Francisco; Figueroa, David; Espregueira-Mendes, João

    2018-03-01

    Graft size in hamstring autograft anterior cruciate ligament (ACL) surgery is an important factor directly related to failure. Most of the evidence in the field suggests that the size of the graft in hamstring autograft ACL reconstruction matters when the surgeon is trying to avoid failures. The exact graft diameter needed to avoid failures is not absolutely clear and could depend on other factors, but newer studies suggest that even increases of 0.5 mm up to a graft size of 10 mm are beneficial for the patient. There is still no evidence to recommend the use of grafts > 10 mm. Several methods - e.g. folding the graft in more strands - that are simple and reproducible have been published lately to address the problem of having an insufficient graft size when performing an ACL reconstruction. Due to the evidence presented, we think it is necessary for the surgeon to have them in his or her arsenal before performing an ACL reconstruction. There are obviously other factors that should be considered, especially age. Therefore, a larger graft size should not be taken as the only goal in ACL reconstruction. Cite this article: EFORT Open Rev 2018;3:93-97. DOI: 10.1302/2058-5241.3.170038.

  16. An Experimental Study of Team Size and Performance on a Complex Task.

    PubMed

    Mao, Andrew; Mason, Winter; Suri, Siddharth; Watts, Duncan J

    2016-01-01

    The relationship between team size and productivity is a question of broad relevance across economics, psychology, and management science. For complex tasks, however, where both the potential benefits and costs of coordinated work increase with the number of workers, neither theoretical arguments nor empirical evidence consistently favor larger vs. smaller teams. Experimental findings, meanwhile, have relied on small groups and highly stylized tasks, hence are hard to generalize to realistic settings. Here we narrow the gap between real-world task complexity and experimental control, reporting results from an online experiment in which 47 teams of size ranging from n = 1 to 32 collaborated on a realistic crisis mapping task. We find that individuals in teams exerted lower overall effort than independent workers, in part by allocating their effort to less demanding (and less productive) sub-tasks; however, we also find that individuals in teams collaborated more with increasing team size. Directly comparing these competing effects, we find that the largest teams outperformed an equivalent number of independent workers, suggesting that gains to collaboration dominated losses to effort. Importantly, these teams also performed comparably to a field deployment of crisis mappers, suggesting that experiments of the type described here can help solve practical problems as well as advancing the science of collective intelligence.

  17. A Simulation Study of Paced TCP

    NASA Technical Reports Server (NTRS)

    Kulik, Joanna; Coulter, Robert; Rockwell, Dennis; Partridge, Craig

    2000-01-01

    In this paper, we study the performance of paced TCP, a modified version of TCP designed especially for high delay-bandwidth networks. In typical networks, TCP optimizes its send-rate by transmitting increasingly large bursts, or windows, of packets, one burst per round-trip time, until it reaches a maximum window-size, which corresponds to the full capacity of the network. In a network with a high delay-bandwidth product, however, TCP's maximum window-size may be larger than the queue size of the intermediate routers, and routers will begin to drop packets as soon as the windows become too large for the router queues. The TCP sender then concludes that the bottleneck capacity of the network has been reached, and it limits its send-rate accordingly. Partridge proposed paced TCP as a means of solving the problem of queueing bottlenecks. A sender using paced TCP would release packets in multiple, small bursts during a round-trip time in which ordinary TCP would release a single, large burst of packets. This approach allows the sender to increase its send-rate to the maximum window size without encountering queueing bottlenecks. This paper describes the performance of paced TCP in a simulated network and discusses implementation details that can affect the performance of paced TCP.
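
    The pacing idea itself reduces to spacing the congestion window over the round-trip time instead of releasing it as one burst; a toy sketch (illustrative only, not a protocol implementation) is:

        # Compare packet release times for a paced sender (evenly spread over the RTT)
        # and an unpaced sender (one back-to-back burst at the start of the RTT).
        def transmission_times(cwnd_packets, rtt_s, paced=True, start=0.0):
            if paced:
                gap = rtt_s / cwnd_packets
                return [start + k * gap for k in range(cwnd_packets)]
            return [start] * cwnd_packets     # single large burst

        print(transmission_times(8, rtt_s=0.6))               # 8 packets spread over 600 ms
        print(transmission_times(8, rtt_s=0.6, paced=False))  # 8 packets at once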

  18. An Experimental Study of Team Size and Performance on a Complex Task

    PubMed Central

    Mao, Andrew; Mason, Winter; Suri, Siddharth; Watts, Duncan J.

    2016-01-01

    The relationship between team size and productivity is a question of broad relevance across economics, psychology, and management science. For complex tasks, however, where both the potential benefits and costs of coordinated work increase with the number of workers, neither theoretical arguments nor empirical evidence consistently favor larger vs. smaller teams. Experimental findings, meanwhile, have relied on small groups and highly stylized tasks, hence are hard to generalize to realistic settings. Here we narrow the gap between real-world task complexity and experimental control, reporting results from an online experiment in which 47 teams of size ranging from n = 1 to 32 collaborated on a realistic crisis mapping task. We find that individuals in teams exerted lower overall effort than independent workers, in part by allocating their effort to less demanding (and less productive) sub-tasks; however, we also find that individuals in teams collaborated more with increasing team size. Directly comparing these competing effects, we find that the largest teams outperformed an equivalent number of independent workers, suggesting that gains to collaboration dominated losses to effort. Importantly, these teams also performed comparably to a field deployment of crisis mappers, suggesting that experiments of the type described here can help solve practical problems as well as advancing the science of collective intelligence. PMID:27082239

  19. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
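
    A minimal sketch of how the equivalence described above might be applied, assuming a logistic model with a single covariate of standard deviation sd_x and slope beta: the two groups differ by 2*beta*sd_x on the log-odds scale, centred on the overall response probability, and power then follows from the usual two-proportion normal approximation. The helper functions and the numbers in the example are ours, not the authors'.

```python
"""Illustrative power calculation for logistic regression via an equivalent
two-sample comparison (a sketch of the idea, not the authors' exact recipe).

Assumptions (ours): a single covariate X with standard deviation `sd_x`,
slope `beta` on the log-odds scale, overall response probability `p_bar`,
and two equally sized groups whose log-odds differ by 2*beta*sd_x.
"""
import math
from scipy.stats import norm

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def power_logistic(beta, sd_x, p_bar, n_total, alpha=0.05):
    delta = 2 * beta * sd_x                       # log-odds difference between groups
    # Centre the two groups around the overall response probability.
    p1 = inv_logit(logit(p_bar) - delta / 2)
    p2 = inv_logit(logit(p_bar) + delta / 2)
    n = n_total / 2                               # equal group sizes
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z_alpha = norm.ppf(1 - alpha / 2)
    z = abs(p2 - p1) / se
    return norm.cdf(z - z_alpha)

if __name__ == "__main__":
    # e.g. slope 0.4 per SD of X, overall event probability 0.3, 400 subjects
    print(round(power_logistic(beta=0.4, sd_x=1.0, p_bar=0.3, n_total=400), 3))
```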

  20. Handwriting Impairments in People With Parkinson's Disease and Freezing of Gait.

    PubMed

    Heremans, Elke; Nackaerts, Evelien; Broeder, Sanne; Vervoort, Griet; Swinnen, Stephan P; Nieuwboer, Alice

    2016-11-01

    Recent studies show that patients with Parkinson's disease (PD) and freezing of gait (FOG) experience motor problems outside their gait freezing episodes. Because handwriting is also a sequential movement, it may be affected in PD patients with FOG relative to those without. The current study aimed to assess the quality of writing in PD patients with and without FOG in comparison to healthy controls (CTs) during various writing tasks. Handwriting was assessed by the writing of cursive loops on a touch-sensitive writing tablet and by means of the Systematic Screening of Handwriting Difficulties (SOS) test in 30 PD patients with and without freezing and 15 healthy age-matched CTs. The tablet tests were performed at 2 different sizes, either continuously or alternatingly, as indicated by visual target lines. Patients with freezing showed decreased writing amplitudes and increased variability compared with CTs and patients without freezing on the writing tablet tests. Writing problems were present during both tests but were more pronounced during writing at alternating compared with writing at continuous size. Patients with freezing also had a higher total score on the SOS test than patients without freezing and CTs, reflecting more extensive handwriting problems, particularly with writing fluency. Writing is more severely affected in PD patients with FOG than in those without FOG. These results indicate that deficient movement sequencing and adaptation is a generic problem in patients with FOG. © The Author(s) 2016.

  1. Modal Characteristics of Novel Wind Turbine Rotors with Hinged Structures

    NASA Astrophysics Data System (ADS)

    Lu, Hongya; Zeng, Pan; Lei, Liping

    2018-03-01

    Vibration problems of wind turbine rotors have drawn public attention as the size of wind turbines has increased dramatically. Although various factors may cause these vibration problems, flexibility is a major threat among them. Therefore, ensuring high stiffness of the rotors by adopting novel techniques becomes a necessity. This study was a further investigation of several novel designs regarding their dynamic behaviour and influencing mechanism. Modal testing experiments were conducted on a traditional blade and on an isolated blade with hinged rods mounted close to the root. The results showed that the rod increased both the modal frequency and the damping of the blade. Further studies were done on the rods’ impact on the wind turbine rotor with a numerical model, where dimensionless parameters were defined to describe the configuration of the interveined and the bisymmetrical rods. Their influences on the modal frequencies of the rotor were analyzed and discussed.

  2. Brain potentials during mental arithmetic: effects of extensive practice and problem difficulty.

    PubMed

    Pauli, P; Lutzenberger, W; Rau, H; Birbaumer, N; Rickard, T C; Yaroush, R A; Bourne, L E

    1994-07-01

    Recent behavioral investigations indicate that the processes underlying mental arithmetic change systematically with practice from deliberate, conscious calculation to automatic, direct retrieval of answers from memory [Bourne, L.E.Jr. and Rickard, T.C., Mental calculation: The development of a cognitive skill, Paper presented at the Interamerican Congress of Psychology, San Jose, Costa Rica, 1991: Psychol. Rev., 95 (1988) 492-527]. Results reviewed by Moscovitch and Winocur [In: The handbook of aging and cognition, Erlbaum, Hillsdale, NJ, 1992, pp. 315-372] suggest that consciously controlled processes are more dependent on frontal lobe function than are automatic processes. It is appropriate, therefore, to determine whether transitions in the locus of primary brain activity occur with practice on mental calculation. In this experiment, we examine the relationship between characteristics of event-related brain potentials (ERPs) and mental arithmetic. Single-digit mental multiplication problems varying in difficulty (problem size) were used, and subjects were trained on these problems for four sessions. Problem-size and practice effects were reliably found in behavioral measures (RT). The ERP was characterized by a pronounced late positivity after task presentation followed by a slow wave, and a negativity during response indication. These components responded differentially to the practice and problem-size manipulations. Practice mainly affected the topography of the amplitude of the positivity and the offset latency of the slow wave, and problem size mainly affected the offset latency of the slow wave and the pre-response negativity. Fronto-central positivity diminished from session to session, and the focus of positivity finally centered at centro-parietal regions.(ABSTRACT TRUNCATED AT 250 WORDS)

  3. Theoretical study of network design methodologies for the aerial relay system. [energy consumption and air traffic control

    NASA Technical Reports Server (NTRS)

    Rivera, J. M.; Simpson, R. W.

    1980-01-01

    The aerial relay system network design problem is discussed. A generalized branch and bound based algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is mainly useful for small networks, because its computation time increases exponentially with the number of variables.

  4. Parallel-Computing Architecture for JWST Wavefront-Sensing Algorithms

    DTIC Science & Technology

    2011-09-01

    results due to the increasing cost and complexity of each test. 2. ALGORITHM OVERVIEW Phase retrieval is an image-based wavefront-sensing...broadband illumination problems we have found that hand-tuning the right matrix sizes can account for a speedup of 86x. This comes from hand-picking...Wavefront Sensing and Control”. Proceedings of SPIE (2007) vol. 6687 (08). [5] Greenhouse, M. A., Drury, M. P., Dunn, J. L., Glazer, S. D., Greville, E

  5. Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults

    ERIC Educational Resources Information Center

    Rubin, Allen; Yu, Miao

    2017-01-01

    This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…

  6. A Role for M-Matrices in Modelling Population Growth

    ERIC Educational Resources Information Center

    James, Glyn; Rumchev, Ventsi

    2006-01-01

    Adopting a discrete-time cohort-type model to represent the dynamics of a population, the problem of achieving a desired total size of the population under a balanced growth (contraction) and the problem of maintaining the desired size, once achieved, are studied. Properties of positive-time systems and M-matrices are used to develop the results,…

  7. Reciprocal Relations between Student-Teacher Relationship and Children's Behavioral Problems: Moderation by Child-Care Group Size

    ERIC Educational Resources Information Center

    Skalická, Vera; Belsky, Jay; Stenseng, Frode; Wichstrøm, Lars

    2015-01-01

    In this Norwegian study, bidirectional relations between children's behavior problems and child-teacher conflict and closeness were examined, and the possibility of moderation of these associations by child-care group size was tested. Eight hundred and nineteen 4-year-old children were followed up in first grade. Results revealed reciprocal…

  8. Lower Sensitivity to Happy and Angry Facial Emotions in Young Adults with Psychiatric Problems

    PubMed Central

    Vrijen, Charlotte; Hartman, Catharina A.; Lodder, Gerine M. A.; Verhagen, Maaike; de Jonge, Peter; Oldehinkel, Albertine J.

    2016-01-01

    Many psychiatric problem domains have been associated with emotion-specific biases or general deficiencies in facial emotion identification. However, both within and between psychiatric problem domains, large variability exists in the types of emotion identification problems that were reported. Moreover, since the domain-specificity of the findings was often not addressed, it remains unclear whether patterns found for specific problem domains can be better explained by co-occurrence of other psychiatric problems or by more generic characteristics of psychopathology, for example, problem severity. In this study, we aimed to investigate associations between emotion identification biases and five psychiatric problem domains, and to determine the domain-specificity of these biases. Data were collected as part of the ‘No Fun No Glory’ study and involved 2,577 young adults. The study participants completed a dynamic facial emotion identification task involving happy, sad, angry, and fearful faces, and filled in the Adult Self-Report Questionnaire, of which we used the scales depressive problems, anxiety problems, avoidance problems, Attention-Deficit Hyperactivity Disorder (ADHD) problems and antisocial problems. Our results suggest that participants with antisocial problems were significantly less sensitive to happy facial emotions, participants with ADHD problems were less sensitive to angry emotions, and participants with avoidance problems were less sensitive to both angry and happy emotions. These effects could not be fully explained by co-occurring psychiatric problems. Whereas this seems to indicate domain-specificity, inspection of the overall pattern of effect sizes regardless of statistical significance reveals generic patterns as well, in that for all psychiatric problem domains the effect sizes for happy and angry emotions were larger than the effect sizes for sad and fearful emotions. As happy and angry emotions are strongly associated with approach and avoidance mechanisms in social interaction, these mechanisms may hold the key to understanding the associations between facial emotion identification and a wide range of psychiatric problems. PMID:27920735

  9. Brain size predicts problem-solving ability in mammalian carnivores

    PubMed Central

    Benson-Amram, Sarah; Dantzer, Ben; Stricker, Gregory; Swanson, Eli M.; Holekamp, Kay E.

    2016-01-01

    Despite considerable interest in the forces shaping the relationship between brain size and cognitive abilities, it remains controversial whether larger-brained animals are, indeed, better problem-solvers. Recently, several comparative studies have revealed correlations between brain size and traits thought to require advanced cognitive abilities, such as innovation, behavioral flexibility, invasion success, and self-control. However, the general assumption that animals with larger brains have superior cognitive abilities has been heavily criticized, primarily because of the lack of experimental support for it. Here, we designed an experiment to inquire whether specific neuroanatomical or socioecological measures predict success at solving a novel technical problem among species in the mammalian order Carnivora. We presented puzzle boxes, baited with food and scaled to accommodate body size, to members of 39 carnivore species from nine families housed in multiple North American zoos. We found that species with larger brains relative to their body mass were more successful at opening the boxes. In a subset of species, we also used virtual brain endocasts to measure volumes of four gross brain regions and show that some of these regions improve model prediction of success at opening the boxes when included with total brain size and body mass. Socioecological variables, including measures of social complexity and manual dexterity, failed to predict success at opening the boxes. Our results, thus, fail to support the social brain hypothesis but provide important empirical support for the relationship between relative brain size and the ability to solve this novel technical problem. PMID:26811470

  10. Brain size predicts problem-solving ability in mammalian carnivores.

    PubMed

    Benson-Amram, Sarah; Dantzer, Ben; Stricker, Gregory; Swanson, Eli M; Holekamp, Kay E

    2016-03-01

    Despite considerable interest in the forces shaping the relationship between brain size and cognitive abilities, it remains controversial whether larger-brained animals are, indeed, better problem-solvers. Recently, several comparative studies have revealed correlations between brain size and traits thought to require advanced cognitive abilities, such as innovation, behavioral flexibility, invasion success, and self-control. However, the general assumption that animals with larger brains have superior cognitive abilities has been heavily criticized, primarily because of the lack of experimental support for it. Here, we designed an experiment to inquire whether specific neuroanatomical or socioecological measures predict success at solving a novel technical problem among species in the mammalian order Carnivora. We presented puzzle boxes, baited with food and scaled to accommodate body size, to members of 39 carnivore species from nine families housed in multiple North American zoos. We found that species with larger brains relative to their body mass were more successful at opening the boxes. In a subset of species, we also used virtual brain endocasts to measure volumes of four gross brain regions and show that some of these regions improve model prediction of success at opening the boxes when included with total brain size and body mass. Socioecological variables, including measures of social complexity and manual dexterity, failed to predict success at opening the boxes. Our results, thus, fail to support the social brain hypothesis but provide important empirical support for the relationship between relative brain size and the ability to solve this novel technical problem.

  11. Preliminary investigation on the effects of primary airflow to coal particle distribution in coal-fired boilers

    NASA Astrophysics Data System (ADS)

    Noor, N. A. W. Mohd; Hassan, H.; Hashim, M. F.; Hasini, H.; Munisamy, K. M.

    2017-04-01

    This paper presents an investigation of the effects of primary airflow on coal fineness in coal-fired boilers. In a coal-fired power plant, coal is pulverized in a pulverizer and then transferred to the boiler for combustion. Coal needs to be ground to its desired size to obtain maximum combustion efficiency. A coarse coal particle size may lead to many performance problems, such as the formation of clinker. In this study, the effects of primary airflow on coal particle size and coal flow distribution were investigated by using isokinetic coal sampling and computational fluid dynamic (CFD) modelling. Four different primary airflows were tested and the effects on the resulting coal fineness were recorded. Results show that the optimum coal fineness distribution is obtained at the design primary airflow. Any reduction or increase in airflow rate results in an undesirable coal fineness distribution.

  12. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function type and loss functions used.
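
    The small-sample comparison reported above can be mimicked with off-the-shelf tools; the sketch below fits an epsilon-insensitive SVR and an MLP regressor on a deliberately small synthetic training set and compares test errors. The data, features, and hyperparameters are invented for illustration and are not the paper's settings.

```python
"""Small-sample comparison of SVR (SRM-based) and an MLP (ERM-based),
in the spirit of the study above.  Synthetic 'log -> porosity' data and all
hyperparameters are illustrative assumptions, not the paper's settings."""
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_train, n_test = 25, 200                             # deliberately small training set
X = rng.uniform(-1, 1, size=(n_train + n_test, 3))    # stand-ins for well-log features
y = 0.2 + 0.1 * X[:, 0] - 0.05 * X[:, 1] ** 2 + 0.01 * rng.normal(size=len(X))
X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

models = {
    "SVR (eps-insensitive)": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01)),
    "MLP": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {mse:.5f}")
```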

  13. Dynamics of upper mantle rocks decompression melting above hot spots under continental plates

    NASA Astrophysics Data System (ADS)

    Perepechko, Yury; Sorokin, Konstantin; Sharapov, Victor

    2014-05-01

    A numerical 2D simulation of decompression melting above hot spots (HS) was carried out under the following conditions: the initial temperature within the crust-mantle section was postulated; the thickness of the metasomatized lithospheric mantle is determined by the mantle rheology and the position of the upper asthenosphere boundary; the upper and lower boundaries were taken as impermeable, with no-slip conditions and a prescribed temperature distribution (1400-2050°C); the lateral boundaries imitated an infinite layer. The sizes and lateral distribution of the hot spots, their symmetry, and the maximum temperature were varied between the thermodynamic condition for the existence of the perovskite-majorite transition and values above the transition temperature. The problem was solved numerically with a cell-vertex finite volume method for thermo-hydrodynamic problems. To improve the convergence of the iterative process, under-relaxation with a different relaxation parameter for each equation was used. A through-calculation method was used to increase the computing rate for the two-layered upper mantle-lithosphere system. The computational region was 700 x (2100-4900) km, and the time step for studying the asthenosphere dynamics was 0.15-0.65 Ma. The following factors controlling the size and melting degree of the convective upper mantle are shown: a) the initial temperature distribution along the upper mantle section, b) the size and symmetry of the HS, and c) the temperature excess within the HS above the temperature at the upper-lower mantle border, TB = 1500-2000°C with 5-15% deviation but not exceeding 2350°C. It is found that decompression melting in the presence of an HS initiates primitive mantle melting at TB > 1600°C. The influence of initial upper mantle heating on asthenolens dimensions at a constant HS size is controlled mainly by the degree of decompression melting. Thus, for a lateral HS size of 400 km, decompression melting appears at TB > 1600°C and HS temperature (THS) > 1900°C, with an asthenolens size of ~700 km. When THS = 2000°C, the maximum melting degree of the primitive mantle is near 40%. With an increase of TB above 1900°C, the maximum degree of melting can reach 100% with the same size of the decompression melting zone (700 km). We examined decompression melting above HS with LHS = 100-780 km at TB = 1850-2100°C and a lithosphere thickness of 100 km. It is shown that the asthenolens size (Lln) does not change substantially: Lln = 700 km at LHS = 100 km and Lln = 800 km at LHS = 780 km. When a large HS is asymmetric, a region of advection develops above the HS maximum, forming an asymmetric cell. The influence of lithospheric plate thickness on the appearance and evolution of the asthenolens above the HS was investigated for a model stepped profile at TB ≤ 1750°C, LHS = 100 km, and a maximum THS = 2350°C. With an increase of TB, the difference in Lln beneath the lithospheric steps levels off, while some difference in melting degree and in the time of melting onset at the top of the HS remains. RFBR grant 12-05-00625.

  14. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - a combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.

  15. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.

    PubMed

    Wang, Jiaxi; Gronalt, Manfred; Sun, Yan

    2017-01-01

    Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, a regression analysis yields a driver-size predictor and a sensitivity analysis gives some interesting insights that are useful for decision makers.
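
    As a small illustration of the second-stage idea of keeping only non-dominated trade-offs (here between total walking distance and workload imbalance), the sketch below filters dominated candidate schedules; the candidate points are invented, and the normalized normal constraint machinery itself is omitted.

```python
"""Minimal Pareto filter for a bi-objective minimisation problem, in the
spirit of the second stage above (total walking distance vs. workload
imbalance).  The candidate points are made up for illustration."""

def pareto_filter(points):
    """Keep points not dominated by any other (both objectives to minimise)."""
    nondominated = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            nondominated.append(p)
    return nondominated

if __name__ == "__main__":
    # (total walking distance, workload imbalance) of candidate assignments
    candidates = [(120, 0.40), (100, 0.55), (150, 0.20), (130, 0.45), (110, 0.50)]
    print(pareto_filter(candidates))
```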

  16. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations

    PubMed Central

    Gronalt, Manfred; Sun, Yan

    2017-01-01

    Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, a regression analysis yields a driver-size predictor and a sensitivity analysis gives some interesting insights that are useful for decision makers. PMID:28704489

  17. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
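
    In column-generation schemes of this kind, each new pattern typically comes from a knapsack-type single large object placement subproblem. The sketch below generates one pattern for one stock length by dynamic programming over item dual values; the stock length, item sizes, and duals are invented, and the surrounding 1DMSSCSP machinery (multiple stock sizes, residual problems, final ILP) is omitted.

```python
"""One pattern-generation step for 1D cutting stock: given dual values for the
item types, find the most valuable cutting pattern for a single stock length
via an unbounded-knapsack DP.  All numbers are illustrative assumptions."""

def best_pattern(stock_len, item_lens, duals):
    # value[c] = best total dual value achievable within used length <= c
    value = [0.0] * (stock_len + 1)
    choice = [None] * (stock_len + 1)          # which item was added last
    for c in range(1, stock_len + 1):
        value[c] = value[c - 1]                # carry over: leave length c unused
        for i, (L, d) in enumerate(zip(item_lens, duals)):
            if L <= c and value[c - L] + d > value[c]:
                value[c] = value[c - L] + d
                choice[c] = i
    # Recover the pattern (item counts) by walking back through the choices.
    counts = [0] * len(item_lens)
    c = stock_len
    while c > 0:
        if choice[c] is None:
            c -= 1
        else:
            counts[choice[c]] += 1
            c -= item_lens[choice[c]]
    return counts, value[stock_len]

if __name__ == "__main__":
    counts, val = best_pattern(stock_len=100, item_lens=[45, 36, 31, 14], duals=[4.5, 3.6, 3.1, 1.4])
    print("pattern:", counts, "total dual value:", val)
```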

  18. Efficient dual approach to distance metric learning.

    PubMed

    Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton

    2014-02-01

    Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a limit of around a few hundred dimensions on the size of problem that can practically be solved. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
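
    Whatever procedure learns it, the quadratic Mahalanobis metric is just a positive semidefinite matrix M applied to feature differences. The sketch below shows the distance computation together with the eigenvalue-clipping projection onto the PSD cone that SDP-style and dual solvers rely on; the matrix and points are random illustrative values, not the paper's method.

```python
"""Quadratic Mahalanobis distance under a learned PSD matrix M, plus the
eigenvalue-clipping projection onto the PSD cone that dual/SDP approaches
rely on.  M and the example points are illustrative, not from the paper."""
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    A = (A + A.T) / 2
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(max(d @ M @ d, 0.0)))   # clamp tiny negative round-off

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    raw = rng.normal(size=(3, 3))              # pretend this came out of a solver step
    M = project_psd(raw)                       # make it a valid metric matrix
    x, y = rng.normal(size=3), rng.normal(size=3)
    print("d_M(x, y) =", round(mahalanobis(x, y, M), 4))
```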

  19. An evaluation of exact methods for the multiple subset maximum cardinality selection problem.

    PubMed

    Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas

    2016-05-01

    The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach. © 2016 The British Psychological Society.

  20. Computational Study for Planar Connected Dominating Set Problem

    NASA Astrophysics Data System (ADS)

    Marzban, Marjan; Gu, Qian-Ping; Jia, Xiaohua

    The connected dominating set (CDS) problem is a well studied NP-hard problem with many important applications. Dorn et al. [ESA 2005, LNCS 3669, pp. 95-106] introduce a new technique to generate 2^{O(sqrt{n})} time and fixed-parameter algorithms for a number of non-local hard problems, including the CDS problem in planar graphs. The practical performance of this algorithm is yet to be evaluated. We perform a computational study for such an evaluation. The results show that the size of instances that can be solved by the algorithm mainly depends on the branchwidth of the instances, coinciding with the theoretical result. For graphs with small or moderate branchwidth, CDS problem instances with up to a few thousand edges can be solved in practical time and memory space. This suggests that branch-decomposition based algorithms can be practical for the planar CDS problem.

  1. Similar Ratios of Introns to Intergenic Sequence across Animal Genomes

    PubMed Central

    Wörheide, Gert

    2017-01-01

    Abstract One central goal of genome biology is to understand how the usage of the genome differs between organisms. Our knowledge of genome composition, needed for downstream inferences, is critically dependent on gene annotations, yet problems associated with gene annotation and assembly errors are usually ignored in comparative genomics. Here, we analyze the genomes of 68 species across 12 animal phyla and some single-cell eukaryotes for general trends in genome composition and transcription, taking into account problems of gene annotation. We show that, regardless of genome size, the ratio of introns to intergenic sequence is comparable across essentially all animals, with nearly all deviations dominated by increased intergenic sequence. Genomes of model organisms have ratios much closer to 1:1, suggesting that the majority of published genomes of nonmodel organisms are underannotated and consequently omit substantial numbers of genes, with likely negative impact on evolutionary interpretations. Finally, our results also indicate that most animals transcribe half or more of their genomes arguing against differences in genome usage between animal groups, and also suggesting that the transcribed portion is more dependent on genome size than previously thought. PMID:28633296

  2. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method - a combined simulated annealing (SA) and genetic algorithm (GA) approach - is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process used to search for a better solution that minimizes the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different size and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
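
    A compact sketch of the combined scheme on a toy problem: simulated annealing acts as the outer accept/reject loop while GA-style crossover and mutation propose new candidate route subsets, mirroring the division of labour described above. The binary encoding, cost function, and parameters are placeholders, not the paper's network model.

```python
"""Toy combined SA + GA search: SA is the outer accept/reject loop, GA-style
crossover and mutation generate candidate subsets of routes.  The cost
function and all parameters are illustrative stand-ins."""
import math
import random

random.seed(0)
N_ROUTES = 20

def cost(subset):
    # Placeholder "user + operator cost": penalise deviating from 8 selected routes.
    coverage = sum(subset)
    return abs(coverage - 8) + 0.1 * sum(i * b for i, b in enumerate(subset)) / N_ROUTES

def ga_propose(parent_a, parent_b, p_mut=0.1):
    cut = random.randrange(1, N_ROUTES)                 # one-point crossover
    child = parent_a[:cut] + parent_b[cut:]
    return [1 - b if random.random() < p_mut else b for b in child]

def sa_ga(iterations=2000, t0=1.0, cooling=0.995):
    current = [random.randint(0, 1) for _ in range(N_ROUTES)]
    other = [random.randint(0, 1) for _ in range(N_ROUTES)]   # second parent
    best = current[:]
    t = t0
    for _ in range(iterations):
        candidate = ga_propose(current, other)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):   # SA acceptance rule
            other = current
            current = candidate
        if cost(current) < cost(best):
            best = current[:]
        t *= cooling                                    # cooling schedule
    return best, cost(best)

if __name__ == "__main__":
    best, c = sa_ga()
    print("best subset:", best, "cost:", round(c, 3))
```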

  3. Control of minimum member size in parameter-free structural shape optimization by a medial axis approximation

    NASA Astrophysics Data System (ADS)

    Schmitt, Oliver; Steinmann, Paul

    2018-06-01

    We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is for example of interest for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. Therefore we use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization highly increases with the number of surface elements we use the distance function instead to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.
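
    A rough 2D illustration of the distance-function idea, under assumptions of our own: sample the boundary of a simple rectangular domain, evaluate interior grid points, and keep those whose two nearest boundary samples are nearly equidistant yet well separated; the retained points form a point-cloud approximation of the medial axis, and twice the local distance gives the member thickness. Domain, grid resolution, and tolerances are arbitrary choices, not the paper's.

```python
"""Point-cloud approximation of the medial axis of a 2D rectangle using the
distance function, in the spirit of the approach above.  The domain, grid
resolution, and tolerances are arbitrary illustrative choices."""
import numpy as np

def rectangle_boundary(n_per_side=200):
    """Boundary of a 2 x 1 rectangle, sampled as a cloud of 'FE boundary nodes'."""
    t = np.linspace(0.0, 1.0, n_per_side, endpoint=False)
    bottom = np.column_stack([2.0 * t, np.zeros_like(t)])
    right = np.column_stack([np.full_like(t, 2.0), t])
    top = np.column_stack([2.0 - 2.0 * t, np.ones_like(t)])
    left = np.column_stack([np.zeros_like(t), 1.0 - t])
    return np.vstack([bottom, right, top, left])

def medial_axis_points(boundary, nx=81, ny=41, tol=0.02, min_sep=0.2):
    xs = np.linspace(0.0, 2.0, nx)
    ys = np.linspace(0.0, 1.0, ny)
    pts = []
    for x in xs:
        for y in ys:
            d = np.linalg.norm(boundary - np.array([x, y]), axis=1)
            order = np.argsort(d)
            i = order[0]                          # closest boundary sample
            # nearest boundary sample belonging to a *different* part of the
            # boundary (farther than min_sep from the closest sample)
            far = order[np.linalg.norm(boundary[order] - boundary[i], axis=1) > min_sep]
            if far.size == 0:
                continue
            j = far[0]
            if abs(d[i] - d[j]) < tol:            # two (almost) equidistant closest points
                pts.append((x, y))
    return np.array(pts)

if __name__ == "__main__":
    cloud = medial_axis_points(rectangle_boundary())
    print(f"{len(cloud)} approximate medial-axis points; local thickness ~ 2 * distance")
```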

  4. Validation of a high-performance size-exclusion chromatography method to determine and characterize β-glucans in beer wort using a triple-detector array.

    PubMed

    Tomasi, Ivan; Marconi, Ombretta; Sileoni, Valeria; Perretti, Giuseppe

    2017-01-01

    Beer wort β-glucans are high-molecular-weight non-starch polysaccharides that are of great interest to the brewing industry. Because glucans can increase the viscosity of solutions and form gels, hazes, and precipitates, they are often related to poor lautering performance and beer filtration problems. In this work, a simple and suitable method was developed to determine and characterize β-glucans in beer wort using size exclusion chromatography coupled with a triple-detector array, which is composed of a light scatterer, a viscometer, and a refractive-index detector. The method's performance is comparable to that of the commercial reference method, as shown by the statistical validation, and it enables one to obtain interesting parameters of β-glucans in beer wort, such as the molecular weight averages, fraction description, hydrodynamic radius, intrinsic viscosity, polydispersity, and Mark-Houwink parameters. This characterization can be useful in brewing science to understand filtration problems, which are not always explained through conventional analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Solution to the problem of the poor cyclic fatigue resistance of bulk metallic glasses

    PubMed Central

    Launey, Maximilien E.; Hofmann, Douglas C.; Johnson, William L.; Ritchie, Robert O.

    2009-01-01

    The recent development of metallic glass-matrix composites represents a particular milestone in engineering materials for structural applications owing to their remarkable combination of strength and toughness. However, metallic glasses are highly susceptible to cyclic fatigue damage, and previous attempts to solve this problem have been largely disappointing. Here, we propose and demonstrate a microstructural design strategy to overcome this limitation by matching the microstructural length scales (of the second phase) to mechanical crack-length scales. Specifically, semisolid processing is used to optimize the volume fraction, morphology, and size of second-phase dendrites to confine any initial deformation (shear banding) to the glassy regions separating dendrite arms having length scales of ≈2 μm, i.e., to less than the critical crack size for failure. Confinement of the damage to such interdendritic regions results in enhancement of fatigue lifetimes and increases the fatigue limit by an order of magnitude, making these “designed” composites as resistant to fatigue damage as high-strength steels and aluminum alloys. These design strategies can be universally applied to any other metallic glass systems. PMID:19289820

  6. Study on recent execution of overall evaluation bidding method in small and medium-sized regional local governments

    NASA Astrophysics Data System (ADS)

    Fujishima, Hirohide; Yanase, Norihiko

    About 70% of local governments in Japan endeavored to introduce the overall evaluation bidding method for their public works in 2011, and each authority ordered one or a few projects under the new bidding process. That is, its use remained at a trial level, reportedly because of the lengthy procedure and the heavy administrative load of the system. The author thinks that this burden is related to the personnel situation of local governments, to practical problems concerning the kinds and prices of construction works, and to the officers' experience with the new bidding method. The aim of this study is to analyze these problems with respect to the officers' profession, posts, and administrative experience, using statistical data, questionnaires, and interviews with the officers. The results indicate that a group of small local governments uses the method appropriately, whereas a group of medium-sized governments is reluctant to award more contracts under the new bidding system because of an imbalance between staff capability and the volume of public works ordered.

  7. Control of minimum member size in parameter-free structural shape optimization by a medial axis approximation

    NASA Astrophysics Data System (ADS)

    Schmitt, Oliver; Steinmann, Paul

    2017-09-01

    We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is for example of interest for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. Therefore we use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization highly increases with the number of surface elements we use the distance function instead to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.

  8. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.

  9. Amoeba-inspired nanoarchitectonic computing: solving intractable computational problems using nanoscale photoexcitation transfer dynamics.

    PubMed

    Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko

    2013-06-18

    Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
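
    To make the scaling concrete, an exhaustive check of a Boolean formula must in the worst case examine all 2^n truth assignments, which is why SAT becomes intractable for conventional exhaustive search as the number of variables grows. The brute-force checker below, with an arbitrary three-variable formula in clause form, illustrates only the problem itself, not the amoeba-inspired search.

```python
"""Brute-force SAT check: the search space doubles with every added variable,
which is the exponential scaling referred to above.  The example formula is
arbitrary.  Clauses are lists of literals; literal +i means variable i is
true, -i means it is false."""
from itertools import product

def brute_force_sat(n_vars, clauses):
    for assignment in product([False, True], repeat=n_vars):   # 2**n_vars cases
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return assignment          # a satisfying assignment was found
    return None                        # formula is unsatisfiable

if __name__ == "__main__":
    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    clauses = [[1, -2], [2, 3], [-1, -3]]
    print(brute_force_sat(3, clauses))
```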

  10. Behavioral and Emotional Problems Reported by Parents of Children Ages 6 to 16 in 31 Societies

    ERIC Educational Resources Information Center

    Rescorla, Leslie; Achenbach, Thomas; Ivanova, Masha Y.; Dumenci, Levent; Almqvist, Fredrik; Bilenberg, Niels; Bird, Hector; Chen, Wei; Dobrean, Anca; Dopfner, Manfred; Erol, Nese; Fombonne, Eric; Fonseca, Antonio; Frigerio, Alessandra; Grietens, Hans; Hannesdottir, Helga; Kanbayashi, Yasuko; Lambert, Michael; Larsson, Bo; Leung, Patrick; Liu, Xianchen; Minaei, Asghar; Mulatu, Mesfin S.; Novik, Torunn S.; Oh, Kyung-Ja; Roussos, Alexandra; Sawyer, Michael; Simsek, Zeynep; Steinhausen, Hans-Christoph; Weintraub, Sheila; Weisz, John; Metzke, Christa Winkler; Wolanczyk, Tomasz; Yang, Hao-Jan; Zilber, Nelly; Zukauskiene, Rita; Verhulst, Frank

    2007-01-01

    This study compared parents' ratings of behavioral and emotional problems on the "Child Behavior Checklist" (Achenbach, 1991; Achenbach & Rescorla, 2001) for general population samples of children ages 6 to 16 from 31 societies (N = 55,508). Effect sizes for society ranged from 0.03 to 0.14. Effect sizes for gender were less than or…

  11. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 2: Derivation of finite-element equations and comparisons with analytical solutions

    USGS Publications Warehouse

    Cooley, Richard L.

    1992-01-01

    MODFE, a modular finite-element model for simulating steady- or unsteady-state, areal or axisymmetric flow of ground water in a heterogeneous anisotropic aquifer, is documented in a three-part series of reports. In this report, part 2, the finite-element equations are derived by minimizing a functional of the difference between the true and approximate hydraulic head, which produces equations that are equivalent to those obtained by either classical variational or Galerkin techniques. Spatial finite elements are triangular with linear basis functions, and temporal finite elements are one dimensional with linear basis functions. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining units; (3) specified recharge or discharge at points, along lines, or areally; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining units combined with aquifer dewatering, and evapotranspiration. The matrix equations produced by the finite-element method are solved by the direct symmetric-Doolittle method or the iterative modified incomplete-Cholesky conjugate-gradient method. The direct method can be efficient for small- to medium-sized problems (less than about 500 nodes), and the iterative method is generally more efficient for larger-sized problems. Comparison of finite-element solutions with analytical solutions for five example problems demonstrates that the finite-element model can yield accurate solutions to ground-water flow problems.
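
    The solver trade-off mentioned above (a direct factorization for small systems, a preconditioned conjugate-gradient iteration for larger ones) can be illustrated with a standard sparse linear algebra toolkit. The 1D Laplacian test matrix below is only a stand-in for an assembled finite-element system, and the incomplete-LU preconditioner stands in for MODFE's modified incomplete-Cholesky preconditioner.

```python
"""Direct vs. iterative solution of a sparse symmetric positive-definite
system, illustrating the solver trade-off described above.  The 1D Laplacian
test matrix is a stand-in, not the MODFE finite-element matrix."""
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000                                           # "larger-sized problem"
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Direct solve (efficient for small to medium systems).
x_direct = spla.spsolve(A.tocsc(), b)

# Conjugate gradients with an incomplete-factorisation preconditioner
# (generally more efficient for large sparse systems).
ilu = spla.spilu(A.tocsc())
M = spla.LinearOperator((n, n), matvec=ilu.solve)
x_cg, info = spla.cg(A, b, M=M)

rel_diff = float(np.max(np.abs(x_cg - x_direct)) / np.max(np.abs(x_direct)))
print("CG converged:", info == 0, "| relative difference vs. direct solve:", rel_diff)
```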

  12. Computational Studies of Strongly Correlated Quantum Matter

    NASA Astrophysics Data System (ADS)

    Shi, Hao

    The study of strongly correlated quantum many-body systems is an outstanding challenge. Highly accurate results are needed for the understanding of practical and fundamental problems in condensed-matter physics, high energy physics, material science, quantum chemistry and so on. Our familiar mean-field or perturbative methods tend to be ineffective. Numerical simulations provide a promising approach for studying such systems. The fundamental difficulty of numerical simulation is that the dimension of the Hilbert space needed to describe interacting systems increases exponentially with the system size. Quantum Monte Carlo (QMC) methods are one of the best approaches to tackle the problem of enormous Hilbert space. They have been highly successful for boson systems and unfrustrated spin models. For systems with fermions, the exchange symmetry in general causes the infamous sign problem, making the statistical noise in the computed results grow exponentially with the system size. This hinders our understanding of interesting physics such as high-temperature superconductivity, metal-insulator phase transition. In this thesis, we present a variety of new developments in the auxiliary-field quantum Monte Carlo (AFQMC) methods, including the incorporation of symmetry in both the trial wave function and the projector, developing the constraint release method, using the force-bias to drastically improve the efficiency in Metropolis framework, identifying and solving the infinite variance problem, and sampling Hartree-Fock-Bogoliubov wave function. With these developments, some of the most challenging many-electron problems are now under control. We obtain an exact numerical solution of two-dimensional strongly interacting Fermi atomic gas, determine the ground state properties of the 2D Fermi gas with Rashba spin-orbit coupling, provide benchmark results for the ground state of the two-dimensional Hubbard model, and establish that the Hubbard model has a stripe order in the underdoped region.

  13. On the relationship between parallel computation and graph embedding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, A.K.

    1989-01-01

    The problem of efficiently simulating an algorithm designed for an n-processor parallel machine G on an m-processor parallel machine H with n > m arises when parallel algorithms designed for an ideal size machine are simulated on existing machines which are of a fixed size. The author studies this problem when every processor of H takes over the function of a number of processors in G, and he phrases the simulation problem as a graph embedding problem. New embeddings presented address relevant issues arising from the parallel computation environment. The main focus centers around embedding complete binary trees into smaller-sized binary trees, butterflies, and hypercubes. He also considers simultaneous embeddings of r source machines into a single hypercube. Constant factors play a crucial role in his embeddings since they are not only important in practice but also lead to interesting theoretical problems. All of his embeddings minimize dilation and load, which are the conventional cost measures in graph embeddings and determine the maximum amount of time required to simulate one step of G on H. His embeddings also optimize a new cost measure called (α,β)-utilization which characterizes how evenly the processors of H are used by the processors of G. Ideally, the utilization should be balanced (i.e., every processor of H simulates at most (n/m) processors of G) and the (α,β)-utilization measures how far off from a balanced utilization the embedding is. He presents embeddings for the situation when some processors of G have different capabilities (e.g. memory or I/O) than others and the processors with different capabilities are to be distributed uniformly among the processors of H. Placing such conditions on an embedding results in an increase in some of the cost measures.

  14. Simultaneous personnel and vehicle shift scheduling in the waste management sector.

    PubMed

    Ghiani, Gianpaolo; Guerriero, Emanuela; Manni, Andrea; Manni, Emanuele; Potenza, Agostino

    2013-07-01

    Urban waste management is becoming an increasingly complex task, absorbing a huge amount of resources, and having a major environmental impact. The design of a waste management system consists of various activities, one of which is the definition of shift schedules for both personnel and vehicles. This activity has a large impact on the tactical and operational costs of companies. In this paper, we propose an integer programming model to find an optimal solution to the integrated problem. The aim is to determine optimal schedules at minimum cost. Moreover, we design a fast and effective heuristic to face large-size problems. Both approaches are tested on data from a real-world case in Southern Italy and compared to the current practice utilized by the company managing the service, showing that simultaneously solving these problems can lead to significant monetary savings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Confronting Practical Problems for Initiation of On-line Hemodiafiltration Therapy.

    PubMed

    Kim, Yang Wook; Park, Sihyung

    2016-06-01

    Conventional hemodialysis, which is based on the diffusive transport of solutes, is the most widely used renal replacement therapy. It effectively removes small solutes such as urea and corrects fluid, electrolyte and acid-base imbalance. However, solute diffusion coefficients decrease rapidly as molecular size increases. Because of this, middle and large molecules are not removed effectively, and clinical problems such as dialysis amyloidosis might occur. Online hemodiafiltration, which combines diffusive and convective therapies, can overcome such problems by effectively removing middle and large solutes. Online hemodiafiltration is safe, very effective, economically affordable, improves session tolerance, and may improve mortality compared with high-flux hemodialysis. However, there might be some potential limitations to setting up online hemodiafiltration. In this article, we review the uremic toxins associated with dialysis, the definition of hemodiafiltration, the indications for and prescription of hemodiafiltration, and the limitations of setting up hemodiafiltration.

  16. A comparison of human performance in figural and navigational versions of the traveling salesman problem.

    PubMed

    Blaser, R E; Wilber, Julie

    2013-11-01

    Performance on a typical pen-and-paper (figural) version of the Traveling Salesman Problem was compared to performance on a room-sized navigational version of the same task. Nine configurations were designed to examine the use of the nearest-neighbor (NN), cluster approach, and convex-hull strategies. Performance decreased with an increasing number of nodes internal to the hull, and improved when the NN strategy produced the optimal path. There was no overall difference in performance between figural and navigational task modalities. However, there was an interaction between modality and configuration, with evidence that participants relied more heavily on the NN strategy in the figural condition. Our results suggest that participants employed similar, but not identical, strategies when solving figural and navigational versions of the problem. Surprisingly, there was no evidence that participants favored global strategies in the figural version and local strategies in the navigational version.
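
    The nearest-neighbor construction strategy mentioned above is easy to state in code; the sketch below (made-up coordinates, not the experimental software) builds a tour by always visiting the closest unvisited node:

    # Minimal nearest-neighbor tour construction (illustrative only).
    import math

    def nearest_neighbor_tour(points, start=0):
        unvisited = set(range(len(points)))
        tour = [start]
        unvisited.remove(start)
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    nodes = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]
    tour = nearest_neighbor_tour(nodes)
    length = sum(math.dist(nodes[tour[i]], nodes[tour[(i + 1) % len(tour)]])
                 for i in range(len(tour)))
    print(tour, round(length, 2))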

  17. Dual-Use Partnership Addresses Performance Problems with "Y" Pattern Control Valves

    NASA Technical Reports Server (NTRS)

    Bailey, John W.

    2004-01-01

    A Dual-Use Cooperative Agreement between the Propulsion Test Directorate (PTD) at Stennis Space Center (SSC) and Oceaneering Reflange, Inc. of Houston, TX has produced an improved 'Y' pattern split-body control valve for use in the propulsion test facilities at Stennis Space Center. The split-body, or clamped bonnet technology, provides for a 'cleaner' valve design featuring enhanced performance and increased flow capacity with extended life expectancy. Other points addressed by the partnership include size, weight and costs. Overall size and weight of each valve will be reduced by 50% compared to valves currently in use at SSC. An initial procurement of two 10 inch valves will result in an overall cost reduction of 15% or approximately $50,000 per valve.

  18. Biophysical mechanisms of modification of skin optical properties in the UV wavelength range with nanoparticles

    NASA Astrophysics Data System (ADS)

    Popov, A. P.; Priezzhev, A. V.; Lademann, J.; Myllylä, R.

    2009-05-01

    In this paper, by means of the Mie theory and Monte Carlo simulations, we investigate the modification of the optical properties of the superficial layer of human skin (stratum corneum) for 310- and 400-nm ultraviolet (UV) radiation by embedding 35-200-nm-sized particles of titanium dioxide (TiO2) and silicon (Si). The problem of skin protection against UV light is of major importance due to the increased frequency of skin cancer provoked by excessive doses of received UV radiation. For 310-nm light, the optimal sizes of the TiO2 and Si particles are found to be 62 and 55 nm, respectively, and for 400-nm radiation, 122 and 70 nm, respectively.

  19. Time Dependence of Aerosol Light Scattering Downwind of Forest Fires

    NASA Astrophysics Data System (ADS)

    Kleinman, L. I.; Sedlacek, A. J., III; Wang, J.; Lewis, E. R.; Springston, S. R.; Chand, D.; Shilling, J.; Arnott, W. P.; Freedman, A.; Onasch, T. B.; Fortner, E.; Zhang, Q.; Yokelson, R. J.; Adachi, K.; Buseck, P. R.

    2017-12-01

    In the first phase of BBOP (Biomass Burn Observation Project), a Department of Energy (DOE) sponsored study, wildland fires in the Pacific Northwest were sampled from the G-1 aircraft via sequences of transects that encountered emission whose age (time since emission) ranged from approximately 15 minutes to four hours. Comparisons between transects allowed us to determine the near-field time evolution of trace gases, aerosol particles, and optical properties. The fractional increase in aerosol concentration with plume age was typically less than a third of the fractional increase in light scattering. In some fires the increase in light scattering exceeded a factor of two. Two possible causes for the discrepancy between scattering and aerosol mass are i) the downwind formation of refractory tar balls that are not detected by the AMS and therefore contribute to scattering but not to aerosol mass and ii) changes to the aerosol size distribution. Both possibilities are considered. Our information on tar balls comes from an analysis of TEM grids. A direct determination of size changes is complicated by extremely high aerosol number concentrations that caused coincidence problems for the PCASP and UHSAS probes. We instead construct a set of plausible log normal size distributions and for each member of the set do Mie calculations to determine mass scattering efficiency (MSE), angstrom exponents, and backscatter ratios. Best fit size distributions are selected by comparison with observed data derived from multi-wavelength scattering measurements, an extrapolated FIMS size distribution, and mass measurements from an SP-AMS. MSE at 550 nm varies from a typical near source value of 2-3 to about 4 in aged air.

  20. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show the decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
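
    The parallelization idea alone, divorced from the LTM formulation, can be sketched as follows: once dual decomposition has split the flow problem into independent subproblems, they can be solved concurrently (the subproblem below is a toy stand-in, not the paper's model):

    # Concurrent solution of independent decomposed subproblems (illustrative).
    from concurrent.futures import ProcessPoolExecutor

    def solve_subproblem(args):
        route_id, dual_price = args
        # Toy surrogate: pick the integer delay 0..10 minimizing a priced cost.
        best = min(range(11), key=lambda d: (d - dual_price) ** 2 + d)
        return route_id, best

    if __name__ == "__main__":
        subproblems = [(i, 0.5 * i) for i in range(100)]   # (route id, dual price)
        with ProcessPoolExecutor() as pool:
            results = dict(pool.map(solve_subproblem, subproblems))
        print(results[0], results[99])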

  1. Future orientation, school contexts, and problem behaviors: a multilevel study.

    PubMed

    Chen, Pan; Vazsonyi, Alexander T

    2013-01-01

    The association between future orientation and problem behaviors has received extensive empirical attention; however, previous work has not considered school contextual influences on this link. Using a sample of N = 9,163 9th to 12th graders (51.0 % females) from N = 85 high schools of the National Longitudinal Study of Adolescent Health, the present study examined the independent and interactive effects of adolescent future orientation and school contexts (school size, school location, school SES, school future orientation climate) on problem behaviors. Results provided evidence that adolescent future orientation was associated independently and negatively with problem behaviors. In addition, adolescents from large-size schools reported higher levels of problem behaviors than their age mates from small-size schools, controlling for individual-level covariates. Furthermore, an interaction effect between adolescent future orientation and school future orientation climate was found, suggesting influences of school future orientation climate on the link between adolescent future orientation and problem behaviors as well as variations in effects of school future orientation climate across different levels of adolescent future orientation. Specifically, the negative association between adolescent future orientation and problem behaviors was stronger at schools with a more positive climate of future orientation, whereas school future orientation climate had a significant and unexpectedly positive relationship with problem behaviors for adolescents with low levels of future orientation. Findings implicate the importance of comparing how the future orientation-problem behaviors link varies across different ecological contexts and the need to understand influences of school climate on problem behaviors in light of differences in psychological processes among adolescents.

  2. Reflecting anastigmatic optical systems: a retrospective

    NASA Astrophysics Data System (ADS)

    Rakich, Andrew

    2017-11-01

    Reflecting anastigmatic optical systems hold several inherent advantages over refracting equivalents; such as compactness, absence of color, high "refractive efficiency", wide bandwidth, and size-scalability to enormous apertures. Such advantages have led to these systems becoming, increasingly since their first deliberate development in 1905, the "go-to" solution for various classes of optical design problem. This paper describes in broad terms the history of the development of this class of optical system, with an emphasis on the early history.

  3. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  4. Drought Management, Service Interruption, and Water Pricing: Evidence From Hong Kong

    NASA Astrophysics Data System (ADS)

    Woo, Chi-Keung

    1992-10-01

    Supply shortage is a common problem faced by an urban water supply system. Nonmarket programs are often used to reduce consumption. Using monthly water consumption data collected for Hong Kong for the period 1973-1984, we estimate the effect of service interruption on per capita consumption. The findings show that this effect is statistically significant but relatively small in size. A price increase of 16-35% could have produced the same amount of consumption reduction.

  5. The epidemiology of obesity: the size of the problem.

    PubMed

    James, W P T

    2008-04-01

    The epidemic of obesity took off from about 1980 and in almost all countries has been rising inexorably ever since. Only in 1997 did WHO accept that this was a major public health problem and, even then, there was no accepted method for monitoring the problem in children. It was soon evident, however, that the optimum population body mass index is about 21, and this is particularly true in Asia and Latin America, where the populations are very prone to developing abdominal obesity, type 2 diabetes and hypertension. These features are now being increasingly linked to epigenetic programming of gene expression and body composition in utero and early childhood, both in terms of fat/lean tissue ratios and also in terms of organ size and metabolic pathway regulation. New Indian evidence suggests that insulin resistance at birth seems linked to low birth weight and a higher proportion of body fat, with selective B12 deficiency and abnormalities of one-carbon pool metabolism potentially responsible and affecting 75% of Indians and many populations in the developing world. Biologically there are also adaptive mechanisms which limit weight loss after weight gain and thereby in part account for the continuing epidemic despite the widespread desire to slim. Logically, the burden of disease induced by inappropriate diets and widespread physical inactivity can be addressed by increasing physical activity (PA), but simply advocating more leisure-time activity is unrealistic. Substantial changes in urban planning and diet are needed to counter the removal of any everyday need for PA and the decades of misdirected food policies which, together with free-market forces, have induced our current 'toxic environment'. Counteracting this requires unusual policy initiatives.

  6. Development, primacy, and systems of cities.

    PubMed

    El-shakhs, S

    1972-10-01

    The relationship between the evolutionary changes in the city size distribution of nationally defined urban systems and the process of socioeconomic development is examined. Attention is directed to the problems of defining and measuring changes in city size distributions, using the results to test empirically the relationship of such changes to the development process. Existing theoretical structures and empirical generalizations which have tried to explain or to describe, respectively, the hierarchical relationships of cities are represented by central place theory and rank-size relationships. The problem is not that deviations exist but that an adequate definition is lacking of urban systems on the one hand, and of a universal measure of city size distribution, which could be applied to any system irrespective of its level of development, on the other. The problem of measuring changes in city size distributions is further compounded by the lack of sufficient reliable information about different systems of cities for the purposes of empirical comparative analysis. Changes in city size distributions have thus far been viewed largely within the framework of classic equilibrium theory. A more differentiated continuum of the development process should replace the bipolar underdeveloped-developed continuum in relating changes in city size distribution with development. Implicit in this distinction is the view that processes which influence spatial organization during the early formative stages of development are inherently different from those operating during the more advanced stages. 2 approaches were used to examine the relationship between national levels of development and primacy: a comparative analysis of a large number of countries at a given point in time; and a historical analysis of a limited sample of 2 advanced countries, the US and Great Britain. The 75 countries included in this study cover a wide range of characteristics. The study found a significant association between the degree of primacy of distributions of cities and their socioeconomic level of development; and the form of the primacy curve (or its evolution with development) seemed to follow a consistent pattern in which the peak of primacy obtained during the stages of socioeconomic transition, with countries being less primate in either direction from that peak. This pattern is the result of 2 reverse influences of the development process on the spatial structure of countries--centralization and concentration beginning with the rise of cities, and a decentralization and spread effect accompanying the increasing influence and importance of the periphery and structural changes in the pattern of authority.

  7. A new estimator of the discovery probability.

    PubMed

    Favaro, Stefano; Lijoi, Antonio; Prünster, Igor

    2012-12-01

    Species sampling problems have a long history in ecological and biological studies and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, are to be addressed. Such inferential problems have recently emerged also in genomic applications, however, exhibiting some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes) and only a small portion of the library has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species, as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
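
    The quantity being estimated can be made concrete with a much simpler, purely empirical stand-in, a Good-Turing-style frequency-of-frequencies calculation (this is not the authors' Bayesian nonparametric estimator):

    # Good-Turing-style illustration of the "discovery probability" on a toy
    # sample of species labels (illustrative substitute, not the paper's method).
    from collections import Counter

    sample = list("AAABBCDDDDEFFG")          # toy basic sample of n observations
    n = len(sample)
    freq = Counter(sample)                    # species -> frequency
    freq_of_freq = Counter(freq.values())     # r -> number of species seen r times

    # Good-Turing: the probability that the next observation belongs to a species
    # seen r times is roughly (r + 1) * N_{r+1} / n; r = 0 gives the probability
    # of discovering a new species.
    def prob_next_has_frequency(r):
        return (r + 1) * freq_of_freq.get(r + 1, 0) / n

    print("P(next observation is a new species) ~", round(prob_next_has_frequency(0), 3))
    print("P(next observation was seen once)    ~", round(prob_next_has_frequency(1), 3))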

  8. Rare events in stochastic populations under bursty reproduction

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Assaf, Michael

    2016-11-01

    Recently, a first step was made by the authors towards a systematic investigation of the effect of reaction-step-size noise—uncertainty in the step size of the reaction—on the dynamics of stochastic populations. This was done by investigating the effect of bursty influx on the switching dynamics of stochastic populations. Here we extend this formalism to account for bursty reproduction processes, and improve the accuracy of the formalism to include subleading-order corrections. Bursty reproduction appears in various contexts, where notable examples include bursty viral production from infected cells, and reproduction of mammals involving varying number of offspring. The main question we quantitatively address is how bursty reproduction affects the overall fate of the population. We consider two complementary scenarios: population extinction and population survival; in the former a population gets extinct after maintaining a long-lived metastable state, whereas in the latter a population proliferates despite undergoing a deterministic drift towards extinction. In both models reproduction occurs in bursts, sampled from an arbitrary distribution. Using the WKB approach, we show in the extinction problem that bursty reproduction broadens the quasi-stationary distribution of population sizes in the metastable state, which results in a drastic reduction of the mean time to extinction compared to the non-bursty case. In the survival problem, it is shown that bursty reproduction drastically increases the survival probability of the population. Close to the bifurcation limit our analytical results simplify considerably and are shown to depend solely on the mean and variance of the burst-size distribution. Our formalism is demonstrated on several realistic distributions which all compare well with numerical Monte-Carlo simulations.
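
    The question of how burstiness affects extinction can also be probed by brute-force simulation; the sketch below (not the paper's WKB analysis; all rates and the geometric burst distribution are illustrative choices) estimates the mean time to extinction of a small metastable population with bursty reproduction:

    # Gillespie-style Monte Carlo of a bursty birth-death process (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)

    def time_to_extinction(n0=2, birth=0.6, p_burst=0.5, carrying=10.0):
        t, n = 0.0, n0
        while n > 0:
            birth_rate = birth * n                    # bursty reproduction events
            death_rate = n * (1.0 + n / carrying)     # density-dependent deaths
            total = birth_rate + death_rate
            t += rng.exponential(1.0 / total)         # waiting time to next event
            if rng.random() < birth_rate / total:
                n += rng.geometric(p_burst)           # burst of offspring, mean 1/p
            else:
                n -= 1
        return t

    times = [time_to_extinction() for _ in range(500)]
    print("mean time to extinction ~", round(float(np.mean(times)), 2))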

  9. Design optimization of steel frames using an enhanced firefly algorithm

    NASA Astrophysics Data System (ADS)

    Carbas, Serdar

    2016-12-01

    Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of being caught up in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with standard FFA as well as with particle swarm and cuckoo search algorithms.
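
    For reference, the standard firefly algorithm that the paper enhances can be sketched compactly for a continuous test function (the discrete steel-frame formulation and the enhanced attractiveness/randomness expressions are not reproduced here):

    # Compact standard firefly algorithm on a continuous test function.
    import numpy as np

    def firefly_minimize(f, dim=2, n_fireflies=20, iters=200,
                         beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, size=(n_fireflies, dim))
        fitness = np.array([f(xi) for xi in x])
        for _ in range(iters):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if fitness[j] < fitness[i]:          # move i toward brighter j
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                        fitness[i] = f(x[i])
            alpha *= 0.97                                # gradually reduce randomness
        best = int(np.argmin(fitness))
        return x[best], fitness[best]

    sphere = lambda v: float(np.sum(v ** 2))
    print(firefly_minimize(sphere))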

  10. Does a pre-intervention functional assessment increase intervention effectiveness? A meta-analysis of within-subject interrupted time-series studies.

    PubMed

    Hurl, Kylee; Wightman, Jade; Haynes, Stephen N; Virues-Ortega, Javier

    2016-07-01

    This study examined the relative effectiveness of interventions based on a pre-intervention functional behavioral assessment (FBA), compared to interventions not based on a pre-intervention FBA. We examined 19 studies that included a direct comparison between the effects of FBA- and non-FBA-based interventions with the same participants. A random effects meta-analysis of effect sizes indicated that FBA-based interventions were associated with large reductions in problem behaviors when using non-FBA-based interventions as a reference intervention (Effect size=0.85, 95% CI [0.42, 1.27], p<0.001). In addition, non-FBA based interventions had no effect on problem behavior when compared to no intervention (0.06, 95% CI [-0.21, 0.33], p=0.664). Interestingly, both FBA-based and non-FBA-based interventions had significant effects on appropriate behavior relative to no intervention, albeit the overall effect size was much larger for FBA-based interventions (FBA-based: 1.27, 95% CI [0.89, 1.66], p<0.001 vs. non-FBA-based: 0.35, 95% CI [0.14, 0.56], p=0.001). In spite of the evidence in favor of FBA-based interventions, the limited number of comparative studies with high methodological standards underlines the need for further comparisons of FBA-based versus non-FBA-based interventions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Sensitivity of LES results from turbine rim seals to changes in grid resolution and sector size

    NASA Astrophysics Data System (ADS)

    O'Mahoney, T.; Hills, N.; Chew, J.

    2012-07-01

    Large-Eddy Simulations (LES) were carried out for a turbine rim seal and the sensitivity of the results to changes in grid resolution and the size of the computational domain are investigated. Ingestion of hot annulus gas into the rotor-stator cavity is compared between LES results and against experiments and Unsteady Reynolds-Averaged Navier-Stokes (URANS) calculations. The LES calculations show greater ingestion than the URANS calculation and show better agreement with experiments. Increased grid resolution shows a small improvement in ingestion predictions whereas increasing the sector model size has little effect on the results. The contrast between the different CFD models is most stark in the inner cavity, where the URANS shows almost no ingestion. Particular attention is also paid to the presence of low frequency oscillations in the disc cavity. URANS calculations show such low frequency oscillations at different frequencies than the LES. The oscillations also take a very long time to develop in the LES. The results show that the difficult problem of estimating ingestion through rim seals could be overcome by using LES but that the computational requirements were still restrictive.

  12. Reaching extended length-scales with accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Hubartt, Bradley; Shim, Yunsic; Amar, Jacques

    2012-02-01

    While temperature-accelerated dynamics (TAD) has been quite successful in extending the time-scales for non-equilibrium simulations of small systems, the computational time increases rapidly with system size. One possible solution to this problem, which we refer to as parTAD [1], is to use spatial decomposition combined with our previously developed semi-rigorous synchronous sublattice algorithm [2]. However, while such an approach leads to significantly better scaling as a function of system-size, it also artificially limits the size of activated events and is not completely rigorous. Here we discuss progress we have made in developing an alternative approach in which localized saddle-point searches are combined with parallel GPU-based molecular dynamics in order to improve the scaling behavior. By using this method, along with the use of an adaptive method to determine the optimal high temperature [3], we have been able to significantly increase the range of time- and length-scales over which accelerated dynamics simulations may be carried out. [1] Y. Shim et al, Phys. Rev. B 76, 205439 (2007); ibid, Phys. Rev. Lett. 101, 116101 (2008). [2] Y. Shim and J.G. Amar, Phys. Rev. B 71, 125432 (2005). [3] Y. Shim and J.G. Amar, J. Chem. Phys. 134, 054127 (2011).

  13. Variable Step-Size Selection Methods for Implicit Integration Schemes

    DTIC Science & Technology

    2005-10-01

    Excerpted fragments from the report: the variable step-size selection method is explored for two problems, the Lotka-Volterra model and the Kepler problem. For the Lotka-Volterra example, a simple predator-prey system is considered, along with the following variation of the Lotka-Volterra problem: u̇ = u²v(v − 2), v̇ = v²u(1 − u), written compactly as (u̇, v̇) = f(u, v), for t ∈ [0, 50].
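
    As a purely illustrative companion (the report itself studies step-size selection for implicit schemes), the same system can be integrated with an off-the-shelf adaptive explicit solver to see how the accepted step sizes vary along the trajectory; the initial condition below is an arbitrary choice near the equilibrium (1, 2):

    # Adaptive (variable step-size) integration of the Lotka-Volterra variant.
    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, y):
        u, v = y
        return [u**2 * v * (v - 2.0), v**2 * u * (1.0 - u)]

    sol = solve_ivp(f, (0.0, 50.0), [1.1, 2.1], rtol=1e-8, atol=1e-10)
    steps = np.diff(sol.t)   # accepted step sizes chosen by the solver
    print(f"{len(steps)} accepted steps, min h = {steps.min():.2e}, max h = {steps.max():.2e}")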

  14. The Migration Behavior of College Students in Siberia

    ERIC Educational Resources Information Center

    Gorbacheva, E. A.

    2008-01-01

    Given the conditions of the aging of the population of Russia there has been a steady decline in the size of the population, and starting in 2006 that includes a decline in the size of the working-age population. This is a very serious problem in regard to the social and economic development of the country, and the ways to solve the problem will…

  15. Designing Adaptive Instructional Environments: Insights from Empirical Evidence

    DTIC Science & Technology

    2011-10-01

    Excerpted fragments from the report: Cohen's f effect size for pretest-to-posttest gain, averaged across different problems, was 0.46. Adaptation was based on ability. Measures of learning included a 26-item multiple-choice pretest and posttest, with effect sizes computed on posttest scores; for a study on solving algebraic equations, learning was measured with a pretest and posttest using a rapid diagnostic testing procedure.

  16. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that arises in many settings nowadays. The challenge of solving it increases with the complexity of preferences, the existence of real-world constraints, and the problem size. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because such an approach can return a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In this setting, it is essential for law graduates to serve in chambers before they are qualified to become legal counsel. Thus, assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this paper presents a preliminary study of the proposed assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. This study employed a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm to further improve solution quality. Analysis of the results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving project assignment problems with metaheuristic techniques.
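
    A toy version of the greedy-then-anneal scheme described above (random effort data, a makespan-style proxy objective, and none of the study's real preferences or constraints) can be sketched as follows:

    # Greedy construction followed by simulated annealing for a toy assignment.
    import math
    import random

    random.seed(1)
    n_students, n_cases = 5, 15
    effort = [[random.randint(1, 9) for _ in range(n_cases)] for _ in range(n_students)]

    def makespan(assign):
        # Completion time of the busiest student (proxy objective only).
        loads = [0] * n_students
        for case, student in enumerate(assign):
            loads[student] += effort[student][case]
        return max(loads)

    # Greedy initial solution: give each case to the currently least-loaded student.
    assign, loads = [], [0] * n_students
    for case in range(n_cases):
        s = min(range(n_students), key=lambda i: loads[i] + effort[i][case])
        assign.append(s)
        loads[s] += effort[s][case]

    current, current_cost, temp = list(assign), makespan(assign), 10.0
    while temp > 0.01:
        cand = list(current)
        cand[random.randrange(n_cases)] = random.randrange(n_students)   # random move
        delta = makespan(cand) - current_cost
        if delta < 0 or random.random() < math.exp(-delta / temp):       # Metropolis rule
            current, current_cost = cand, current_cost + delta
        temp *= 0.995                                                     # geometric cooling
    print("greedy makespan:", makespan(assign), "-> annealed makespan:", current_cost)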

  17. Hypothesis testing of scientific Monte Carlo calculations.

    PubMed

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
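
    The core idea can be illustrated on a toy estimator: treat the Monte Carlo result as a statistical quantity and test it against the known answer rather than comparing to a fixed tolerance (here the estimated quantity is π/4):

    # Statistical test of a Monte Carlo estimate against its known value.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 100_000
    hits = (rng.random(n) ** 2 + rng.random(n) ** 2 < 1.0)   # Bernoulli samples

    estimate = hits.mean()
    stderr = hits.std(ddof=1) / np.sqrt(n)
    z = (estimate - np.pi / 4) / stderr                       # null: estimator is unbiased
    p_value = 2 * stats.norm.sf(abs(z))
    print(f"estimate={estimate:.5f}, z={z:.2f}, p={p_value:.3f}")
    # Consistently tiny p-values over repeated runs would flag a bug
    # (e.g., a biased sampler), not just statistical fluctuation.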

  18. Application of a distributed network in computational fluid dynamic simulations

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish

    1994-01-01

    A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.

  19. Hypothesis testing of scientific Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.

  20. From sticky-hard-sphere to Lennard-Jones-type clusters

    NASA Astrophysics Data System (ADS)

    Trombach, Lukas; Hoy, Robert S.; Wales, David J.; Schwerdtfeger, Peter

    2018-04-01

    A relation M_SHS→LJ between the set of nonisomorphic sticky-hard-sphere clusters M_SHS and the sets of local energy minima M_LJ of the (m,n)-Lennard-Jones potential V_mn^LJ(r) = ε/(n−m) [m r^(−n) − n r^(−m)] is established. The number of nonisomorphic stable clusters depends strongly and nontrivially on both m and n and increases exponentially with increasing cluster size N for N ≳ 10. While the map from M_SHS → M_SHS→LJ is noninjective and nonsurjective, the number of Lennard-Jones structures missing from the map is relatively small for cluster sizes up to N = 13, and most of the missing structures correspond to energetically unfavorable minima even for fairly low (m,n). Furthermore, even the softest Lennard-Jones potential predicts that the coordination of 13 spheres around a central sphere is problematic (the Gregory-Newton problem). A more realistic extended Lennard-Jones potential chosen from coupled-cluster calculations for a rare gas dimer leads to a substantial increase in the number of nonisomorphic clusters, even though the potential curve is very similar to a (6,12)-Lennard-Jones potential.
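
    The quoted (m,n)-Lennard-Jones potential is straightforward to transcribe; the snippet below also checks the familiar (6,12) special case, whose minimum value is -ε at r = 1:

    # Direct transcription of the (m, n)-Lennard-Jones potential quoted above.
    import numpy as np

    def lj_mn(r, m=6, n=12, eps=1.0):
        """V_mn^LJ(r) = eps/(n - m) * (m * r**(-n) - n * r**(-m))."""
        return eps / (n - m) * (m * r**(-float(n)) - n * r**(-float(m)))

    r = np.linspace(0.9, 2.5, 5)
    print(lj_mn(r))                    # (6,12) case sampled at a few radii
    print(lj_mn(np.array([1.0])))      # minimum of the (6,12) case: [-1.]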

  1. Driven Boson Sampling.

    PubMed

    Barkhofen, Sonja; Bartley, Tim J; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S; Jex, Igor; Silberhorn, Christine

    2017-01-13

    Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ∼e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.

  2. Theoretical Study of near Neutrality. II. Effect of Subdivided Population Structure with Local Extinction and Recolonization

    PubMed Central

    Ohta, T.

    1992-01-01

    There are several unsolved problems concerning the model of nearly neutral mutations. One is the interaction of subdivided population structure and weak selection that spatially fluctuates. The model of nearly neutral mutations whose selection coefficient spatially fluctuates has been studied by adopting the island model with periodic extinction-recolonization. Both the number of colonies and the migration rate play significant roles in determining mutants' behavior, and selection is ineffective when the extinction-recolonization is frequent with low migration rate. In summary, the number of mutant substitutions decreases and the polymorphism increases by increasing the total population size, and/or decreasing the extinction-recolonization rate. However, by increasing the total size of the population, the mutant substitution rate does not become as low when compared with that in panmictic populations, because of the extinction-recolonization, especially when the migration rate is limited. It is also found that the model satisfactorily explains the contrasting patterns of molecular polymorphisms observed in sibling species of Drosophila, including heterozygosity, proportion of polymorphism and fixation index. PMID:1582566

  3. From sticky-hard-sphere to Lennard-Jones-type clusters.

    PubMed

    Trombach, Lukas; Hoy, Robert S; Wales, David J; Schwerdtfeger, Peter

    2018-04-01

    A relation M_{SHS→LJ} between the set of nonisomorphic sticky-hard-sphere clusters M_{SHS} and the sets of local energy minima M_{LJ} of the (m,n)-Lennard-Jones potential V_{mn}^{LJ}(r)=ɛ/n-m[mr^{-n}-nr^{-m}] is established. The number of nonisomorphic stable clusters depends strongly and nontrivially on both m and n and increases exponentially with increasing cluster size N for N≳10. While the map from M_{SHS}→M_{SHS→LJ} is noninjective and nonsurjective, the number of Lennard-Jones structures missing from the map is relatively small for cluster sizes up to N=13, and most of the missing structures correspond to energetically unfavorable minima even for fairly low (m,n). Furthermore, even the softest Lennard-Jones potential predicts that the coordination of 13 spheres around a central sphere is problematic (the Gregory-Newton problem). A more realistic extended Lennard-Jones potential chosen from coupled-cluster calculations for a rare gas dimer leads to a substantial increase in the number of nonisomorphic clusters, even though the potential curve is very similar to a (6,12)-Lennard-Jones potential.

  4. "Whatever average is:" understanding African-American mothers' perceptions of infant weight, growth, and health.

    PubMed

    Thompson, Amanda L; Adair, Linda; Bentley, Margaret E

    2014-06-01

    Biomedical researchers have raised concerns that mothers' inability to recognize infant and toddler overweight poses a barrier to stemming increasing rates of overweight and obesity, particularly among low-income or minority mothers. Little anthropological research has examined the sociocultural, economic or structural factors shaping maternal perceptions of infant and toddler size or addressed biomedical depictions of maternal misperception as a "socio-cultural problem." We use qualitative and quantitative data from 237 low-income, African-American mothers to explore how they define 'normal' infant growth and infant overweight. Our quantitative results document that mothers' perceptions of infant size change with infant age, are sensitive to the size of other infants in the community, and are associated with concerns over health and appetite. Qualitative analysis documents that mothers are concerned with their children's weight status and assess size in relation to their infants' cues, local and societal norms of appropriate size, interactions with biomedicine, and concerns about infant health and sufficiency. These findings suggest that mothers use multiple models to interpret and respond to child weight. An anthropological focus on the complex social and structural factors shaping what is considered 'normal' and 'abnormal' infant weight is critical for shaping appropriate and successful interventions.

  5. Optimal Sizing and Placement of Battery Energy Storage in Distribution System Based on Solar Size for Voltage Regulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Peter

    2016-07-26

    This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and placement of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. The use of reactive power control alone to regulate voltage is not always an optimal solution, as the R/X ratio is large in distribution systems. In this paper the minimum size and the best placement of battery storage are obtained by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI) based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.

  6. A model for size- and rotation-invariant pattern processing in the visual system.

    PubMed

    Reitboeck, H J; Altmann, J

    1984-01-01

    The mapping of retinal space onto the striate cortex of some mammals can be approximated by a log-polar function. It has been proposed that this mapping is of functional importance for scale- and rotation-invariant pattern recognition in the visual system. An exact log-polar transform converts centered scaling and rotation into translations. A subsequent translation-invariant transform, such as the absolute value of the Fourier transform, thus generates overall size- and rotation-invariance. In our model, the translation-invariance is realized via the R-transform. This transform can be executed by simple neural networks, and it does not require the complex computations of the Fourier transform, used in Mellin-transform size-invariance models. The logarithmic space distortion and differentiation in the first processing stage of the model is realized via "Mexican hat" filters whose diameter increases linearly with eccentricity, similar to the characteristics of the receptive fields of retinal ganglion cells. Except for some special cases, the model can explain object recognition independent of size, orientation and position. Some general problems of Mellin-type size-invariance models-that also apply to our model-are discussed.
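
    The log-polar remapping on which the model is built can be sketched with plain numpy (the R-transform stage and the retina-like filtering are not reproduced): centred scaling and rotation of the input become translations along the two output axes:

    # Nearest-neighbour log-polar resampling of an image (illustrative only).
    import numpy as np

    def log_polar(image, n_rho=64, n_theta=64):
        h, w = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        max_r = min(cy, cx)
        rho = np.exp(np.linspace(0.0, np.log(max_r), n_rho))      # log-spaced radii
        theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        rr, tt = np.meshgrid(rho, theta, indexing="ij")
        ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
        return image[ys, xs]

    img = np.zeros((129, 129))
    img[40:90, 40:90] = 1.0                                         # toy pattern
    lp = log_polar(img)
    print(lp.shape)   # (64, 64): rows ~ log radius, columns ~ angle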

  7. Visual Analytics for Power Grid Contingency Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  8. Hamstring autograft size importance in anterior cruciate ligament repair surgery

    PubMed Central

    Figueroa, Francisco; Figueroa, David; Espregueira-Mendes, João

    2018-01-01

    Graft size in hamstring autograft anterior cruciate ligament (ACL) surgery is an important factor directly related to failure. Most of the evidence in the field suggests that the size of the graft in hamstring autograft ACL reconstruction matters when the surgeon is trying to avoid failures. The exact graft diameter needed to avoid failures is not absolutely clear and could depend on other factors, but newer studies suggest that even increases of 0.5 mm up to a graft size of 10 mm are beneficial for the patient. There is still no evidence to recommend the use of grafts > 10 mm. Several methods – e.g. folding the graft into more strands – that are simple and reproducible have been published lately to address the problem of having an insufficient graft size when performing an ACL reconstruction. Due to the evidence presented, we think it is necessary for the surgeon to have them in his or her arsenal before performing an ACL reconstruction. There are obviously other factors that should be considered, especially age. Therefore, a larger graft size should not be taken as the only goal in ACL reconstruction. Cite this article: EFORT Open Rev 2018;3:93-97. DOI: 10.1302/2058-5241.3.170038 PMID:29657850

  9. Prediction of anthropometric accommodation in aircraft cockpits

    NASA Astrophysics Data System (ADS)

    Zehner, Gregory Franklin

    Designing aircraft cockpits to accommodate the wide range of body sizes existing in the U.S. population has always been a difficult problem for Crewstation Engineers. The approach taken in the design of military aircraft has been to restrict the range of body sizes allowed into flight training, and then to develop standards and specifications to ensure that the majority of the pilots are accommodated. Accommodation in this instance is defined as the ability to: (1) Adequately see, reach, and actuate controls; (2) Have external visual fields so that the pilot can see to land, clear for other aircraft, and perform a wide variety of missions (ground support/attack or air to air combat); and (3) Finally, if problems arise, the pilot has to be able to escape safely. Each of these areas is directly affected by the body size of the pilot. Unfortunately, accommodation problems persist and may get worse. Currently the USAF is considering relaxing body size entrance requirements so that smaller and larger people could become pilots. This will make existing accommodation problems much worse. This dissertation describes a methodology for correcting this problem and demonstrates the method by predicting pilot fit and performance in the USAF T-38A aircraft based on anthropometric data. The methods described can be applied to a variety of design applications where fitting the human operator into a system is a major concern. A systematic approach is described which includes: defining the user population, setting functional requirements that operators must be able to perform, testing the ability of the user population to perform the functional requirements, and developing predictive equations for selecting future users of the system. Also described is a process for the development of new anthropometric design criteria and cockpit design methods that assure body size accommodation is improved in the future.

  10. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    PubMed

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2017-01-01

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level  leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.

  11. Nonlinear dynamics of contact interaction of a size-dependent plate supported by a size-dependent beam

    NASA Astrophysics Data System (ADS)

    Awrejcewicz, J.; Krysko, V. A.; Yakovleva, T. V.; Pavlov, S. P.; Krysko, V. A.

    2018-05-01

    A mathematical model of complex vibrations exhibited by contact dynamics of size-dependent beam-plate constructions was derived by taking the account of constraints between these structural members. The governing equations were yielded by variational principles based on the moment theory of elasticity. The centre of the investigated plate was supported by a beam. The plate and the beam satisfied the Kirchhoff/Euler-Bernoulli hypotheses. The derived partial differential equations (PDEs) were reduced to the Cauchy problems by the Faedo-Galerkin method in higher approximations, whereas the Cauchy problem was solved using a few Runge-Kutta methods. Reliability of results was validated by comparing the solutions obtained by qualitatively different methods. Complex vibrations were investigated with the help of methods of nonlinear dynamics such as vibration signals, phase portraits, Fourier power spectra, wavelet analysis, and estimation of the largest Lyapunov exponents based on the Rosenstein, Kantz, and Wolf methods. The effect of size-dependent parameters of the beam and plate on their contact interaction was investigated. It was detected and illustrated that the first contact between the size-dependent structural members implies chaotic vibrations. In addition, problems of chaotic synchronization between a nanoplate and a nanobeam were addressed.

  12. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by the small-sample-size problem, which commonly manifests as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model while encouraging sparsity and grouping; the sparsity addresses the small-sample-size problem through dimensionality reduction, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from a texture index increased the prediction accuracy, with an RMSE of 62.62 t/ha. This analysis proves the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
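
    A hedged sketch of a fused-lasso band selector on synthetic spectra is given below (a generic cvxpy formulation; the paper's penalty weights, preprocessing, and texture fusion are not reproduced). The l1 term encourages sparsity, while the total-variation term groups neighbouring wavelengths:

    # Fused lasso on synthetic hyperspectral-like data (illustrative only).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n_samples, n_bands = 30, 120                  # few samples, many bands
    X = rng.normal(size=(n_samples, n_bands))
    true_beta = np.zeros(n_bands)
    true_beta[40:50] = 0.8                        # one contiguous informative region
    y = X @ true_beta + rng.normal(scale=0.1, size=n_samples)

    beta = cp.Variable(n_bands)
    lam_sparse, lam_fuse = 0.5, 2.0
    objective = cp.Minimize(
        0.5 * cp.sum_squares(X @ beta - y)
        + lam_sparse * cp.norm1(beta)             # sparsity over bands
        + lam_fuse * cp.norm1(cp.diff(beta))      # fuse adjacent coefficients
    )
    cp.Problem(objective).solve()
    selected = np.flatnonzero(np.abs(beta.value) > 1e-3)
    print("number of selected bands:", selected.size)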

  13. Solution of Eshelby's inclusion problem with a bounded domain and Eshelby's tensor for a spherical inclusion in a finite spherical matrix based on a simplified strain gradient elasticity theory

    NASA Astrophysics Data System (ADS)

    Gao, X.-L.; Ma, H. M.

    2010-05-01

    A solution for Eshelby's inclusion problem of a finite homogeneous isotropic elastic body containing an inclusion prescribed with a uniform eigenstrain and a uniform eigenstrain gradient is derived in a general form using a simplified strain gradient elasticity theory (SSGET). An extended Betti's reciprocal theorem and an extended Somigliana's identity based on the SSGET are proposed and utilized to solve the finite-domain inclusion problem. The solution for the disturbed displacement field is expressed in terms of the Green's function for an infinite three-dimensional elastic body in the SSGET. It contains a volume integral term and a surface integral term. The former is the same as that for the infinite-domain inclusion problem based on the SSGET, while the latter represents the boundary effect. The solution reduces to that of the infinite-domain inclusion problem when the boundary effect is not considered. The problem of a spherical inclusion embedded concentrically in a finite spherical elastic body is analytically solved by applying the general solution, with the Eshelby tensor and its volume average obtained in closed forms. This Eshelby tensor depends on the position, inclusion size, matrix size, and material length scale parameter, and, as a result, can capture the inclusion size and boundary effects, unlike existing Eshelby tensors. It reduces to the classical Eshelby tensor for the spherical inclusion in an infinite matrix if both the strain gradient and boundary effects are suppressed. Numerical results quantitatively show that the inclusion size effect can be quite large when the inclusion is very small and that the boundary effect can dominate when the inclusion volume fraction is very high. However, the inclusion size effect is diminishing as the inclusion becomes large enough, and the boundary effect is vanishing as the inclusion volume fraction gets sufficiently low.

  14. Severe Pollution in China Amplified by Atmospheric Moisture.

    PubMed

    Tie, Xuexi; Huang, Ru-Jin; Cao, Junji; Zhang, Qiang; Cheng, Yafang; Su, Hang; Chang, Di; Pöschl, Ulrich; Hoffmann, Thorsten; Dusek, Uli; Li, Guohui; Worsnop, Douglas R; O'Dowd, Colin D

    2017-11-17

    In recent years, severe haze events often occurred in China, causing serious environmental problems. The mechanisms responsible for the haze formation, however, are still not well understood, hindering the forecast and mitigation of haze pollution. Our study of the 2012-13 winter haze events in Beijing shows that atmospheric water vapour plays a critical role in enhancing the heavy haze events. Under weak solar radiation and stagnant moist meteorological conditions in winter, air pollutants and water vapour accumulate in a shallow planetary boundary layer (PBL). A positive feedback cycle is triggered resulting in the formation of heavy haze: (1) the dispersal of water vapour is constrained by the shallow PBL, leading to an increase in relative humidity (RH); (2) the high RH induces an increase of aerosol particle size by enhanced hygroscopic growth and multiphase reactions to increase particle size and mass, which results in (3) further dimming and decrease of PBL height, and thus further depressing of aerosol and water vapour in a very shallow PBL. This positive feedback constitutes a self-amplification mechanism in which water vapour leads to a trapping and massive increase of particulate matter in the near-surface air to which people are exposed with severe health hazards.

  15. Tablet Velocity Measurement and Prediction in the Pharmaceutical Film Coating Process.

    PubMed

    Suzuki, Yasuhiro; Yokohama, Chihiro; Minami, Hidemi; Terada, Katsuhide

    2016-01-01

    The purpose of this study was to measure the tablet velocity in pan coating machines during the film coating process in order to understand the impact of the batch size (laboratory to commercial scale), coating machine type (DRIACOATER, HICOATER® and AQUA COATER®) and manufacturing conditions on tablet velocity. We used a high-speed camera and particle image velocimetry to measure the tablet velocity in the coating pans. It was observed that increasing batch sizes resulted in increased tablet velocities under the same rotation number because of the differences in circumferential rotation speeds. We also observed that an increase in the filling ratio of tablets tended to result in an increased tablet velocity for all coating machines. Statistical analysis of these measured values was used to construct a tablet velocity predictive equation employing the filling ratio and rotation speed as parameters. The correlation coefficients between predicted and experimental values were more than 0.959 for each machine. Using the predictive equation to determine tablet velocities, the manufacturing conditions of previous products were reviewed, and it was found that the tablet velocities of commercial scales, in which tablet chipping and breakage problems had occurred, were higher than those of pilot scales or laboratory scales.
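
    To illustrate the kind of predictive equation described above, the sketch below fits a linear model of tablet velocity against filling ratio and rotation speed by ordinary least squares. The data values, variable names, and model form are hypothetical placeholders, not the authors' measurements or published equation.

```python
import numpy as np

# Hypothetical measurements: filling ratio (%), pan rotation speed (rpm),
# and measured tablet velocity (mm/s) from PIV analysis.
fill_ratio = np.array([10, 10, 15, 15, 20, 20, 25, 25], dtype=float)
rotation   = np.array([ 8, 12,  8, 12,  8, 12,  8, 12], dtype=float)
velocity   = np.array([210, 300, 240, 345, 265, 380, 290, 410], dtype=float)

# Design matrix for a linear model v = b0 + b1*fill_ratio + b2*rotation.
X = np.column_stack([np.ones_like(fill_ratio), fill_ratio, rotation])
coef, *_ = np.linalg.lstsq(X, velocity, rcond=None)

predicted = X @ coef
r = np.corrcoef(predicted, velocity)[0, 1]
print("coefficients (b0, b1, b2):", coef)
print("correlation between predicted and measured:", round(r, 3))
```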

  16. Does Prostate Size Predict the Development of Incident Lower Urinary Tract Symptoms in Men with Mild to No Current Symptoms? Results from the REDUCE Trial.

    PubMed

    Simon, Ross M; Howard, Lauren E; Moreira, Daniel M; Roehrborn, Claus; Vidal, Adriana C; Castro-Santamaria, Ramiro; Freedland, Stephen J

    2016-05-01

    It has been shown that increased prostate size is a risk factor for lower urinary tract symptom (LUTS) progression in men who currently have LUTS presumed due to benign prostatic hyperplasia (BPH). To determine if prostate size is a risk factor for incident LUTS in men with mild to no symptoms. We conducted a post hoc analysis of the REDUCE study, which contained a substantial number of men (n=3090) with mild to no LUTS (International Prostate Symptom Score [IPSS] <8). Our primary outcome was determination of the effect of prostate size on incident LUTS presumed due to BPH defined as two consecutive IPSS values >14, or receiving any medical (α-blockers) or surgical treatment for BPH throughout the study course. To determine the risk of developing incident LUTS, we used univariable and multivariable Cox models, as well as Kaplan-Meier curves and the log-rank test. Among men treated with placebo during the REDUCE study, those with a prostate size of 40.1-80ml had a 67% higher risk (hazard ratio 1.67, 95% confidence interval 1.23-2.26, p=0.001) of developing incident LUTS compared to men with a prostate size 40.0ml or smaller. There was no association between prostate size and risk of incident LUTS in men treated with 0.5mg of dutasteride. The post hoc nature of our study design is a potential limitation. Men with mild to no LUTS but increased prostate size are at higher risk of incident LUTS presumed due to BPH. This association was negated by dutasteride treatment. Benign prostatic hyperplasia (BPH) is a very common problem among older men, which often manifests as lower urinary tract symptoms (LUTS), and can lead to potentially serious side effects. In our study we determined that men with mild to no current LUTS but increased prostate size are much more likely to develop LUTS presumed due to BPH in the future. This association was not seen in men treated with dutasteride, a drug approved for treatment of BPH. Our study reveals that men with a prostate size of 40.1-80ml are potential candidates for closer follow-up. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  17. Maternal depression in childhood and aggression in young adulthood: evidence for mediation by offspring amygdala-hippocampal volume ratio.

    PubMed

    Gilliam, Mary; Forbes, Erika E; Gianaros, Peter J; Erickson, Kirk I; Brennan, Lauretta M; Shaw, Daniel S

    2015-10-01

    There is abundant evidence that offspring of depressed mothers are at increased risk for persistent behavior problems related to emotion regulation, but the mechanisms by which offspring incur this risk are not entirely clear. Early adverse caregiving experiences have been associated with structural alterations in the amygdala and hippocampus, which parallel findings of cortical regions altered in adults with behavior problems related to emotion regulation. This study examined whether exposure to maternal depression during childhood might predict increased aggression and/or depression in early adulthood, and whether offspring amygdala:hippocampal volume ratio might mediate this relationship. Participants were 258 mothers and sons at socioeconomic risk for behavior problems. Sons' trajectories of exposure to maternal depression were generated from eight reports collected prospectively from offspring ages 18 months to 10 years. Offspring brain structure, aggression, and depression were assessed at age 20 (n = 170). Persistent, moderately high trajectories of maternal depression during childhood predicted increased aggression in adult offspring. In contrast, stable and very elevated trajectories of maternal depression during childhood predicted depression in adult offspring. Increased amygdala: hippocampal volume ratios at age 20 were significantly associated with concurrently increased aggression, but not depression, in adult offspring. Offspring amygdala: hippocampal volume ratio mediated the relationship found between trajectories of moderately elevated maternal depression during childhood and aggression in adult offspring. Alterations in the relative size of brain structures implicated in emotion regulation may be one mechanism by which offspring of depressed mothers incur increased risk for the development of aggression. © 2014 Association for Child and Adolescent Mental Health.

  18. Optical solver of combinatorial problems: nanotechnological approach.

    PubMed

    Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor

    2013-09-01

    We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising venue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential sized masks with exponential space complexity produced in polynomial time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscaled density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.

  19. Effects of PMTO in Foster Families with Children with Behavior Problems: A Randomized Controlled Trial.

    PubMed

    Maaskant, Anne M; van Rooij, Floor B; Overbeek, Geertjan J; Oort, Frans J; Arntz, Maureen; Hermanns, Jo M A

    2017-01-01

    The present randomized controlled trial examined the effectiveness of Parent Management Training Oregon for foster parents with foster children (aged 4-12) with severe externalizing behavior problems in long-term foster care arrangements. Foster children's behavior problems are challenging for foster parents and increase the risk of placement breakdown. There is little evidence for the effectiveness of established interventions to improve child and parent functioning in foster families. The goal of Parent Management Training Oregon, a relatively long and intensive (6-9 months, with weekly sessions) parent management training, is to reduce children's problem behavior through improvement of parenting practices. We specifically investigated whether Parent Management Training Oregon is effective to reduce foster parenting stress. A significant effect of Parent Management Training Oregon, compared to Care as Usual, was expected on reduced parenting stress, improved parenting practices, and reduced child behavior problems. Multi-informant (foster mothers, foster fathers, and teachers) data were used from 86 foster families (46 Parent Management Training Oregon, 40 Care as Usual) using a pre-posttest design. Multilevel analyses based on the intention-to-treat principle (retention rate 73%) showed that Parent Management Training Oregon, compared to Care as Usual, reduced general levels of parenting stress as well as child-related stress and parent-related stress (small to medium effect sizes). The clinical significance of this effect was, however, limited. Compared to a decrease in the Care as Usual group, Parent Management Training Oregon helped foster mothers to maintain parental warmth (small effect size). There were no other effects of Parent Management Training Oregon on self-reported parenting behaviors. Child behavior problems were reduced in both conditions, indicating no additive effects of Parent Management Training Oregon to Care as Usual on child functioning. The potential implication of reduced foster parenting stress for placement stability is discussed.

  20. Bathymetric patterns of body size: implications for deep-sea biodiversity

    NASA Astrophysics Data System (ADS)

    Rex, Michael A.; Etter, Ron J.

    1998-01-01

    The evolution of body size is a problem of fundamental interest, and one that has an important bearing on community structure and conservation of biodiversity. The most obvious and pervasive characteristic of the deep-sea benthos is the small size of most species. The numerous attempts to document and explain geographic patterns of body size in the deep-sea benthos have focused on variation among species or whole faunal components, and have led to conflicting and contradictory results. It is important to recognize that studying size as an adaptation to the deep-sea environment should include analyses within species using measures of size that are standardized to common growth stages. An analysis within eight species of deep-sea benthic gastropods presented here reveals a clear trend for size to increase with depth in both larval and adult shells. An ANCOVA with multiple comparison tests showed that, in general, size-depth relationships for both adult and larval shells are more pronounced in the bathyal region than in the abyss. This result reinforces the notion that steepness of the bathymetric selective gradient decreases with depth, and that the bathyal region is an evolutionary hotspot that promotes diversification. Bathymetric size clines in gastropods support neither the predictions of optimality models nor earlier arguments based on tradeoffs among scaling factors. As in other environments, body size is inversely related to both abundance and species density. We suggest that the decrease in nutrient input with depth may select for larger size because of its metabolic or competitive advantages, and that larger size plays a role in limiting diversity. Adaptation is an important evolutionary driving force of biological diversity, and geographic patterns of body size could help unify ecological and historical theories of deep-sea biodiversity.

  1. The Influence of Function, Topography, and Setting on Noncontingent Reinforcement Effect Sizes for Reduction in Problem Behavior: A Meta-Analysis of Single-Case Experimental Design Data

    ERIC Educational Resources Information Center

    Ritter, William A.; Barnard-Brak, Lucy; Richman, David M.; Grubb, Laura M.

    2018-01-01

    Richman et al. ("J Appl Behav Anal" 48:131-152, 2015) completed a meta-analytic analysis of single-case experimental design data on noncontingent reinforcement (NCR) for the treatment of problem behavior exhibited by individuals with developmental disabilities. Results showed that (1) NCR produced very large effect sizes for reduction in…

  2. M-Adapting Low Order Mimetic Finite Differences for Dielectric Interface Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGregor, Duncan A.; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-03-07

    We consider the problem of reducing numerical dispersion for an electromagnetic wave in a domain with two materials separated by a flat interface in 2D with a factor of two difference in wave speed. The computational mesh in the homogeneous parts of the domain away from the interface consists of square elements. Here the method construction is based on the m-adaptation construction in a homogeneous domain that leads to fourth-order numerical dispersion (vs. second order in the non-optimized method). The size of the elements in the two domains also differs by a factor of two, so as to preserve the same value of the Courant number in each. Near the interface where the two meshes merge, the mesh with larger elements consists of degenerate pentagons. We demonstrate that prior to m-adaptation the accuracy of the method falls from second to first order due to breaking of symmetry in the mesh. Next we develop an m-adaptation framework for the interface region and devise an optimization criterion. We prove that for the interface problem m-adaptation cannot produce an increase in method accuracy. This is in contrast to the homogeneous medium, where m-adaptation can increase accuracy by two orders.

  3. Grain size distribution of road-deposited sediment and its contribution to heavy metal pollution in urban runoff in Beijing, China.

    PubMed

    Zhao, Hongtao; Li, Xuyong; Wang, Xiaomei; Tian, Di

    2010-11-15

    Pollutant washoff from road-deposited sediment (RDS) is an increasing problem associated with the rapid urbanization of China that results in urban non-point source pollution. Here, we analyzed the RDS grain size distribution and its potential impact on heavy metal pollution in urban runoff from impervious surfaces of urban villages, colleges and residences, and main traffic roads in the Haidian District, Beijing, China. RDS with smaller grain size had a higher metal concentration. Specifically, particles with the smallest grain size (<44 μm) had the highest metal concentration in most areas (unit: mg/kg): Cd 0.28-1.31, Cr 57.9-154, Cu 68.1-142, Ni 25.8-78.0, Pb 73.1-222 and Zn 264-664. Particles with smaller grain size (<250 μm) contributed more than 80% of the total metal loads in RDS washoff, while suspended solids with a grain size <44 μm in runoff water accounted for greater than 70% of the metal mass in the total suspended solids (TSS). The heavy metal content in the TSS was 2.21-6.52% of that in the RDS. These findings will facilitate our understanding of the importance of RDS grain size distribution in heavy metal pollution caused by urban storm runoff. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Construction and test of flexible walls for the throat of the ILR high-speed wind tunnel

    NASA Technical Reports Server (NTRS)

    Igeta, Y.

    1983-01-01

    Aerodynamic tests in wind tunnels are jeopardized by the lateral limitations of the throat. This influence grows as the size of the model increases in proportion to the cross-section of the throat. Wall interference of this type can be avoided by giving the wall the form of a stream surface identical to the one observed during free flight. To solve this problem, flexible walls that can adapt to every contour of surface flow are needed.

  5. Scale-Up: Improving Large Enrollment Physics Courses

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    1999-11-01

    The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.

  6. Getting the current out

    NASA Astrophysics Data System (ADS)

    Burger, D. R.

    1983-11-01

    Progress of a photovoltaic (PV) device from a research concept to a competitive power-generation source requires an increasing concern with current collection. The initial metallization focus is usually on contact resistance, since a good ohmic contact is desirable for accurate device characterization measurements. As the device grows in size, sheet resistance losses become important and a metal grid is usually added to reduce the effective sheet resistance. Later, as size and conversion efficiency continue to increase, grid-line resistance and cell shadowing must be considered simultaneously, because grid-line resistance is inversely related to total grid-line area and cell shadowing is directly related. A PV cell grid design must consider the five power-loss phenomena mentioned above: sheet resistance, contact resistance, grid resistance, bus-bar resistance and cell shadowing. Although cost, reliability and usage are important factors in deciding upon the best metallization system, this paper will focus only upon grid-line design and substrate material problems for flat-plate solar arrays.

  7. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics in phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over those obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run time from numerical tests.

  8. Bright betatron X-ray radiation from a laser-driven-clustering gas target

    PubMed Central

    Chen, L. M.; Yan, W. C.; Li, D. Z.; Hu, Z. D.; Zhang, L.; Wang, W. M.; Hafz, N.; Mao, J. Y.; Huang, K.; Ma, Y.; Zhao, J. R.; Ma, J. L.; Li, Y. T.; Lu, X.; Sheng, Z. M.; Wei, Z. Y.; Gao, J.; Zhang, J.

    2013-01-01

    Hard X-ray sources from femtosecond (fs) laser-produced plasmas, including the betatron X-rays from laser wakefield-accelerated electrons, have compact sizes, fs pulse duration and fs pump-probe capability, making them promising for wide use in material and biological sciences. Currently the main problem with such betatron X-ray sources is the limited average flux, even with ultra-intense laser pulses. Here, we report that ultra-bright betatron X-rays can be generated using a clustering gas jet target irradiated with a small-size laser, where a ten-fold enhancement of the X-ray yield is achieved compared to the results obtained using a gas target. We suggest the increased X-ray photon yield is due to the existence of clusters in the gas, which results in increased total electron charge trapped for acceleration and larger wiggling amplitudes during the acceleration. This observation opens a route to produce high betatron average flux using small but high repetition rate laser facilities for applications. PMID:23715033

  9. Millimeter-wave/infrared rectenna development at Georgia Tech

    NASA Technical Reports Server (NTRS)

    Gouker, Mark A.

    1989-01-01

    The key design issues of the Millimeter Wave/Infrared (MMW/IR) monolithic rectenna have been resolved. The work at Georgia Tech in the last year has focused on increasing the power received by the physically small MMW rectennas in order to increase the rectification efficiency. The solution to this problem is to place a focusing element on the back side of the substrate. The size of the focusing element can be adjusted to help maintain the optimum input power density not only for different power densities called for in various mission scenarios, but also for the nonuniform power density profile of a narrow EM-beam.

  10. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for solving all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, it was concluded from this study that the reduction of an unsymmetric matrix to a Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases have indicated that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to a Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed method increases as the problem size increases.
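
    The reduction to Hessenberg form that the abstract parallelizes can be illustrated with a serial library routine. The sketch below is a minimal example using SciPy's hessenberg, not the authors' parallel-vector algorithm; the test matrix is arbitrary.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))        # arbitrary unsymmetric test matrix

# Reduce A to upper Hessenberg form H with A = Q H Q^T; the QR eigenvalue
# iteration is then applied to H instead of the full matrix.
H, Q = hessenberg(A, calc_q=True)

# The similarity transform preserves the eigenvalues.
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(H))))   # True
print(np.allclose(Q @ H @ Q.T, A))                          # True
```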

  11. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  12. Statistical theory and methodology for remote sensing data analysis

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1974-01-01

    A model is developed for the evaluation of acreages (proportions) of different crop types over a geographical area using a classification approach, and methods for estimating the crop acreages are given. In estimating the acreage of a specific crop type such as wheat, it is suggested to treat the problem as a two-crop problem: wheat vs. non-wheat, since this simplifies the estimation problem considerably. The error analysis and the sample size problem are investigated for the two-crop approach. Certain numerical results for sample sizes are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana and Burke County, North Dakota. Lastly, for a large-area crop acreage inventory, a sampling scheme is suggested for acquiring sample data, and the problems of crop acreage estimation and error analysis are discussed.
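
    As a worked example of the sample-size question for the two-crop (wheat vs. non-wheat) approach, the sketch below uses the standard normal-approximation formula for estimating a proportion to within a given half-width; the numbers are hypothetical and are not taken from the Hill County or Burke County analysis.

```python
import math

def sample_size_for_proportion(p_guess, margin, z=1.96):
    """Smallest n such that a proportion estimated near p_guess has a
    confidence half-width <= margin (normal approximation; z=1.96 ~ 95%)."""
    return math.ceil(z**2 * p_guess * (1.0 - p_guess) / margin**2)

# Hypothetical: wheat expected to cover ~30% of the area,
# desired half-width of 5 percentage points at 95% confidence.
print(sample_size_for_proportion(0.30, 0.05))   # 323
```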

  13. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
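
    A minimal sequential sketch of simulated annealing on a random 3-SAT instance is shown below; it illustrates the accept/reject move on the number of satisfied clauses, but not the Generalized Speculative Computation scheme or the AP1000 parallelization described in the report. Instance sizes and cooling parameters are hypothetical.

```python
import random, math

def sat_count(clauses, assignment):
    """Number of satisfied clauses; literal v>0 means x_v, v<0 means not x_v."""
    return sum(any((lit > 0) == assignment[abs(lit)] for lit in c) for c in clauses)

def anneal_sat(clauses, n_vars, t0=2.0, cooling=0.999, steps=20000, seed=1):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    best = cur = sat_count(clauses, assign)
    t = t0
    for _ in range(steps):
        v = rng.randint(1, n_vars)             # propose flipping one variable
        assign[v] = not assign[v]
        new = sat_count(clauses, assign)
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new
            best = max(best, cur)
        else:
            assign[v] = not assign[v]          # reject: undo the flip
        t *= cooling                           # geometric cooling schedule
    return best

# Hypothetical random 3-SAT instance with 20 variables and 85 clauses.
rng = random.Random(0)
clauses = [tuple(rng.choice([-1, 1]) * rng.randint(1, 20) for _ in range(3))
           for _ in range(85)]
print(anneal_sat(clauses, 20), "of", len(clauses), "clauses satisfied")
```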

  14. Calibrating the Ordovician Radiation of marine life: implications for Phanerozoic diversity trends

    NASA Technical Reports Server (NTRS)

    Miller, A. I.; Foote, M.

    1996-01-01

    It has long been suspected that trends in global marine biodiversity calibrated for the Phanerozoic may be affected by sampling problems. However, this possibility has not been evaluated definitively, and raw diversity trends are generally accepted at face value in macroevolutionary investigations. Here, we analyze a global-scale sample of fossil occurrences that allows us to determine directly the effects of sample size on the calibration of what is generally thought to be among the most significant global biodiversity increases in the history of life: the Ordovician Radiation. Utilizing a composite database that includes trilobites, brachiopods, and three classes of molluscs, we conduct rarefaction analyses to demonstrate that the diversification trajectory for the Radiation was considerably different than suggested by raw diversity time-series. Our analyses suggest that a substantial portion of the increase recognized in raw diversity depictions for the last three Ordovician epochs (the Llandeilian, Caradocian, and Ashgillian) is a consequence of increased sample size of the preserved and catalogued fossil record. We also use biometric data for a global sample of Ordovician trilobites, along with methods of measuring morphological diversity that are not biased by sample size, to show that morphological diversification in this major clade had leveled off by the Llanvirnian. The discordance between raw diversity depictions and more robust taxonomic and morphological diversity metrics suggests that sampling effects may strongly influence our perception of biodiversity trends throughout the Phanerozoic.
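
    Rarefaction, the standardization step the authors use, can be sketched as repeated random subsampling of an occurrence list to a common size, averaging the number of distinct taxa recovered. The data below are synthetic; the Ordovician database itself is not reproduced.

```python
import random

def rarefied_richness(occurrences, n, trials=200, seed=0):
    """Expected number of distinct taxa in a random subsample of n occurrences."""
    rng = random.Random(seed)
    return sum(len(set(rng.sample(occurrences, n))) for _ in range(trials)) / trials

# Hypothetical data: 500 fossil occurrences drawn unevenly from 40 taxa.
rng = random.Random(1)
occurrences = ["taxon_%d" % min(rng.randint(1, 40), rng.randint(1, 40))
               for _ in range(500)]

for n in (50, 100, 200, 400):
    print(n, round(rarefied_richness(occurrences, n), 1))
```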

  15. Analytical sizing methods for behind-the-meter battery storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael; Yang, Tao

    In behind-the-meter applications, a battery storage system (BSS) is utilized to reduce a commercial or industrial customer's payment for electricity use, including the energy charge and the demand charge. The potential value of a BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. Such a method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of the costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative, and offer engineering insights on how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with the ones using mathematical programming based methods for validation.
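
    A minimal sketch of the trade-off behind behind-the-meter sizing is shown below: for a candidate battery power/energy rating, it finds the lowest demand peak that can be shaved from a daily load profile and the corresponding demand-charge saving. It ignores recharge scheduling, energy arbitrage, and degradation, and all numbers are hypothetical rather than the paper's analytical guidelines.

```python
def lowest_feasible_peak(load_kw, power_kw, energy_kwh, step_kw=1.0):
    """Lowest demand peak achievable by shaving a daily load profile (kW per hour)
    with a battery limited to power_kw discharge and energy_kwh per day."""
    target = max(load_kw)
    while target > 0:
        candidate = target - step_kw
        shave = [max(0.0, l - candidate) for l in load_kw]
        if max(shave) <= power_kw and sum(shave) <= energy_kwh:
            target = candidate
        else:
            break
    return target

# Hypothetical 24-hour commercial load profile (kW) and demand rate ($/kW-month).
load = [40]*8 + [80, 120, 150, 160, 160, 155, 150, 140, 120, 90] + [50]*6
demand_rate = 15.0

for p, e in [(20, 60), (40, 120), (60, 240)]:
    new_peak = lowest_feasible_peak(load, p, e)
    saving = demand_rate * (max(load) - new_peak)
    print(f"{p} kW / {e} kWh battery: peak {max(load)} -> {new_peak:.0f} kW, "
          f"demand-charge saving ~ ${saving:.0f}/month")
```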

  16. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real-world problems cannot be fully captured considering only a single objective for optimization. Therefore considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, total tardiness, etc. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
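
    The set-coverage comparison measure (interpreted here as the standard C metric) can be sketched as the fraction of one solution set that is dominated by another. The sketch below assumes all objectives are minimized; the fronts shown are hypothetical, not instances from the paper.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def c_metric(A, B):
    """Fraction of solutions in B dominated by (or equal to) at least one solution
    in A; C(A, B) = 1 means A completely covers B."""
    covered = sum(1 for b in B if any(dominates(a, b) or a == b for a in A))
    return covered / len(B)

# Hypothetical non-dominated fronts for (makespan, total machine load).
front_1 = [(90, 410), (95, 395), (100, 380)]
front_2 = [(92, 420), (101, 381), (110, 400)]
print(c_metric(front_1, front_2), c_metric(front_2, front_1))   # 1.0 0.0
```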

  17. Binge drinking and sleep problems among young adults.

    PubMed

    Popovici, Ioana; French, Michael T

    2013-09-01

    As most of the literature exploring the relationships between alcohol use and sleep problems is descriptive and with small sample sizes, the present study seeks to provide new information on the topic by employing a large, nationally representative dataset with several waves of data and a broad set of measures for binge drinking and sleep problems. We use data from the National Longitudinal Study of Adolescent Health (Add Health), a nationally representative survey of adolescents and young adults. The analysis sample consists of all Wave 4 observations without missing values for the sleep problems variables (N=14,089, 53% females). We estimate gender-specific multivariate probit models with a rich set of socioeconomic, demographic, physical, and mental health variables to control for confounding factors. Our results confirm that alcohol use, and specifically binge drinking, is positively and significantly associated with various types of sleep problems. The detrimental effects on sleep increase in magnitude with frequency of binge drinking, suggesting a dose-response relationship. Moreover, binge drinking is associated with sleep problems independent of psychiatric conditions. The statistically strong association between sleep problems and binge drinking found in this study is a first step in understanding these relationships. Future research is needed to determine the causal links between alcohol misuse and sleep problems to inform appropriate clinical and policy responses. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. The effects of maternal working conditions and mastery on child behavior problems: studying the intergenerational transmission of social control.

    PubMed

    Rogers, S J; Parcel, T L; Menaghan, E G

    1991-06-01

    We assess the impact of maternal sense of mastery and maternal working conditions on maternal perceptions of children's behavior problems as a means to study the transmission of social control across generations. We use a sample of 521 employed mothers and their four- to six-year-old children from the National Longitudinal Survey's Youth Cohort in 1986. Regarding working conditions, we consider mother's hourly wage, work hours, and job content including involvement with things (vs. people), the requisite level of physical activity, and occupational complexity. We also consider maternal and child background and current family characteristics, including marital status, family size, and home environment. Maternal mastery was related to fewer reported behavior problems among children. Lower involvement with people and higher involvement with things, as well as low physical activity, were related significantly to higher levels of perceived problems. In addition, recent changes in maternal marital status, including maternal marriage or remarriage, increased reports of problems; stronger home environments had the opposite effect. We interpret these findings as suggesting how maternal experiences of control in the workplace and personal resources of control can influence the internalization of control in children.

  19. Eco-driving: behavioural pattern change in Polish passenger vehicle drivers

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Oniszczuk-Jastrząbek, Aneta; Czuba, Tomasz

    2018-01-01

    In Poland, as in the rest of Europe, air quality depends primarily on emissions from municipal, domestic and road transport sources. The problem of adequate air quality is especially important within urban areas due to numerous sources of emissions being concentrated in relatively small spaces in both large cities and small/medium-sized towns. Due to the steadily increasing share of the urban population in the overall population, the issue of providing clean air will become a more significant problem for human health over the years, and therefore a stronger incentive to intensify research. The key challenge faced by a modern society is, therefore, to limit harmful substance emissions in order to minimise the contribution of transport to pollution and health hazards. Increasingly stringent emission standards are being imposed on car manufacturers; on the other hand, scant regard is paid to the issue of drivers, i.e. how they can help reduce emissions and protect their life and health by applying eco-driving rules.

  20. Quantum machine learning: a classical perspective

    NASA Astrophysics Data System (ADS)

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  1. Enabling scientific workflows in virtual reality

    USGS Publications Warehouse

    Kreylos, O.; Bawden, G.; Bernardin, T.; Billen, M.I.; Cowgill, E.S.; Gold, R.D.; Hamann, B.; Jadamec, M.; Kellogg, L.H.; Staadt, O.G.; Sumner, D.Y.

    2006-01-01

    To advance research and improve the scientific return on data collection and interpretation efforts in the geosciences, we have developed methods of interactive visualization, with a special focus on immersive virtual reality (VR) environments. Earth sciences employ a strongly visual approach to the measurement and analysis of geologic data due to the spatial and temporal scales over which such data ranges. As observations and simulations increase in size and complexity, the Earth sciences are challenged to manage and interpret increasing amounts of data. Reaping the full intellectual benefits of immersive VR requires us to tailor exploratory approaches to scientific problems. These applications build on the visualization method's strengths, using both 3D perception and interaction with data and models, to take advantage of the skills and training of the geological scientists exploring their data in the VR environment. This interactive approach has enabled us to develop a suite of tools that are adaptable to a range of problems in the geosciences and beyond. Copyright © 2008 by the Association for Computing Machinery, Inc.

  2. Quantum machine learning: a classical perspective

    PubMed Central

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed. PMID:29434508

  3. Quantum machine learning: a classical perspective.

    PubMed

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  4. Suppressing explosive synchronization by contrarians

    NASA Astrophysics Data System (ADS)

    Zhang, Xiyun; Guan, Shuguang; Zou, Yong; Chen, Xiaosong; Liu, Zonghua

    2016-01-01

    Explosive synchronization (ES) has recently received increasing attention, and studies so far have mainly focused on the conditions of its onset. However, its inverse problem, i.e. the suppression of ES, has not been systematically studied. As ES is usually considered to be harmful in certain circumstances, such as cascading failures of power grids and epileptic seizures, its suppression is important and deserves to be studied. We here study this inverse problem by presenting an efficient approach to suppress ES from a first-order to a second-order transition, without changing the intrinsic network structure. We find that ES can be suppressed by changing only a small fraction of oscillators into contrarians with negative couplings, and that the critical fraction for the transition from first order to second order increases with both the network size and the average degree. A brief theory is presented to explain the underlying mechanism. This finding underlines the importance of our method to improve the understanding of neural interactions underlying cognitive processes.
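
    A minimal globally coupled Kuramoto sketch of the contrarian idea is shown below: flipping the coupling sign of a small fraction of oscillators lowers the synchronization order parameter. It does not reproduce the network structure, frequency correlations, or the explosive (first-order) transition setup analyzed in the paper; all parameters are illustrative.

```python
import numpy as np

def simulate(K, frac_contrarian=0.0, N=200, T=40.0, dt=0.01, seed=0):
    """Euler integration of the mean-field Kuramoto model; a fraction of
    oscillators ('contrarians') couples with negative sign."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_cauchy(N)              # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)
    sign = np.ones(N)
    sign[: int(frac_contrarian * N)] = -1.0     # contrarians couple negatively
    for _ in range(int(T / dt)):
        z = np.exp(1j * theta).mean()           # complex mean field
        r, psi = abs(z), np.angle(z)
        theta += dt * (omega + sign * K * r * np.sin(psi - theta))
    return abs(np.exp(1j * theta).mean())       # final order parameter

for K in (1.0, 3.0, 6.0):
    print(K, round(simulate(K), 2), round(simulate(K, frac_contrarian=0.15), 2))
```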

  5. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
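
    The expensive kernel in EPI methods is the action of a matrix function on a vector, e.g. exp(tA)v. The sketch below evaluates such a product for a stiff 1D diffusion test matrix with SciPy's expm_multiply (a Taylor-series based routine); it illustrates only the operation itself, not the Krylov subspace spectral modification proposed in the paper, and the test problem is arbitrary.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# 1D heat equation u_t = u_xx on a uniform interior grid: a stiff linear system.
n = 400
h = 1.0 / (n + 1)
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2

x = np.linspace(h, 1.0 - h, n)
u0 = np.sin(np.pi * x)                    # initial condition

# Action of the matrix exponential on a vector, without forming exp(t*A).
u = expm_multiply(0.01 * A, u0)

# Compare with the exact solution of the continuous analogue at t = 0.01.
print(np.max(np.abs(u - np.exp(-np.pi**2 * 0.01) * np.sin(np.pi * x))))
```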

  6. Application of artificial intelligence to search ground-state geometry of clusters

    NASA Astrophysics Data System (ADS)

    Lemes, Maurício Ruv; Marim, L. R.; dal Pino, A.

    2002-08-01

    We introduce a global optimization procedure, the neural-assisted genetic algorithm (NAGA). It combines the power of an artificial neural network (ANN) with the versatility of the genetic algorithm. This method is suitable to solve optimization problems that depend on some kind of heuristics to limit the search space. If a reasonable amount of data is available, the ANN can ``understand'' the problem and provide the genetic algorithm with a selected population of elements that will speed up the search for the optimum solution. We tested the method in a search for the ground-state geometry of silicon clusters. We trained the ANN with information about the geometry and energetics of small silicon clusters. Next, the ANN learned how to restrict the configurational space for larger silicon clusters. For Si10 and Si20, we noticed that the NAGA is at least three times faster than the ``pure'' genetic algorithm. As the size of the cluster increases, it is expected that the gain in terms of time will increase as well.

  7. Dengue Seroprevalence and Risk Factors for Past and Recent Viral Transmission in Venezuela: A Comprehensive Community-Based Study

    PubMed Central

    Velasco-Salas, Zoraida I.; Sierra, Gloria M.; Guzmán, Diamelis M.; Zambrano, Julio; Vivas, Daniel; Comach, Guillermo; Wilschut, Jan C.; Tami, Adriana

    2014-01-01

    Dengue transmission in Venezuela has become perennial and a major public health problem. The increase in frequency and magnitude of recent epidemics prompted a comprehensive community-based cross-sectional study of 2,014 individuals in high-incidence neighborhoods of Maracay, Venezuela. We found a high seroprevalence (77.4%), with 10% of people experiencing recent infections. Multivariate logistic regression analysis showed that poverty-related socioeconomic factors (place and duration of residence, crowding, household size, and living in a shack) and factors/constraints related to intradomiciliary potential mosquito breeding sites (storing water and used tires) were linked with a greater risk of acquiring a dengue infection. Our results also suggest that transmission occurs mainly at home. The combination of increasingly crowded living conditions, growing population density, precarious homes, and water storage issues caused by enduring problems in public services in Maracay are the most likely factors that determine the permanent dengue transmission and the failure of vector control programs. PMID:25223944

  8. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that the processing of ISAR imaging can be stated mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar and utilize random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampling data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D by a Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we introduce the 2D-SL0 algorithm into the reconstruction of imaging. It is proved that 2D-SL0 can achieve equivalent results to other 1D reconstructing methods, but the computational complexity and memory usage are reduced significantly. Moreover, we present the results of simulation experiments and demonstrate the effectiveness and feasibility of our method.

  9. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resource and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderate sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees a performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In the comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational complexity effort.

  10. TLBO based Voltage Stable Environment Friendly Economic Dispatch Considering Real and Reactive Power Constraints

    NASA Astrophysics Data System (ADS)

    Verma, H. K.; Mafidar, P.

    2013-09-01

    In view of the growing concern towards the environment, power system engineers are forced to generate quality green energy. Hence the economic dispatch (ED) aims at scheduling power generation to meet the load demand at minimum fuel cost, with environmental and voltage constraints along with essential constraints on real and reactive power. The emission control which reduces the negative impact on the environment is achieved by including additional constraints in the ED problem. Presently, the power system mostly operates near its stability limits; therefore, with increased demand the system faces voltage problems. The bus voltages are brought within limits in the present work by placement of a static var compensator (SVC) at the weak bus, which is identified from the bus participation factor. The optimal size of the SVC is determined by the univariate search method. This paper presents the use of the Teaching Learning based Optimization (TLBO) algorithm for the voltage-stable, environment-friendly ED problem with real and reactive power constraints. The computational effectiveness of TLBO is established through test results over particle swarm optimization (PSO) and Big Bang-Big Crunch (BB-BC) algorithms for the ED problem.

  11. An effective hybrid immune algorithm for solving the distributed permutation flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min

    2014-09-01

    In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding method is proposed to transfer a job permutation sequence to a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also applied within the framework of the HIA to perform an immune search. The influence of parameter setting on the HIA is investigated based on the Taguchi method of design of experiment. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and the variable neighbourhood descent methods. New best known solutions are obtained by the HIA for 17 out of 420 small-sized instances and 585 out of 720 large-sized instances.
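
    The factory-assignment step of decoding a job permutation can be sketched with the common earliest-completion rule: walk the permutation and place each job in the factory whose flow-shop makespan increases the least. The exact decoding rule and instance data in the paper may differ; the values below are hypothetical.

```python
def flowshop_completion(schedule, proc):
    """Makespan of a permutation flow shop; proc[j][m] is the time of job j on
    machine m, machines are visited in order."""
    if not schedule:
        return 0
    m = len(proc[schedule[0]])
    finish = [0] * m                       # completion of the previous job per machine
    for j in schedule:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k else 0)
            finish[k] = start + proc[j][k]
    return finish[-1]

def decode(permutation, proc, n_factories):
    """Earliest-completion decoding: assign each job in the permutation to the
    factory whose makespan grows the least (a common DPFSP heuristic)."""
    factories = [[] for _ in range(n_factories)]
    for j in permutation:
        best = min(range(n_factories),
                   key=lambda f: flowshop_completion(factories[f] + [j], proc))
        factories[best].append(j)
    return factories, max(flowshop_completion(s, proc) for s in factories)

# Hypothetical instance: 6 jobs, 3 machines, 2 identical factories.
proc = [[3, 5, 2], [4, 2, 6], [2, 3, 3], [6, 1, 4], [5, 4, 2], [3, 3, 5]]
print(decode([0, 1, 2, 3, 4, 5], proc, n_factories=2))
```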

  12. Why Does Rebalancing Class-Unbalanced Data Improve AUC for Linear Discriminant Analysis?

    PubMed

    Xue, Jing-Hao; Hall, Peter

    2015-05-01

    Many established classifiers fail to identify the minority class when it is much smaller than the majority class. To tackle this problem, researchers often first rebalance the class sizes in the training dataset, through oversampling the minority class or undersampling the majority class, and then use the rebalanced data to train the classifiers. This leads to interesting empirical patterns. In particular, using the rebalanced training data can often improve the area under the receiver operating characteristic curve (AUC) for the original, unbalanced test data. The AUC is a widely-used quantitative measure of classification performance, but the property that it increases with rebalancing has, as yet, no theoretical explanation. In this note, using Gaussian-based linear discriminant analysis (LDA) as the classifier, we demonstrate that, at least for LDA, there is an intrinsic, positive relationship between the rebalancing of class sizes and the improvement of AUC. We show that the largest improvement of AUC is achieved, asymptotically, when the two classes are fully rebalanced to be of equal sizes.
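
    A small numerical sketch of the rebalancing comparison is shown below: a Gaussian-based LDA classifier is trained once on heavily unbalanced data and once after oversampling the minority class, and test-set AUC is compared. The data generator and class settings are hypothetical, and the direction and size of the AUC change in any single run depend on them; the note's asymptotic result is not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n_minority, n_majority):
    x0 = rng.normal(loc=0.0, scale=1.0, size=(n_majority, 2))
    x1 = rng.normal(loc=1.5, scale=1.5, size=(n_minority, 2))   # unequal covariance
    return np.vstack([x0, x1]), np.r_[np.zeros(n_majority), np.ones(n_minority)]

X_tr, y_tr = make_data(60, 1200)          # heavily unbalanced training set
X_te, y_te = make_data(500, 500)          # balanced test set for evaluation

# LDA trained on the raw, unbalanced data.
auc_raw = roc_auc_score(y_te, LinearDiscriminantAnalysis()
                        .fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

# LDA trained after oversampling the minority class to equal size.
idx = rng.choice(np.where(y_tr == 1)[0], size=int((y_tr == 0).sum()), replace=True)
X_bal = np.vstack([X_tr[y_tr == 0], X_tr[idx]])
y_bal = np.r_[np.zeros(int((y_tr == 0).sum())), np.ones(len(idx))]
auc_bal = roc_auc_score(y_te, LinearDiscriminantAnalysis()
                        .fit(X_bal, y_bal).predict_proba(X_te)[:, 1])

print(round(auc_raw, 3), round(auc_bal, 3))
```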

  13. Team Dimensions: Their Identity, Their Measurement and Their Relationships

    DTIC Science & Technology

    1985-01-01

    business games (e.g., Cummings, Huber & Arendt, 1974; Kennedy, 1971). Apart from the problem-solving tasks, the second largest group of studies... positive relationship between size and number of answers on an anagram task. In a disjunctive problem-solving task, Frank & Anderson (1971) found that... 4, or 5 members. However, there were no differences between the groups in time to solutions. Goldman (1971) found positive effects for size with

  14. The Image Understanding Architecture Project

    DTIC Science & Technology

    1988-04-01

    The error resulted in the frame being reduced in size and incorrectly bonded. The problem has been corrected and the design has been re-submitted... Promotional literature, Beaverton, OR, 1985. [Nii, 1986] Nii, H.P., The Blackboard Model of Problem Solving and the Evolution of Blackboard... microns. This resulted in a reduction in pad sizes to two thirds of the minimum required for safe bonding. All chips had many wire bonds on the die

  15. Modified reactive tabu search for the symmetric traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Hong, Pei-Yee; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Reactive tabu search (RTS) is an improved method of tabu search (TS) and it dynamically adjusts tabu list size based on how the search is performed. RTS can avoid disadvantage of TS which is in the parameter tuning in tabu list size. In this paper, we proposed a modified RTS approach for solving symmetric traveling salesman problems (TSP). The tabu list size of the proposed algorithm depends on the number of iterations when the solutions do not override the aspiration level to achieve a good balance between diversification and intensification. The proposed algorithm was tested on seven chosen benchmarked problems of symmetric TSP. The performance of the proposed algorithm is compared with that of the TS by using empirical testing, benchmark solution and simple probabilistic analysis in order to validate the quality of solution. The computational results and comparisons show that the proposed algorithm provides a better quality solution than that of the TS.
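
    The reactive idea (growing the tabu tenure when previously visited solutions reappear, shrinking it otherwise) can be sketched with a compact 2-opt tabu search on random points, as below. This is an illustrative baseline RTS, not the modified rule proposed in the paper, and the candidate-sampling and tenure constants are arbitrary.

```python
import random, math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def reactive_tabu_tsp(pts, iters=2000, seed=0):
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n)); rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, pts)
    tenure, tabu, seen = 10, {}, set()
    for it in range(iters):
        move, move_len = None, float("inf")
        for _ in range(60):                         # sample candidate 2-opt moves
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            clen = tour_length(cand, pts)
            # accept non-tabu moves, or tabu moves that beat the best (aspiration)
            if (tabu.get((i, j), -1) < it or clen < best_len) and clen < move_len:
                move, move_len = (i, j, cand), clen
        if move is None:
            continue
        i, j, tour = move
        tabu[(i, j)] = it + tenure                  # forbid repeating this move
        if tuple(tour) in seen:                     # reaction: solution revisited
            tenure = min(2 * tenure, n)
        else:
            seen.add(tuple(tour))
            tenure = max(5, tenure - 1)
        if move_len < best_len:
            best, best_len = tour[:], move_len
    return best, best_len

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(30)]
print(round(reactive_tabu_tsp(pts)[1], 3))
```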

  16. Scheduling algorithm for flow shop with two batch-processing machines and arbitrary job sizes

    NASA Astrophysics Data System (ADS)

    Cheng, Bayi; Yang, Shanlin; Hu, Xiaoxuan; Li, Kai

    2014-03-01

    This article considers the problem of scheduling two batch-processing machines in a flow shop where the jobs have arbitrary sizes and the machines have limited capacity. The jobs are processed in batches and the total size of jobs in each batch cannot exceed the machine capacity. Once a batch is being processed, no interruption is allowed until all the jobs in it are completed. The problem of minimising makespan is NP-hard in the strong sense. First, we present a mathematical model of the problem using an integer programme. We show the scale of feasible solutions of the problem and provide optimality properties. Then, we propose a polynomial time algorithm with running time O(n log n). The jobs are first assigned to feasible batches and then scheduled on machines. For the general case, we prove that the proposed algorithm has a performance guarantee of 4. For the special case where the processing times of each job on the two machines satisfy p_{1j} = a·p_{2j}, the performance guarantee is ? for a > 0.
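
    A simple sketch of the batching-then-scheduling structure is shown below: jobs are grouped by a first-fit rule respecting the machine capacity, and the makespan of the resulting batch sequence is computed for two batch machines in series, with each batch taking the longest processing time of the jobs it contains. This is an illustrative heuristic under common batch-processing assumptions, not the paper's O(n log n) algorithm or its performance guarantee; the instance is hypothetical.

```python
def first_fit_batches(sizes, capacity):
    """Group jobs into batches by first fit; total job size per batch <= capacity."""
    batches, loads = [], []
    for j, s in enumerate(sizes):
        for b, load in enumerate(loads):
            if load + s <= capacity:
                batches[b].append(j); loads[b] += s
                break
        else:
            batches.append([j]); loads.append(s)
    return batches

def two_machine_makespan(batches, p1, p2):
    """Batches flow through machine 1 then machine 2; a batch's processing time is
    the longest job time it contains (a typical batch-processing assumption)."""
    c1 = c2 = 0
    for b in batches:
        c1 += max(p1[j] for j in b)
        c2 = max(c2, c1) + max(p2[j] for j in b)
    return c2

# Hypothetical instance: 8 jobs with sizes and per-machine processing times.
sizes = [4, 7, 3, 5, 6, 2, 5, 4]
p1 = [3, 6, 2, 5, 4, 1, 3, 2]
p2 = [4, 5, 3, 2, 6, 2, 4, 3]
batches = first_fit_batches(sizes, capacity=10)
print(batches, two_machine_makespan(batches, p1, p2))
```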

  17. Illicit and prescription drug problems among urban Aboriginal adults in Canada: the role of traditional culture in protection and resilience.

    PubMed

    Currie, Cheryl L; Wild, T Cameron; Schopflocher, Donald P; Laing, Lory; Veugelers, Paul

    2013-07-01

    Illicit and prescription drug use disorders are two to four times more prevalent among Aboriginal peoples in North America than the general population. Research suggests Aboriginal cultural participation may be protective against substance use problems in rural and remote Aboriginal communities. As Aboriginal peoples continue to urbanize rapidly around the globe, the role traditional Aboriginal beliefs and practices may play in reducing or even preventing substance use problems in cities is becoming increasingly relevant, and is the focus of the present study. Mainstream acculturation was also examined. Data were collected via in-person surveys with a community-based sample of Aboriginal adults living in a mid-sized city in western Canada (N = 381) in 2010. Associations were analysed using two sets of bootstrapped linear regression models adjusted for confounders with continuous illicit and prescription drug problem scores as outcomes. Psychological mechanisms that may explain why traditional culture is protective for Aboriginal peoples were examined using the cross-products of coefficients mediation method. The extent to which culture served as a resilience factor was examined via interaction testing. Results indicate Aboriginal enculturation was a protective factor associated with reduced 12-month illicit drug problems and 12-month prescription drug problems among Aboriginal adults in an urban setting. Increased self-esteem partially explained why cultural participation was protective. Cultural participation also promoted resilience by reducing the effects of high school incompletion on drug problems. In contrast, mainstream acculturation was not associated with illicit drug problems and served as a risk factor for prescription drug problems in this urban sample. Findings encourage the growth of programs and services that support Aboriginal peoples who strive to maintain their cultural traditions within cities, and further studies that examine how Aboriginal cultural practices and beliefs may promote and protect Aboriginal health in an urban environment. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Similar Ratios of Introns to Intergenic Sequence across Animal Genomes.

    PubMed

    Francis, Warren R; Wörheide, Gert

    2017-06-01

    One central goal of genome biology is to understand how the usage of the genome differs between organisms. Our knowledge of genome composition, needed for downstream inferences, is critically dependent on gene annotations, yet problems associated with gene annotation and assembly errors are usually ignored in comparative genomics. Here, we analyze the genomes of 68 species across 12 animal phyla and some single-cell eukaryotes for general trends in genome composition and transcription, taking into account problems of gene annotation. We show that, regardless of genome size, the ratio of introns to intergenic sequence is comparable across essentially all animals, with nearly all deviations dominated by increased intergenic sequence. Genomes of model organisms have ratios much closer to 1:1, suggesting that the majority of published genomes of nonmodel organisms are underannotated and consequently omit substantial numbers of genes, with likely negative impact on evolutionary interpretations. Finally, our results also indicate that most animals transcribe half or more of their genomes arguing against differences in genome usage between animal groups, and also suggesting that the transcribed portion is more dependent on genome size than previously thought. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  19. Simulation shows that HLA-matched stem cell donors can remain unidentified in donor searches

    PubMed Central

    Sauter, Jürgen; Solloch, Ute V.; Giani, Anette S.; Hofmann, Jan A.; Schmidt, Alexander H.

    2016-01-01

    The heterogeneous nature of HLA information in real-life stem cell donor registries may hamper unrelated donor searches. It is even possible that fully HLA-matched donors with incomplete HLA information are not identified. In our simulation study, we estimated the probability of these unnecessarily failed donor searches. For that purpose, we carried out donor searches in several virtual donor registries. The registries differed by size, composition with respect to HLA typing levels, and genetic diversity. When up to three virtual HLA typing requests were allowed within donor searches, the share of unnecessarily failed donor searches ranged from 1.19% to 4.13%, thus indicating that non-identification of completely HLA-matched stem cell donors is a problem of practical relevance. The following donor registry characteristics were positively correlated with the share of unnecessarily failed donor searches: large registry size, high genetic diversity, and, most strongly correlated, large fraction of registered donors with incomplete HLA typing. Increasing the number of virtual HLA typing requests within donor searches up to ten had a smaller effect. It follows that the problem of donor non-identification can be substantially reduced by complete high-resolution HLA typing of potential donors. PMID:26876789

  1. Collective action problem in heterogeneous groups

    PubMed Central

    Gavrilets, Sergey

    2015-01-01

    I review the theoretical and experimental literature on the collective action problem in groups whose members differ in various characteristics affecting individual costs, benefits and preferences in collective actions. I focus on evolutionary models that predict how individual efforts and fitnesses, group efforts and the amount of produced collective goods depend on the group's size and heterogeneity, as well as on the benefit and cost functions and parameters. I consider collective actions that aim to overcome the challenges from nature or win competition with neighbouring groups of conspecifics. I show that the largest contributors towards production of collective goods will typically be group members with the highest stake in it or for whom the effort is least costly, or those who have the largest capability or initial endowment. Under some conditions, such group members end up with smaller net pay-offs than the rest of the group. That is, they effectively behave as altruists. With weak nonlinearity in benefit and cost functions, the group effort typically decreases with group size and increases with within-group heterogeneity. With strong nonlinearity in benefit and cost functions, these patterns are reversed. I discuss the implications of theoretical results for animal behaviour, human origins and psychology. PMID:26503689

  2. Evaluation of a wave-vector-frequency-domain method for nonlinear wave propagation

    PubMed Central

    Jing, Yun; Tao, Molei; Clement, Greg T.

    2011-01-01

    A wave-vector-frequency-domain method is presented to describe one-directional forward or backward acoustic wave propagation in a nonlinear homogeneous medium. Starting from a frequency-domain representation of the second-order nonlinear acoustic wave equation, an implicit solution for the nonlinear term is proposed by employing the Green’s function. Its approximation, which is more suitable for numerical implementation, is used. An error study is carried out to test the efficiency of the model by comparing the results with the Fubini solution. It is shown that the error grows as the propagation distance and step-size increase. However, for the specific case tested, even at a step size as large as one wavelength, sufficient accuracy for plane-wave propagation is observed. A two-dimensional steered transducer problem is explored to verify the nonlinear acoustic field directional independence of the model. A three-dimensional single-element transducer problem is solved to verify the forward model by comparing it with an existing nonlinear wave propagation code. Finally, backward-projection behavior is examined. The sound field over a plane in an absorptive medium is backward projected to the source and compared with the initial field, where good agreement is observed. PMID:21302985

  3. Lévy flight artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She

    2016-08-01

    The artificial bee colony (ABC) optimisation algorithm is a relatively simple and recent population-based probabilistic approach for global optimisation. The solution search equation of ABC is strongly influenced by a random quantity that aids exploration at the cost of exploitation of the search space, and the algorithm's large step sizes give it a high chance of skipping over the true solution. To balance diversity and convergence in ABC, a Lévy flight-inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), provides local and global search capability simultaneously; this is achieved by tuning the Lévy flight parameters, which in turn automatically tunes the step sizes. In LFABC, new solutions are generated around the best solution, which enhances the exploitation capability of ABC. Furthermore, to improve exploration, the number of scout bees is increased. Experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent ABC variants, namely Gbest-guided ABC, best-so-far ABC and modified ABC, in most of the experiments.
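
    A small sketch of the step-size machinery the abstract refers to, assuming the usual Mantegna recipe for drawing Lévy-distributed steps; how LFABC inserts these steps into the ABC search equation is not specified in the abstract, so only the step generator and a toy perturbation around a best solution are shown.

      import math
      import numpy as np

      def levy_steps(n, beta=1.5, rng=None):
          """Draw n Lévy-flight step sizes with stability index beta (1 < beta <= 2)."""
          rng = rng or np.random.default_rng()
          sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                     / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma_u, n)
          v = rng.normal(0.0, 1.0, n)
          return u / np.abs(v) ** (1 / beta)   # mostly small steps, occasional long jumps

      # example: perturb a candidate food source around the current best solution
      best = np.array([0.3, -1.2, 0.8])
      candidate = best + 0.01 * levy_steps(best.size)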

  4. An approach to improving science knowledge about energy balance and nutrition among elementary- and middle-school students.

    PubMed

    Moreno, Nancy P; Denk, James P; Roberts, J Kyle; Tharp, Barbara Z; Bost, Michelle; Thomson, William A

    2004-01-01

    Unhealthy diets, lack of fitness, and obesity are serious problems in the United States. The Centers for Disease Control, Surgeon General, and Department of Health and Human Services are calling for action to address these problems. Scientists and educators at Baylor College of Medicine and the National Space Biomedical Research Institute teamed to produce an instructional unit, "Food and Fitness," and evaluated it with students in grades 3-7 in Houston, Texas. A field-test group (447 students) completed all unit activities under the guidance of their teachers. This group and a comparison group (343 students) completed pre and postassessments measuring knowledge of concepts covered in the unit. Outcomes indicate that the unit significantly increased students' knowledge and awareness of science concepts related to energy in living systems, metabolism, nutrients, and diet. Pre-assessment results suggest that most students understand concepts related to calories in food, exercise and energy use, and matching food intake to energy use. Students' prior knowledge was found to be much lower on topics related to healthy portion sizes, foods that supply the most energy, essential nutrients, what "diet" actually means, and the relationship between body size and basal metabolic rate.

  5. CPR Drilling.

    ERIC Educational Resources Information Center

    Kittleson, Mark

    1980-01-01

    Problems encountered by cardiopulmonary resuscitation (CPR) instructors are discussed and some possible solutions to these problems are suggested. Management techniques for effective use of class size, time, and instructional materials are described. (JN)

  6. Decision and function problems based on boson sampling

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Georgios M.; Brougham, Thomas

    2016-07-01

    Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource of decision and function problems that are computationally hard, and may thus have cryptographic applications. After the definition of a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of nonboson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and it is shown that they are independent of the size of the Hilbert space.

  7. Cultural considerations for treatment of childhood obesity.

    PubMed

    Davis, S P; Northington, L; Kolar, K

    2000-01-01

    Childhood obesity has become one of the most common health problems facing children in America. Results from the Third National Health and Nutrition Examination Survey reveal that ethnic minority children in the United States are at particular risk for development of cardiovascular disease due to their disproportionate levels of obesity. In treating childhood obesity among ethnic minorities, practitioners need to be mindful of the cultural norms surrounding body size. Additional concerns that must be addressed include the effects of targeted marketing of unhealthy foods toward ethnic minorities and environmental deterrents to outdoor physical activity, to name a few. Strategies given to address the problem of childhood obesity among ethnic minorities include increasing the child's physical activity, reducing television viewing, and adopting and maintaining healthy lifestyle practices for the entire family.

  8. Parallel scalability of Hartree-Fock calculations

    NASA Astrophysics Data System (ADS)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
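
    For readers unfamiliar with purification, the sketch below shows the textbook McWeeny iteration on a dense matrix: starting from a guess whose eigenvalues lie in [0, 1], the map D -> 3D^2 - 2D^3 drives occupied eigenvalues to 1 and virtual ones to 0, replacing an explicit eigendecomposition with matrix multiplications. The initial guess, the full eigensolve used here only to bound the spectrum, and the toy matrix are illustrative assumptions, not the paper's parallel implementation.

      import numpy as np

      def mcweeny_projector(F, mu, max_iter=200, tol=1e-10):
          """Projector onto eigenstates of the symmetric matrix F with eigenvalue below mu."""
          n = F.shape[0]
          w = np.linalg.eigvalsh(F)        # spectral bounds; real codes use cheap estimates
          lam = min(0.5 / (w[-1] - mu), 0.5 / (mu - w[0]))
          D = lam * (mu * np.eye(n) - F) + 0.5 * np.eye(n)   # eigenvalues mapped into [0, 1]
          for _ in range(max_iter):
              D2 = D @ D
              D_next = 3.0 * D2 - 2.0 * D2 @ D               # McWeeny purification step
              if np.linalg.norm(D_next - D) < tol:
                  break
              D = D_next
          return D_next

      # toy usage: project a random symmetric "Fock" matrix onto its 3 lowest states;
      # mu must lie strictly inside the spectrum (here, in the gap between states 3 and 4)
      F = np.random.default_rng(1).normal(size=(6, 6)); F = 0.5 * (F + F.T)
      w = np.linalg.eigvalsh(F)
      D = mcweeny_projector(F, mu=0.5 * (w[2] + w[3]))
      print(round(np.trace(D), 6))          # ~3.0 occupied states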

  9. Internet-Assisted Parent Training Intervention for Disruptive Behavior in 4-Year-Old Children: A Randomized Clinical Trial.

    PubMed

    Sourander, Andre; McGrath, Patrick J; Ristkari, Terja; Cunningham, Charles; Huttunen, Jukka; Lingley-Pottie, Patricia; Hinkka-Yli-Salomäki, Susanna; Kinnunen, Malin; Vuorio, Jenni; Sinokki, Atte; Fossum, Sturla; Unruh, Anita

    2016-04-01

    There is a large gap worldwide in the provision of evidence-based early treatment of children with disruptive behavioral problems. The objective was to determine whether an Internet-assisted intervention using whole-population screening that targets the most symptomatic 4-year-old children is effective at 6 and 12 months after the start of treatment. This 2-parallel-group randomized clinical trial was performed from October 1, 2011, through November 30, 2013, at a primary health care clinic in Southwest Finland; data analysis was performed from August 6, 2015, to December 11, 2015. Of a screened population of 4656 children, 730 met the screening criteria indicating a high level of disruptive behavioral problems, and 464 parents of 4-year-old children were randomized into the Strongest Families Smart Website (SFSW) intervention group (n = 232) or an education control (EC) group (n = 232). The SFSW intervention was an 11-session Internet-assisted parent training program that included weekly telephone coaching. Outcome measures were the Child Behavior Checklist version for preschool children (CBCL/1.5-5) externalizing scale (primary outcome), other CBCL/1.5-5 scales and subscores, the Parenting Scale, the Inventory of Callous-Unemotional Traits, and the 21-item Depression, Anxiety, and Stress Scale. All data were analyzed by intention to treat and per protocol, with assessments made before randomization and 6 and 12 months after randomization. Of the children randomized, 287 (61.9%) were male and 79 (17.1%) lived in other than a family with 2 biological parents. At 12-month follow-up, improvement in the SFSW intervention group was significantly greater than in the control group on the following measures: CBCL/1.5-5 externalizing scale (effect size, 0.34; P < .001), internalizing scale (effect size, 0.35; P < .001), and total scores (effect size, 0.37; P < .001); 5 of 7 syndrome scales, including aggression (effect size, 0.36; P < .001), sleep (effect size, 0.24; P = .002), withdrawal (effect size, 0.25; P = .005), anxiety (effect size, 0.26; P = .003), and emotional problems (effect size, 0.31; P = .001); Inventory of Callous-Unemotional Traits callousness scores (effect size, 0.19; P = .03); and self-reported parenting skills (effect size, 0.53; P < .001). The study reveals the effectiveness and feasibility of an Internet-assisted parent training intervention offered to parents of preschool children with disruptive behavioral problems screened from the whole population. The strategy of population-based screening of children at an early age and offering parent training using digital technology and telephone coaching is a promising public health approach for providing early intervention for a variety of child mental health problems. clinicaltrials.gov Identifier: NCT01750996.

  10. Mass Instruction or Higher Learning? The Impact of College Class Size on Student Retention and Graduation

    ERIC Educational Resources Information Center

    Bettinger, Eric P.; Long, Bridget Terry

    2018-01-01

    This paper measures the effects of collegiate class size on college retention and graduation. Class size is a perennial issue in research on primary and secondary schooling. Few researchers have focused on the causal impacts of collegiate class size, however. Whereas college students have greater choice of classes, selection problems and nonrandom…

  11. Instability improvement of the subgrade soils by lime addition at Borg El-Arab, Alexandria, Egypt

    NASA Astrophysics Data System (ADS)

    El Shinawi, A.

    2017-06-01

    Subgrade soils can affect the stability of any construction built on them; instability problems were found at Borg El-Arab, Alexandria, Egypt. This paper investigates the geoengineering properties of lime-treated subgrade soils at Borg El-Arab. Basic laboratory tests, such as water content, wet and dry density, grain size, specific gravity and Atterberg limits, were performed on twenty-five samples. Moisture-density (compaction), California Bearing Ratio (CBR) and Unconfined Compression Strength (UCS) tests were conducted on treated and natural soils. The measured geotechnical parameters of the treated soil show that 6% lime is sufficient to stabilize the subgrade soils. Adding lime shifted the samples towards the coarser side and decreased the Atterberg limit values of the treated soil samples, making the soil more stable. The subgrade soils improved because the fine particles bonded and cemented together into larger aggregates, reducing the plasticity index and increasing soil strength. Environmental scanning electron microscopy (ESEM) points to the presence of newly formed aggregated cementitious materials, which reduce porosity and increase strength with long-term curing. Consequently, the soil-lime mixture has acceptable mechanical characteristics, constituting a high-strength base or sub-base material, and can be considered as a subgrade soil for stabilization and for mitigating the instability problems found at Borg El-Arab, Egypt.

  12. Effects of pollution on land snail abundance, size and diversity as resources for pied flycatcher, Ficedula hypoleuca.

    PubMed

    Eeva, Tapio; Rainio, Kalle; Suominen, Otso

    2010-09-01

    Passerine birds need extra calcium during their breeding for developing egg shells and proper growth of nestling skeleton. Land snails are an important calcium source for many passerines and human-induced changes in snail populations may pose a severe problem for breeding birds. We studied from the bird's viewpoint how air pollution affects the shell mass, abundance and diversity of land snail communities along a pollution gradient of a copper smelter. We sampled remnant snail shells from the nests of an insectivorous passerine, the pied flycatcher, Ficedula hypoleuca, to find out how the availability of land snails varies along the pollution gradient. The total snail shell mass increased towards the pollution source but declined abruptly in the vicinity of the smelter. This spatial variation in shell mass was evident also within a single snail species and could not be wholly explained by spatially varying snail numbers or species composition. Instead, the total shell mass was related to their shell size, individuals being largest at the moderately polluted areas. Smaller shell size suggests inferior growth of snails in the most heavily polluted area. Our study shows that pollution affects the diversity, abundance (available shell mass) and individual quality of land snails, posing reproductive problems for birds that rely on snails as calcium sources during breeding. There are probably both direct pollution-related (heavy metal and calcium levels) and indirect (habitat change) effects behind the observed changes in snail populations. Copyright 2010 Elsevier B.V. All rights reserved.

  13. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    PubMed Central

    Schmidhuber, Jürgen

    2013-01-01

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require to achieve a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771

  14. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In a logistics system, transportation plays an important role in connecting every element of the supply chain, but it can also generate the greatest cost. It is therefore important to keep transportation costs as low as possible. One way to minimize transportation cost is to optimize vehicle routing, which is the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP), in which each vehicle has a capacity and the total customer demand assigned to it must not exceed that capacity. CVRP is NP-hard, so exact algorithms become highly time-consuming as problem size increases; for the large-scale instances typically found in industrial applications, finding an optimal solution is not practicable. This paper therefore applies two metaheuristic approaches to CVRP: a Genetic Algorithm and Particle Swarm Optimization. The results of the two algorithms are compared to assess the performance of each. Both algorithms perform well in solving CVRP, though they still leave room for improvement. Across algorithm testing and a numerical example, the Genetic Algorithm yields better solutions than Particle Swarm Optimization in terms of total distance travelled.
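
    Both metaheuristics need a common evaluation step for CVRP; a standard (though not necessarily the paper's) choice is to treat a chromosome or particle as a "giant tour" of customers and split it greedily into capacity-feasible routes from the depot. The sketch below shows that decoding and the resulting total-distance fitness; the GA and PSO operators and the paper's parameter settings are not reproduced.

      def decode_and_evaluate(giant_tour, demand, capacity, dist):
          """Split a customer permutation into routes and return (routes, total distance)."""
          routes, current, load = [], [], 0.0
          for c in giant_tour:
              if load + demand[c] > capacity:    # vehicle full -> start a new route
                  routes.append(current)
                  current, load = [], 0.0
              current.append(c)
              load += demand[c]
          if current:
              routes.append(current)
          total = 0.0
          for r in routes:
              path = [0] + r + [0]               # depot -> customers -> depot
              total += sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))
          return routes, total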

  15. Reducing developmental risk for emotional/behavioral problems: a randomized controlled trial examining the Tools for Getting Along curriculum.

    PubMed

    Daunic, Ann P; Smith, Stephen W; Garvan, Cynthia W; Barber, Brian R; Becker, Mallory K; Peters, Christine D; Taylor, Gregory G; Van Loan, Christopher L; Li, Wei; Naranjo, Arlene H

    2012-04-01

    Researchers have demonstrated that cognitive-behavioral intervention strategies - such as social problem solving - provided in school settings can help ameliorate the developmental risk for emotional and behavioral difficulties. In this study, we report the results of a randomized controlled trial of Tools for Getting Along (TFGA), a social problem-solving universally delivered curriculum designed to reduce the developmental risk for serious emotional or behavioral problems among upper elementary grade students. We analyzed pre-intervention and post-intervention teacher-report and student self-report data from 14 schools, 87 classrooms, and a total of 1296 students using multilevel modeling. Results (effect sizes calculated using Hedges' g) indicated that students who were taught TFGA had a more positive approach to problem solving (g=.11) and a more rational problem-solving style (g=.16). Treated students with relatively poor baseline scores benefited from TFGA on (a) problem-solving knowledge (g=1.54); (b) teacher-rated executive functioning (g=.35 for Behavior Regulation and .32 for Metacognition), and proactive aggression (g=.20); and (c) self-reported trait anger (g=.17) and anger expression (g=.21). Thus, TFGA may reduce risk for emotional and behavioral difficulties by improving students' cognitive and emotional self-regulation and increasing their pro-social choices. Copyright © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  16. Enterprise Management Network Architecture Distributed Knowledge Base Support

    DTIC Science & Technology

    1990-11-01

    Advantages: Potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways. First, it can be more reliable...does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition...problem-solving power amounts to the conceptual size of a single action taken by an agent that is visible to the other agents in the system. If the grain is coarse

  17. Does Presentation Format Influence Visual Size Discrimination in Tufted Capuchin Monkeys (Sapajus spp.)?

    PubMed Central

    Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel

    2015-01-01

    Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363

  18. Graph pyramids as models of human problem solving

    NASA Astrophysics Data System (ADS)

    Pizlo, Zygmunt; Li, Zheng

    2004-05-01

    Prior theories have assumed that human problem solving involves estimating distances among states and performing search through the problem space. The role of mental representation in those theories was minimal. Results of our recent experiments suggest that humans are able to solve some difficult problems quickly and accurately. Specifically, in solving these problems humans do not seem to rely on distances or on search. It is quite clear that producing good solutions without performing search requires a very effective mental representation. In this paper we concentrate on studying the nature of this representation. Our theory takes the form of a graph pyramid. To verify the psychological plausibility of this theory we tested subjects in a Euclidean Traveling Salesman Problem in the presence of obstacles. The role of the number and size of obstacles was tested for problems with 6-50 cities. We analyzed the effect of experimental conditions on solution time per city and on solution error. The main result is that time per city is systematically affected only by the size of obstacles, but not by their number, or by the number of cities.

  19. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

    A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) requires maximizing the number of observable proposals and the overall scientific priority, and minimizing the overall slew-cost generated by telescope shifting, while respecting constraints that include the visibility of astronomical objects, user-defined observable times, and avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem, so the optimal schedule can be found by any MCMF solution algorithm. Then, to minimize the slew-cost of the generated schedule, we devise a method based on detecting maximally-matchable edges to reduce the problem size, and propose a backtracking algorithm to find the perfect matching with minimum slew-cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler can increase the usage of available times with high scientific priority and significantly reduce the slew-cost in a very short time.
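
    A toy illustration of the MCMF formulation, assuming a tiny hypothetical set of proposals, observing slots and visibility windows (the real scheduler's graph also encodes priorities, RFI windows and slew costs and is far larger): with networkx, proposals and slots become a bipartite flow network, and max_flow_min_cost returns an assignment that schedules as many proposals as possible at minimum total cost.

      import networkx as nx

      proposals = {"P1": 3, "P2": 1, "P3": 2}          # proposal -> scientific priority
      visible = {"P1": ["s1", "s2"], "P2": ["s2"], "P3": ["s2", "s3"]}
      slots = ["s1", "s2", "s3"]
      max_prio = max(proposals.values())

      G = nx.DiGraph()
      for p, prio in proposals.items():
          G.add_edge("src", p, capacity=1, weight=0)
          for s in visible[p]:
              G.add_edge(p, s, capacity=1, weight=max_prio - prio)  # cheaper = higher priority
      for s in slots:
          G.add_edge(s, "sink", capacity=1, weight=0)

      flow = nx.max_flow_min_cost(G, "src", "sink")
      schedule = [(p, s) for p in proposals for s, f in flow[p].items() if f > 0]
      print(schedule)        # e.g. [('P1', 's1'), ('P2', 's2'), ('P3', 's3')]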

  20. Fast scaffolding with small independent mixed integer programs

    PubMed Central

    Salmela, Leena; Mäkinen, Veli; Välimäki, Niko; Ylinen, Johannes; Ukkonen, Esko

    2011-01-01

    Motivation: Assembling genomes from short read data has become increasingly popular, but the problem remains computationally challenging, especially for larger genomes. We study the scaffolding phase of sequence assembly where preassembled contigs are ordered based on mate pair data. Results: We present MIP Scaffolder that divides the scaffolding problem into smaller subproblems and solves these with mixed integer programming. The scaffolding problem can be represented as a graph and the biconnected components of this graph can be solved independently. We present a technique for restricting the size of these subproblems so that they can be solved accurately with mixed integer programming. We compare MIP Scaffolder to two state-of-the-art methods, SOPRA and SSPACE. MIP Scaffolder is fast and produces scaffolds that are as good as or better than those of its competitors on large genomes. Availability: The source code of MIP Scaffolder is freely available at http://www.cs.helsinki.fi/u/lmsalmel/mip-scaffolder/. Contact: leena.salmela@cs.helsinki.fi PMID:21998153
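
    The decomposition idea is easy to illustrate: in an undirected scaffolding graph, biconnected components meet only at cut vertices, so each component can be ordered by its own small mixed integer program. The toy graph below is hypothetical and the integer programs themselves are not shown; only the splitting step, done here with networkx, is sketched.

      import networkx as nx

      contig_graph = nx.Graph()
      contig_graph.add_edges_from([
          ("c1", "c2"), ("c2", "c3"), ("c3", "c1"),   # one biconnected component
          ("c3", "c4"),                               # bridge
          ("c4", "c5"), ("c5", "c6"), ("c6", "c4"),   # another component
      ])

      subproblems = [contig_graph.subgraph(nodes).copy()
                     for nodes in nx.biconnected_components(contig_graph)]
      for i, sub in enumerate(subproblems):
          print(f"subproblem {i}: {sorted(sub.nodes())}")   # solve each independently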

  1. On the estimation of the domain of attraction for discrete-time switched and hybrid nonlinear systems

    NASA Astrophysics Data System (ADS)

    Kit Luk, Chuen; Chesi, Graziano

    2015-11-01

    This paper addresses the estimation of the domain of attraction for discrete-time nonlinear systems where the vector field is subject to changes. First, the paper considers the case of switched systems, where the vector field is allowed to arbitrarily switch among the elements of a finite family. Second, the paper considers the case of hybrid systems, where the state space is partitioned into several regions described by polynomial inequalities, and the vector field is defined on each region independently from the other ones. In both cases, the problem consists of computing the largest sublevel set of a Lyapunov function included in the domain of attraction. An approach is proposed for solving this problem based on convex programming, which provides a guaranteed inner estimate of the sought sublevel set. The conservatism of the provided estimate can be decreased by increasing the size of the optimisation problem. Some numerical examples illustrate the proposed approach.

  2. Are larger dental practices more efficient? An analysis of dental services production.

    PubMed Central

    Lipscomb, J; Douglass, C W

    1986-01-01

    Whether cost-efficiency in dental services production increases with firm size is investigated through application of an activity analysis production function methodology to data from a national survey of dental practices. Under this approach, service delivery in a dental practice is modeled as a linear programming problem that acknowledges distinct input-output relationships for each service. These service-specific relationships are then combined to yield projections of overall dental practice productivity, subject to technical and organizational constraints. The activity analysis reported here represents arguably the most detailed evaluation yet of the relationship between dental practice size and cost-efficiency, controlling for such confounding factors as fee and service-mix differences across firms. We conclude that cost-efficiency does increase with practice size, over the range from solo to four-dentist practices. Largely because of data limitations, we were unable to test satisfactorily for scale economies in practices with five or more dentists. Within their limits, our findings are generally consistent with results from the neoclassical production function literature. From the standpoint of consumer welfare, the critical question raised (but not resolved) here is whether these apparent production efficiencies of group practice are ultimately translated by the market into lower fees, shorter queues, or other nonprice benefits. PMID:3102404
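
    As a rough illustration of what an activity-analysis formulation looks like (the services, coefficients and resource pools below are invented for illustration; the study's model distinguishes many more inputs and outputs per service), a practice's service mix can be chosen with a small linear program:

      from scipy.optimize import linprog

      # services: [exam, filling, crown]
      revenue = [60, 150, 700]             # revenue per service delivered
      dentist_min = [10, 40, 90]           # dentist minutes required per service
      assistant_min = [20, 30, 60]         # assistant minutes required per service

      res = linprog(
          c=[-r for r in revenue],         # linprog minimizes, so negate revenue
          A_ub=[dentist_min, assistant_min],
          b_ub=[8 * 60, 2 * 8 * 60],       # one dentist, two assistants, 8-hour day
          bounds=[(0, None)] * 3,
      )
      print(res.x, -res.fun)               # optimal service mix and daily revenue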

  3. Universality in a Neutral Evolution Model

    NASA Astrophysics Data System (ADS)

    King, Dawn; Scott, Adam; Maric, Nevena; Bahar, Sonya

    2013-03-01

    Agent-based models are ideal for investigating the complex problems of biodiversity and speciation because they allow for complex interactions between individuals and between individuals and the environment. Presented here is a "null" model that investigates three mating types - assortative, bacterial, and random - in phenotype space, as a function of the percentage of random death δ. Previous work has shown phase transition behavior in an assortative mating model with variable fitness landscapes as the maximum mutation size (μ) was varied (Dees and Bahar, 2010). Similarly, this behavior was recently presented in the work of Scott et al. (submitted), on a completely neutral landscape, for bacterial-like fission as well as for assortative mating. Here, in order to achieve an appropriate "null" hypothesis, the random death process was changed so each individual, in each generation, has the same probability of death. Results show a continuous nonequilibrium phase transition for the order parameters of the population size and the number of clusters (analogue of species) as δ is varied for three different mutation sizes of the system. The system shows increasing robustness as μ increases. Universality classes and percolation properties of this system are also explored. This research was supported by funding from: University of Missouri Research Board and James S. McDonnell Foundation

  4. The effect of low back pain on trunk muscle size/function and hip strength in elite football (soccer) players.

    PubMed

    Hides, Julie A; Oostenbroek, Tim; Franettovich Smith, Melinda M; Mendis, M Dilani

    2016-12-01

    Low back pain (LBP) is a common problem in football (soccer) players. The effect of LBP on the trunk and hip muscles in this group is unknown. The relationship between LBP and trunk muscle size and function in football players across the preseason was examined. A secondary aim was to assess hip muscle strength. Twenty-five elite soccer players participated in the study, with assessments conducted on 23 players at both the start and end of the preseason. LBP was assessed with questionnaires and ultrasound imaging was used to assess size and function of trunk muscles at the start and end of preseason. Dynamometry was used to assess hip muscle strength at the start of the preseason. At the start of the preseason, 28% of players reported the presence of LBP and this was associated with reduced size of the multifidus, increased contraction of the transversus abdominis and multifidus muscles. LBP decreased across the preseason, and size of the multifidus muscle improved over the preseason. Ability to contract the abdominal and multifidus muscles did not alter across the preseason. Asymmetry in hip adductor and abductor muscle strength was found between players with and without LBP. Identifying modifiable factors in players with LBP may allow development of more targeted preseason rehabilitation programmes.

  5. Role of grain size and particle velocity distribution in secondary electron emission in space plasmas

    NASA Technical Reports Server (NTRS)

    Chow, V. W.; Mendis, D. A.; Rosenberg, M.

    1993-01-01

    By virtue of being generally immersed in a plasma environment, cosmic dust is necessarily electrically charged. The fact that secondary emission plays an important role in determining the equilibrium grain potential has long been recognized, but the fact that the grain size plays a crucial role in this equilibrium potential, when secondary emission is important, has not been widely appreciated. Using both conducting and insulating spherical grains of various sizes and also both Maxwellian and generalized Lorentzian plasmas (which are believed to represent certain space plasmas), we have made a detailed study of this problem. In general, we find that the secondary emission yield delta increases with decreasing size and becomes very large for grains whose dimensions are comparable to the primary electron penetration depth, such as in the case of the very small grains observed at comet Halley and inferred in the interstellar medium. Moreover, we observed that delta is larger for insulators and equilibrium potentials are generally more positive when the plasma has a broad non-Maxwellian tail. Interestingly, we find that for thermal energies that are expected in several cosmic regions, grains of different sizes can have opposite charge, the smaller ones being positive while the larger ones are negative. This may have important consequences for grain accretion in polydisperse dusty space plasmas.

  6. Interaction of rate- and size-effect using a dislocation density based strain gradient viscoplasticity model

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung N.; Siegmund, Thomas; Tomar, Vikas; Kruzic, Jamie J.

    2017-12-01

    Size effects occur in non-uniform plastically deformed metals confined in a volume on the scale of micrometer or sub-micrometer. Such problems have been well studied using strain gradient rate-independent plasticity theories. Yet, plasticity theories describing the time-dependent behavior of metals in the presence of size effects are presently limited, and there is no consensus about how the size effects vary with strain rates or whether there is an interaction between them. This paper introduces a constitutive model which enables the analysis of complex load scenarios, including loading rate sensitivity, creep, relaxation and interactions thereof under the consideration of plastic strain gradient effects. A strain gradient viscoplasticity constitutive model based on the Kocks-Mecking theory of dislocation evolution, namely the strain gradient Kocks-Mecking (SG-KM) model, is established and allows one to capture both rate and size effects, and their interaction. A formulation of the model in the finite element analysis framework is derived. Numerical examples are presented. In a special virtual creep test with the presence of plastic strain gradients, creep rates are found to diminish with the specimen size, and are also found to depend on the loading rate in an initial ramp loading step. Stress relaxation in a solid medium containing cylindrical microvoids is predicted to increase with decreasing void radius and strain rate in a prior ramp loading step.

  7. Child behavioural problems and body size among 2-6 year old children predisposed to overweight. results from the "healthy start" study.

    PubMed

    Olsen, Nanna J; Pedersen, Jeanett; Händel, Mina N; Stougaard, Maria; Mortensen, Erik L; Heitmann, Berit L

    2013-01-01

    Psychological adversities among young children may be associated with childhood overweight and obesity. We examined if an increased level of child behavioural problems was associated with body size among a selected group of 2-6 year old children, who were all predisposed to develop overweight. Cross-sectional analyses were conducted using baseline data from the "Healthy Start" intervention study. A total of 3058 children were invited to participate, and data from 583 children who were all predisposed for obesity were analyzed. The Danish version of the Strengths and Difficulties Questionnaire (SDQ) was used to assess child stress by the SDQ Total Difficulties (SDQ-TD) score and the Prosocial Behavior (PSB) score. Height and weight were measured, and BMI z-scores were calculated. A direct, but non-significant linear trend was found between SDQ-TD score and BMI z-score (β = 0.021, p = 0.11). Having an SDQ-TD score above the 90th percentile was associated with BMI z-score (β = 0.36, p = 0.05). PSB score was not associated with BMI z-score. Analyses were adjusted for parental socioeconomic status, parental BMI, family structure, dietary factors, physical activity, and family stress level. The results suggested a threshold effect between SDQ-TD score and BMI z-score, where BMI z-score was associated with childhood behavioural problems only for those with the highest scores of SDQ-TD. No significant association between PSB score and BMI z-score was found.

  8. Infrared problem in quantum acoustodynamics

    NASA Astrophysics Data System (ADS)

    Clougherty, Dennis P.; Sengupta, Sanghita

    2017-05-01

    Quantum electrodynamics (QED) provides a highly accurate description of phenomena involving the interaction of atoms with light. We argue that the quantum theory describing the interaction of cold atoms with a vibrating membrane—quantum acoustodynamics (QAD)—shares many issues and features with QED. Specifically, the adsorption of an atom on a vibrating membrane can be viewed as the counterpart to QED radiative electron capture. A calculation of the adsorption rate to lowest order in the atom-phonon coupling is finite; however, higher-order contributions suffer from an infrared problem mimicking the case of radiative capture in QED. Terms in the perturbation series for the adsorption rate diverge as a result of massless particles in the model (flexural phonons of the membrane in QAD and photons in QED). We treat this infrared problem in QAD explicitly to obtain finite results by regularizing with a low-frequency cutoff that corresponds to the inverse size of the membrane. Using a coherent-state basis for the soft-phonon final state, we then sum the dominant contributions to derive a new formula for the multiphonon adsorption rate of atoms on the membrane that gives results that are finite, nonperturbative in the atom-phonon coupling, and consistent with the Kinoshita-Lee-Nauenberg theorem. For micromembranes, we predict a reduction with increasing membrane size for the low-energy adsorption rate. We discuss the relevance of this to the adsorption of a cold gas of atomic hydrogen on suspended graphene.

  9. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.

  10. Is the permeability of naturally fractured rocks scale dependent?

    NASA Astrophysics Data System (ADS)

    Azizmohammadi, Siroos; Matthäi, Stephan K.

    2017-09-01

    The equivalent permeability, k_eq, of stratified fractured porous rocks and its anisotropy are important for hydrocarbon reservoir engineering, groundwater hydrology, and subsurface contaminant transport. However, it is difficult to constrain this tensor property as it is strongly influenced by infrequent large fractures. Boreholes miss them and their directional sampling bias affects the collected geostatistical data. Samples taken at any scale smaller than that of interest truncate distributions and this bias leads to an incorrect characterization and property upscaling. To better understand this sampling problem, we have investigated a collection of outcrop-data-based Discrete Fracture and Matrix (DFM) models with mechanically constrained fracture aperture distributions, trying to establish a useful Representative Elementary Volume (REV). Finite-element analysis and flow-based upscaling have been used to determine k_eq eigenvalues and anisotropy. While our results indicate a convergence toward a scale-invariant k_eq REV with increasing sample size, k_eq magnitude can have multi-modal distributions. REV size relates to the length of dilated fracture segments as opposed to overall fracture length. Tensor orientation and degree of anisotropy also converge with sample size. However, the REV for k_eq anisotropy is larger than that for k_eq magnitude. Across scales, tensor orientation varies spatially, reflecting inhomogeneity of the fracture patterns. Inhomogeneity is particularly pronounced where the ambient stress selectively activates late- as opposed to early (through-going) fractures. While we cannot detect any increase of k_eq with sample size as postulated in some earlier studies, our results highlight a strong k_eq anisotropy that influences scale dependence.

  11. Effect of body image on pregnancy weight gain.

    PubMed

    Mehta, Ushma J; Siega-Riz, Anna Maria; Herring, Amy H

    2011-04-01

    The majority of women gain more weight during pregnancy than what is recommended. Since gestational weight gain is related to short and long-term maternal health outcomes, it is important to identify women at greater risk of not adhering to guidelines. The objective of this study was to examine the relationship between body image and gestational weight gain. The Body Image Assessment for Obesity tool was used to measure ideal and current body sizes in 1,192 women participating in the Pregnancy, Infection and Nutrition Study. Descriptive and multivariable techniques were used to assess the effects of ideal body size and discrepancy score (current-ideal body sizes), which reflected the level of body dissatisfaction, on gestational weight gain. Women who preferred to be thinner had increased risk of excessive gain if they started the pregnancy at a BMI ≤ 26 kg/m² but a decreased risk if they were overweight or obese. Comparing those who preferred thin body silhouettes to those who preferred average size silhouettes, low income women had increased risk of inadequate weight gain [RR = 1.76 (1.08, 2.88)] while those with lower education were at risk of excessive gain [RR = 1.11 (1.00, 1.22)]. Our results revealed that body image was associated with gestational weight gain but the relationship is complex. Identifying factors that affect whether certain women are at greater risk of gaining outside of guidelines may improve our ability to decrease pregnancy-related health problems.

  12. Online versus offline: The Web as a medium for response time data collection.

    PubMed

    Chetverikov, Andrey; Upravitelev, Philipp

    2016-09-01

    The Internet provides a convenient environment for data collection in psychology. Modern Web programming languages, such as JavaScript or Flash (ActionScript), facilitate complex experiments without the necessity of experimenter presence. Yet there is always a question of how much noise is added due to the differences between the setups used by participants and whether it is compensated for by increased ecological validity and larger sample sizes. This is especially a problem for experiments that measure response times (RTs), because they are more sensitive (and hence more susceptible to noise) than, for example, choices per se. We used a simple visual search task with different set sizes to compare laboratory performance with Web performance. The results suggest that although the locations (means) of RT distributions are different, other distribution parameters are not. Furthermore, the effect of experiment setting does not depend on set size, suggesting that task difficulty is not important in the choice of a data collection method. We also collected an additional online sample to investigate the effects of hardware and software diversity on the accuracy of RT data. We found that the high diversity of browsers, operating systems, and CPU performance may have a detrimental effect, though it can partly be compensated for by increased sample sizes and trial numbers. In sum, the findings show that Web-based experiments are an acceptable source of RT data, comparable to a common keyboard-based setup in the laboratory.

  13. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    PubMed

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to the level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
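
    A compact sketch of the hybrid strategy the comparison points to, using a deliberately simple one-gene mRNA model dm/dt = k - d*m with synthetic data (the model, data and optimizer settings are illustrative assumptions, not the study's benchmark circuits): a population-based global search is followed by a fast local refinement.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import differential_evolution, minimize

      t_obs = np.linspace(0, 10, 25)
      true_k, true_d = 2.0, 0.5
      m_obs = true_k / true_d * (1 - np.exp(-true_d * t_obs))        # analytic solution
      m_obs += np.random.default_rng(0).normal(0, 0.1, t_obs.size)   # measurement noise

      def sse(params):
          k, d = params
          sol = solve_ivp(lambda t, m: k - d * m, (0, 10), [0.0], t_eval=t_obs)
          return np.sum((sol.y[0] - m_obs) ** 2)

      bounds = [(0.01, 10.0), (0.01, 5.0)]
      coarse = differential_evolution(sse, bounds, maxiter=30, seed=1)   # global stage
      refined = minimize(sse, coarse.x, method="Nelder-Mead")            # local refinement
      print(coarse.x, refined.x)      # both should be near (2.0, 0.5)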

  14. The relationship between socioeconomic development and malnutrition in children younger than 5 years in China during the period 1990 to 2010.

    PubMed

    Wu, Lifang; Yang, Zhenyu; Yin, Shi-an; Zhu, Mei; Gao, Huiyu

    2015-01-01

    BACKGROUND AND OBJECTIVES: More than 30 years of socioeconomic development in China have improved living conditions, which contributed to a steep decline in the prevalence of malnutrition among children under 5 years. The objectives were to elucidate the role of socioeconomic development in improving nutritional status and to identify appropriate policy priorities for nutrition interventions for young children. We collected data on socioeconomic development, education, cultural and recreational services, food consumption, average family size and malnutrition prevalence from national surveys. From 1990 to 2010, Gross Domestic Product (GDP) per capita increased from 1644 Chinese Yuan (CNY) to 30,015 CNY; average disposable income and food expenditure per capita significantly increased in urban and rural areas; per capita consumption for education increased from 112 CNY to 1628 CNY and from 15.3 CNY to 367 CNY for other cultural services; the illiteracy rate decreased from 15.9% to 4.1%; average family size decreased from 3.97 to 3.10; and the prevalence of stunting and underweight decreased from 33.1% to 9.9% and from 13.7% to 3.6%, respectively. However, anaemia prevalence did not obviously decline between 1992 and 2000. After adjusting for the confounding effects of variables, negative relationships were observed between GDP per capita, average family size and stunting or underweight prevalence. However, no association was observed between the illiteracy rate and the prevalence of stunting and underweight, and there was no correlation between GDP per capita, illiteracy rate or average family size and anaemia prevalence. Our results indicate that economic development cannot solve all nutritional problems and that comprehensive national development strategies should be considered to combat malnutrition.

  15. Microsatellite DNA Suggests that Group Size Affects Sex-biased Dispersal Patterns in Red Colobus Monkeys

    PubMed Central

    Miyamoto, Michael M.; Allen, Julie M.; Gogarten, Jan F.; Chapman, Colin A.

    2013-01-01

    Dispersal is a major life history trait of social organisms influencing the behavioral and genetic structure of their groups. Unfortunately, primate dispersal is difficult to quantify, because of the rarity of these events and our inability to ascertain if individuals dispersed or died when they disappear. Socioecological models have been partially developed to understand the ecological causes of different dispersal systems and their social consequences. However, these models have yielded confusing results when applied to folivores. The folivorous red colobus monkey (Procolobus rufomitratus) in Kibale National Park, Uganda is thought to exhibit female-biased dispersal, although both sexes have been observed to disperse and there remains considerable debate over the selective pressures favoring the transfers of males and females and the causes of variation in the proportion of each sex to leave the natal group. We circumvent this problem by using microsatellite DNA data to investigate the prediction that female dispersal will be more frequent in larger groups as compared to smaller ones. The rationale for this prediction is that red colobus exhibit increased within-group competition in bigger groups, which should favor higher female dispersal rates and ultimately lower female relatedness. Genetic data from two unequally sized neighboring groups of red colobus demonstrate increased female relatedness within the smaller group, suggesting females are less likely to disperse when there is less within-group competition. We suggest that the dispersal system is mediated to some degree by scramble competition and group size. Since red colobus group sizes have increased throughout Kibale by over 50% in the last decade, these changes may have major implications for the genetic structure and ultimately the population viability of this endangered primate. PMID:23307485

  16. Free lipid and computerized determination of adipocyte size.

    PubMed

    Svensson, Henrik; Olausson, Daniel; Holmäng, Agneta; Jennische, Eva; Edén, Staffan; Lönn, Malin

    2018-06-21

    The size distribution of adipocytes in a suspension, after collagenase digestion of adipose tissue, can be determined by computerized image analysis. Free lipid, forming droplets in such suspensions, introduces a bias since droplets present in the images may be identified as adipocytes. This problem is not always adjusted for, and some reports state that distinguishing droplets from cells is a considerable problem. In addition, if the droplets originate mainly from rupture of large adipocytes, as often described, this will also bias the size analysis. We confirm here that our ordinary manual means of distinguishing droplets and adipocytes in the images ensure correct and rapid identification before exclusion of the droplets. Further, in our suspensions, prepared with a focus on gentle handling of tissue and cells, we find no association between the amount of free lipid and mean adipocyte size or the proportion of large adipocytes.

  17. Fleet Sizing of Automated Material Handling Using Simulation Approach

    NASA Astrophysics Data System (ADS)

    Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny

    2018-03-01

    Automated material handling tends to be chosen over human power for material handling on the production floor of manufacturing companies. One critical issue in implementing automated material handling is the design phase, which must ensure that material handling is efficient in terms of cost. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production and to reach an optimal situation, defined here as minimum flow time and maximum capacity on the production floor. A simulation approach is used because the flow shop can be modelled as a queuing network and the inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore the solution of a multi-objective fleet sizing problem in flow shop production using a simulation approach with ARENA software.
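
    A minimal sketch of the underlying idea, written in plain Python rather than the ARENA model used in the study: estimate the mean flow time for candidate fleet sizes by simulating jobs that each require one transport by an automated vehicle. The arrival and transport-time distributions are non-exponential assumptions chosen for illustration.

```python
# Hedged sketch: evaluate mean flow time as a function of fleet size.
import heapq
import random

def mean_flow_time(fleet_size, n_jobs=5000, seed=42):
    rng = random.Random(seed)
    vehicle_free_at = [0.0] * fleet_size        # next time each vehicle is available
    heapq.heapify(vehicle_free_at)
    t_arrival, total_flow = 0.0, 0.0
    for _ in range(n_jobs):
        t_arrival += rng.uniform(2.0, 6.0)      # non-exponential inter-arrival time
        transport = rng.triangular(3.0, 9.0, 5.0)   # transport (service) time
        free_at = heapq.heappop(vehicle_free_at)
        start = max(t_arrival, free_at)         # wait if all vehicles are busy
        finish = start + transport
        heapq.heappush(vehicle_free_at, finish)
        total_flow += finish - t_arrival        # flow time = waiting + transport
    return total_flow / n_jobs

for size in range(1, 6):
    print(f"fleet size {size}: mean flow time {mean_flow_time(size):.2f}")
```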

  18. How self-reported hot flashes may relate to affect, cognitive performance and sleep.

    PubMed

    Regestein, Quentin; Friebely, Joan; Schiff, Isaac

    2015-08-01

    To explain the controversy about whether midlife women who self-report hot flashes have relatively increased affective symptoms, poor cognitive performance or worse sleep. Retrospective data from 88 women seeking relief from bothersome day and night hot flashes were submitted to mixed linear regression modeling to find if estimated hot flashes, as measured by Women's Health Questionnaire (WHQ) items, or diary-documented hot flashes recorded daily, were associated with each other, or with affective, cognitive or sleep measures. Subjects averaged 6.3 daytime diary-documented hot flashes and 2.4 nighttime diary-documented hot flashes per 24 h. Confounder-controlled diary-documented hot flashes but not estimated hot flashes were associated with increased Leeds anxiety scores (F=4.9; t=2.8; p=0.01) and Leeds depression scores (3.4; 2.5; 0.02), decreased Stroop Color-Word test performance (9.4; 3.5; 0.001), increased subjective sleep disturbance (effect size=0.83) and increased objective sleep disturbance (effect size=0.35). Hot flash effects were small to moderate in size. Univariate but not multivariate analyses revealed that all hot flash measures were associated with all affect measures. Different measures of hot flashes were associated differently with affect, cognition and sleep. Only nighttime diary-documented hot flashes consistently correlated with any affect measures in multivariate analyses. The use of differing measures for hot flashes, affect, cognition and sleep may account for the continually reported inconsistencies in menopause study outcomes. This problem impedes forging a consensus on whether hot flashes correlate with neuropsychological symptoms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Linear relationship between increasing amounts of extruded linseed in dairy cow diet and milk fatty acid composition and butter properties.

    PubMed

    Hurtaud, C; Faucon, F; Couvreur, S; Peyraud, J-L

    2010-04-01

    The aim of this experiment was to compare the effects of increasing amounts of extruded linseed in dairy cow diet on milk fat yield, milk fatty acid (FA) composition, milk fat globule size, and butter properties. Thirty-six Prim'Holstein cows at 104 d in milk were sorted into 3 groups by milk production and milk fat globule size. Three diets were assigned: a total mixed ration (control) consisting of corn silage (70%) and concentrate (30%), or a supplemented ration based on the control ration but where part of the concentrate energy was replaced on a dry matter basis by 2.1% (LIN1) or 4.3% (LIN2) extruded linseed. The increased amounts of extruded linseed linearly decreased milk fat content and milk fat globule size and linearly increased the percentage of milk unsaturated FA, specifically alpha-linolenic acid and trans FA. Extruded linseed had no significant effect on butter color or on the sensory properties of butters, with only butter texture in the mouth improved. The LIN2 treatment induced a net improvement of milk nutritional properties but also created problems with transforming the cream into butter. The butters obtained were highly spreadable and melt-in-the-mouth, with no pronounced deficiency in taste. The LIN1 treatment appeared to offer a good tradeoff of improved milk FA profile and little effect on butter-making while still offering butters with improved functional properties. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  20. Effects of heel base size, walking speed, and slope angle on center of pressure trajectory and plantar pressure when wearing high-heeled shoes.

    PubMed

    Luximon, Yan; Cong, Yan; Luximon, Ameersing; Zhang, Ming

    2015-06-01

    High-heeled shoes are associated with instability and a high risk of fall, fracture, and ankle sprain. This study investigated the effects of heel base size (HBS) on walking stability under different walking speeds and slope angles. The trajectory of the center of pressure (COP), maximal peak pressure, pressure time integral, contact area, and perceived stability were analyzed. The results revealed that a small HBS increased the COP deviations, shifting the COP more medially at the beginning of the gait cycle. The slope angle mainly affected the COP in the anteroposterior direction. An increased slope angle shifted the COP posterior and caused greater pressure and a larger contact area in the midfoot and rearfoot regions, which can provide more support. Subjective measures on perceived stability were consistent with objective measures. The results suggested that high-heeled shoes with a small HBS did not provide stable plantar support, particularly on a small slope angle. The changes in the COP and pressure pattern caused by a small HBS might increase joint torque and muscle activity and induce lower limb problems. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Finite element mesh refinement criteria for stress analysis

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1990-01-01

    This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.
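
    An illustrative sketch, not the paper's procedure: an h-refinement loop for a 1-D model problem in which the elements carrying the most strain energy are subdivided. The model problem, the energy-based indicator, and the refinement fraction are all assumptions made for the example.

```python
# Hedged sketch: strain-energy-driven h-refinement on a 1-D Poisson problem.
import numpy as np

def solve_poisson_1d(nodes, f=1.0):
    """Linear finite elements for -u'' = f on [0, 1] with u(0) = u(1) = 0."""
    n = len(nodes)
    K = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        b[e:e + 2] += f * h / 2.0
    for i in (0, n - 1):                       # Dirichlet boundary conditions
        K[i, :] = 0.0
        K[i, i] = 1.0
        b[i] = 0.0
    return np.linalg.solve(K, b)

nodes = np.linspace(0.0, 1.0, 5)
for sweep in range(4):
    u = solve_poisson_1d(nodes)
    h = np.diff(nodes)
    energy = 0.5 * np.diff(u) ** 2 / h         # strain energy carried by each element
    worst = np.argsort(energy)[-max(1, len(energy) // 3):]
    midpoints = 0.5 * (nodes[worst] + nodes[worst + 1])
    nodes = np.sort(np.concatenate([nodes, midpoints]))   # h-refinement: split elements
print(f"{len(nodes)} nodes after {sweep + 1} refinement sweeps")
```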

  2. Arithmetic Problems at School: When There Is an Apparent Contradiction between the Situation Model and the Problem Model

    ERIC Educational Resources Information Center

    Coquin-Viennot, Daniele; Moreau, Stephanie

    2007-01-01

    Background: Understanding and solving problems involves different levels of representation. On the one hand, there are logico-mathematical representations, or problem models (PMs), which contain information such as "the size of the flock changed from 31 sheep to 42" while, on the other hand, there are more qualitative representations, or…

  3. Computer Power. Part 2: Electrical Power Problems and Their Amelioration.

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1989-01-01

    Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…

  4. An Interactive Multiobjective Programming Approach to Combinatorial Data Analysis.

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2001-01-01

    Describes an interactive procedure for multiobjective asymmetric unidimensional seriation problems that uses a dynamic-programming algorithm to generate partially the efficient set of sequences for small to medium-sized problems and a multioperational heuristic to estimate the efficient set for larger problems. Applies the procedure to an…

  5. High-resolution, submicron particle size distribution analysis using gravitational-sweep sedimentation.

    PubMed Central

    Mächtle, W

    1999-01-01

    Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of its utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
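
    A small worked example of the Stokes-law relation such sedimentation methods rest on, assuming a particle sedimenting outward in a centrifugal field; the material constants and geometry below are illustrative assumptions, not the paper's calibration.

```python
# Hedged sketch: Stokes-law particle diameter from sedimentation time in a centrifuge.
import math

def stokes_diameter(t, r_meniscus, r_detect, rpm, rho_p, rho_f, eta):
    """Diameter (m) of a sphere that sediments from r_meniscus to r_detect in time t (s)."""
    omega = 2.0 * math.pi * rpm / 60.0                     # rotor speed, rad/s
    s = math.log(r_detect / r_meniscus) / (omega**2 * t)   # sedimentation coefficient
    return math.sqrt(18.0 * eta * s / (rho_p - rho_f))

# Example: polymer latex (1050 kg/m^3) in water, meniscus at 6.0 cm, detector at 7.2 cm
d = stokes_diameter(t=600.0, r_meniscus=0.060, r_detect=0.072,
                    rpm=20000, rho_p=1050.0, rho_f=998.0, eta=1.0e-3)
print(f"estimated particle diameter: {d * 1e9:.0f} nm")
```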

  6. Interpreting Hydraulic Conditions from Morphology, Sedimentology, and Grain Size of Sand Bars in the Colorado River in Grand Canyon

    NASA Astrophysics Data System (ADS)

    Rubin, D. M.; Topping, D. J.; Schmidt, J. C.; Grams, P. E.; Buscombe, D.; East, A. E.; Wright, S. A.

    2015-12-01

    During three decades of research on sand bars and sediment transport in the Colorado River in Grand Canyon, we have collected unprecedented quantities of data on bar morphology, sedimentary structures, grain size of sand on the riverbed (~40,000 measurements), grain size of sand in flood deposits (dozens of vertical grain-size profiles), and time series of suspended sediment concentration and grain size (more than 3 million measurements using acoustic and laser-diffraction instruments sampling every 15 minutes at several locations). These data, which include measurements of flow and suspended sediment as well as sediment within the deposits, show that grain size within flood deposits generally coarsens or fines proportionally to the grain size of sediment that was in suspension when the beds were deposited. The inverse problem of calculating changing flow conditions from a vertical profile of grain size within a deposit is difficult because at least two processes can cause similar changes. For example, upward coarsening in a deposit can result from either an increase in discharge of the flow (causing coarser sand to be transported to the depositional site), or from winnowing of the upstream supply of sand (causing suspended sand to coarsen because a greater proportion of the bed that is supplying sediment is covered with coarse grains). These two processes can be easy to distinguish where suspended-sediment observations are available: flow-regulated changes cause concentration and grain size of sand in suspension to be positively correlated, whereas changes in supply can cause concentration and grain size of sand in suspension to be negatively correlated. The latter case (supply regulation) is more typical of flood deposits in Grand Canyon.

  7. Torque during canal instrumentation using rotary nickel-titanium files.

    PubMed

    Sattapan, B; Palamara, J E; Messer, H H

    2000-03-01

    Nickel-titanium engine-driven rotary instruments are used increasingly in endodontic practice. One frequently mentioned problem is fracture of an instrument in the root canal. Very few studies have been conducted on torsional characteristics of these instruments, and none has been done under dynamic conditions. The purposes of this study were to measure the torque generated and the apical force applied during instrumentation with a commercial engine-driven nickel-titanium file system, and to relate torque generated during simulated clinical use to torsional failure of the instruments. Ten extracted human teeth (five with small-sized and five with medium-sized straight root canals) were instrumented with Quantec Series 2000 files, and the torque and apical force generated were measured. The applied apical force was generally low, not exceeding 150 g in either small or medium canals. The torque depended on the tip size and taper of each instrument, and on canal size. Instruments with 0.05 and 0.06 taper generated the highest torque, which was greater in small than in medium canals. The torque at failure was significantly (p < 0.001) higher than torque during instrumentation, but with considerable variation in the extent of the difference.

  8. Design of crossed-mirror array to form floating 3D LED signs

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirotsugu; Bando, Hiroki; Kujime, Ryousuke; Suyama, Shiro

    2012-03-01

    3D representation of digital signage improves its salience and the rapid notification of important points. Our goal is to realize floating 3D LED signs. The problem is that no adequate device exists to form floating 3D images from LEDs. LED lamp size is around 1 cm including wiring and substrates; such a large pitch increases display size and sometimes spoils image quality. The purpose of this paper is to develop an optical device to meet the three requirements and to demonstrate floating 3D arrays of LEDs. We analytically investigate image formation by a crossed-mirror structure with aerial apertures, called a CMA (crossed-mirror array). The CMA contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge onto the corresponding image point. We have fabricated a CMA for a 3D array of LEDs. One CMA unit contains 20 x 20 apertures that are located diagonally. A floating image of the LEDs was formed over a wide range of incident angles. The image size of the focused beam agreed with the apparent aperture size. When LEDs were located three-dimensionally (LEDs at three depths), the focused distances were the same as the distance between the real LED and the CMA.

  9. Synchronization in scale-free networks: The role of finite-size effects

    NASA Astrophysics Data System (ADS)

    Torres, D.; Di Muro, M. A.; La Rocca, C. E.; Braunstein, L. A.

    2015-06-01

    Synchronization problems in complex networks are very often studied by researchers due to their many applications to various fields such as neurobiology, e-commerce and completion of tasks. In particular, scale-free networks with degree distribution P(k) ∼ k^(-λ) are widely used in research since they are ubiquitous in Nature and other real systems. In this paper we focus on the surface relaxation growth model in scale-free networks with 2.5 < λ < 3, and study the scaling behavior of the fluctuations, in the steady state, with the system size N. We find a novel behavior of the fluctuations characterized by a crossover between two regimes at a value of N = N* that depends on λ: a logarithmic regime, found in previous research, and a constant regime. We propose a function that describes this crossover, which is in very good agreement with the simulations. We also find that, for a system size above N*, the fluctuations decrease with λ, which means that the synchronization of the system improves as λ increases. We explain this crossover by analyzing the role of the network's heterogeneity produced by the system size N and the exponent of the degree distribution.

  10. Effectiveness and feasibility of Socratic feedback to increase awareness of deficits in patients with acquired brain injury: Four single-case experimental design (SCED) studies.

    PubMed

    Schrijnemaekers, Anne-Claire M C; Winkens, Ieke; Rasquin, Sascha M C; Verhaeg, Annette; Ponds, Rudolf W H M; van Heugten, Caroline M

    2018-06-29

    To investigate the effectiveness and feasibility of a Socratic feedback programme to improve awareness of deficits in patients with acquired brain injury (ABI). Rehabilitation centre. Four patients with ABI with awareness problems. A series of single-case experimental design studies with random intervention starting points (A-B + maintenance design). Rate of trainer-feedback and self-control behaviour on everyday tasks, patient competency rating scale (PCRS), self-regulating skills interview (SRSI), hospital anxiety and depression scale. All patients needed less trainer feedback, the change was significant in 3 out of 4. One patient increased in overt self-corrective behaviour. SRSI performance increased in all patients (medium to strong effect size), and PCRS performance increased in two patients (medium and strong effect size). Mood and anxiety levels were elevated in one patient at the beginning of the training and decreased to normal levels at the end of the training. The feasibility of the programme was scored 9 out of 10. The Socratic feedback method is a promising intervention for improving awareness of deficits in patients with ABI. Controlled studies with larger populations are needed to draw more solid conclusions about the effect of this method.

  11. Electric Machine with Boosted Inductance to Stabilize Current Control

    NASA Technical Reports Server (NTRS)

    Abel, Steve

    2013-01-01

    High-powered motors typically have very low resistance and inductance (R and L) in their windings. This makes the pulse-width modulated (PWM) control of the current very difficult, especially when the bus voltage (V) is high. These R and L values are dictated by the motor size, torque (Kt), and back-emf (Kb) constants. These constants are in turn set by the voltage and the actuation torque-speed requirements. This problem is often addressed by placing inductive chokes within the controller. This approach is undesirable in that space is taken and heat is added to the controller. By keeping the same motor frame, reducing the wire size, and placing a correspondingly larger number of turns in each slot, the resistance, inductance, torque constant, and back-emf constant are all increased. The increased inductance aids the current control but ruins the Kt and Kb selections. If, however, a fraction of the turns is moved from their "correct slot" to an "incorrect slot," the increased R and L values are retained, but the Kt and Kb values are restored to the desired values. This approach assumes that increased resistance is acceptable to a degree. In effect, the heat allocated to the added inductance has been moved from the controller to the motor body, which in some cases is preferred.

  12. Superfocusing of mutimode semiconductor lasers and light-emitting diodes

    NASA Astrophysics Data System (ADS)

    Sokolovskii, G. S.; Dudelev, V. V.; Losev, S. N.; Deryagin, A. G.; Kuchinskii, V. I.; Sibbett, W.; Rafailov, E. U.

    2012-05-01

    The problem of focusing multimode radiation of high-power semiconductor lasers and light-emitting diodes (LEDs) has been studied. In these sources, the low spatial quality of the output beam determines the theoretical limit of the focal spot size (one to two orders of magnitude exceeding the diffraction limit), thus restricting the possibility of increasing power density and creating the optical field gradients that are necessary in many practical applications. In order to overcome this limitation, we have developed a method of superfocusing of multimode radiation with the aid of interference. It is shown that, using this method, the focal spot size of high-power semiconductor lasers and LEDs can be reduced to a level unachievable by means of traditional focusing. An approach to exceeding the theoretical limit of power density for focusing radiation with a high propagation parameter M² is proposed.

  13. Review of biased solar array - Plasma interaction studies

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1981-01-01

    Possible high voltage surface interactions on the Solar Electric Propulsion System (SEPS) are examined, with particular regard for potential effects on SEPS performance. The SEPS is intended for use for geosynchronous and planetary missions, and derives power from deployed solar cell arrays which are susceptible to collecting ions and electrons from the charged and thermal particle environment of space. The charge exchange plasma which provides the thrust force can also enhance the natural charged particle environment and increase interactions between the thrust system and the biased solar array surface. Tests of small arrays have shown that snapover, where current collection becomes proportional to the panel area, can be avoided by larger cell sizes. Arcing is predicted to diminish with larger array sizes, while the problems of efflux environments are noted to be as yet undefined and require further study.

  14. Electron-Phonon Systems on a Universal Quantum Computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macridin, Alexandru; Spentzouris, Panagiotis; Amundson, James

    We present an algorithm that extends existing quantum algorithms for simulating fermion systems in quantum chemistry and condensed matter physics to include phonons. The phonon degrees of freedom are represented with exponential accuracy on a truncated Hilbert space with a size that increases linearly with the cutoff of the maximum phonon number. The additional number of qubits required by the presence of phonons scales linearly with the size of the system. The additional circuit depth is constant for systems with finite-range electron-phonon and phonon-phonon interactions and linear for long-range electron-phonon interactions. Our algorithm for a Holstein polaron problem was implemented on an Atos Quantum Learning Machine (QLM) quantum simulator employing the Quantum Phase Estimation method. The energy and the phonon number distribution of the polaron state agree with exact diagonalization results for weak, intermediate and strong electron-phonon coupling regimes.

  15. Single-Molecule Detection in Micron-Sized Capillaries

    NASA Astrophysics Data System (ADS)

    Ball, David A.; Shen, Guoqing; Davis, Lloyd M.

    2004-11-01

    The detection of individual molecules in solution by laser-induced fluorescence is becoming an increasingly important tool for biophysics research and biotechnology applications. In a typical single-molecule detection (SMD) experiment, diffusion is the dominant mode of transport of fluorophores through the focused laser beam. In order to more rapidly process a large number of slowly diffusing bio-molecules for applications in pharmaceutical drug discovery, a flow can be introduced within a capillary. If the flow speed is sufficient, bio-molecules will be carried through the probe volume significantly faster than by diffusion alone. Here we discuss SMD near the tip of, and in, such micron-sized capillaries, with a high numerical-aperture microscope objective used for confocal-epi-illumination along the axis of the capillary. Problems such as molecular adsorption to the glass are also addressed.

  16. Radiation effects in advanced microelectronics technologies

    NASA Astrophysics Data System (ADS)

    Johnston, A. H.

    1998-06-01

    The pace of device scaling has increased rapidly in recent years. Experimental CMOS devices have been produced with feature sizes below 0.1 μm, demonstrating that devices with feature sizes between 0.1 and 0.25 μm will likely be available in mainstream technologies after the year 2000. This paper discusses how the anticipated changes in device dimensions and design are likely to affect their radiation response in space environments. Traditional problems, such as total dose effects, SEU and latchup are discussed, along with new phenomena. The latter include hard errors from heavy ions (microdose and gate-rupture errors), and complex failure modes related to advanced circuit architecture. The main focus of the paper is on commercial devices, which are displacing hardened device technologies in many space applications. However, the impact of device scaling on hardened devices is also discussed.

  17. The Economic Demography of Mass Poverty.

    ERIC Educational Resources Information Center

    Abegaz, Berhanu, Ed.

    1986-01-01

    The four papers in this volume discuss various facets of the poverty-demography interaction: the rationale for the desired family size of the poor, the problems of attaining such size, the effect of family size/structure on household economy, and the future well-being of the children of the poor. "Mass Poverty, Demography, and Development…

  18. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generation of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.

  19. NAS Parallel Benchmarks. 2.4

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.

  20. Pyramid algorithms as models of human cognition

    NASA Astrophysics Data System (ADS)

    Pizlo, Zygmunt; Li, Zheng

    2003-06-01

    There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of those mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.

  1. Ethics and Animal Numbers: Informal Analyses, Uncertain Sample Sizes, Inefficient Replications, and Type I Errors

    PubMed Central

    2011-01-01

    To obtain approval for the use of vertebrate animals in research, an investigator must assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal. How does an investigator make that assurance? A power analysis is most accurate when the outcome is known before the study, which it rarely is. A ‘pilot study’ is appropriate only when the number of animals used is a tiny fraction of the numbers that will be invested in the main study because the data for the pilot animals cannot legitimately be used again in the main study without increasing the rate of type I errors (false discovery). Traditional significance testing requires the investigator to determine the final sample size before any data are collected and then to delay analysis of any of the data until all of the data are final. An investigator often learns at that point either that the sample size was larger than necessary or too small to achieve significance. Subjects cannot be added at this point in the study without increasing type I errors. In addition, journal reviewers may require more replications in quantitative studies than are truly necessary. Sequential stopping rules used with traditional significance tests allow incremental accumulation of data on a biomedical research problem so that significance, replicability, and use of a minimal number of animals can be assured without increasing type I errors. PMID:21838970
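
    A small Monte Carlo illustration, under assumptions, of the point being made: naive "test, then add subjects" inflates type I error, whereas a sequential stopping rule with a corrected per-look alpha keeps it near the nominal level. The per-look alpha of 0.0221 (a Pocock-style value for three looks) is an illustrative assumption, not the rule proposed in the article.

```python
# Hedged sketch: false-positive rates under the null for two stopping strategies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
looks = (10, 20, 30)          # per-group sample sizes at each interim look
n_sims = 5000

def false_positive_rate(alpha_per_look):
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(size=looks[-1])   # both groups from the same population (H0 true)
        b = rng.normal(size=looks[-1])
        for n in looks:                  # test at each look, stop early if "significant"
            p = stats.ttest_ind(a[:n], b[:n]).pvalue
            if p < alpha_per_look:
                hits += 1
                break
    return hits / n_sims

print("naive optional stopping (0.05 at every look):", false_positive_rate(0.05))
print("sequential rule with corrected alpha (0.0221):", false_positive_rate(0.0221))
```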

  2. Cross support overview and operations concept for future space missions

    NASA Technical Reports Server (NTRS)

    Stallings, William; Kaufeler, Jean-Francois

    1994-01-01

    Ground networks must respond to the requirements of future missions, which include smaller sizes, tighter budgets, increased numbers, and shorter development schedules. The Consultative Committee for Space Data Systems (CCSDS) is meeting these challenges by developing a general cross support concept, reference model, and service specifications for Space Link Extension services for space missions involving cross support among Space Agencies. This paper identifies and bounds the problem, describes the need to extend Space Link services, gives an overview of the operations concept, and introduces complementary CCSDS work on standardizing Space Link Extension services.

  3. Functional description of the ISIS system

    NASA Technical Reports Server (NTRS)

    Berman, W. J.

    1979-01-01

    Development of software for avionic and aerospace applications (flight software) is influenced by a unique combination of factors which includes: (1) length of the life cycle of each project; (2) necessity for cooperation between the aerospace industry and NASA; (3) the need for flight software that is highly reliable; (4) the increasing complexity and size of flight software; and (5) the high quality of the programmers and the tightening of project budgets. The interactive software invocation system (ISIS) which is described is designed to overcome the problems created by this combination of factors.

  4. Study on the glaze ice accretion of wind turbine with various chord lengths

    NASA Astrophysics Data System (ADS)

    Liang, Jian; Liu, Maolian; Wang, Ruiqi; Wang, Yuhang

    2018-02-01

    Wind turbine icing often occurs in winter; it changes the aerodynamic characteristics of the blades and reduces the working efficiency of the wind turbine. In this paper, a 3-D glaze ice model is established for a horizontal-axis wind turbine. The model covers grid generation, two-phase simulation, and heat and mass transfer. Results show that smaller wind turbines suffer from a more serious icing problem, reflected in a greater ice thickness. Both the collision efficiency and the heat transfer coefficient increase for smaller turbine sizes.

  5. Nitrogen contamination of surficial aquifers - A growing legacy

    USGS Publications Warehouse

    Puckett, Larry J.; Tesoriero, Anthony J.; Dubrovsky, Neil M.

    2011-01-01

    The virtual ubiquity of fertilizer-fed agriculture, increasing over several decades, has become necessary to support the global human population. Ironically, widespread use of nitrogen (N) has contaminated another vital resource: surficial fresh groundwater. Further, as nitrous oxide (N2O) is a potent greenhouse gas, anthropogenic manipulation of N budgets has ramifications that can extend far beyond national borders. To get a handle on the size of the problem, Puckett et al. present an approach to track historical contamination and thus analyze trends now and in the past with implications for the future.

  6. Conic section function neural network circuitry for offline signature recognition.

    PubMed

    Erkmen, Burcu; Kahraman, Nihan; Vural, Revna A; Yildirim, Tulay

    2010-04-01

    In this brief, conic section function neural network (CSFNN) circuitry was designed for offline signature recognition. CSFNN is a unified framework for multilayer perceptron (MLP) and radial basis function (RBF) networks that makes simultaneous use of the advantages of both. The CSFNN circuitry architecture was developed using a mixed-mode circuit implementation. The designed circuit system is problem independent; hence, the general-purpose neural network circuit system can be applied to various pattern recognition problems with different network sizes, provided the network does not exceed a maximum size of 16-16-8. In this brief, the CSFNN circuitry system has been applied to two different signature recognition problems. The CSFNN circuitry was trained with a chip-in-the-loop learning technique in order to compensate for typical analog process variations. The CSFNN hardware achieved computational performance highly comparable to that of the CSFNN software for nonlinear signature recognition problems.

  7. Sibling relationship quality and psychopathology of children and adolescents: a meta-analysis.

    PubMed

    Buist, Kirsten L; Deković, Maja; Prinzie, Peter

    2013-02-01

    In the current meta-analysis, we investigated the link between child and adolescent sibling relationship quality (warmth, conflict and differential treatment) and internalizing and externalizing problems, and potential moderators of these associations. From 34 studies, we obtained 85 effect sizes, based on 12,257 children and adolescents. Results showed that more sibling warmth, less sibling conflict and less differential treatment were all significantly associated with less internalizing and externalizing problems. Effect sizes for sibling conflict were stronger than for sibling warmth and differential treatment, and associations for internalizing and externalizing problems were similar in strength. Effect sizes were moderated by sibling gender combination (stronger effects for higher percentage brother pairs), age difference between siblings (stronger effects for smaller age differences), and developmental period (stronger effect sizes for children than for adolescents). These results indicate that the sibling context is important when considering psychopathology. In addition to the overwhelming evidence of the impact of parent-child and marital relationships on child and adolescent development, the present meta-analysis is a reminder that the sibling relationship warrants more attention in research as well as in clinical settings. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. A review of hybrid implicit explicit finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Chen, Juan

    2018-06-01

    The finite-difference time-domain (FDTD) method has been used extensively to simulate a variety of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of this method is limited by the minimum cell size used in the computational domain, so the FDTD method is inefficient for simulating electromagnetic problems that contain very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses a hybrid implicit-explicit difference in the direction with fine structures to remove the constraint of the fine spatial mesh on the time step size. This method therefore has much higher computational efficiency than the FDTD method and is extremely useful for problems that have fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary, are described, and then some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.
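
    A minimal sketch of the time-step issue described above: the standard 3-D FDTD CFL bound, and the bound with the fine direction omitted, which mimics the benefit claimed for HIE-FDTD when one direction is handled implicitly. Treating the relaxed bound as simply "drop the fine direction" is a simplifying assumption, not the paper's exact stability condition.

```python
# Hedged sketch: CFL-limited time step with and without the fine (implicit) direction.
import math

c = 299_792_458.0                     # speed of light, m/s
dx, dy, dz = 1e-3, 1e-3, 1e-6         # dz resolves a very fine structure

def cfl_dt(*steps):
    return 1.0 / (c * math.sqrt(sum(1.0 / h**2 for h in steps)))

dt_fdtd = cfl_dt(dx, dy, dz)          # explicit FDTD: limited by the tiny dz
dt_relaxed = cfl_dt(dx, dy)           # fine direction treated implicitly (assumed form)
print(f"explicit FDTD dt ~ {dt_fdtd:.3e} s, relaxed dt ~ {dt_relaxed:.3e} s "
      f"({dt_relaxed / dt_fdtd:.0f}x larger)")
```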

  9. Cognition and behavioural development in early childhood: the role of birth weight and postnatal growth.

    PubMed

    Huang, Cheng; Martorell, Reynaldo; Ren, Aiguo; Li, Zhiwen

    2013-02-01

    We evaluate the relative importance of birth weight and postnatal growth for cognition and behavioural development in 8389 Chinese children, 4-7 years of age. Method: Weight was the only size measure available at birth. Weight, height, head circumference and intelligence quotient (IQ) were measured between 4 and 7 years of age. Z-scores of birth weight and postnatal conditional weight gain to 4-7 years, as well as height and head circumference at 4-7 years of age, were the exposure variables. Z-scores of weight at 4-7 years were regressed on birth weight Z-scores, and the residual was used as the measure of postnatal conditional weight gain. The outcomes were child's IQ, measured by the Chinese Wechsler Young Children Scale of Intelligence, as well as internalizing behavioural problems, externalizing behavioural problems and other behavioural problems, evaluated by the Child Behavior Checklist 4-18. Multivariate regressions were conducted to investigate the relationship of birth weight and postnatal growth variables with the outcomes, separately for preterm children and term children. Both birth weight and postnatal weight gain were associated with IQ among term children; 1 unit increment in Z-score of birth weight (∼450 g) was associated with an increase of 1.60 [Confidence interval (CI): 1.18-2.02; P < 0.001] points in IQ, and 1 unit increment in conditional postnatal weight was associated with an increase of 0.46 (CI: 0.06-0.86; P = 0.02) points in IQ, after adjustment for confounders; similar patterns were observed when Z-scores of postnatal height and head circumference at age 4-7 years were used as alternative measurements of postnatal growth. Effect sizes of relationships with IQ were smaller than 0.1 of a standard deviation in all cases. Neither birth weight nor postnatal growth indicators were associated with behavioural outcomes among term children. In preterm children, neither birth weight nor postnatal growth measures were associated with IQ or behavioural outcomes. Both birth weight and postnatal growth were associated with IQ but not behavioural outcomes for Chinese term children aged 4-7 years, but the effect sizes were small. No relation between either birth weight or postnatal growth and cognition or behavioural outcomes was observed among preterm children aged 4-7 years.
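
    A minimal sketch of the "conditional weight gain" construction the abstract describes: regress later weight Z-scores on birth-weight Z-scores, use the residual (which is uncorrelated with birth weight) as the measure of postnatal gain, and then enter both as predictors of IQ. The data below are synthetic, not the study data.

```python
# Hedged sketch: conditional postnatal weight gain as a regression residual.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z_birth = rng.normal(size=n)                           # birth weight Z-score
z_weight_4_7 = 0.6 * z_birth + rng.normal(size=n)      # weight Z-score at 4-7 y
iq = 100 + 1.6 * z_birth + 0.5 * z_weight_4_7 + rng.normal(scale=14, size=n)

# conditional postnatal weight gain = residual of later weight on birth weight
slope, intercept = np.polyfit(z_birth, z_weight_4_7, 1)
cond_gain = z_weight_4_7 - (intercept + slope * z_birth)

# multiple regression of IQ on birth weight and conditional gain
X = np.column_stack([np.ones(n), z_birth, cond_gain])
beta, *_ = np.linalg.lstsq(X, iq, rcond=None)
print(f"IQ points per 1 SD birth weight: {beta[1]:.2f}, per 1 SD conditional gain: {beta[2]:.2f}")
```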

  10. Work as treatment? The effectiveness of re-employment programmes for unemployed persons with severe mental health problems on health and quality of life: a systematic review and meta-analysis.

    PubMed

    van Rijn, Rogier M; Carlier, Bouwine E; Schuring, Merel; Burdorf, Alex

    2016-04-01

    Given the importance of unemployment in health inequalities, re-employment of unemployed persons into paid employment may be a powerful intervention to increase population health. It is suggested that integrated programmes of vocational reintegration with health promotion may improve the likelihood of entering paid employment for long-term unemployed persons with severe mental health problems. However, the current evidence regarding whether entering paid employment in this population will contribute to a reduction in health problems remains ambiguous. This systematic review and meta-analysis aimed to assess the effects of re-employment programmes with regard to health and quality of life. Three electronic databases were searched (up to March 2015). Two reviewers independently selected articles and assessed the risk of bias on prespecified criteria. Measures of effects were pooled and random-effects meta-analysis of randomised controlled trials was conducted, where possible. Sixteen studies were included. Nine studies described functioning as an outcome measure. Five studies with six comparisons provided enough information to calculate a pooled effect size of -0.01 (95% CI -0.13 to 0.11). Fifteen studies presented mental health as an outcome measure, of which six with comparable psychiatric symptoms resulted in a pooled effect size of 0.20 (95% CI -0.23 to 0.62). Thirteen studies described quality of life as an outcome measure. Seven of these studies, describing eight comparisons, provided enough information to calculate a pooled effect size of 0.28 (95% CI 0.04 to 0.52). Re-employment programmes have a modest positive effect on the quality of life. No evidence was found for any effect of these re-employment programmes on functioning and mental health. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
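
    A minimal sketch of the pooling step such a meta-analysis uses, assuming a DerSimonian-Laird random-effects model; the effect sizes and variances below are made up for illustration and are not the studies from the review.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of standardized effect sizes.
import numpy as np

def random_effects_pool(effects, variances):
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                                   # fixed-effect weights
    theta_fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fixed) ** 2)          # heterogeneity statistic Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = 1.0 / (variances + tau2)                     # random-effects weights
    theta = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return theta, theta - 1.96 * se, theta + 1.96 * se

effects = [0.45, 0.10, 0.30, 0.55, 0.05, -0.02, 0.40, 0.35]   # illustrative SMDs
variances = [0.04, 0.02, 0.03, 0.05, 0.02, 0.03, 0.04, 0.03]
pooled, lower, upper = random_effects_pool(effects, variances)
print(f"pooled effect {pooled:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```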

  11. Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture.

    PubMed

    Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S

    2014-10-01

    This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
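
    A hedged sketch of the kind of closed-loop search described: a simple evolutionary algorithm over discrete column-diameter choices for a sequence of chromatography steps, with repair of infeasible solutions, elitism, and a small number of Monte Carlo trials per fitness evaluation to handle an uncertain feed titre. The cost model, constraint, and parameter values are toy assumptions, not the paper's manufacturing cost model.

```python
# Hedged sketch: evolutionary sizing of chromatography columns with repair and
# Monte Carlo fitness evaluation under an uncertain titre.
import random

random.seed(3)
STEPS = 3
DIAMETERS = [30, 45, 60, 80, 100]        # candidate column diameters (cm)
MAX_TOTAL_DIAMETER = 220                 # toy facility-fit constraint
MC_TRIALS = 5                            # Monte Carlo trials per evaluation

def repair(candidate):
    # shrink the largest columns until the facility-fit constraint holds
    candidate = list(candidate)
    while sum(candidate) > MAX_TOTAL_DIAMETER:
        i = candidate.index(max(candidate))
        candidate[i] = DIAMETERS[max(0, DIAMETERS.index(candidate[i]) - 1)]
    return candidate

def cost(candidate):
    # toy cost: capital grows with diameter, cycle-time penalty grows as columns
    # shrink, averaged over Monte Carlo draws of an uncertain feed titre
    total = 0.0
    for _ in range(MC_TRIALS):
        titre = random.uniform(2.0, 5.0)                    # uncertain parameter (g/L)
        capital = sum(d ** 1.5 for d in candidate)
        cycles = sum(titre * 400.0 / d for d in candidate)  # more cycles for small columns
        total += capital + 50.0 * cycles
    return total / MC_TRIALS

def evolve(pop_size=20, generations=40):
    pop = [repair([random.choice(DIAMETERS) for _ in range(STEPS)]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        next_pop = pop[:2]                                  # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)             # truncation selection
            child = [random.choice(pair) for pair in zip(p1, p2)]   # uniform crossover
            if random.random() < 0.3:                        # mutation
                child[random.randrange(STEPS)] = random.choice(DIAMETERS)
            next_pop.append(repair(child))
        pop = next_pop
    return min(pop, key=cost)

print("best sizing strategy (cm):", evolve())
```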

  13. Theoretical and Experimental Evaluation of the Bond Strength Under Peeling Loads

    NASA Technical Reports Server (NTRS)

    Nayeb-Hashemi, Hamid; Jawad, Oussama Cherkaoui

    1997-01-01

    Reliable applications of adhesively bonded joints require understanding of the stress distribution along the bond-line and the stresses that are responsible for the joint failure. To properly evaluate factors affecting peel strength, the effects of defects such as voids on the stress distribution in the overlap region must be understood. In this work, the peel stress distribution in a single lap joint is derived using a strength of materials approach. The bonded joint is modeled as Euler-Bernoulli beams bonded together with an adhesive, which is modeled as an elastic foundation that can resist both peel and shear stresses. It is found that for certain adhesive and adherend geometries and properties, a central void with a size of up to 50 percent of the overlap length has a negligible effect on the peak peel and shear stresses. To verify the solutions obtained from the model, the problem is solved again by using the finite element method and by treating the adherends and the adhesive as elastic materials. It is found that the model used in the analysis not only predicts the correct trend for the peel stress distribution but also gives results rather surprisingly close to those of the finite element analysis. It is also found that both shear and peel stresses can be responsible for the joint performance, and when a void is introduced, both of these stresses can contribute to the joint failure as the void size increases. Acoustic emission (AE) activities of aluminum-adhesive-aluminum specimens with different void sizes were monitored. The AE ringdown counts and energy were very sensitive and decreased significantly with the void size. It was observed that the AE events were shifting towards the edge of the overlap, where the maximum peeling and shearing stresses were occurring, as the void size increased.
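
    As a rough illustration of why a mid-overlap void can be benign, the sketch below uses the classic beam-on-elastic-foundation scaling often applied to such joints (not necessarily the authors' exact formulation): peel stress decays from the overlap ends over a characteristic length 1/β. All material and geometric values are assumptions chosen for the example.

```python
# Hedged sketch: characteristic peel-stress decay length for a beam on an elastic foundation.
E = 70e9        # adherend modulus, Pa (aluminium, assumed)
t = 2e-3        # adherend thickness, m (assumed)
b = 25e-3       # joint width, m (assumed)
E_a = 2.5e9     # adhesive modulus, Pa (assumed)
t_a = 0.2e-3    # bond-line thickness, m (assumed)

I = b * t**3 / 12.0          # adherend second moment of area
k = E_a * b / t_a            # foundation (adhesive) stiffness per unit length
beta = (k / (4.0 * E * I)) ** 0.25
print(f"characteristic decay length ~ {1.0 / beta * 1e3:.1f} mm")
# A central void much shorter than a few times 1/beta sits in a region of near-zero
# peel stress, consistent with mid-overlap voids having little effect on peak stresses.
```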

  14. Estimating normative limits of Heidelberg Retina Tomograph optic disc rim area with quantile regression.

    PubMed

    Artes, Paul H; Crabb, David P

    2010-01-01

    To investigate why the specificity of the Moorfields Regression Analysis (MRA) of the Heidelberg Retina Tomograph (HRT) varies with disc size, and to derive accurate normative limits for neuroretinal rim area to address this problem. Two datasets from healthy subjects (Manchester, UK, n = 88; Halifax, Nova Scotia, Canada, n = 75) were used to investigate the physiological relationship between the optic disc and neuroretinal rim area. Normative limits for rim area were derived by quantile regression (QR) and compared with those of the MRA (derived by linear regression). Logistic regression analyses were performed to quantify the association between disc size and positive classifications with the MRA, as well as with the QR-derived normative limits. In both datasets, the specificity of the MRA depended on optic disc size. The odds of observing a borderline or outside-normal-limits classification increased by approximately 10% for each 0.1 mm(2) increase in disc area (P < 0.1). The lower specificity of the MRA with large optic discs could be explained by the failure of linear regression to model the extremes of the rim area distribution (observations far from the mean). In comparison, the normative limits predicted by QR were larger for smaller discs (less specific, more sensitive), and smaller for larger discs, such that false-positive rates became independent of optic disc size. Normative limits derived by quantile regression appear to remove the size-dependence of specificity with the MRA. Because quantile regression does not rely on the restrictive assumptions of standard linear regression, it may be a more appropriate method for establishing normative limits in other clinical applications where the underlying distributions are nonnormal or have nonconstant variance.
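
    A minimal sketch, on synthetic data, of quantile-regression normative limits for rim area as a function of disc area, in the spirit of the approach described; the 5th-percentile cut-off and the data-generating model are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch: lower normative limit for rim area by quantile regression vs. OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
disc = rng.uniform(1.2, 3.2, n)                          # optic disc area, mm^2
# rim area grows with disc area, with spread that also grows with disc area
rim = 0.8 + 0.45 * disc + rng.normal(0, 0.05 + 0.08 * disc, n)
df = pd.DataFrame({"disc": disc, "rim": rim})

q05 = smf.quantreg("rim ~ disc", df).fit(q=0.05)         # lower normative limit
ols = smf.ols("rim ~ disc", df).fit()                     # mean fit, for contrast

new = pd.DataFrame({"disc": [1.5, 2.0, 2.5, 3.0]})
print("5th-percentile limit:", q05.predict(new).round(2).tolist())
print("OLS mean fit:        ", ols.predict(new).round(2).tolist())
```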

  15. Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel

    2011-01-01

    The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT 5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at a problem size of 49,152 processes. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at a problem size of 24,576 processes. Cheetah's Barrier performs 10% better than the native MPI implementation at a problem size of 12,288 processes.
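    Cheetah itself is an Open MPI framework written in C and is not reproduced here; as a rough illustration of how broadcast latencies of the kind reported above are typically measured, the sketch below times a buffer-based broadcast with mpi4py. The message size, iteration count, and process count are arbitrary placeholders.

```python
# Run with: mpiexec -n 4 python bcast_timing.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg_bytes = 4096                      # "medium" message; vary to probe small/large regimes
buf = np.zeros(msg_bytes, dtype="u1")
if rank == 0:
    buf[:] = 1                        # root fills the buffer to be broadcast

comm.Barrier()                        # synchronize all ranks before timing
t0 = MPI.Wtime()
for _ in range(1000):
    comm.Bcast(buf, root=0)           # collective operation under test
comm.Barrier()
t1 = MPI.Wtime()

if rank == 0:
    print(f"mean broadcast latency: {(t1 - t0) / 1000 * 1e6:.2f} us")
```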

  16. Problems of allometric scaling analysis: examples from mammalian reproductive biology.

    PubMed

    Martin, Robert D; Genoud, Michel; Hemelrijk, Charlotte K

    2005-05-01

    Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
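    As a concrete reminder of what the bivariate allometric model looks like in practice, the sketch below fits y = a·x^b on log-log axes to synthetic body-mass and gestation-period data, and also computes a Theil-Sen slope as one example of an outlier-resistant, non-parametric alternative. The paper's own line-fitting technique is different and is not reproduced here; all numbers are made up.

```python
import numpy as np
from itertools import combinations

# Hypothetical body-mass (g) vs. gestation-period (days) data.
rng = np.random.default_rng(1)
body_mass = 10 ** rng.uniform(1, 6, 80)
gestation = 12.0 * body_mass ** 0.17 * rng.lognormal(0, 0.2, 80)

# Classical bivariate allometric model y = a * x^b, fitted on log-log axes.
logx, logy = np.log10(body_mass), np.log10(gestation)
b, loga = np.polyfit(logx, logy, 1)
print(f"OLS allometric exponent b = {b:.3f}, coefficient a = {10**loga:.3f}")

# A simple robust alternative: Theil-Sen median of pairwise slopes,
# which is far less sensitive to outliers than ordinary least squares.
slopes = [(logy[j] - logy[i]) / (logx[j] - logx[i])
          for i, j in combinations(range(len(logx)), 2) if logx[j] != logx[i]]
print(f"Theil-Sen exponent estimate = {np.median(slopes):.3f}")
```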

  17. Leadership solves collective action problems in small-scale societies

    PubMed Central

    Glowacki, Luke; von Rueden, Chris

    2015-01-01

    Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social connections. These attributes can reduce the costs leaders incur and increase the efficacy of leadership. Leadership becomes more institutional in domains of collective action, such as resolution of intragroup conflict, where collective action failure threatens group integrity. Together these data support the hypothesis that leadership is an important means by which collective action problems are overcome in small-scale societies. PMID:26503683

  18. Massively Scalable Near Duplicate Detection in Streams of Documents using MDSH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogen, Paul Logasa; Symons, Christopher T; McKenzie, Amber T

    2013-01-01

    In a world where large-scale text collections are not only becoming ubiquitous but also are growing at increasing rates, near duplicate documents are becoming a growing concern that has the potential to hinder many different information filtering tasks. While others have tried to address this problem, prior techniques have only been used on limited collection sizes and static cases. We will briefly describe the problem in the context of Open Source Intelligence (OSINT) along with our additional constraints for performance. In this work we propose two variations on Multi-dimensional Spectral Hash (MDSH) tailored for working on extremely large, growing sets of text documents. We analyze the memory and runtime characteristics of our techniques and provide an informal analysis of the quality of the near-duplicate clusters produced by our techniques.
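    MDSH is a spectral-hashing technique and is not reproduced here. As a minimal illustration of the broader family of sketch-based near-duplicate detection it belongs to, the snippet below builds plain MinHash signatures over character shingles and estimates the Jaccard similarity of two slightly different strings. Shingle length, signature size, and the example documents are arbitrary.

```python
import hashlib
from random import Random

def shingles(text, k=5):
    """Character k-shingles over whitespace-normalized text."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def minhash_signature(shingle_set, num_hashes=64, seed=42):
    """MinHash signature: the minimum salted hash per hash function."""
    rng = Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(int.from_bytes(hashlib.md5(f"{salt}:{s}".encode()).digest()[:8], "big")
                for s in shingle_set)
            for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

doc_a = "large-scale text collections are growing at increasing rates"
doc_b = "large scale text collections are growing at increasing rates!"
sa, sb = minhash_signature(shingles(doc_a)), minhash_signature(shingles(doc_b))
print(f"estimated Jaccard similarity: {estimated_jaccard(sa, sb):.2f}")
```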

  19. NGL Viewer: Web-based molecular graphics for large complexes.

    PubMed

    Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W

    2018-05-29

    The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.

  20. Leadership solves collective action problems in small-scale societies.

    PubMed

    Glowacki, Luke; von Rueden, Chris

    2015-12-05

    Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social connections. These attributes can reduce the costs leaders incur and increase the efficacy of leadership. Leadership becomes more institutional in domains of collective action, such as resolution of intragroup conflict, where collective action failure threatens group integrity. Together these data support the hypothesis that leadership is an important means by which collective action problems are overcome in small-scale societies. © 2015 The Author(s).

  1. Stability of Dirac Liquids with Strong Coulomb Interaction.

    PubMed

    Tupitsyn, Igor S; Prokof'ev, Nikolay V

    2017-01-13

    We develop and apply the diagrammatic Monte Carlo technique to address the problem of the stability of the Dirac liquid state (in a graphene-type system) against the strong long-range part of the Coulomb interaction. So far, all attempts to deal with this problem in the field-theoretical framework were limited either to perturbative or random phase approximation and functional renormalization group treatments, with diametrically opposite conclusions. Our calculations aim at the approximation-free solution with controlled accuracy by computing vertex corrections from higher-order skeleton diagrams and establishing the renormalization group flow of the effective Coulomb coupling constant. We unambiguously show that as the system size L increases (up to ln(L) ∼ 40), the coupling constant always flows towards zero; i.e., the two-dimensional Dirac liquid is an asymptotically free T=0 state with divergent Fermi velocity.

  2. Large-for-size liver transplant: a single-center experience.

    PubMed

    Akdur, Aydincan; Kirnap, Mahir; Ozcay, Figen; Sezgin, Atilla; Ayvazoglu Soy, Hatice Ebru; Karakayali Yarbug, Feza; Yildirim, Sedat; Moray, Gokhan; Arslan, Gulnaz; Haberal, Mehmet

    2015-04-01

    The ideal ratio between liver transplant graft mass and recipient body weight is unknown, but the graft probably must weigh 0.8% to 2.0% of the recipient's weight. When this ratio is > 4%, there may be problems due to large-for-size transplant, especially in recipients < 10 kg. This condition is caused by the discrepancy between the small abdominal cavity and the large graft and is characterized by decreased blood supply to the liver graft and graft dysfunction. We evaluated our experience with large-for-size grafts. We retrospectively evaluated 377 orthotopic liver transplants that were performed from 2001-2014 in our center. We included 188 pediatric transplants in our study. There were 58 patients < 10 kg who had living-donor liver transplant with a graft-to-bodyweight ratio > 4%. In 2 patients, the abdomen was closed with a Bogota bag. In 5 patients, reoperation was performed due to vascular problems and abdominal hypertension, and the abdomen was closed with a Bogota bag. All Bogota bags were closed within 2 weeks. After closing the fascia, 10 patients had vascular problems that were diagnosed in the operating room by Doppler ultrasonography, and only the skin was closed without fascia closure. No graft loss occurred due to large-for-size transplant. There were 8 patients who died early after transplant (sepsis, 6 patients; brain death, 2 patients). There was no major donor morbidity or donor mortality. A large-for-size graft may cause abdominal compartment syndrome due to the small size of the recipient abdominal cavity, size discrepancies in vascular caliber, insufficient portal circulation, and disturbance of tissue oxygenation. Abdominal closure with a Bogota bag in these patients is safe and effective to avoid abdominal compartment syndrome. Early diagnosis by ultrasonography in the operating room after fascia closure and repeated ultrasonography at the clinic may help avoid graft loss.

  3. Space Station long term lubrication analysis. Phase 1 preliminary tribological survey

    NASA Technical Reports Server (NTRS)

    Dufrane, K. F.; Kannel, J. W.; Lowry, J. A.; Montgomery, E. E.

    1990-01-01

    Increases in the size, complexity, and life requirements of satellites and space vehicles have put increasing demands on the lubrication requirements for trouble-free service. Since the development costs of large systems are high, long lives with minimum maintenance are dictated. The Space Station represents the latest level of size and complexity in satellite development; it will be nearly 100 meters in major dimensions and will have a life requirement of thirty years. It will have numerous mechanisms critical to its success, some of which will be exposed to the space environment. Designing long-life lubrication systems and choosing appropriate lubricants for these systems will be necessary for their meeting the requirements and for avoiding failures with associated dependent mechanisms. The purpose of this program was to identify the various critical mechanisms and review their designs during the overall design and development stage so that problem areas could be avoided or minimized prior to the fabrication of hardware. The specific objectives were fourfold: (1) to perform a tribology survey of the Space Station for the purpose of documenting each wear point as to materials involved, environmental conditions, and operating characteristics; (2) to review each wear point (point of relative motion) as to the lubrication used and substrate materials selected in the context of its operating characteristics and the environmental conditions imposed; (3) to make recommendations for improvement in areas where the lubricant chosen and/or where the substrate (materials of the wear couple) are not considered optimum for the application; and (4) to make or recommend simulated or full scale tests in tribological areas where the state-of-the-art is being advanced, in areas where new designs are obviously being employed and a critical review would indicate that problems are a strong possibility, and/or where excessive wear, a malfunction, or excessive leakage would create fluid systems problems or contamination of exposed optical equipment.

  4. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    NASA Astrophysics Data System (ADS)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.

  5. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    NASA Astrophysics Data System (ADS)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative permeability) should account for the fact that they depend not only on the saturation but also on the actual characteristics of the fluid distribution.

  6. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    PubMed

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1 × 10⁸ or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, as the problem size increases up to 8 × 10⁸ histories. For a smaller number of histories (1 × 10⁸) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 × 10⁸ histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
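    The speedup and efficiency figures quoted above follow from two standard definitions, S(p) = T(1)/T(p) and E(p) = S(p)/p. The snippet below applies them to made-up wall-clock times; the numbers are placeholders, not the paper's measurements.

```python
# Hypothetical wall-clock times (seconds) for a fixed history count,
# keyed by the number of processors used.
timings = {1: 6400.0, 8: 820.0, 16: 430.0, 24: 318.0, 32: 229.0}

t1 = timings[1]
for p, tp in sorted(timings.items()):
    speedup = t1 / tp          # S(p) = T(1) / T(p)
    efficiency = speedup / p   # E(p) = S(p) / p
    print(f"{p:3d} procs: speedup = {speedup:6.2f}, efficiency = {efficiency:5.1%}")
```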

  7. Quality-control issues on high-resolution diagnostic monitors.

    PubMed

    Parr, L F; Anderson, A L; Glennon, B K; Fetherston, P

    2001-06-01

    Previous literature indicates a need for more data collection in the area of quality control of high-resolution diagnostic monitors. Throughout acceptance testing, which began in June 2000, stability of monitor calibration was analyzed. Although image quality on all monitors was found to be acceptable upon initial acceptance testing using VeriLUM software by Image Smiths, Inc (Germantown, MD), it was determined to be unacceptable during the clinical phase of acceptance testing. High-resolution monitors were evaluated for quality assurance on a weekly basis from installation through acceptance testing and beyond. During clinical utilization determination (CUD), monitor calibration was identified as a problem and the manufacturer returned and recalibrated all workstations. From that time through final acceptance testing, high-resolution monitor calibration and monitor failure rate remained a problem. The monitor vendor then returned to the site to address these areas. Monitor defocus was still noticeable and calibration checks were increased to three times per week. White and black level drift on medium-resolution monitors had been attributed to raster size settings. Measurements of white and black level at several different size settings were taken to determine the effect of size on white and black level settings. Black level remained steady with size change. White level appeared to increase by 2.0 cd/m² for every 0.1-inch decrease in horizontal raster size. This was determined not to be the cause of the observed brightness drift. Frequency of calibration/testing is an issue in a clinical environment. The increased frequency required at our site cannot be sustained. The medical physics division cannot provide dedicated personnel to conduct the quality-assurance testing on all monitors at this interval due to other physics commitments throughout the hospital. Monitor access is also an issue due to radiologists' need to read images. Some workstations are in use 7 AM to 11 PM daily. An appropriate monitor calibration frequency must be established during acceptance testing to ensure unacceptable drift is not masked by excessive calibration frequency. Standards for acceptable black level and white level drift also need to be determined. The monitor vendor and hospital staff agree that, currently, very small printed text is an acceptable method of determining monitor blur; however, a better method of determining monitor blur is being pursued. Although monitors may show acceptable quality during initial acceptance testing, they need to show sustained quality during the clinical acceptance-testing phase. Defocus, black level, and white level are image quality concerns, which need to be evaluated during the clinical phase of acceptance testing. Image quality deficiencies can have a negative impact on patient care and raise serious medical-legal concerns. The attention to quality control required of the hospital staff needs to be realistic and not have a significant impact on radiology workflow.

  8. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert while penalizing size and roughness, giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.

  9. Has the world really survived the population bomb? (Commentary on "how the world survived the population bomb: lessons from 50 years of extraordinary demographic history").

    PubMed

    Becker, Stan

    2013-12-01

    In his PAA presidential address and corresponding article in Demography, David Lam (Demography 48:1231-1262, 2011) documented the extraordinary progress of humankind since 1960 (vis-à-vis poverty alleviation, increased schooling, and reductions in mortality and fertility) and noted that he expects further improvements by 2050. However, although Lam briefly covered the problems of global warming and pollution, he did not address several other major environmental problems that are closely related to the rapid human population growth in recent decades and to the progress he described. This commentary highlights some of these problems to provide a more balanced perspective on the situation of the world. Specifically, humans currently are using resources at an unsustainable level. Groundwater depletion and overuse of river water are major problems on multiple continents. Fossil fuel resources and several minerals are being depleted. Other major problems include deforestation, with the annual forest clearing globally estimated to be an area the size of New York State; and species extinction, with rates estimated to be 100 to 1,000 times higher than background rates. Principles of ecological economics are presented that allow an integration of ecology and economic development and better potential for preservation of the world for future generations.

  10. Prototype solar heating and cooling systems

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Eight prototype systems were developed. The systems are 3-, 25-, and 75-ton units. The manufacture, testing, installation, maintenance, problem resolution, and performance evaluation of the systems are described. Size activities for the various systems are included.

  11. The Multiple Pendulum Problem via Maple[R]

    ERIC Educational Resources Information Center

    Salisbury, K. L.; Knight, D. G.

    2002-01-01

    The way in which computer algebra systems, such as Maple, have made the study of physical problems of some considerable complexity accessible to mathematicians and scientists with modest computational skills is illustrated by solving the multiple pendulum problem. A solution is obtained for four pendulums with no restriction on the size of the…

  12. The impact of ancillary services in optimal DER investment decisions

    DOE PAGES

    Cardoso, Goncalo; Stadler, Michael; Mashayekh, Salman; ...

    2017-04-25

    Microgrid resource sizing problems typically include the analysis of a combination of value streams such as peak shaving, load shifting, or load scheduling, which support the economic feasibility of the microgrid deployment. However, microgrid benefits can go beyond these, and the ability to provide ancillary grid services such as frequency regulation or spinning and non-spinning reserves is well known, despite typically not being considered in resource sizing problems. This paper proposes the expansion of the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state-of-the-art microgrid resource sizing model, to include revenue streams resulting from the participation in ancillary service markets. Results suggest that participation in such markets may not only influence the optimum resource sizing, but also the operational dispatch, with results being strongly influenced by the exact market requirements and clearing prices.

  13. Particle size reduction to the nanometer range: a promising approach to improve buccal absorption of poorly water-soluble drugs

    PubMed Central

    Rao, Shasha; Song, Yunmei; Peddie, Frank; Evans, Allan M

    2011-01-01

    Poorly water-soluble drugs, such as phenylephrine, offer challenging problems for buccal drug delivery. In order to overcome these problems, particle size reduction (to the nanometer range) and cyclodextrin complexation were investigated for permeability enhancement. The apparent solubility in water and the buccal permeation of the original phenylephrine coarse powder, a phenylephrine–cyclodextrin complex and phenylephrine nanosuspensions were characterized. The particle size and particle surface properties of phenylephrine nanosuspensions were used to optimize the size reduction process. The optimized phenylephrine nanosuspension was then freeze dried and incorporated into a multi-layered buccal patch, consisting of a small tablet adhered to a mucoadhesive film, yielding a phenylephrine buccal product with good dosage accuracy and improved mucosal permeability. The design of the buccal patch allows for drug incorporation without the need to change the mucoadhesive component, and is potentially suited to a range of poorly water-soluble compounds. PMID:21753876

  14. Particle size reduction to the nanometer range: a promising approach to improve buccal absorption of poorly water-soluble drugs.

    PubMed

    Rao, Shasha; Song, Yunmei; Peddie, Frank; Evans, Allan M

    2011-01-01

    Poorly water-soluble drugs, such as phenylephrine, offer challenging problems for buccal drug delivery. In order to overcome these problems, particle size reduction (to the nanometer range) and cyclodextrin complexation were investigated for permeability enhancement. The apparent solubility in water and the buccal permeation of the original phenylephrine coarse powder, a phenylephrine-cyclodextrin complex and phenylephrine nanosuspensions were characterized. The particle size and particle surface properties of phenylephrine nanosuspensions were used to optimize the size reduction process. The optimized phenylephrine nanosuspension was then freeze dried and incorporated into a multi-layered buccal patch, consisting of a small tablet adhered to a mucoadhesive film, yielding a phenylephrine buccal product with good dosage accuracy and improved mucosal permeability. The design of the buccal patch allows for drug incorporation without the need to change the mucoadhesive component, and is potentially suited to a range of poorly water-soluble compounds.

  15. The impact of ancillary services in optimal DER investment decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Goncalo; Stadler, Michael; Mashayekh, Salman

    Microgrid resource sizing problems typically include the analysis of a combination of value streams such as peak shaving, load shifting, or load scheduling, which support the economic feasibility of the microgrid deployment. However, microgrid benefits can go beyond these, and the ability to provide ancillary grid services such as frequency regulation or spinning and non-spinning reserves is well known, despite typically not being considered in resource sizing problems. This paper proposes the expansion of the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state-of-the-art microgrid resource sizing model, to include revenue streams resulting from the participation in ancillary service markets. Results suggest that participation in such markets may not only influence the optimum resource sizing, but also the operational dispatch, with results being strongly influenced by the exact market requirements and clearing prices.

  16. Heuristics for Multiobjective Optimization of Two-Sided Assembly Line Systems

    PubMed Central

    Jawahar, N.; Ponnambalam, S. G.; Sivakumar, K.; Thangadurai, V.

    2014-01-01

    Products such as cars, trucks, and heavy machinery are assembled on two-sided assembly lines. Assembly line balancing has significant impacts on the performance and productivity of flow line manufacturing systems and has been an active research area for several decades. This paper addresses the line balancing problem of a two-sided assembly line in which each task is to be assigned to the left side (L), the right side (R), or either side (denoted E). Two objectives, minimum number of workstations and minimum unbalance time among workstations, have been considered for balancing the assembly line. There are two approaches to solving a multiobjective optimization problem: the first combines all the objectives into a single composite function or moves all but one objective to the constraint set; the second determines the Pareto optimal solution set. This paper proposes two heuristics to evolve the optimal Pareto front for the TALBP under consideration: an Enumerative Heuristic Algorithm (EHA) to handle problems of small and medium size, and a Simulated Annealing Algorithm (SAA) for large-sized problems. The proposed approaches are illustrated with example problems and their performances are compared with a set of test problems. PMID:24790568
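    To make the simulated-annealing idea concrete, the sketch below balances a toy set of task times across a fixed number of stations by minimising the unbalance (maximum minus minimum station time). It deliberately ignores precedence relations and the two-sided L/R/E constraints of the real TALBP, and the task times, station count, and cooling schedule are arbitrary.

```python
import math
import random

random.seed(1)
task_times = [7, 3, 6, 4, 8, 2, 5, 9, 4, 6, 3, 7]   # hypothetical task times
stations = 4

def unbalance(assign):
    """Max minus min station load for a task-to-station assignment."""
    loads = [0.0] * stations
    for task, s in enumerate(assign):
        loads[s] += task_times[task]
    return max(loads) - min(loads)

current = [random.randrange(stations) for _ in task_times]
best, best_cost = current[:], unbalance(current)
temp = 10.0
for _ in range(20000):
    cand = current[:]
    cand[random.randrange(len(cand))] = random.randrange(stations)  # move one task
    cand_cost, cur_cost = unbalance(cand), unbalance(current)
    delta = cand_cost - cur_cost
    # accept improvements always, worse moves with Boltzmann probability
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        current = cand
        if cand_cost < best_cost:
            best, best_cost = cand[:], cand_cost
    temp = max(0.01, temp * 0.9995)                                  # cool slowly

loads = [sum(task_times[t] for t, s in enumerate(best) if s == k) for k in range(stations)]
print("best unbalance:", best_cost, "station loads:", loads)
```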

  17. Fast marching methods for the continuous traveling salesman problem.

    PubMed

    Andrews, June; Sethian, J A

    2007-01-23

    We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ("cities") in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The worst-case complexity of the heuristic algorithm is O(M · N log N), where M is the number of cities and N is the size of the computational mesh used to approximate the solutions to the shortest-path problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.
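    The sketch below mimics the structure of the problem on a small mesh: a position-dependent cost field defines the metric, pairwise city-to-city costs are obtained with Dijkstra's algorithm on the grid graph (a stand-in for the fast-marching Eikonal solves described in the paper), and a nearest-neighbour pass builds a closed tour. Grid size, cost field, and city positions are all made up.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

n = 40                                      # n x n mesh
rng = np.random.default_rng(3)
cost = 1.0 + 2.0 * rng.random((n, n))       # positive, position-dependent cost field

def node(i, j):
    return i * n + j

# Build the grid graph; edge weight = average cost of the two cells it joins.
graph = lil_matrix((n * n, n * n))
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (0, 1)):
            ii, jj = i + di, j + dj
            if ii < n and jj < n:
                w = 0.5 * (cost[i, j] + cost[ii, jj])
                graph[node(i, j), node(ii, jj)] = w
                graph[node(ii, jj), node(i, j)] = w

cities = [node(*rng.integers(0, n, 2)) for _ in range(6)]
dist = dijkstra(graph.tocsr(), indices=cities)[:, cities]   # city-to-city path costs

# Nearest-neighbour heuristic for the closed tour.
unvisited, tour = set(range(1, len(cities))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda c: dist[tour[-1], c])
    unvisited.remove(nxt)
    tour.append(nxt)
tour_cost = sum(dist[a, b] for a, b in zip(tour, tour[1:] + tour[:1]))
print("visit order:", tour, f"tour cost: {tour_cost:.2f}")
```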

  18. Problems and solutions for patients with fibromyalgia: Building new helping relationships.

    PubMed

    Montesó-Curto, Pilar; García-Martinez, Montserrat; Romaguera, Sara; Mateu, María Luisa; Cubí-Guillén, María Teresa; Sarrió-Colas, Lidia; Llàdser, Anna Núria; Bradley, Stephen; Panisello-Chavarria, María Luisa

    2018-02-01

    The aim of this study was to identify the main biological, psychological and sociological problems and potential solutions for patients diagnosed with fibromyalgia by use of Group Problem-Solving Therapy. Group Problem-Solving Therapy is a technique for identifying and solving problems, increasing assertiveness and self-esteem, and eliminating negative thoughts. Qualitative phenomenological interpretive design: Group Problem-Solving Therapy sessions conducted with patients suffering fibromyalgia were studied; participants were recruited via the Rheumatology Department at a general hospital and associations in Catalonia, Spain, with sessions conducted in a nearby university setting. The study included 44 people diagnosed with fibromyalgia (43 female, 1 male) from 6 Group Problem-Solving Therapy sessions. Data were collected from March to June 2013. A total of 24 sessions were audio recorded, all with prior informed consent. Data were transcribed and then analysed in accordance with established methods of inductive thematic analysis, via a process of reduction to manage and classify data. Five themes were identified: (1) Current problems are often related to historical trauma; (2) There are no "one size fits all" solutions; (3) Fibromyalgia is life-changing; (4) Fibromyalgia is widely misunderstood; (5) Significant impacts on physical, psychological and social domains are described. The majority of patients' problems were associated with their previous history and the onset of fibromyalgia, which may be related to trauma in adolescence, early adulthood or later. The solutions provided during the groups appeared to be accepted by the participants. These findings can improve the self-management of fibromyalgia patients by helping to enhance adaptive behaviours and by incorporating a female-gender perspective. © 2017 John Wiley & Sons Ltd.

  19. Voice problems and depression among adults in the United States.

    PubMed

    Marmor, Schelomo; Horvath, Keith J; Lim, Kelvin O; Misono, Stephanie

    2016-08-01

    Prior studies have observed a high prevalence of psychosocial distress, including depression, in patients with voice problems. However, these studies have largely been performed in care-seeking patients identified in tertiary care voice clinics. The objective of this study was to examine the association between depression and voice problems in the U.S. Cross-sectional analysis of National Health Interview Survey (NHIS) data. We identified adult cases reporting a voice problem in the preceding 12 months in the 2012 NHIS. Self-reported demographics and data regarding healthcare visits for voice problems, diagnoses given, severity of the voice problem, and depression symptoms were analyzed. The total weighted sample size was 52,816,364. The presence of depressive symptoms was associated with a nearly two-fold increase (odds ratio = 1.89, 95% confidence interval = 1.21-2.96) in the likelihood of reporting a voice problem in the past year. Patients who reported feeling depressed were less likely to receive care for the voice problem and less likely to report that treatment had helped than those who did not feel depressed. These findings indicate that the co-occurrence of voice problems and depressive symptoms is observed in the general population, not only in care-seeking patients, and that depressive symptoms may influence reported likelihood of receiving voice treatment and effectiveness. This suggests that voice care providers should take mental health symptoms into account when treating patients, and also indicates a need for further investigation. NA. Laryngoscope, 126:1859-1864, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  20. Active properties of living tissues lead to size-dependent dewetting

    NASA Astrophysics Data System (ADS)

    Perez-Gonzalez, Carlos; Alert, Ricard; Blanch-Mercader, Carles; Gomez-Gonzalez, Manuel; Casademunt, Jaume; Trepat, Xavier

    Key biological processes such as cancer and development are characterized by drastic transitions from a 2D to a 3D geometry. These rearrangements have classically been studied as a wetting problem. According to this theory, the wettability of a substrate by an epithelium is determined by the competition between cell-cell and cell-substrate adhesion energies. In contrast, we found that, far from being a passive process, tissue dewetting is an active process driven by internal tissue forces. Experimentally, we reproduced epithelial dewetting by promoting the progressive formation of intercellular junctions in a monolayer of epithelial cells. Interestingly, the formation of intercellular junctions produces an increase in cell contractility, with a subsequent increase in traction and intercellular stress. At a certain time, tissue tension overcomes the maximum cell-substrate adhesion and the monolayer spontaneously dewets the substrate. We developed an active polar fluid model, finding both theoretically and experimentally that the critical contractility that triggers the wetting-dewetting transition depends on cell-substrate adhesion and, unexpectedly, on tissue size. As a whole, this work generalizes wetting theory to living tissues, unveiling unprecedented properties due to their unique active nature.

  1. Clinical features of subepithelial layer irregularities of cornea.

    PubMed

    Lee, Yong Woo; Gye, Hyo Jung; Choi, Chul Young

    2015-07-01

    To illustrate surgical outcomes of subepithelial irregularities that were identified incidentally during laser refractive surgery. The study group consisted of 406 patients who underwent 787 surface ablation refractive surgeries. Ophthalmologic evaluations were performed before each procedure and at 1, 3 and 6 months post-operatively. Subepithelial irregularities were evaluated by analyzing still photographs captured from video recordings. Sizes and locations were determined by a calibrated scale located at the major axis of the tracking system's reticle. Subepithelial irregularities were identified in 27 eyes during 787 surface ablation refractive surgeries. Most of the subepithelial irregularities did not show any abnormalities on the wavefront aberrometer. However, one case with a diameter greater than 1.00 mm and one case of clustered multiple subepithelial irregularities of moderate size corresponded to significant coma (Z31) and increased higher-order aberration (HOA) in the HOA gradient map. Corneal subepithelial irregularities may be related to problems that include significantly increased localized HOA and permanent residual subepithelial opacity. Subepithelial irregularity should be considered even if the surface of the cornea is intact and there are no specific findings on corneal topography.

  2. Electrochemical Migration of Fine-Pitch Nanopaste Ag Interconnects

    NASA Astrophysics Data System (ADS)

    Tsou, Chia-Hung; Liu, Kai-Ning; Lin, Heng-Tien; Ouyang, Fan-Yi

    2016-12-01

    With the development of intelligent electronic products, the usage of fine-pitch interconnects has become mainstream in high-performance electronic devices. Electrochemical migration (ECM) of interconnects can be a serious reliability problem under temperature, humidity and bias voltage conditions. In this study, the ECM behavior of nanopaste Ag interconnects with pitch sizes from 20 μm to 50 μm was evaluated by thermal humidity bias (THB) and water drop (WD) tests with deionized water through in situ leakage current-versus-time (CVT) curves. The results indicate that the failure time of ECM in fine-pitch samples occurs within a few seconds under WD testing and that it increases with increasing pitch size. The microstructure examination indicated that intensive dendrite formation of Ag through the whole interface bridged the two electrodes. In the THB test, the CVT curve exhibited two stages, incubation and ramp-up; the failure time of ECM was about 173.7 min. In addition, intensive dendrite formation was observed only at the protrusions of the Ag interconnects, due to the concentration of the electric field at those protrusions.

  3. Is a larger refuge always better? Dispersal and dose in pesticide resistance evolution

    PubMed Central

    Takahashi, Daisuke; Yamanaka, Takehiko; Sudo, Masaaki; Andow, David A.

    2017-01-01

    The evolution of resistance against pesticides is an important problem of modern agriculture. The high‐dose/refuge strategy, which divides the landscape into treated and nontreated (refuge) patches, has proven effective at delaying resistance evolution. However, theoretical understanding is still incomplete, especially for combinations of limited dispersal and partially recessive resistance. We reformulate a two‐patch model based on the Comins model and derive a simple quadratic approximation to analyze the effects of limited dispersal, refuge size, and dominance for high efficacy treatments on the rate of evolution. When a small but substantial number of heterozygotes can survive in the treated patch, a larger refuge always reduces the rate of resistance evolution. However, when dominance is small enough, the evolutionary dynamics in the refuge population, which is indirectly driven by migrants from the treated patch, mainly describes the resistance evolution in the landscape. In this case, for small refuges, increasing the refuge size will increase the rate of resistance evolution. Our analysis distils major driving forces from the model, and can provide a framework for understanding directional selection in source‐sink environments. PMID:28422284
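    As a toy illustration of the kind of recursion analysed in high-dose/refuge models (not the authors' two-patch model, which includes limited dispersal), the sketch below tracks a resistance allele under random mating, complete kill of susceptible homozygotes in the treated area, partial heterozygote survival h, and complete survivor-weighted mixing each generation. All parameter values are arbitrary.

```python
def generations_to_resistance(refuge_frac, h=0.05, p0=1e-3, threshold=0.5, max_gen=500):
    """Toy high-dose/refuge recursion assuming complete mixing of survivors."""
    p = p0                                           # resistance allele frequency
    for gen in range(1, max_gen + 1):
        q = 1.0 - p
        # mean survival (patch "output") in the treated area under a high dose
        w_treated = p * p + 2 * p * q * h
        # allele frequency among treated-area survivors
        p_treated = (p * p + p * q * h) / w_treated if w_treated > 0 else 0.0
        # survivor-weighted pooling across refuge (no selection) and treated areas
        num = refuge_frac * p + (1 - refuge_frac) * w_treated * p_treated
        den = refuge_frac + (1 - refuge_frac) * w_treated
        p = num / den
        if p >= threshold:
            return gen
    return None

for refuge in (0.05, 0.2, 0.5):
    g = generations_to_resistance(refuge)
    msg = f"{g} generations" if g else "not within 500 generations"
    print(f"refuge {refuge:.0%}: resistance allele reaches 50% after {msg}")
```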

  4. A meta-analysis of perceptions of defeat and entrapment in depression, anxiety problems, posttraumatic stress disorder, and suicidality.

    PubMed

    Siddaway, Andy P; Taylor, Peter J; Wood, Alex M; Schulz, Joerg

    2015-09-15

    There is a burgeoning literature examining perceptions of being defeated or trapped in different psychiatric disorders. The disorders most frequently examined to date are depression, anxiety problems, posttraumatic stress disorder (PTSD), and suicidality. To quantify the size and consistency of perceptions of defeat and entrapment in depression, anxiety problems, PTSD and suicidality, test for differences across psychiatric disorders, and examine potential moderators and publication bias. Random-effects meta-analyses based on Pearson's correlation coefficient r. Forty studies were included in the meta-analysis (n = 10,072). Perceptions of defeat and entrapment were strong (around r = 0.60) and similar in size across all four psychiatric disorders. Perceptions of defeat were particularly strong in depression (r = 0.73). There was no between-study heterogeneity; therefore moderator analyses were conducted in an exploratory fashion. There was no evidence of publication bias. Analyses were cross-sectional, which precludes establishing temporal precedence or causality. Some of the meta-analyses were based on relatively small numbers of effect sizes, which may limit their generalisability. Perceptions of defeat and entrapment are clinically important in depression, anxiety problems, PTSD, and suicidality. Similar-sized, strong relationships across four different psychiatric disorders could suggest that perceptions of defeat and entrapment are transdiagnostic constructs. The results suggest that clinicians and researchers need to become more aware of perceptions of defeat and entrapment. Copyright © 2015 Elsevier B.V. All rights reserved.
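    For readers unfamiliar with the machinery, the sketch below carries out a random-effects pooling of Pearson correlations on the Fisher-z scale with the DerSimonian-Laird estimate of between-study variance. The per-study correlations and sample sizes are invented and are not the 40 studies analysed in the paper.

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes.
r = np.array([0.55, 0.68, 0.61, 0.73, 0.49, 0.64])
n = np.array([120, 85, 210, 64, 150, 98])

z = np.arctanh(r)              # Fisher z transform of each correlation
v = 1.0 / (n - 3)              # within-study variance of z
w = 1.0 / v

# DerSimonian-Laird estimate of the between-study variance tau^2
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)

# Random-effects pooled estimate, back-transformed to the r scale
w_star = 1.0 / (v + tau2)
z_re = np.sum(w_star * z) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
lo, hi = np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se)
print(f"pooled r = {np.tanh(z_re):.2f} (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")
```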

  5. Comparison of the quench and fault current limiting characteristics of the flux-coupling type SFCL with single and three-phase transformer

    NASA Astrophysics Data System (ADS)

    Jung, Byung Ik; Cho, Yong Sun; Park, Hyoung Min; Chung, Dong Chul; Choi, Hyo Sang

    2013-01-01

    The South Korean power grid has a network structure for the flexible operation of the system. The continuously increasing power demand necessitated the increase of power facilities, which decreased the impedance in the power system. As a result, the size of the fault current in the event of a system fault increased. As this increased fault current size is threatening the breaking capacity of the circuit breaker, the main protective device, a solution to this problem is needed. The superconducting fault current limiter (SFCL) has been designed to address this problem. SFCL supports the stable operation of the circuit breaker through its excellent fault-current-limiting operation [1-5]. In this paper, the quench and fault current limiting characteristics of the flux-coupling-type SFCL with one three-phase transformer were compared with those of the same SFCL type but with three single-phase transformers. In the case of the three-phase transformers, both the superconducting elements of the fault and sound phases were quenched, whereas in the case of the single-phase transformer, only that of the fault phase was quenched. For the fault current limiting rate, both cases showed similar rates for the single line-to-ground fault, but for the three-wire earth fault, the fault current limiting rate of the single-phase transformer was over 90% whereas that of the three-phase transformer was about 60%. It appears that when the three-phase transformer was used, the limiting rate decreased because the fluxes by the fault current of each phase were linked in one core. When the power loads of the superconducting elements were compared by fault type, the initial (half-cycle) load was great when the single-phase transformer was applied, whereas for the three-phase transformer, its power load was slightly lower at the initial stage but became greater after the half fault cycle.

  6. The engineered biofiltration system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pisotti, D.A.

    1997-12-31

    For years, biofiltration has meant compost, peat, bark, leaf mulch, or any combination of these as the substrate to house microorganisms. This has led to a number of operational and maintenance problems, including: compaction, channeling, anaerobic zones, dry spots, pressure drop, and media degradation. All of these cause reduced efficiency and increased maintenance and operational costs. For these reasons, inert media, including plastic beads and low-grade carbons, have been added to the media to provide buffering capacity, resist compaction and channeling, and increase efficiency. This has led to a search for a more reliable and sturdy medium. The medium the authors chose was activated carbon. Pelletized activated carbon was the ideal candidate due to its uniform size and shape, its inherent hardness, its adsorptive capacity, and its ability to withstand microbial degradation. The pressure drop of the system will remain constant after microbial growth occurs, due to the ability to wash the media bed. Carbon allows for the removal of excess biomass, which cannot be performed on organic media; having too many microbes and not enough food (i.e., VOCs) is one of the problems leading to media degradation. Carbon also allows spike or increased loads to be treated without performance suffering. Carbon also has tremendous surface area, which allows more microorganisms to be present in a smaller volume, thereby reducing the overall size of the biofilter vessel. This paper will discuss further the findings of a pilot test that was performed using activated carbon as the media for microbial growth. This paper will show the performance of the carbon-based biofilter system with respect to pressure drop, residence time, removal efficiency, microbial populations, temperature, moisture, and water requirements. The pilot unit is 350 acfm and operated for 4 months on an air stream in which the contaminant concentrations varied greatly every few minutes.

  7. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles

    PubMed Central

    2017-01-01

    Real-time path planning for autonomous underwater vehicle (AUV) is a very difficult and challenging task. Bioinspired neural network (BINN) has been used to deal with this problem for its many distinct advantages: that is, no learning process is needed and realization is also easy. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including complex computing problem when the environment is very large and repeated path problem when the size of obstacles is bigger than the detection range of sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of sensors. Then the BINN will move with the AUV and the computing could be reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid big-size obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of neural activities. Finally, some experiments are conducted under various 3D underwater environments. The experimental results show that the proposed BINN based method can deal with the real-time path planning problem for AUV efficiently. PMID:28255297

  8. Energy Harvesting Based Body Area Networks for Smart Health.

    PubMed

    Hao, Yixue; Peng, Limei; Lu, Huimin; Hassan, Mohammad Mehedi; Alamri, Atif

    2017-07-10

    Body area networks (BANs) are configured with a great number of ultra-low power consumption wearable devices, which constantly monitor physiological signals of the human body and thus realize intelligent monitoring. However, the collection and transfer of human body signals consume energy, and considering the comfort demand of wearable devices, both the size and the capacity of a wearable device's battery are limited. Thus, minimizing the energy consumption of wearable devices and optimizing the BAN energy efficiency is still a challenging problem. Therefore, in this paper, we propose an energy harvesting-based BAN for smart health and discuss an optimal resource allocation scheme to improve BAN energy efficiency. Specifically, firstly, considering energy harvesting in a BAN and the time limits of human body signal transfer, we formulate the energy efficiency optimization problem of time division for wireless energy transfer and wireless information transfer. Secondly, we convert the optimization problem into a convex optimization problem under a linear constraint and propose a closed-form solution to the problem. Finally, simulation results proved that when the size of data acquired by the wearable devices is small, the proportion of energy consumed by the circuit and signal acquisition of the wearable devices is big, and when the size of data acquired by the wearable devices is big, the energy consumed by the signal transfer of the wearable device is decisive.

  9. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    PubMed

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

    Real-time path planning for autonomous underwater vehicle (AUV) is a very difficult and challenging task. Bioinspired neural network (BINN) has been used to deal with this problem for its many distinct advantages: that is, no learning process is needed and realization is also easy. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including complex computing problem when the environment is very large and repeated path problem when the size of obstacles is bigger than the detection range of sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of sensors. Then the BINN will move with the AUV and the computing could be reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid big-size obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of neural activities. Finally, some experiments are conducted under various 3D underwater environments. The experimental results show that the proposed BINN based method can deal with the real-time path planning problem for AUV efficiently.
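    The sketch below is a heavily simplified, discrete stand-in for the bioinspired-neural-network idea: the target injects activity into a grid, the activity spreads with decay while obstacle cells are clamped to zero, and the vehicle greedily climbs the resulting activity landscape. It omits the shunting dynamics, the moving local network centred on the AUV, and the virtual-target and target-attractor mechanisms of the paper; grid size, obstacle layout, and decay factor are arbitrary.

```python
import numpy as np

n = 20
grid = np.zeros((n, n))                           # neural activity landscape
obstacles = {(i, 10) for i in range(3, 17)}       # a wall with gaps at both ends
target, start = (18, 18), (1, 1)

def neighbours(i, j):
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di or dj) and 0 <= i + di < n and 0 <= j + dj < n:
                yield i + di, j + dj

# Relax the landscape: activity decays as it propagates; obstacles stay at zero.
for _ in range(400):
    new = np.zeros_like(grid)
    for i in range(n):
        for j in range(n):
            if (i, j) in obstacles:
                continue
            new[i, j] = 0.9 * max(grid[x, y] for x, y in neighbours(i, j))
    new[target] = 1.0                             # target keeps maximal activity
    grid = new

# Greedy ascent of the activity landscape from start to target.
pos, path = start, [start]
while pos != target and len(path) < 200:
    pos = max(neighbours(*pos), key=lambda q: grid[q])
    path.append(pos)
print("path length:", len(path), "reached target:", pos == target)
```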

  10. Energy Harvesting Based Body Area Networks for Smart Health

    PubMed Central

    Hao, Yixue; Peng, Limei; Alamri, Atif

    2017-01-01

    Body area networks (BANs) are configured with a great number of ultra-low power consumption wearable devices, which constantly monitor physiological signals of the human body and thus realize intelligent monitoring. However, the collection and transfer of human body signals consume energy, and considering the comfort demand of wearable devices, both the size and the capacity of a wearable device’s battery are limited. Thus, minimizing the energy consumption of wearable devices and optimizing the BAN energy efficiency is still a challenging problem. Therefore, in this paper, we propose an energy harvesting-based BAN for smart health and discuss an optimal resource allocation scheme to improve BAN energy efficiency. Specifically, firstly, considering energy harvesting in a BAN and the time limits of human body signal transfer, we formulate the energy efficiency optimization problem of time division for wireless energy transfer and wireless information transfer. Secondly, we convert the optimization problem into a convex optimization problem under a linear constraint and propose a closed-form solution to the problem. Finally, simulation results proved that when the size of data acquired by the wearable devices is small, the proportion of energy consumed by the circuit and signal acquisition of the wearable devices is big, and when the size of data acquired by the wearable devices is big, the energy consumed by the signal transfer of the wearable device is decisive. PMID:28698501

  11. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized, under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension, and in certain cases non-convex. For the objective of minimal reactive power, an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it into an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and increased performance in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463
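
    A minimal sketch of the kind of constrained minimization described above, not the paper's exact model: the disc coil is discretized into concentric loops with currents I, the field at the receiver is a linear map a^T I, and a quadratic "power" I^T W I is minimized subject to a^T I = B_req, which has the closed form I* = B_req W^{-1} a / (a^T W^{-1} a). The field coefficients and weights below are toy on-axis loop factors, assumed for illustration.

```python
import numpy as np

mu0 = 4e-7 * np.pi
radii = np.linspace(0.01, 0.10, 10)        # loop radii [m] (assumed discretization)
z = 0.5                                    # receiver distance on the coil axis [m]
B_req = 1e-9                               # required axial field at the receiver [T]

# On-axis field of a circular loop carrying 1 A (standard Biot-Savart result).
a = mu0 * radii**2 / (2 * (radii**2 + z**2) ** 1.5)

# Crude per-loop "power" weights, here simply proportional to loop circumference.
W = np.diag(2 * np.pi * radii)

W_inv_a = np.linalg.solve(W, a)
I_opt = B_req * W_inv_a / (a @ W_inv_a)    # optimal current in each loop [A]
print(np.round(I_opt, 6), "field check:", a @ I_opt)
```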

  12. On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1984-01-01

    Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)
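
    For readers unfamiliar with the quantity being evaluated, the sketch below estimates an experimentwise Type I error probability by Monte Carlo when group sample sizes are unequal. It is emphatically not Wilcox's two-stage procedure; it only illustrates the error rate under a generic set of pairwise tests, with sample sizes and test choice assumed for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sizes = [8, 12, 20, 25]            # unequal first-stage sample sizes (assumed)
alpha_per_test, reps = 0.05, 5000
hits = 0
for _ in range(reps):
    groups = [rng.normal(size=n) for n in sizes]            # all group means equal (null true)
    ps = [stats.ttest_ind(groups[i], groups[j], equal_var=False).pvalue
          for i in range(len(sizes)) for j in range(i + 1, len(sizes))]
    hits += any(p < alpha_per_test for p in ps)              # at least one false rejection?
print("estimated experimentwise Type I error:", hits / reps)
```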

  13. Size-biased distributions in the generalized beta distribution family, with applications to forestry

    Treesearch

    Mark J. Ducey; Jeffrey H. Gove

    2015-01-01

    Size-biased distributions arise in many forestry applications, as well as other environmental, econometric, and biomedical sampling problems. We examine the size-biased versions of the generalized beta of the first kind, generalized beta of the second kind and generalized gamma distributions. These distributions include, as special cases, the Dagum (Burr Type III),...
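
    The core idea behind such distributions is that a size-biased density is f*(x) = x f(x) / E[X]. The sketch below checks this numerically for a gamma distribution, whose size-biased version with shape k is again a gamma with shape k + 1; it is an illustration of the weighting mechanism, not of the forestry models in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k, theta, n = 2.0, 1.5, 200_000
x = rng.gamma(k, theta, size=n)                 # sample from gamma(k, theta)

# Resample with probability proportional to size x (size-biased draw).
idx = rng.choice(n, size=n, p=x / x.sum())
x_sb = x[idx]

print("mean of size-biased sample:", round(x_sb.mean(), 3))
print("mean of gamma(k+1, theta): ", (k + 1) * theta)   # theoretical check
```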

  14. Modeling of Particle Agglomeration in Nanofluids

    NASA Astrophysics Data System (ADS)

    Kanagala, Hari Krishna

    Nanofluids are colloidal dispersions of nano-sized particles (<100 nm in diameter) in dispersion mediums. They are of great interest in industrial applications as heat transfer fluids owing to their enhanced thermal conductivities. Stability is a major problem hindering the industrial application of nanofluids: agglomeration followed by sedimentation drastically decreases their shelf life. The current research addresses agglomeration and how it affects the shelf life of a nanofluid. Agglomeration in nanofluids arises from interparticle interactions, which are quantified by various theories. By altering governing properties such as volume fraction, pH, and electrolyte concentration, nanofluids exhibiting instant agglomeration, slow agglomeration, or no agglomeration can be produced. A numerical model based on the discretized population balance equations is created to analyze the particle size distribution at different times. Agglomeration effects were analyzed for alumina nanoparticles with an average particle size of 150 nm dispersed in de-ionized water. As the pH was moved toward the isoelectric point of the alumina nanofluid, the particle size distribution became broader and shifted rapidly to larger sizes over time; increasing the electrolyte concentration had the same effect. The two effects together can be used to create different temporal trends in the particle size distributions. Faster agglomeration is attributed to the weakening of electrostatic double-layer repulsion caused by the decrease in induced charge and double-layer thickness around the particles. Larger particle clusters show less agglomeration because they approach the equilibrium size. The procedures and processes described in this work can be used to generate more stable nanofluids.
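
    As a schematic of what a discretized population balance model tracks, the sketch below integrates the classical Smoluchowski coagulation equation with a constant collision kernel, dn_k/dt = 0.5 * sum_{i+j=k} beta n_i n_j - n_k * sum_i beta n_i. The kernel value, time step, and truncation size are assumptions; the DLVO-informed interaction physics of the thesis is not included.

```python
import numpy as np

K = 50                       # largest tracked aggregate size (in primary particles)
beta = 1e-3                  # constant collision kernel (assumed units)
n = np.zeros(K + 1)
n[1] = 1.0                   # start with a monodisperse population of primaries
dt, steps = 0.1, 2000

for _ in range(steps):
    birth = np.zeros(K + 1)
    for k in range(2, K + 1):                       # gain: i + (k - i) collisions
        i = np.arange(1, k)
        birth[k] = 0.5 * beta * np.sum(n[i] * n[k - i])
    death = beta * n * n[1:].sum()                  # loss: collisions with anything
    n += dt * (birth - death)                       # aggregates beyond K are ignored

sizes = np.arange(K + 1)
print("mean aggregate size:", round((sizes * n).sum() / n[1:].sum(), 2))
```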

  15. Morphological change to birds over 120 years is not explained by thermal adaptation to climate change.

    PubMed

    Salewski, Volker; Siebenrock, Karl-Heinz; Hochachka, Wesley M; Woog, Friederike; Fiedler, Wolfgang

    2014-01-01

    Changes in morphology have been postulated as one of the responses of animals to global warming, with increasing ambient temperatures leading to decreasing body size. However, the results of previous studies are inconsistent. Problems in analysing trends in body size may stem from the short-term nature of data sets, the choice of surrogates for body size, the models used for data analysis, and the interpretation, as morphology may change in response to ecological drivers other than climate and irrespective of size. Using generalized additive models, we analysed trends in three morphological traits of 4529 specimens of eleven bird species collected between 1889 and 2010 in southern Germany and adjacent areas. Changes and trends in morphology over time were not consistent when all species and traits were considered. Six of the eleven species displayed a significant association of tarsus length with time, but the direction of the association varied. Wing length decreased in the majority of species, but there were few significant trends in wing pointedness. Few of the traits were significantly associated with mean ambient temperatures. We argue that although there are significant changes in morphology over time, there is no consistent trend for decreasing body size and therefore no support for the hypothesis that body size is decreasing because of climate change. Inconsistent trends in surrogates for size within species indicate that fluctuations are influenced by factors other than temperature, and that not all surrogates may represent size appropriately. Future analyses should carefully select measures of body size and consider alternative hypotheses for change.

  16. Persistent homology analysis of ion aggregations and hydrogen-bonding networks.

    PubMed

    Xia, Kelin

    2018-05-16

    Despite the great advancement of experimental tools and theoretical models, a quantitative characterization of the microscopic structures of ion aggregates and their associated water hydrogen-bonding networks still remains a challenging problem. In this paper, a newly-invented mathematical method called persistent homology is introduced, for the first time, to quantitatively analyze the intrinsic topological properties of ion aggregation systems and hydrogen-bonding networks. The two most distinguishable properties of persistent homology analysis of assembly systems are as follows. First, it does not require a predefined bond length to construct the ion or hydrogen-bonding network. Persistent homology results are determined by the morphological structure of the data only. Second, it can directly measure the size of circles or holes in ion aggregates and hydrogen-bonding networks. To validate our model, we consider two well-studied systems, i.e., NaCl and KSCN solutions, generated from molecular dynamics simulations. They are believed to represent two morphological types of aggregation, i.e., local clusters and extended ion networks. It has been found that the two aggregation types have distinguishable topological features and can be characterized by our topological model very well. Further, we construct two types of networks, i.e., O-networks and H2O-networks, for analyzing the topological properties of hydrogen-bonding networks. It is found that for both models, KSCN systems demonstrate much more dramatic variations in their local circle structures with a concentration increase. A consistent increase of large-sized local circle structures is observed and the sizes of these circles become more and more diverse. In contrast, NaCl systems show no obvious increase of large-sized circles. Instead a consistent decline of the average size of the circle structures is observed and the sizes of these circles become more and more uniform with a concentration increase. As far as we know, these unique intrinsic topological features in ion aggregation systems have never been pointed out before. More importantly, our models can be directly used to quantitatively analyze the intrinsic topological invariants, including circles, loops, holes, and cavities, of any network-like structures, such as nanomaterials, colloidal systems, biomolecular assemblies, among others. These topological invariants cannot be described by traditional graph and network models.
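
    For orientation, the sketch below shows a minimal persistent-homology pass over a 3-D point cloud, assuming the `ripser` Python package is installed: H1 features (loops/circles) whose persistence exceeds a threshold are counted, mimicking how ring structures can be quantified without fixing a bond length in advance. The random coordinates and the threshold are placeholders, not the paper's solvation data or settings.

```python
import numpy as np
from ripser import ripser          # assumes the ripser.py package is available

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(200, 3))   # stand-in for atom coordinates

dgms = ripser(points, maxdim=1)['dgms']          # persistence diagrams for H0 and H1
h1 = dgms[1]
persistence = h1[:, 1] - h1[:, 0]                # death - birth for each loop
sig = persistence > 1.0                          # assumed significance threshold
print(int(sig.sum()), "persistent loops; mean persistence",
      round(float(persistence[sig].mean()), 2) if sig.any() else "n/a")
```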

  17. Infrared variation reduction by simultaneous background suppression and target contrast enhancement for deep convolutional neural network-based automatic target recognition

    NASA Astrophysics Data System (ADS)

    Kim, Sungho

    2017-06-01

    Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. Directly applying RGB-CNN to the IR ATR problem fails because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) is presented to cope with these problems. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem, IR variations, is mitigated by the proposed shifted ramp function-based intensity transformation, which suppresses the background and enhances the target contrast simultaneously. Experimental results on synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
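
    The sketch below shows a shifted-ramp intensity transform of the kind the abstract describes: pixel values below a shift point are clipped to zero (background suppression) and the remaining range is stretched linearly (target contrast enhancement). The shift and slope values are illustrative, not the paper's calibrated settings.

```python
import numpy as np

def shifted_ramp(image, shift, slope):
    """Map intensities: zero below `shift`, then a linear ramp saturating at 1."""
    out = slope * (image.astype(np.float32) - shift)
    return np.clip(out, 0.0, 1.0)

# Toy 8-bit "IR" frame: dim uniform background plus a brighter square target.
frame = np.full((64, 64), 60, dtype=np.uint8)
frame[28:36, 28:36] = 140
enhanced = shifted_ramp(frame, shift=100, slope=1 / 80)   # assumed parameters

print("background ->", enhanced[0, 0], " target ->", enhanced[30, 30])
```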

  18. Multiple imputation of missing fMRI data in whole brain analysis

    PubMed Central

    Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.

    2012-01-01

    Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and the number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and interpretations of results. PMID:22500925
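
    The sketch below illustrates the general idea of multiple imputation on a subjects-by-voxels matrix using scikit-learn's IterativeImputer and a simple averaging of the completed datasets. It is only a schematic of the concept, not the authors' fMRI pipeline (which exploits spatial neighbour information), and the data dimensions, missingness rate, and number of imputations are assumptions.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
data = rng.normal(size=(20, 50))                 # 20 subjects x 50 voxels (toy)
mask = rng.random(data.shape) < 0.1              # ~10% of voxel values missing
data[mask] = np.nan

m = 5                                            # number of imputations
completed = [IterativeImputer(sample_posterior=True, random_state=i)
             .fit_transform(data) for i in range(m)]

# Pool the m completed datasets (a simplification of Rubin's rules), then run a
# group-level statistic; here just the voxelwise mean across subjects.
pooled = np.mean(completed, axis=0)
print("voxelwise group means:", np.round(pooled.mean(axis=0)[:5], 3))
```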

  19. Adolescents' strengths and difficulties: approach to attachment styles.

    PubMed

    Keskin, G; Cam, O

    2010-06-01

    This research is a descriptive field study conducted to investigate the relationship between adolescent difficulties and attachment style. The study aims to examine the relationship between adolescent attachment style and strengths and difficulties in Turkey. Attachment styles and difficulty patterns were compared within a group of adolescents aged 11-16 years. Several questionnaires, including the Strengths and Difficulties Questionnaire and the Relationship Scales Questionnaire, were administered to 384 adolescents (mean age 12.10 +/- 1.4 years). The data were analysed using descriptive statistics, Pearson correlation coefficients, ANOVA, t-tests, Kruskal-Wallis tests and effect sizes. The secure attachment style was associated with increased levels of prosocial behaviour and decreased levels of emotional symptoms, hyperactivity/inattention, peer relationship problems, conduct problems and total difficulties scores. The fearful attachment style was associated with increased levels of emotional symptoms and total difficulties scores. The dismissing attachment style was significantly associated with higher levels of emotional symptoms, hyperactivity/inattention and total difficulties scores, and with lower levels of prosocial behaviour. Adolescents' strengths and difficulties are thus associated with their attachment style; the insecure dismissing and fearful attachment styles were associated with increased mental symptom reporting. It is suggested that further studies may illuminate the clinical value of attachment disorder and quantify the parental contribution to psychopathology. Providing a structured, therapeutic mental support programme to adolescents who have attachment problems could be beneficial in improving their mental status.

  20. Measuring non-recurrent congestion in small to medium sized urban areas.

    DOT National Transportation Integrated Search

    2013-05-01

    Understanding the relative magnitudes of recurrent vs. non-recurrent congestion in an urban area is critical to the selection of proper countermeasures and the appropriate allocation of resources to address congestion problems. Small to medium sized ...
