Sample records for time scale problem

  1. Scale invariance in biophysics

    NASA Astrophysics Data System (ADS)

    Stanley, H. Eugene

    2000-06-01

    In this general talk, we offer an overview of some problems of interest to biophysicists, medical physicists, and econophysicists. These include DNA sequences, brain plaques in Alzheimer patients, heartbeat intervals, and time series giving price fluctuations in economics. These problems share a common feature: they exhibit behavior that appears to be scale invariant. Particularly vexing is the problem that some of these scale-invariant phenomena are not stationary: their statistical properties vary from one time interval to the next or from one position to the next. We will discuss methods, such as wavelet methods and multifractal methods, to cope with these problems.
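
    The abstract mentions wavelet and multifractal methods for non-stationary, scale-invariant series; a closely related and widely used technique is detrended fluctuation analysis (DFA). Below is a minimal sketch of DFA in Python (not from the talk itself; the window sizes and polynomial order are illustrative assumptions), estimating the scaling exponent of a series from the slope of log F(n) versus log n.

      import numpy as np

      def dfa_exponent(x, window_sizes=(8, 16, 32, 64, 128), order=1):
          """Estimate the DFA scaling exponent of a 1-D series x.

          The profile (cumulative sum of the mean-subtracted series) is
          split into non-overlapping windows; each window is detrended
          with a polynomial fit, and the RMS fluctuation F(n) is computed
          per window size n. The exponent is the slope of log F(n) vs log n.
          """
          profile = np.cumsum(x - np.mean(x))
          fluctuations = []
          for n in window_sizes:
              n_windows = len(profile) // n
              rms = []
              for i in range(n_windows):
                  seg = profile[i * n:(i + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, order), t)
                  rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
              fluctuations.append(np.mean(rms))
          slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
          return slope

      # White noise should give an exponent near 0.5.
      rng = np.random.default_rng(0)
      print(dfa_exponent(rng.standard_normal(4096)))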

  2. Separation of time scales in the HCA model for sand

    NASA Astrophysics Data System (ADS)

    Niemunis, Andrzej; Wichtmann, Torsten

    2014-10-01

    Separation of time scales is used in a high cycle accumulation (HCA) model for sand. An important difficulty of the model is the limited applicability of Miner's rule to multiaxial cyclic loadings applied simultaneously or in combination with monotonic loading. Another problem is the lack of simplified objective HCA formulas for geotechnical settlement problems. Possible solutions to these problems are discussed.

  3. Scale problems in reporting landscape pattern at the regional scale

    Treesearch

    R.V. O'Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham

    1996-01-01

    Remotely sensed data for the Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...

  4. Acoustic streaming: an arbitrary Lagrangian-Eulerian perspective.

    PubMed

    Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco

    2017-08-25

    We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid-structure interaction problems in microacoustofluidic devices. After the formulation's exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches.
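
    The time-scale separation can be made concrete with the standard two-scale perturbation setup (sketched here in generic notation as an illustration; the paper's actual ALE formulation works with the displacement field and the Lagrangian velocity). Fields are expanded in the small oscillation amplitude ε and the second-order problem is averaged over the fast period T:

      \[
      \boldsymbol{u} = \varepsilon \boldsymbol{u}_1 + \varepsilon^{2} \boldsymbol{u}_2 + \mathcal{O}(\varepsilon^{3}),
      \qquad
      \langle f \rangle(t) = \frac{1}{T}\int_{t}^{t+T} f(\tau)\, d\tau,
      \]

    so the first-order problem is linear and oscillatory at the fast scale, while the time-averaged second-order problem is steady and forced by products of first-order fields.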

  5. Acoustic streaming: an arbitrary Lagrangian–Eulerian perspective

    PubMed Central

    Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco

    2017-01-01

    We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid–structure interaction problems in microacoustofluidic devices. After the formulation’s exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches. PMID:29051631

  6. Introducing the MCHF/OVRP/SDMP: Multicapacitated/Heterogeneous Fleet/Open Vehicle Routing Problems with Split Deliveries and Multiproducts

    PubMed Central

    Yilmaz Eroglu, Duygu; Caglar Gencosman, Burcu; Cavdur, Fatih; Ozmutlu, H. Cenk

    2014-01-01

    In this paper, we analyze a real-world OVRP problem for a production company. Considering real-world constraints, we classify our problem as a multicapacitated/heterogeneous fleet/open vehicle routing problem with split deliveries and multiproducts (MCHF/OVRP/SDMP), which is a novel classification of an OVRP. We have developed a mixed integer programming (MIP) model for the problem and generated test problems of different sizes (10–90 customers) considering real-world parameters. Although MIP is able to find optimal solutions for small problems (10 customers), the problem gets harder to solve as the number of customers increases, and thus MIP could not find optimal solutions for problems with more than 10 customers. Moreover, MIP fails to find any feasible solution for large-scale problems (50–90 customers) within the time limit (7200 seconds). Therefore, we have developed a genetic algorithm (GA) based solution approach for large-scale problems. The experimental results show that the GA-based approach reaches successful solutions with a 9.66% gap in 392.8 s on average, instead of 7200 s, for problems with 10–50 customers. For large-scale problems (50–90 customers), the GA reaches feasible solutions within the time limit. In conclusion, for real-world applications, the GA is preferable to MIP for reaching feasible solutions in short time periods. PMID:25045735
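
    As a flavor of the GA-based approach described above, here is a minimal permutation GA in Python for a generic routing-style objective (route cost over a distance matrix). Everything here, including the population size, the order-crossover operator, the mutation rate, and the random cost matrix, is an illustrative assumption, not the paper's actual algorithm.

      import random

      def order_crossover(p1, p2):
          """OX: copy a slice from p1, fill the rest in p2's order."""
          n = len(p1)
          i, j = sorted(random.sample(range(n), 2))
          child = [None] * n
          child[i:j] = p1[i:j]
          fill = [g for g in p2 if g not in child[i:j]]
          k = 0
          for idx in range(n):
              if child[idx] is None:
                  child[idx] = fill[k]
                  k += 1
          return child

      def route_cost(route, dist):
          return sum(dist[a][b] for a, b in zip(route, route[1:]))

      def ga(dist, pop_size=50, generations=200, mutation_rate=0.2):
          n = len(dist)
          pop = [random.sample(range(n), n) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda r: route_cost(r, dist))
              survivors = pop[:pop_size // 2]
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  child = order_crossover(a, b)
                  if random.random() < mutation_rate:  # swap mutation
                      i, j = random.sample(range(n), 2)
                      child[i], child[j] = child[j], child[i]
                  children.append(child)
              pop = survivors + children
          return min(pop, key=lambda r: route_cost(r, dist))

      random.seed(1)
      n = 12
      dist = [[abs(i - j) + random.random() for j in range(n)] for i in range(n)]
      print(route_cost(ga(dist), dist))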

  7. Spatial and Temporal Scaling of Thermal Infrared Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Goel, Narendra S.

    1995-01-01

    Although remote sensing has a central role to play in the acquisition of synoptic data obtained at multiple spatial and temporal scales to facilitate our understanding of local and regional processes as they influence the global climate, the use of thermal infrared (TIR) remote sensing data in this capacity has received only minimal attention. This results from some fundamental challenges that are associated with employing TIR data collected at different space and time scales, either with the same or different sensing systems, and also from other problems that arise in applying a multiple scaled approach to the measurement of surface temperatures. In this paper, we describe some of the more important problems associated with using TIR remote sensing data obtained at different spatial and temporal scales, examine why these problems appear as impediments to using multiple scaled TIR data, and provide some suggestions for future research activities that may address these problems. We elucidate the fundamental concept of scale as it relates to remote sensing and explore how space and time relationships affect TIR data from a problem-dependency perspective. We also describe how linear and non-linear relationships between observations and parameters affect the quantitative analysis of TIR data. Some insight is given on how the atmosphere between target and sensor influences the accurate measurement of surface temperatures and how these effects will be compounded in analyzing multiple scaled TIR data. Last, we describe some of the challenges in modeling TIR data obtained at different space and time scales and discuss how multiple scaled TIR data can be used to provide new and important information for measuring and modeling land-atmosphere energy balance processes.

  8. Open shop scheduling problem to minimize total weighted completion time

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian

    2017-01-01

    A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
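
    The weighted shortest processing time idea underlying such heuristics is easy to state: order jobs by descending weight-to-processing-time ratio. The sketch below is a generic single-machine illustration of the WSPT rule and the objective it targets (where WSPT, Smith's rule, is exactly optimal); it is not the paper's WSPTB block construction, which additionally handles the open-shop machine assignments.

      def total_weighted_completion_time(jobs):
          """jobs: list of (processing_time, weight); returns sum of w_j * C_j."""
          t, total = 0, 0
          for p, w in jobs:
              t += p                 # completion time C_j on a single machine
              total += w * t
          return total

      jobs = [(3, 1), (1, 4), (2, 2), (5, 5)]
      wspt = sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)  # WSPT order
      print(total_weighted_completion_time(wspt))   # optimal on one machine: 61
      print(total_weighted_completion_time(jobs))   # arbitrary order, worse: 86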

  9. Scaling Relations and Self-Similarity of 3-Dimensional Reynolds-Averaged Navier-Stokes Equations.

    PubMed

    Ercan, Ali; Kavvas, M Levent

    2017-07-25

    Scaling conditions to achieve self-similar solutions of the 3-Dimensional (3D) Reynolds-Averaged Navier-Stokes Equations, as an initial and boundary value problem, are obtained by utilizing the Lie Group of Point Scaling Transformations. By means of an open-source Navier-Stokes solver and the derived self-similarity conditions, we demonstrated self-similarity within the time variation of flow dynamics for a rigid-lid cavity problem under both up-scaled and down-scaled domains. The strength of the proposed approach lies in its ability to consider the underlying flow dynamics not only through the governing equations under consideration but also through the initial and boundary conditions, hence allowing one to obtain perfect self-similarity in different time and space scales. The proposed methodology can be a valuable tool in obtaining self-similar flow dynamics at a preferred level of detail, which can be represented by initial and boundary value problems under specific assumptions.
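
    For intuition, a one-parameter point scaling of the incompressible Navier-Stokes equations (a textbook illustration in our own notation, not the paper's derivation for the Reynolds-averaged system) reads:

      \[
      x^{*} = \lambda x, \quad t^{*} = \lambda^{\alpha} t, \quad
      \boldsymbol{u}^{*} = \lambda^{1-\alpha}\boldsymbol{u}, \quad
      p^{*} = \lambda^{2-2\alpha} p, \quad
      \nu^{*} = \lambda^{2-\alpha}\nu,
      \]

    under which every term of \(\partial_t\boldsymbol{u} + (\boldsymbol{u}\cdot\nabla)\boldsymbol{u} = -\nabla p + \nu\nabla^{2}\boldsymbol{u}\) (with p the kinematic pressure) picks up the common factor \(\lambda^{1-2\alpha}\); a solution on one domain therefore maps to a self-similar solution on an up- or down-scaled domain, provided the initial and boundary conditions are rescaled consistently.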

  10. When Should Zero Be Included on a Scale Showing Magnitude?

    ERIC Educational Resources Information Center

    Kozak, Marcin

    2011-01-01

    This article addresses an important problem of graphing quantitative data: should one include zero on the scale showing magnitude? Based on a real time series example, the problem is discussed and some recommendations are proposed.

  11. Time and length scales within a fire and implications for numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tieszen, Sheldon R.

    2000-02-02

    A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order of magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale, advection time scales vary as the length scale, and diffusion time scales vary as the square of the length scale. Due to the variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution of fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required as two to three decades of length scale are captured by solution of discretized conservation equations. Whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation of practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
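
    The stated scalings can be made concrete with a quick order-of-magnitude calculation; the sketch below (with illustrative, assumed values for the advection velocity and diffusivity) shows how the dominant process flips as the length scale grows:

      import math

      g = 9.81          # gravity, m/s^2
      u = 1.0           # assumed advection velocity, m/s
      D = 1e-5          # assumed molecular diffusivity, m^2/s

      for L in (1e-4, 1e-2, 1.0, 1e2):   # length scales in meters
          t_buoy = math.sqrt(L / g)      # buoyant time ~ sqrt(L)
          t_adv = L / u                  # advective time ~ L
          t_diff = L ** 2 / D            # diffusive time ~ L^2
          print(f"L={L:8.0e} m  buoyancy={t_buoy:8.1e} s  "
                f"advection={t_adv:8.1e} s  diffusion={t_diff:8.1e} s")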

  12. Development of internalizing problems from adolescence to emerging adulthood: Accounting for heterotypic continuity with vertical scaling.

    PubMed

    Petersen, Isaac T; Lindhiem, Oliver; LeBeau, Brandon; Bates, John E; Pettit, Gregory S; Lansford, Jennifer E; Dodge, Kenneth A

    2018-03-01

    Manifestations of internalizing problems, such as specific symptoms of anxiety and depression, can change across development, even if individuals show strong continuity in rank-order levels of internalizing problems. This illustrates the concept of heterotypic continuity, and raises the question of whether common measures might be construct-valid for one age but not another. This study examines mean-level changes in internalizing problems across a long span of development while accounting for heterotypic continuity by using age-appropriate, changing measures. Internalizing problems from ages 14 to 24 were studied longitudinally in a community sample (N = 585), using Achenbach's Youth Self-Report (YSR) and Young Adult Self-Report (YASR). Heterotypic continuity was evaluated with an item response theory (IRT) approach to vertical scaling, linking different measures over time to be on the same scale, as well as with a Thurstone scaling approach. With vertical scaling, internalizing problems peaked in mid-to-late adolescence and showed a group-level decrease from adolescence to early adulthood, a change that would not have been seen with the approach of using only age-common items. Individuals' trajectories were sometimes different from what would have been seen with the common-items approach. Findings support the importance of considering heterotypic continuity when examining development, and of vertical scaling to account for heterotypic continuity with changing measures. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. Oil price and exchange rate co-movements in Asian countries: Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Hussain, Muntazir; Zebende, Gilney Figueira; Bashir, Usman; Donghong, Ding

    2017-01-01

    Most empirical literature investigates the relation between oil prices and exchange rates through different models. These models measure this relationship on two time scales (long and short term), and often fail to observe the co-movement of these variables at different time scales. We apply a detrended cross-correlation approach (DCCA) to investigate the co-movements of the oil price and exchange rate in 12 Asian countries. This model determines the co-movements of oil price and exchange rate at different time scales. The exchange rate and oil price time series exhibit the unit root problem, so their correlation and cross-correlation are very difficult to measure; the result becomes spurious when a periodic trend or unit root problem occurs in these time series. This approach measures the possible cross-correlation at different time scales while controlling for the unit root problem. Our empirical results support the co-movements of oil prices and exchange rates, indicating a weak negative cross-correlation between oil price and exchange rate for most Asian countries included in our sample. The results have important monetary, fiscal, inflationary, and trade policy implications for these countries.
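
    A minimal sketch of the DCCA cross-correlation coefficient in Python is given below (the window sizes and the linear detrending are illustrative assumptions); it integrates both series, detrends them window by window, and compares the detrended covariance with the individual detrended variances:

      import numpy as np

      def dcca_coefficient(x, y, n):
          """Detrended cross-correlation coefficient rho_DCCA at window size n."""
          X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
          covs, varxs, varys = [], [], []
          for i in range(0, len(X) - n + 1, n):      # non-overlapping windows
              t = np.arange(n)
              rx = X[i:i + n] - np.polyval(np.polyfit(t, X[i:i + n], 1), t)
              ry = Y[i:i + n] - np.polyval(np.polyfit(t, Y[i:i + n], 1), t)
              covs.append(np.mean(rx * ry))
              varxs.append(np.mean(rx ** 2))
              varys.append(np.mean(ry ** 2))
          return np.mean(covs) / np.sqrt(np.mean(varxs) * np.mean(varys))

      rng = np.random.default_rng(0)
      common = rng.standard_normal(2000)
      x = common + rng.standard_normal(2000)          # driven by common factor
      y = -0.3 * common + rng.standard_normal(2000)   # weak negative coupling
      for n in (8, 32, 128):
          print(n, round(dcca_coefficient(x, y, n), 3))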

  14. Scale relativity: from quantum mechanics to chaotic dynamics.

    NASA Astrophysics Data System (ADS)

    Nottale, L.

    Scale relativity is a new approach to the problem of the origin of fundamental scales and of scaling laws in physics, which consists in generalizing Einstein's principle of relativity to the case of scale transformations of resolutions. We recall here how it leads one to the concept of fractal space-time and to the introduction of a new complex time derivative operator, which allows one to recover the Schrödinger equation and then to generalize it. In high energy quantum physics, it leads to the introduction of a Lorentzian renormalization group, in which the Planck length is reinterpreted as a lowest, unpassable scale, invariant under dilatations. These methods are successively applied to two problems: in quantum mechanics, that of the mass spectrum of elementary particles; in chaotic dynamics, that of the distribution of planets in the Solar System.

  15. Psychosocial distress of part-time occlusion in children with intermittent exotropia.

    PubMed

    Kim, Ungsoo Samuel; Park, Subin; Yoo, Hee Jeong; Hwang, Jeong-Min

    2013-01-01

    To evaluate the psychosocial distress of part-time occlusion therapy in intermittent exotropia. A total of 25 children (15 males and 10 females, aged 3 to 7 years, mean age 4.7 years) with intermittent exotropia were enrolled. Behavioral and psychosocial problems were assessed by the Korean Child Behavior Checklist (K-CBCL), which consists of eight categories (withdrawal, somatic problems, depression/anxiety, social problems, thought problems, attention problems, delinquent behavior, and aggressive behavior), and by the Amblyopia Treatment Index (ATI), which was designed to evaluate three factors: compliance, adverse effects, and social stigma. The Parenting Stress Index (PSI) is a parent self-report designed to identify potentially dysfunctional parent-child systems. The K-CBCL was obtained before and after occlusion therapy, and the ATI and PSI were taken from parents only after occlusion therapy. We evaluated the change on the K-CBCL and the correlation between the K-CBCL and ATI. The attention problem score assessed by the K-CBCL significantly decreased after occlusion therapy. On the ATI, the social stigma factor was relatively lower than the compliance and adverse effect factors (Likert scale 2.64, 3.11, and 3.11, respectively). The somatic problem score assessed by the K-CBCL and compliance on the ATI were significantly correlated (p = 0.014). There was no significant change in percentile scores of each subscale (parental dominant scale and child dominant scale) of the PSI. The total stress index before and after occlusion therapy was 97.16 ± 8.38 and 97.00 ± 8.16, respectively (p = 0.382). Occlusion therapy may have a psychosocial impact on patients with intermittent exotropia. Part-time occlusion significantly decreased attention problems in children with intermittent exotropia. Children with a high somatic problem score on the K-CBCL showed poor compliance with part-time occlusion.

  16. Hamilton-Jacobi-Bellman equations and approximate dynamic programming on time scales.

    PubMed

    Seiffertt, John; Sanyal, Suman; Wunsch, Donald C

    2008-08-01

    The time scales calculus is a key emerging area of mathematics due to its potential use in a wide variety of multidisciplinary applications. We extend this calculus to approximate dynamic programming (ADP). The core backward induction algorithm of dynamic programming is extended from its traditional discrete case to all isolated time scales. Hamilton-Jacobi-Bellman equations, the solution of which is the fundamental problem in the field of dynamic programming, are motivated and proven on time scales. By drawing together the calculus of time scales and the applied area of stochastic control via ADP, we have connected two major fields of research.
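
    To illustrate the core idea of extending backward induction from uniform discrete steps to an arbitrary isolated time scale, here is a hedged sketch in Python (the dynamics, cost, state grid, and non-uniform grid of time points are invented for illustration): the recursion runs over whatever isolated time points are given, with the graininess mu_k = t_{k+1} - t_k entering the stage cost and dynamics, so ordinary discrete DP is recovered when every mu_k = 1.

      # Backward induction on an isolated time scale T = {t_0 < t_1 < ... < t_N}.
      time_scale = [0.0, 0.5, 0.7, 1.5, 2.0, 3.0]   # assumed isolated time points
      states = [i * 0.5 for i in range(-8, 9)]      # discretized state grid
      actions = [-1.0, 0.0, 1.0]

      def step(x, u, mu):          # assumed dynamics: x' = x + mu * u
          return min(states, key=lambda s: abs(s - (x + mu * u)))

      def stage_cost(x, u, mu):    # assumed running cost, weighted by graininess
          return mu * (x ** 2 + 0.1 * u ** 2)

      V = {x: x ** 2 for x in states}               # terminal cost
      for k in range(len(time_scale) - 2, -1, -1):
          mu = time_scale[k + 1] - time_scale[k]
          V = {x: min(stage_cost(x, u, mu) + V[step(x, u, mu)] for u in actions)
               for x in states}

      print(V[2.0])   # optimal cost-to-go from x = 2 at t_0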

  17. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge that we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy efficient manner?"

  18. Using the PORS Problems to Examine Evolutionary Optimization of Multiscale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhart, Zachary; Molian, Vaelan; Bryden, Kenneth

    2013-01-01

    Nearly all systems of practical interest are composed of parts assembled across multiple scales. For example, an agrodynamic system is composed of flora and fauna on one scale; soil types, slope, and water runoff on another scale; and management practice and yield on another. Or consider an advanced coal-fired power plant: combustion and pollutant formation occur on one scale, the plant components on another scale, and the overall performance of the power system is measured on another. In spite of this, there are few practical tools for the optimization of multiscale systems. This paper examines multiscale optimization of systems composed of discrete elements using the plus-one-recall-store (PORS) problem as a test case or study problem for multiscale systems. From this study, it is found that by recognizing the constraints and patterns present in discrete multiscale systems, the solution time can be significantly reduced and much more complex problems can be optimized.

  19. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations.

    PubMed

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-07

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such, it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions, condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work, the problem of time scale is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite size effects. We demonstrate our approach by studying the condensation of argon and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
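
    The time-scale fix alluded to here is commonly expressed, in the metadynamics rate literature, through an acceleration factor that maps biased simulation time onto physical time (written below in generic notation as a hedged summary, not quoted from the paper):

      \[
      t_{\mathrm{phys}} = \int_{0}^{t_{\mathrm{MD}}} e^{\beta V(s(\tau),\,\tau)}\, d\tau
      \approx \sum_{j} \Delta t\, e^{\beta V(s(t_j),\, t_j)},
      \]

    where \(V(s,t)\) is the history-dependent bias acting on the collective variable \(s\) and \(\beta = 1/k_{B}T\); the rescaled escape times can then be checked against the exponential (Poisson) distribution expected for a rare nucleation event.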

  20. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    NASA Astrophysics Data System (ADS)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such, it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions, condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work, the problem of time scale is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite size effects. We demonstrate our approach by studying the condensation of argon and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  1. Does Problem-Solving Training for Family Caregivers Benefit Their Care Recipients With Severe Disabilities? A Latent Growth Model of the Project CLUES Randomized Clinical Trial

    PubMed Central

    Berry, Jack W.; Elliott, Timothy R.; Grant, Joan S.; Edwards, Gary; Fine, Philip R.

    2012-01-01

    Objective: To examine whether an individualized problem-solving intervention provided to family caregivers of persons with severe disabilities provides benefits to both caregivers and their care recipients. Design: Family caregivers were randomly assigned to an education-only control group or a problem-solving training (PST) intervention group. Participants received monthly contacts for 1 year. Participants: Family caregivers (129 women, 18 men) and their care recipients (81 women, 66 men) consented to participate. Main Outcome Measures: Caregivers completed the Social Problem-Solving Inventory–Revised, the Center for Epidemiological Studies-Depression scale, the Satisfaction with Life scale, and a measure of health complaints at baseline and in 3 additional assessments throughout the year. Care recipient depression was assessed with a short form of the Hamilton Depression Scale. Results: Latent growth modeling was used to analyze data from the dyads. Caregivers who received PST reported a significant decrease in depression over time, and they also displayed gains in constructive problem-solving abilities and decreases in dysfunctional problem-solving abilities. Care recipients displayed significant decreases in depression over time, and these decreases were significantly associated with decreases in caregiver depression in response to training. Conclusions: PST significantly improved the problem-solving skills of community-residing caregivers and also lessened their depressive symptoms. Care recipients in the PST group also had reductions in depression over time, and it appears that decreases in caregiver depression may account for this effect. PMID:22686549

  2. Does problem-solving training for family caregivers benefit their care recipients with severe disabilities? A latent growth model of the Project CLUES randomized clinical trial.

    PubMed

    Berry, Jack W; Elliott, Timothy R; Grant, Joan S; Edwards, Gary; Fine, Philip R

    2012-05-01

    To examine whether an individualized problem-solving intervention provided to family caregivers of persons with severe disabilities provides benefits to both caregivers and their care recipients. Family caregivers were randomly assigned to an education-only control group or a problem-solving training (PST) intervention group. Participants received monthly contacts for 1 year. Family caregivers (129 women, 18 men) and their care recipients (81 women, 66 men) consented to participate. Caregivers completed the Social Problem-Solving Inventory-Revised, the Center for Epidemiological Studies-Depression scale, the Satisfaction with Life scale, and a measure of health complaints at baseline and in 3 additional assessments throughout the year. Care recipient depression was assessed with a short form of the Hamilton Depression Scale. Latent growth modeling was used to analyze data from the dyads. Caregivers who received PST reported a significant decrease in depression over time, and they also displayed gains in constructive problem-solving abilities and decreases in dysfunctional problem-solving abilities. Care recipients displayed significant decreases in depression over time, and these decreases were significantly associated with decreases in caregiver depression in response to training. PST significantly improved the problem-solving skills of community-residing caregivers and also lessened their depressive symptoms. Care recipients in the PST group also had reductions in depression over time, and it appears that decreases in caregiver depression may account for this effect. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  3. Accelerating universe with time variation of G and Λ

    NASA Astrophysics Data System (ADS)

    Darabi, F.

    2012-03-01

    We study a gravitational model in which scale transformations play the key role in obtaining dynamical G and Λ. We take a non-scale-invariant gravitational action with a cosmological constant and a gravitational coupling constant. Then, by a scale transformation, through a dilaton field, we obtain a new action containing cosmological and gravitational coupling terms which are dynamically dependent on the dilaton field with a Higgs-type potential. The vacuum expectation value of this dilaton field, through spontaneous symmetry breaking on the basis of the anthropic principle, determines the time variations of G and Λ. The relevance of these time variations to the current acceleration of the universe, the coincidence problem, Mach's cosmological coincidence, and those problems of standard cosmology addressed by inflationary models is discussed. The current acceleration of the universe is shown to be a result of a phase transition from the radiation toward the matter dominated era. No real coincidence problem between matter and vacuum energy densities exists in this model; this apparent coincidence, together with Mach's cosmological coincidence, is shown to be a simple consequence of a new kind of scale-factor dependence of the energy-momentum density, ρ ~ a^(-4). This model also provides the possibility of a super-fast expansion of the scale factor in the very early universe by introducing exotic-type matter such as cosmic strings.

  4. The Effects of Attention Problems on Psychosocial Functioning in Childhood Brain Tumor Survivors: A 2-Year Postcraniospinal Irradiation Follow-up.

    PubMed

    Oh, Yunhye; Seo, Hyunjung; Sung, Ki Woong; Joung, Yoo Sook

    2017-03-01

    To examine the psychosocial outcomes and impact of attention problems in survivors of pediatric brain tumor. The survivors' cognitive functioning was measured using the Wechsler Intelligence Scale for Children. The Child Behavior Checklist-Attention Problems scale was used to screen for attention problems, and participants were classified as having attention problems (n=15) or normal attention (n=36). Psychosocial functioning was examined with the Korean Personality Rating Scale for Children (K-PRC) at precraniospinal radiation and at 2-year follow-up. The attention problem group showed significantly higher depression and externalizing symptoms (delinquency, hyperactivity) and significantly more impairment in family relationships than did the normal attention group at baseline. At follow-up, the attention problem group demonstrated significantly more delinquency and more impaired family and social relationships. For the K-PRC scores, there were significant differences between groups on all subscales except somatization and social relationships, but no significant treatment-by-time interaction or within-time effects. At follow-up, multiple linear regressions showed that age at diagnosis significantly predicted K-PRC somatization (B=-1.7, P=0.004) and social relationships (B=-1.7, P=0.004), baseline full-scale intelligence quotient predicted K-PRC depression (B=-0.4, P=0.032) and somatization (B=-0.3, P=0.015), and attention problems at baseline predicted K-PRC depression (B=-15.2, P=0.036) and social relationships (B=-11.6, P=0.016). Pediatric brain tumor survivors, in particular patients with attention problems, had worse psychosocial functioning at baseline and follow-up. Attention problems at baseline need to be carefully evaluated when assessing psychosocial functioning of pediatric brain tumor survivors.

  5. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.

  6. The overconstraint of response time models: rethinking the scaling problem.

    PubMed

    Donkin, Chris; Brown, Scott D; Heathcote, Andrew

    2009-12-01

    Theories of choice response time (RT) provide insight into the psychological underpinnings of simple decisions. Evidence accumulation (or sequential sampling) models are the most successful theories of choice RT. These models all have the same "scaling" property--that a subset of their parameters can be multiplied by the same amount without changing their predictions. This property means that a single parameter must be fixed to allow the estimation of the remaining parameters. In the present article, we show that the traditional solution to this problem has overconstrained these models, unnecessarily restricting their ability to account for data and making implicit--and therefore unexamined--psychological assumptions. We show that versions of these models that address the scaling problem in a minimal way can provide a better description of data than can their overconstrained counterparts, even when increased model complexity is taken into account.
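
    The scaling property can be demonstrated in a few lines: multiplying the drift rate, threshold, and diffusion noise of a simple accumulator by the same constant leaves the predicted response times unchanged. The simulation below is an illustrative sketch (the parameter values and the Euler discretization are assumptions, not taken from the article):

      import numpy as np

      def simulate_rts(drift, threshold, noise, n=20000, dt=0.001, seed=0):
          """First-passage times of a one-boundary diffusion accumulator."""
          rng = np.random.default_rng(seed)
          x = np.zeros(n)
          t = np.zeros(n)
          active = np.ones(n, dtype=bool)
          while active.any():
              k = active.sum()
              x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(k)
              t[active] += dt
              active &= x < threshold          # stop trials that hit the bound
          return t

      base = simulate_rts(drift=1.0, threshold=1.0, noise=0.5)
      scaled = simulate_rts(drift=3.0, threshold=3.0, noise=1.5)  # all params x3
      print(base.mean(), scaled.mean())   # agree up to Monte Carlo error
      print(np.quantile(base, 0.9), np.quantile(scaled, 0.9))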

  7. Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints

    NASA Astrophysics Data System (ADS)

    Cassandras, Christos G.; Zhuang, Shixin

    2005-11-01

    Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
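
    In generic form (our notation, a simplified sketch rather than the authors' exact formulation), the hard-constraint version of such a voltage/frequency scaling problem can be written as:

      \[
      \min_{f_1,\dots,f_N} \sum_{i=1}^{N} c\, w_i f_i^{2}
      \quad \text{s.t.} \quad x_i + \frac{w_i}{f_i} \le d_i, \qquad
      f_{\min} \le f_i \le f_{\max},
      \]

    where \(w_i\) is the cycle count of task \(i\), \(x_i\) its start time, and \(d_i\) its deadline; per-task energy is modeled as dynamic power (growing roughly as \(f^{3}\) when supply voltage tracks frequency) multiplied by execution time \(w_i/f_i\). The soft-constraint case replaces the hard deadlines with a lateness penalty in the objective.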

  8. Multiscale functions, scale dynamics, and applications to partial differential equations

    NASA Astrophysics Data System (ADS)

    Cresson, Jacky; Pierret, Frédéric

    2016-05-01

    Modeling phenomena from experimental data always begins with a choice of hypothesis on the observed dynamics such as determinism, randomness, and differentiability. Depending on these choices, different behaviors can be observed. The natural question associated with the modeling problem is the following: "With a finite set of data concerning a phenomenon, can we recover its underlying nature?" From this problem, we introduce in this paper the definition of multi-scale functions, scale calculus, and scale dynamics based on the time scale calculus [see Bohner, M. and Peterson, A., Dynamic Equations on Time Scales: An Introduction with Applications (Springer Science & Business Media, 2001)], which is used to introduce the notion of scale equations. These definitions will be illustrated on the multi-scale Okamoto's functions. Scale equations are analysed using scale regimes and the notion of an asymptotic model for a scale equation under a particular scale regime. The introduced formalism explains why a single scale equation can produce distinct continuous models even if the equation is scale invariant. Typical examples of such equations are given by the scale Euler-Lagrange equation. We illustrate our results using the scale Newton's equation, which gives rise to a non-linear diffusion equation or a non-linear Schrödinger equation as asymptotic continuous models, depending on the particular fractional scale regime which is considered.

  9. Classification of time series patterns from complex dynamic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Rao, N.

    1998-07-01

    An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance, and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications, and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze, and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis, and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.

  10. How do Rumination and Social Problem Solving Intensify Depression? A Longitudinal Study.

    PubMed

    Hasegawa, Akira; Kunisato, Yoshihiko; Morimoto, Hiroshi; Nishimura, Haruki; Matsuda, Yuko

    2018-01-01

    In order to examine how rumination and social problem solving intensify depression, the present study investigated longitudinal associations among each dimension of rumination and social problem solving and evaluated aspects of these constructs that predicted subsequent depression. A three-wave longitudinal study, with an interval of 4 weeks between waves, was conducted. Japanese university students completed the Beck Depression Inventory-Second Edition, Ruminative Responses Scale, Social Problem-Solving Inventory-Revised Short Version, and Interpersonal Stress Event Scale on three occasions 4 weeks apart (n = 284 at Time 1, 198 at Time 2, 165 at Time 3). Linear mixed models were analyzed to test whether each variable predicted subsequent depression, rumination, and each dimension of social problem solving. Rumination and negative problem orientation demonstrated a mutually enhancing relationship. Because these two variables were not associated with interpersonal conflict during the subsequent 4 weeks, rumination and negative problem orientation appear to strengthen each other without environmental change. Rumination and impulsivity/carelessness style were associated with subsequent depressive symptoms, after controlling for the effect of initial depression. Because rumination and impulsivity/carelessness style were not concurrently and longitudinally associated with each other, rumination and impulsive/careless problem-solving style appear to be independent processes that serve to intensify depression.

  11. Applying Squeaky-Wheel Optimization to Schedule Airborne Astronomy Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Kuerklue, Elif

    2004-01-01

    We apply the Squeaky Wheel Optimization (SWO) algorithm to the problem of scheduling astronomy observations for the Stratospheric Observatory for Infrared Astronomy, an airborne observatory. The problem contains complex constraints relating the feasibility of an astronomical observation to the position and time at which the observation begins, telescope elevation limits, special use airspace, and available fuel. Solving the problem requires making discrete choices (e.g. selection and sequencing of observations) and continuous ones (e.g. takeoff time and setting up observations by repositioning the aircraft). The problem also includes optimization criteria such as maximizing observing time while simultaneously minimizing total flight time. Previous approaches to the problem fail to scale when accounting for all constraints. We describe how to customize SWO to solve this problem, and show that it finds better flight plans, often with less computation time, than previous approaches.
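
    Squeaky Wheel Optimization is a construct/analyze/prioritize loop: build a schedule greedily in priority order, find the elements that fared worst (the "squeaky wheels"), move them up in priority, and repeat. Below is a hedged toy sketch in Python on a one-machine scheduling problem with due dates; the problem instance and the blame rule are invented for illustration, and SOFIA's actual flight-planning constraints are far richer.

      import random

      random.seed(0)
      jobs = [{"id": i, "dur": random.randint(1, 9), "due": random.randint(5, 40)}
              for i in range(10)]

      def construct(priority):
          """Greedy: schedule jobs in priority order; return lateness per job."""
          order = sorted(jobs, key=lambda j: -priority[j["id"]])
          t, lateness = 0, {}
          for j in order:
              t += j["dur"]
              lateness[j["id"]] = max(0, t - j["due"])
          return lateness

      priority = {j["id"]: 0.0 for j in jobs}
      best = float("inf")
      for _ in range(100):
          lateness = construct(priority)
          best = min(best, sum(lateness.values()))
          for jid, late in lateness.items():   # analyze + prioritize:
              priority[jid] += late            # squeaky wheels gain priority
      print("best total lateness:", best)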

  12. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.

  13. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for linear models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
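
    For readers unfamiliar with the KB filter's discrete-time analogue, here is a minimal scalar Kalman filter in Python (a generic textbook sketch with assumed toy dynamics, not the thesis's NWP formulation): it alternates a forecast step using the model with an update step that weights the observation by the Kalman gain.

      import numpy as np

      # Toy scalar system: x_{k+1} = a x_k + w_k,  y_k = x_k + v_k.
      a, q, r = 0.95, 0.1, 0.5     # dynamics, process and obs noise variances
      rng = np.random.default_rng(0)

      x_true, x_est, p_est = 1.0, 0.0, 1.0
      for k in range(50):
          x_true = a * x_true + rng.normal(0, np.sqrt(q))   # truth evolves
          y = x_true + rng.normal(0, np.sqrt(r))            # noisy observation
          # Forecast step (model propagation of mean and error variance):
          x_pred, p_pred = a * x_est, a * a * p_est + q
          # Update step (blend forecast and observation via the Kalman gain):
          gain = p_pred / (p_pred + r)
          x_est = x_pred + gain * (y - x_pred)
          p_est = (1 - gain) * p_pred
      print(f"final estimate {x_est:.3f}, truth {x_true:.3f}")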

  14. A Transition in the Cumulative Reaction Rate of Two Species Diffusion with Bimolecular Reaction

    NASA Astrophysics Data System (ADS)

    Rajaram, Harihar; Arshadi, Masoud

    2015-04-01

    Diffusion and bimolecular reaction between two initially separated reacting species is a prototypical small-scale description of reaction induced by transverse mixing. It is also relevant to diffusion-controlled transport regimes as encountered in low-permeability matrix blocks in fractured media. In previous work, the reaction-diffusion problem has been analyzed as a Stefan problem involving a distinct moving boundary (reaction front), which predicts that front motion scales as √t and the cumulative reaction rate scales as 1/√t. We present a general non-dimensionalization of the problem and a perturbation analysis to show that there is an early time regime where the cumulative reaction rate scales as √t rather than 1/√t. The duration of this early time regime (where the cumulative rate is kinetically rather than diffusion controlled) depends on the rate parameter, in a manner that is consistently predicted by our non-dimensionalization. We also present results on the scaling of the reaction front width. We present numerical simulations in homogeneous and heterogeneous porous media to demonstrate the limited influence of heterogeneity on the behavior of the reaction-diffusion system. We illustrate applications to the practical problem of in-situ chemical oxidation of TCE and PCE by permanganate, which is employed to remediate contaminated sites where the DNAPLs are largely dissolved in the rock matrix.

  15. Scale-down/scale-up studies leading to improved commercial beer fermentation.

    PubMed

    Nienow, Alvin W; Nordkvist, Mikkel; Boulton, Christopher A

    2011-08-01

    Scale-up/scale-down techniques are vital for successful and safe commercial-scale bioprocess design and operation. An example is given in this review of recent studies related to beer production. Work at the bench scale shows that brewing yeast is not compromised by mechanical agitation up to 4.5 W/kg, and that compared with fermentations mixed by CO2 evolution, agitation ≥ 0.04 W/kg is able to reduce fermentation time by about 20%. Work at the commercial scale in cylindroconical fermenters shows that, without mechanical agitation, most of the yeast sediments into the cone for about 50% of the fermentation time, leading to poor temperature control. Stirrer mixing overcomes these problems and leads to a similar reduction in batch time as the bench-scale tests and greatly reduces its variability, but is difficult to install in extant fermenters. The mixing characteristics of a new jet mixer, a rotary jet mixer, which overcomes these difficulties, are reported, based on pilot-scale studies. This change enables the advantages of stirring to be achieved at the commercial scale without the problems. In addition, more of the fermentable sugars are converted into ethanol. This review shows the effectiveness of scale-up/scale-down studies for improving commercial operations. Suggestions for further studies are made: one concerning the impact of homogenization on the removal of vicinal diketones and the other on the location of bubble formation at the commercial scale. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Analytic model to estimate thermonuclear neutron yield in z-pinches using the magnetic Noh problem

    NASA Astrophysics Data System (ADS)

    Allen, Robert C.

    The objective was to build a model which could be used to estimate neutron yield in pulsed z-pinch experiments, to benchmark future z-pinch simulation tools, and to assist scaling for breakeven systems. To accomplish this, a recent solution to the magnetic Noh problem was utilized which incorporates a self-similar solution with cylindrical symmetry and azimuthal magnetic field (Velikovich, 2012). The self-similar solution provides the conditions needed to calculate the time-dependent implosion dynamics, from which batch burn is assumed and used to calculate neutron yield. The solution to the model is presented. The ion densities and time scales fix the initial mass and implosion velocity, providing estimates of the experimental results given specific initial conditions. Agreement is shown with experimental data (Coverdale, 2007). A parameter sweep was done to find the neutron yield, implosion velocity, and gain for a range of densities and time scales for DD reactions, and a curve fit was done to predict the scaling as a function of preshock conditions.

  17. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

    We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
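
    The core of the multistart fix-and-solve idea can be sketched in a few lines: draw several low-energy samples, fix the variables on which the best samples agree, and re-solve the smaller residual problem. The Python below applies it to a toy Ising-style objective with a simple greedy-flip sampler; the instance, the number of rounds, and the agreement rule are illustrative assumptions, not the paper's setup.

      import random

      random.seed(0)
      n = 40
      J = {(i, j): random.choice([-1, 1])
           for i in range(n) for j in range(i + 1, n) if random.random() < 0.1}

      def energy(s):
          return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

      def greedy_sample(fixed):
          """Random start respecting fixed spins, then single-flip descent."""
          s = {i: fixed.get(i, random.choice([-1, 1])) for i in range(n)}
          improved = True
          while improved:
              improved = False
              for i in range(n):
                  if i in fixed:
                      continue
                  before = energy(s)
                  s[i] = -s[i]
                  if energy(s) < before:
                      improved = True
                  else:
                      s[i] = -s[i]            # revert non-improving flip
          return s

      fixed = {}
      for _ in range(3):                      # multistart fix-and-solve rounds
          samples = sorted((greedy_sample(fixed) for _ in range(20)), key=energy)
          elite = samples[:5]                 # lowest-energy samples
          for i in range(n):                  # fix spins all elite samples agree on
              if i not in fixed and len({s[i] for s in elite}) == 1:
                  fixed[i] = elite[0][i]
      print("best energy:", energy(samples[0]), "variables fixed:", len(fixed))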

  18. A typology of time-scale mismatches and behavioral interventions to diagnose and solve conservation problems

    USGS Publications Warehouse

    Wilson, Robyn S.; Hardisty, David J.; Epanchin-Niell, Rebecca S.; Runge, Michael C.; Cottingham, Kathryn L.; Urban, Dean L.; Maguire, Lynn A.; Hastings, Alan; Mumby, Peter J.; Peters, Debra P.C.

    2016-01-01

    Ecological systems often operate on time scales significantly longer or shorter than the time scales typical of human decision making, which causes substantial difficulty for conservation and management in socioecological systems. For example, invasive species may move faster than humans can diagnose problems and initiate solutions, and climate systems may exhibit long-term inertia and short-term fluctuations that obscure learning about the efficacy of management efforts in many ecological systems. We adopted a management-decision framework that distinguishes decision makers within public institutions from individual actors within the social system, calls attention to the ways socioecological systems respond to decision makers’ actions, and notes institutional learning that accrues from observing these responses. We used this framework, along with insights from bedeviling conservation problems, to create a typology that identifies problematic time-scale mismatches occurring between individual decision makers in public institutions and between individual actors in the social or ecological system. We also considered solutions that involve modifying human perception and behavior at the individual level as a means of resolving these problematic mismatches. The potential solutions are derived from the behavioral economics and psychology literature on temporal challenges in decision making, such as the human tendency to discount future outcomes at irrationally high rates. These solutions range from framing environmental decisions to enhance the salience of long-term consequences, to using structured decision processes that make time scales of actions and consequences more explicit, to structural solutions aimed at altering the consequences of short-sighted behavior to make it less appealing. Additional application of these tools and long-term evaluation measures that assess not just behavioral changes but also associated changes in ecological systems are needed.

  19. A typology of time-scale mismatches and behavioral interventions to diagnose and solve conservation problems.

    PubMed

    Wilson, Robyn S; Hardisty, David J; Epanchin-Niell, Rebecca S; Runge, Michael C; Cottingham, Kathryn L; Urban, Dean L; Maguire, Lynn A; Hastings, Alan; Mumby, Peter J; Peters, Debra P C

    2016-02-01

    Ecological systems often operate on time scales significantly longer or shorter than the time scales typical of human decision making, which causes substantial difficulty for conservation and management in socioecological systems. For example, invasive species may move faster than humans can diagnose problems and initiate solutions, and climate systems may exhibit long-term inertia and short-term fluctuations that obscure learning about the efficacy of management efforts in many ecological systems. We adopted a management-decision framework that distinguishes decision makers within public institutions from individual actors within the social system, calls attention to the ways socioecological systems respond to decision makers' actions, and notes institutional learning that accrues from observing these responses. We used this framework, along with insights from bedeviling conservation problems, to create a typology that identifies problematic time-scale mismatches occurring between individual decision makers in public institutions and between individual actors in the social or ecological system. We also considered solutions that involve modifying human perception and behavior at the individual level as a means of resolving these problematic mismatches. The potential solutions are derived from the behavioral economics and psychology literature on temporal challenges in decision making, such as the human tendency to discount future outcomes at irrationally high rates. These solutions range from framing environmental decisions to enhance the salience of long-term consequences, to using structured decision processes that make time scales of actions and consequences more explicit, to structural solutions aimed at altering the consequences of short-sighted behavior to make it less appealing. Additional application of these tools and long-term evaluation measures that assess not just behavioral changes but also associated changes in ecological systems are needed. © 2015 Society for Conservation Biology.

  20. Perceived neighborhood problems: multilevel analysis to evaluate psychometric properties in a Southern adult Brazilian population

    PubMed Central

    2013-01-01

    Background Physical attributes of the places in which people live, as well as their perceptions of them, may be important health determinants. The perception of the place in which people dwell may impact individual health and may be a more telling indicator for individual health than objective neighborhood characteristics. This paper aims to evaluate the psychometric and ecometric properties of a scale on the perceptions of neighborhood problems in adults from Florianopolis, Southern Brazil. Methods Individual variables, census tract level variables (per capita monthly family income) and neighborhood problem perception (physical and social disorder) variables were investigated. Multilevel models (items nested within persons, persons nested within neighborhoods) were run to assess the ecometric properties of the variables assessing neighborhood problems. Results The response rate was 85.3% (1,720 adults). Participants were distributed across 63 census tracts. Two scales were identified using 16 items: Physical Problems and Social Disorder. The ecometric properties of the scales were satisfactory: 0.24 to 0.28 for the intra-class correlation and 0.94 to 0.96 for reliability. Higher values on the scales of problems in the physical and social domains were associated with younger age, greater length of time residing in the same neighborhood and lower census tract income level. Conclusions The findings support the usefulness of these scales to measure physical and social disorder problems in neighborhoods. PMID:24256619
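
    As a side note on the reported ecometrics, the neighborhood-level reliability of such scales follows directly from the multilevel variance components. A minimal sketch, with illustrative variance values assumed here rather than taken from the study:

    ```python
    # Sketch: ecometric reliability from multilevel variance components.
    # The variance values below are illustrative assumptions only.

    def icc(sigma2_between, sigma2_within):
        """Intra-class correlation: share of variance between neighborhoods."""
        return sigma2_between / (sigma2_between + sigma2_within)

    def ecometric_reliability(sigma2_between, sigma2_within, n_respondents):
        """Reliability of a neighborhood mean score based on n_respondents raters."""
        return sigma2_between / (sigma2_between + sigma2_within / n_respondents)

    sigma2_b, sigma2_w = 0.25, 0.75                       # hypothetical components
    print(icc(sigma2_b, sigma2_w))                        # 0.25 (reported: 0.24-0.28)
    print(ecometric_reliability(sigma2_b, sigma2_w, 27))  # ~0.90: aggregating over
    # many respondents per tract pushes reliability toward the reported 0.94-0.96
    # even though the ICC itself is modest.
    ```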

  1. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE PAGES

    Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  2. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
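
    For readers unfamiliar with the two solver families being compared, the contrast can be reproduced on a model problem. A minimal sketch using the pyamg library and SciPy's CG, with a 2-D Poisson matrix standing in for a ground water flow discretization; this is not the MODFLOW/PCG2 code used in the study:

    ```python
    import numpy as np
    import pyamg
    from scipy.sparse.linalg import cg

    # 2-D Poisson matrix as a stand-in for a steady-state flow discretization.
    A = pyamg.gallery.poisson((200, 200), format='csr')   # 40,000 unknowns
    b = np.random.rand(A.shape[0])

    ml = pyamg.ruge_stuben_solver(A)     # classical (Ruge-Stuben) AMG hierarchy
    x_amg = ml.solve(b, tol=1e-8)        # a few V-cycles, mesh-independent count

    x_cg, info = cg(A, b)                # Krylov solver for comparison; its
                                         # iteration count grows as the grid refines
    print(ml)                            # grid/operator complexity of the hierarchy
    print(np.linalg.norm(b - A @ x_amg))
    ```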

  3. An algorithm for generating modular hierarchical neural network classifiers: a step toward larger scale applications

    NASA Astrophysics Data System (ADS)

    Roverso, Davide

    2003-08-01

    Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many-class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is proposed here as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.
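
    The abstract does not spell out ARTD's decomposition criterion, but the general flavor of modular hierarchical classification can be sketched: recursively split the label set and train one small classifier per node. A toy illustration, with nearest-centroid node classifiers and an arbitrary halving of the classes (all names hypothetical, not the ARTD algorithm itself):

    ```python
    import numpy as np

    class Node:
        def __init__(self, classes):
            self.classes = list(classes)
            self.left = self.right = None
            self.centroids = {}                  # one tiny "classifier" per node

    def fit(node, X, y):
        for c in node.classes:                   # nearest-centroid model at this node
            node.centroids[c] = X[y == c].mean(axis=0)
        if len(node.classes) > 2:                # recurse: halve the label set
            mid = len(node.classes) // 2
            node.left = Node(node.classes[:mid])
            node.right = Node(node.classes[mid:])
            for child in (node.left, node.right):
                mask = np.isin(y, child.classes)
                fit(child, X[mask], y[mask])

    def predict(node, x):
        c = min(node.centroids, key=lambda k: np.linalg.norm(x - node.centroids[k]))
        if node.left is None:
            return c
        return predict(node.left if c in node.left.classes else node.right, x)

    # usage: 8 well-separated Gaussian classes in 2-D
    rng = np.random.default_rng(0)
    y = rng.integers(0, 8, 400)
    X = rng.normal(0.0, 0.2, (400, 2)) + np.array([[c % 4, c // 4] for c in y])
    root = Node(range(8))
    fit(root, X, y)
    print(np.mean([predict(root, x) == c for x, c in zip(X, y)]))  # training accuracy
    ```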

  4. Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks

    NASA Astrophysics Data System (ADS)

    Langner, Tobias; Schindelhauer, Christian; Souza, Alexander

    We consider an optimisation problem which is motivated from storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
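
    The reduction described here, deciding feasibility of a transfer-time bound via maximum flow and then searching over the bound, can be sketched compactly. The network below is a simplified stand-in for the paper's construction, and plain bisection replaces their scaling argument:

    ```python
    import networkx as nx

    def feasible(T, file_size, up, down):
        """Can the file be uploaded to and downloaded from the servers within T?"""
        G = nx.DiGraph()
        for i, (u, d) in enumerate(zip(up, down)):
            G.add_edge('src', f's{i}', capacity=u * T)   # bytes into server i
            G.add_edge(f's{i}', 'dst', capacity=d * T)   # bytes served by server i
        value, _ = nx.maximum_flow(G, 'src', 'dst')
        return value >= file_size

    def min_transfer_time(file_size, up, down, tol=1e-6):
        lo, hi = 0.0, 1.0
        while not feasible(hi, file_size, up, down):     # bracket the optimum
            hi *= 2.0
        while hi - lo > tol:                             # bisection on the bound
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if feasible(mid, file_size, up, down) else (mid, hi)
        return hi

    print(min_transfer_time(100.0, up=[10.0, 5.0, 1.0], down=[4.0, 8.0, 2.0]))
    ```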

  5. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases), even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
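
    saCeSS itself is a full parallel metaheuristic, but the shape of the underlying task, fitting kinetic parameters of an ODE model to time-series data with parallel restarts, can be sketched briefly. A toy two-state model with invented rate constants; this is not the saCeSS algorithm:

    ```python
    import numpy as np
    from multiprocessing import Pool
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    def model(t, y, k1, k2):                  # hypothetical 2-state kinetic model
        return [-k1 * y[0], k1 * y[0] - k2 * y[1]]

    t_obs = np.linspace(0.0, 10.0, 25)
    y_obs = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=(0.8, 0.3)).y

    def cost(theta):                          # sum-of-squares misfit to the data
        if np.any(theta <= 0):
            return 1e10
        y = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(theta)).y
        return float(np.sum((y - y_obs) ** 2))

    def local_fit(seed):                      # one local search from a random start
        rng = np.random.default_rng(seed)
        res = minimize(cost, rng.uniform(0.01, 2.0, 2), method='Nelder-Mead')
        return res.fun, res.x

    if __name__ == '__main__':
        with Pool(4) as pool:                 # coarse-grained parallelism
            results = pool.map(local_fit, range(8))
        print(min(results, key=lambda r: r[0]))   # should recover k1~0.8, k2~0.3
    ```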

  6. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem, and the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area for reproducing high-frequency phenomena, while an implicit time integrator is adopted in the other parts in order to reproduce the much lower-frequency phenomena and to optimize the CPU time. As a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate: the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also provides excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale fully explicit computation. The energy dissipated in the implicit-explicit interface is well controlled, and the computational time is lower than for a fully explicit simulation.
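
    The core idea, advancing a fast subsystem with small explicit steps nested inside large implicit steps for the slow subsystem, can be shown on two coupled linear oscillators. A minimal sketch assuming the interface state is frozen during micro steps; the contact/Lagrange-multiplier machinery of the actual scheme is omitted:

    ```python
    # Two coupled oscillators: a slow DOF advanced implicitly with macro step H
    # and a fast DOF advanced explicitly with N micro steps per macro step.
    w_s, w_f, k_c = 1.0, 50.0, 5.0     # slow/fast frequencies, coupling stiffness
    H, N, T = 0.01, 20, 5.0            # macro step, micro steps per macro, horizon
    h = H / N

    xs, vs = 1.0, 0.0                  # slow DOF state (coarse time scale)
    xf, vf = 0.0, 0.0                  # fast DOF state (fine time scale)

    for _ in range(int(T / H)):
        for _ in range(N):             # explicit micro steps, slow state frozen
            af = -w_f**2 * xf + k_c * (xs - xf)
            vf += h * af               # semi-implicit (symplectic) Euler
            xf += h * vf
        # implicit backward-Euler macro step for the slow DOF, fast state frozen:
        #   vs' = vs + H*(-(w_s^2 + k_c)*xs' + k_c*xf),  xs' = xs + H*vs'
        vs = (vs + H * (-(w_s**2 + k_c) * xs + k_c * xf)) / (1.0 + H**2 * (w_s**2 + k_c))
        xs = xs + H * vs

    print(xs, xf)                      # bounded slow motion plus fast ripple
    ```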

  7. Further Evidence of the Utility and Validity of a Measure of Outcome for Children and Adolescents

    ERIC Educational Resources Information Center

    Turchik, Jessica A.; Karpenko, Veronika; Ogles, Benjamin M.

    2007-01-01

    The "Ohio Youth Problems, Functioning, and Satisfaction Scales" (Ohio Scales) are a recently developed set of measures designed to be a brief, practical assessment of changes in behavior over time in children and adolescents. The authors explored the convergent validity of the Ohio Scales by examining the relationship between the scales and…

  8. Ten-Year Time Trends in Emotional and Behavioral Problems of Dutch Children Referred for Youth Care

    ERIC Educational Resources Information Center

    Veerman, Jan Willem; De Meyer, Ronald

    2012-01-01

    Emotional and behavioral problems assessed with the "Child Behavior Checklist" (CBCL) were analyzed from 2,739 Dutch children referred to Families First (FF) or Intensive Family Treatment (IFT) from 1999 to 2008, to examine time trends. From the year 2004 onward, six of the eight CBCL-syndrome scales yielded significant decreases from the…

  9. Reconciliation of Gene and Species Trees

    PubMed Central

    Rusin, L. Y.; Lyubetskaya, E. V.; Gorbunov, K. Y.; Lyubetsky, V. A.

    2014-01-01

    The first part of the paper briefly overviews the problem of gene and species trees reconciliation with the focus on defining and algorithmic construction of the evolutionary scenario. Basic ideas are discussed for the aspects of mapping definitions, costs of the mapping and evolutionary scenario, imposing time scales on a scenario, incorporating horizontal gene transfers, binarization and reconciliation of polytomous trees, and construction of species trees and scenarios. The review does not intend to cover the vast diversity of literature published on these subjects. Instead, the authors strived to overview the problem of the evolutionary scenario as a central concept in many areas of evolutionary research. The second part provides detailed mathematical proofs for the solutions of two problems: (i) inferring a gene evolution along a species tree accounting for various types of evolutionary events and (ii) trees reconciliation into a single species tree when only gene duplications and losses are allowed. All proposed algorithms have a cubic time complexity and are mathematically proved to find exact solutions. Solving algorithms for problem (ii) can be naturally extended to incorporate horizontal transfers, other evolutionary events, and time scales on the species tree. PMID:24800245

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
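
    A minimal illustration of wavelet-based shift detection on a noisy signal, using PyWavelets. The study's detector estimates Lipschitz exponents from the decay of wavelet maxima across scales; the sketch below only locates a transition from aggregated wavelet energy, under the assumption of an abrupt state shift at a known test time:

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 10.0, 2000)
    signal = np.where(t < 6.0, 0.0, 1.0) + 0.1 * rng.standard_normal(t.size)

    # continuous wavelet transform with a Mexican-hat wavelet over 16 scales
    coef, _ = pywt.cwt(signal, scales=np.arange(1, 17), wavelet='mexh')
    energy = np.abs(coef).sum(axis=0)   # coefficients align across scales at the shift

    m = 200                             # trim convolution boundary artifacts
    print(t[m:-m][energy[m:-m].argmax()])   # ~6.0, the detected transition time
    ```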

  11. An unbalanced spectra classification method based on entropy

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of the spectra is quite an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems; how to solve the large-scale classification problem is quite important to USCM. It can easily be shown by mathematical computation that the dual form of USCM is equivalent to the minimum enclosing ball (MEB) problem, so the core vector machine (CVM) is introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in dealing with the problem of rare spectra mining, on the small- and medium-scale datasets and the large-scale datasets respectively.

  12. Open problems of magnetic island control by electron cyclotron current drive

    DOE PAGES

    Grasso, Daniela; Lazzaro, E.; Borgogno, D.; ...

    2016-11-17

    This study reviews key aspects of the problem of magnetic island control by electron cyclotron current drive in fusion devices. On the basis of the ordering of the basic spatial and time scales of the magnetic reconnection physics, we present the established results, highlighting some of the open issues posed by the small-scale structures that typically accompany the nonlinear evolution of the magnetic islands and constrain the effect of the control action.

  13. The dynamics of oceanic fronts. I - The Gulf Stream

    NASA Technical Reports Server (NTRS)

    Kao, T. W.

    1980-01-01

    The establishment and maintenance of the mean hydrographic properties of large-scale density fronts in the upper ocean is considered. The dynamics is studied by posing an initial value problem starting with a near-surface discharge of buoyant water with a prescribed density deficit into an ambient stationary fluid of uniform density; the full time-dependent diffusion and Navier-Stokes equations are then used with constant eddy diffusion and viscosity coefficients, together with a constant Coriolis parameter. Scaling analysis reveals three independent length scales of the problem: the radius of deformation (inertial length), the buoyancy length, and the diffusive length. The governing equations are then suitably scaled, and the resulting normalized equations are shown to depend on the Ekman number alone for problems of oceanic interest. It is concluded that the mean Gulf Stream dynamics can be interpreted in terms of a solution of the Navier-Stokes and diffusion equations, with the cross-stream circulation responsible for the maintenance of the front; this mechanism is suggested for the maintenance of the Gulf Stream.

  14. Measuring Cognitive Load with Subjective Rating Scales during Problem Solving: Differences between Immediate and Delayed Ratings

    ERIC Educational Resources Information Center

    Schmeck, Annett; Opfermann, Maria; van Gog, Tamara; Paas, Fred; Leutner, Detlev

    2015-01-01

    Subjective cognitive load (CL) rating scales are widely used in educational research. However, there are still some open questions regarding the point of time at which such scales should be applied. Whereas some studies apply rating scales directly after each step or task and use an average of these ratings, others assess CL only once after the…

  15. A fast time-difference inverse solver for 3D EIT with application to lung imaging.

    PubMed

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2016-08-01

    A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
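
    The matrix-vector-product-only structure that makes this class of methods attractive is easy to demonstrate. The sketch below uses ISTA, a simple relative of GPSR for the same l1-regularized objective (not the GPSR algorithm itself), on a random stand-in for an EIT sensitivity matrix:

    ```python
    import numpy as np

    def ista(A, b, tau, n_iter=500):
        """Solve min 0.5*||Ax-b||^2 + tau*||x||_1 using only A and A.T products."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)           # one forward and one adjoint product
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))         # underdetermined "sensitivity" matrix
    x_true = np.zeros(200)
    x_true[[5, 60, 150]] = [1.0, -2.0, 1.5]    # sparse conductivity-change anomaly
    b = A @ x_true + 0.01 * rng.standard_normal(80)

    x_hat = ista(A, b, tau=0.1)
    print(np.flatnonzero(np.abs(x_hat) > 0.2)) # recovers the support {5, 60, 150}
    ```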

  16. The present development of time service in Brazil, with the application of the TV line-10 method for coordination and synchronization of atomic clocks

    NASA Technical Reports Server (NTRS)

    Silva, P. M.; Silva, I. M.

    1974-01-01

    Various methods presently used for the dissemination of time at several levels of precision are described along with future projects in the field. Different aspects of time coordination are reviewed and a list of future laboratories participating in a National Time Scale will be presented. A Brazilian Atomic Time Scale will be obtained from as many of these laboratories as possible. The problem of intercomparison between the Brazilian National Time Scale and the International one will be presented and probable solutions will be discussed. Needs related to the TV Line-10 method will be explained and comments will be made on the legal aspects of time dissemination throughout the country.

  17. Analysis of passive scalar advection in parallel shear flows: Sorting of modes at intermediate time scales

    NASA Astrophysics Data System (ADS)

    Camassa, Roberto; McLaughlin, Richard M.; Viotti, Claudio

    2010-11-01

    The time evolution of a passive scalar advected by parallel shear flows is studied for a class of rapidly varying initial data. Such situations are of practical importance in a wide range of applications from microfluidics to geophysics. In these contexts, it is well known that the long-time evolution of the tracer concentration is governed by Taylor's asymptotic theory of dispersion. In contrast, we focus here on the evolution of the tracer at intermediate time scales. We show how intermediate regimes can be identified before Taylor's, and in particular, how the Taylor regime can be delayed indefinitely by properly manufactured initial data. A complete characterization of the sorting of these time scales and their associated spatial structures is presented. These analytical predictions are compared with highly resolved numerical simulations. Specifically, this comparison is carried out for the case of periodic variations in the streamwise direction on the short scale with envelope modulations on the long scales, and shows how this structure can lead to "anomalously" diffusive transients in the evolution of the scalar onto the ultimate regime governed by Taylor dispersion. Mathematically, the occurrence of these transients can be viewed as a competition in the asymptotic dominance between large Péclet (Pe) numbers and the long/short scale aspect ratios (L_Vel/L_Tracer ≡ k), two independent nondimensional parameters of the problem. We provide analytical predictions of the associated time scales by a modal analysis of the eigenvalue problem arising in the separation of variables of the governing advection-diffusion equation. The anomalous time scale in the asymptotic limit of large k Pe is derived for the short-scale periodic structure of the scalar's initial data, for both exactly solvable cases and in general with WKBJ analysis. In particular, the exactly solvable sawtooth flow is especially important in that it provides a shortcut to the exact solution of the eigenvalue problem for the physically relevant vanishing Neumann boundary conditions in linear-shear channel flow. We show that the life of the corresponding modes at large Pe for this case is shorter than that of the ones arising from shear-free zones in the fluid's interior. A WKBJ study of the latter modes provides a longer intermediate time evolution. This part of the analysis is technical, as the corresponding spectrum is dominated by asymptotically coalescing turning points in the limit of large Pe numbers. When large-scale initial data components are present, the transient regime of the WKBJ (anomalous) modes evolves into one governed by Taylor dispersion. This is studied by a regular perturbation expansion of the spectrum in the small wavenumber regimes.

  18. Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios

    NASA Astrophysics Data System (ADS)

    Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui

    2018-01-01

    The multi-scale method is widely used in analyzing time series of financial markets, and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between the time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
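
    The thresholding step is straightforward to reproduce. A sketch with random series standing in for the scale-decomposed return series; in the paper, one network would be built per time scale after the decomposition:

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    returns = rng.standard_normal((10, 500))          # 10 stocks, 500 observations
    C = np.corrcoef(returns)                          # fully connected by default

    for thr in (0.0, 0.05, 0.1):                      # increasing edge thresholds
        G = nx.Graph()
        G.add_nodes_from(range(len(C)))
        for i in range(len(C)):
            for j in range(i + 1, len(C)):
                if abs(C[i, j]) >= thr:               # keep only strong correlations
                    G.add_edge(i, j, weight=C[i, j])
        print(thr, G.number_of_edges(), nx.density(G))
    ```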

  19. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large-scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying the large-scale behavior of laboratory and astrophysical plasmas. In many problems, the numerical solution of the MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step, variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study the performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as the Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large-scale stiff systems of differential equations such as MHD.
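
    The advantage exponential integrators exploit is visible already on a stiff linear test problem, where the exponential step is exact for any step size while explicit Euler is stability-limited. A minimal sketch on a diagonal toy system; this is not an MHD or EPIRK code:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # For u' = A u the exponential (Euler) step u_{n+1} = exp(h A) u_n is exact
    # for any h, while explicit Euler needs h < 2/|lambda_max| for stability.
    A = np.diag([-1.0, -1000.0])          # mildly and severely stiff modes
    u0 = np.array([1.0, 1.0])
    h, n = 0.01, 100                      # step far beyond explicit stability

    u_exp, u_eul = u0.copy(), u0.copy()
    E = expm(h * A)                       # matrix exponential, reused every step
    for _ in range(n):
        u_exp = E @ u_exp                 # exponential integrator step
        u_eul = u_eul + h * (A @ u_eul)   # explicit Euler step

    print(u_exp)                          # ~ exact solution [e^-1, ~0]
    print(u_eul)                          # blows up: |1 - 1000 h| = 9 > 1
    ```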

  20. Novel Technologies for Next Generation Memory

    DTIC Science & Technology

    2013-07-25

    charge in the capacitor) eventually fades unless the capacitor charge is refreshed, so the memory cells must be periodically refreshed (rewritten). The...reliability issues (such as the poor data retention problem and refresh failure). In order to avoid those problems, a 3-dimensional channel structure...states during the refresh cycle (retention time). When the channel length is scaled down, it is difficult to guarantee sufficient retention time

  1. Computation of Transonic Nozzle Sound Transmission and Rotor Problems by the Dispersion-Relation-Preserving Scheme

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Aganin, Alexei

    2000-01-01

    The transonic nozzle transmission problem and the open rotor noise radiation problem are solved computationally. Both are multiple-length-scale problems. For efficient and accurate numerical simulation, the multiple-size-mesh multiple-time-step Dispersion-Relation-Preserving scheme is used to calculate the time-periodic solution. To ensure an accurate solution, high quality numerical boundary conditions are also needed. For the nozzle problem, a set of nonhomogeneous outflow boundary conditions is required. The nonhomogeneous boundary conditions not only generate the incoming sound waves but also, at the same time, allow the reflected acoustic waves and entropy waves, if present, to exit the computation domain without reflection. For the open rotor problem, there is an apparent singularity at the axis of rotation. An analytic extension approach is developed to provide a high quality axis boundary treatment.
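
    The design target of DRP-type schemes, keeping the numerical wavenumber close to the exact one over a wide band, can be illustrated by measuring the modified wavenumber of ordinary central stencils; the DRP stencil's optimized coefficients are not reproduced here:

    ```python
    import numpy as np

    # A finite-difference first derivative acts on the Fourier mode e^{ikx} as
    # multiplication by i*kbar, where kbar is the scheme's numerical wavenumber.
    # DRP schemes choose stencil weights to keep kbar close to k over a wide band.

    def numerical_wavenumber(coeffs, offsets, theta):
        """kbar*dx for the stencil sum_j a_j u_{i+j} / dx, at theta = k*dx."""
        return np.imag(sum(a * np.exp(1j * j * theta) for a, j in zip(coeffs, offsets)))

    theta = np.linspace(0.01, np.pi, 5)
    c2 = ([-0.5, 0.5], [-1, 1])                            # 2nd-order central
    c4 = ([1/12, -8/12, 8/12, -1/12], [-2, -1, 1, 2])      # 4th-order central
    print(theta)                                           # exact k*dx
    print(numerical_wavenumber(*c2, theta))                # sin(theta)
    print(numerical_wavenumber(*c4, theta))                # (8 sin θ - sin 2θ)/6
    ```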

  2. Numerical evaluation of the scale problem on the wind flow of a windbreak

    PubMed Central

    Liu, Benli; Qu, Jianjun; Zhang, Weimin; Tan, Lihai; Gao, Yanhong

    2014-01-01

    The airflow field around wind fences with different porosities, which are important in determining the efficiency of fences as a windbreak, is typically studied via scaled wind tunnel experiments and numerical simulations. However, the scale problem in wind tunnels or numerical models is rarely researched. In this study, we perform a numerical comparison between a scaled wind-fence experimental model and an actual-sized fence via computational fluid dynamics simulations. The results show that although the general field pattern can be captured in a reduced-scale wind tunnel or numerical model, several flow characteristics near obstacles are not proportional to the size of the model and thus cannot be extrapolated directly. For example, the small vortex behind a low-porosity fence with a scale of 1:50 is approximately 4 times larger than that behind a full-scale fence. PMID:25311174

  3. Perspectives on the geographic stability and mobility of people in cities

    PubMed Central

    Hanson, Susan

    2005-01-01

    A class of questions in the human environment sciences focuses on the relationship between individual or household behavior and local geographic context. Central to these questions is the nature of people's geographic mobility as well as the duration of their locational stability at varying spatial and temporal scales. The problem for researchers is that the processes of mobility/stability are temporally and spatially dynamic and therefore difficult to measure. Whereas time and space are continuous, analysts must select levels of aggregation for both length of time in place and spatial scale of place that fit with the problem in question. Previous work has emphasized mobility and suppressed stability as an analytic category. I focus here on stability and show how analyzing individuals' stability requires also analyzing their mobility. Through an empirical example centered on the relationship between entrepreneurship and place, I demonstrate how a spotlight on stability illuminates a resolution to the measurement problem by highlighting the interdependence between the time and space dimensions of stability/mobility. PMID:16230616

  4. Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.

    DTIC Science & Technology

    1980-11-01

    Table of contents (fragment): 1. Introduction; 2. Eigenstructure; 2.4 Ordering of State Variables; 2.5 Example - 8th Order Power System Model. In Chapter 3 we consider the time-scale decomposition of singularly perturbed systems, for which the general system model (1.1) takes a standard singularly perturbed form.

  5. Gradient design for liquid chromatography using multi-scale optimization.

    PubMed

    López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C

    2018-01-26

    In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique of fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
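
    The coarse-to-fine structure can be sketched apart from the chromatographic details: optimize roughly 2^κ control points per scale and warm-start each scale by interpolating the previous optimum. A toy stand-in objective replaces the resolution criterion, and Nelder-Mead replaces the pattern search and B-spline subdivision:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0.0, 1.0, 200)
    target = 0.2 + 0.6 * t**2              # hypothetical "ideal" solvent program

    def objective(points):                 # misfit + penalty on decreasing segments
        profile = np.interp(t, np.linspace(0, 1, len(points)), points)
        penalty = np.sum(np.maximum(0.0, -np.diff(points)))
        return np.mean((profile - target) ** 2) + 10.0 * penalty

    points = np.array([0.2, 0.8])          # coarsest scale: 2 control points
    for n in (2, 4, 8, 16):                # scales kappa with ~2^kappa variables
        grid = np.linspace(0, 1, n)
        points = np.interp(grid, np.linspace(0, 1, len(points)), points)  # warm start
        points = minimize(objective, points, method='Nelder-Mead').x
    print(objective(points))               # misfit shrinks as scales are refined
    ```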

  6. Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Corban, J. E.

    1990-01-01

    The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.

  7. Hybrid discrete-continuum modeling for transport, biofilm development and solid restructuring including electrostatic effects

    NASA Astrophysics Data System (ADS)

    Prechtel, Alexander; Ray, Nadja; Rupp, Andreas

    2017-04-01

    We present an approach for the mathematical, mechanistic modeling and numerical treatment of processes leading to the formation, stability, and turnover of soil micro-aggregates. This aims at deterministic aggregation models including detailed mechanistic pore-scale descriptions to account for the interplay of geochemistry and microbiology, and the link to soil functions such as porosity. We therefore consider processes at the pore scale and the mesoscale (laboratory scale). At the pore scale, transport by diffusion, advection, and drift emerging from electric forces can be taken into account, in addition to homogeneous and heterogeneous reactions of species. In the context of soil micro-aggregates, the growth of biofilms or other gluing substances such as EPS (extracellular polymeric substances) is important and affects the structure of the pore space in space and time. This model is upscaled mathematically in the framework of (periodic) homogenization to transfer it to the mesoscale, resulting in effective coefficients/parameters there. This micro-macro model thus couples macroscopic equations that describe the transport and fluid flow at the scale of the porous medium (mesoscale) with averaged time- and space-dependent coefficient functions. These functions may be explicitly computed by means of auxiliary cell problems (microscale). Finally, the pore space in which the cell problems are defined is time and space dependent, and its geometry inherits information from the transport equation's solutions. The microscale problems rely on versatile combinations of cellular automata and discontinuous Galerkin methods, while on the mesoscale mixed finite elements are used. The numerical simulations allow us to study the interplay between these processes.

  8. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    A vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. The main objectives of the vehicle routing problem are to minimize the traveled distance, total traveling time, number of vehicles and cost function of transportation. Reducing these variables leads to decreasing the total cost and increasing the driver's satisfaction level. On the other hand, this satisfaction, which decreases as the service time increases, is considered an important logistic problem for a company. The stochastic time, governed by a probability variable, leads to variation of the service time, but is ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour, such that the total traveling time of the vehicles is restricted to a specified limit based on a defined probability. Since exact solution of the vehicle routing problem, which belongs to the category of NP-hard problems, is not practical at a large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
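
    The simulated-annealing core of such a hybrid is compact. A minimal routing-flavored sketch with fixed-size routes and a plain swap neighborhood; the genetic operators and the probabilistic travel-time constraint of the paper are omitted:

    ```python
    import math
    import random

    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(20)]  # customer sites
    depot, cap = (0.5, 0.5), 5                                     # 5 stops per route

    def cost(perm):                    # total length of the fixed-size routes
        total = 0.0
        for r in range(0, len(perm), cap):
            route = [depot] + [pts[i] for i in perm[r:r + cap]] + [depot]
            total += sum(math.dist(u, v) for u, v in zip(route, route[1:]))
        return total

    sol = list(range(len(pts)))
    cur, T = cost(sol), 1.0
    while T > 1e-3:
        i, j = random.sample(range(len(sol)), 2)
        sol[i], sol[j] = sol[j], sol[i]                 # neighborhood move: swap
        c = cost(sol)
        if c < cur or random.random() < math.exp((cur - c) / T):
            cur = c                                     # accept (possibly worse)
        else:
            sol[i], sol[j] = sol[j], sol[i]             # reject: undo the swap
        T *= 0.999                                      # geometric cooling
    print(cur)
    ```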

  9. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    NASA Astrophysics Data System (ADS)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic maximizes space utilization by fitting appropriate rectangles from both sides of the walls of the current residual space, layer by layer. An iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
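
    A greatly simplified layer-based placement conveys the flavor of such heuristics. The sketch fills each layer greedily from the left wall only, whereas the actual heuristic works from both sides of the residual space:

    ```python
    def pack(rects, bin_w):             # rects: list of (width, height)
        layers, placed, y = [], [], 0   # layer: [y, height, next_x, right_wall]
        for w, h in sorted(rects, key=lambda r: -r[1]):      # tallest first
            for layer in layers:
                if layer[3] - layer[2] >= w and h <= layer[1]:
                    placed.append((layer[2], layer[0], w, h))
                    layer[2] += w       # consume width inside this layer
                    break
            else:
                layers.append([y, h, w, bin_w])              # open a new layer
                placed.append((0, y, w, h))
                y += h
        return y, placed                # total height used, (x, y, w, h) list

    height, placed = pack([(3, 2), (4, 2), (2, 1), (5, 3), (1, 1)], bin_w=8)
    print(height, placed)               # height 5 for this tiny instance
    ```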

  10. Applied mathematical problems in modern electromagnetics

    NASA Astrophysics Data System (ADS)

    Kriegsman, Gregory

    1994-05-01

    We have primarily investigated two classes of electromagnetic problems. The first contains the quantitative description of microwave heating of dispersive and conductive materials. Such problems arise, for example, when biological tissue are exposed, accidentally or purposefully, to microwave radiation. Other instances occur in ceramic processing, such as sintering and microwave assisted chemical vapor infiltration and other industrial drying processes, such as the curing of paints and concrete. The second class characterizes the scattering of microwaves by complex targets which possess two or more disparate length and/or time scales. Spatially complex scatterers arise in a variety of applications, such as large gratings and slowly changing guiding structures. The former are useful in developing microstrip energy couplers while the later can be used to model anatomical subsystems (e.g., the open guiding structure composed of two legs and the adjoining lower torso). Temporally complex targets occur in applications involving dispersive media whose relaxation times differ by orders of magnitude from thermal and/or electromagnetic time scales. For both cases the mathematical description of the problems gives rise to complicated ill-conditioned boundary value problems, whose accurate solutions require a blend of both asymptotic techniques, such as multiscale methods and matched asymptotic expansions, and numerical methods incorporating radiation boundary conditions, such as finite differences and finite elements.

  11. A multiple scales approach to sound generation by vibrating bodies

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Pope, Dennis S.

    1992-01-01

    The problem of determining the acoustic field in an inviscid, isentropic fluid generated by a solid body whose surface executes prescribed vibrations is formulated and solved as a multiple scales perturbation problem, using the Mach number M based on the maximum surface velocity as the perturbation parameter. Following the idea of multiple scales, new 'slow' spacial scales are introduced, which are defined as the usual physical spacial scale multiplied by powers of M. The governing nonlinear differential equations lead to a sequence of linear problems for the perturbation coefficient functions. However, it is shown that the higher order perturbation functions obtained in this manner will dominate the lower order solutions unless their dependence on the slow spacial scales is chosen in a certain manner. In particular, it is shown that the perturbation functions must satisfy an equation similar to Burgers' equation, with a slow spacial scale playing the role of the time-like variable. The method is illustrated by a simple one-dimenstional example, as well as by three different cases of a vibrating sphere. The results are compared with solutions obtained by purely numerical methods and some insights provided by the perturbation approach are discussed.

  12. A novel heuristic algorithm for capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre

    2017-09-01

    The vehicle routing problem with capacity constraints was considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods, owing to the high computational complexity for large-scale problems. Consequently, new heuristic or metaheuristic approaches have been developed to solve this problem. In this paper, we constructed a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS), with several specifically designed operators and features, to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm was illustrated on benchmark problems. The algorithm performs better on large-scale instances and gains an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and found encouraging results in comparison with the company's current practice.
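
    The ALNS ingredient can be sketched independently of the tabu-search layer: repeatedly destroy part of the solution and repair it by cheapest insertion, keeping improvements. A minimal tour-flavored sketch, with capacity handling and operator adaptivity omitted:

    ```python
    import math
    import random

    random.seed(1)
    pts = [(random.random(), random.random()) for _ in range(15)]

    def tour_len(tour):
        return sum(math.dist(pts[a], pts[b]) for a, b in zip(tour, tour[1:] + tour[:1]))

    def cheapest_insert(tour, node):    # repair operator: cheapest insertion
        best = min((tour_len(tour[:i] + [node] + tour[i:]), i)
                   for i in range(len(tour) + 1))
        return tour[:best[1]] + [node] + tour[best[1]:]

    tour = list(range(len(pts)))
    for _ in range(500):
        removed = random.sample(tour, 3)            # destroy: drop 3 customers
        cand = [v for v in tour if v not in removed]
        for v in removed:                           # repair: reinsert greedily
            cand = cheapest_insert(cand, v)
        if tour_len(cand) < tour_len(tour):         # keep improvements only
            tour = cand
    print(tour_len(tour))
    ```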

  13. A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs

    PubMed Central

    Liu, Wan-Yu; Chou, Chun-Hung

    2014-01-01

    This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology, so that more concurrent transmissions and suppressed overlapping-channel interference can be achieved. Although previous works have studied this joint problem, their linear programming models for the problem did not incorporate some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990

  14. Occupation-specific screening for future sickness absence: criterion validity of the trucker strain monitor (TSM).

    PubMed

    De Croon, Einar M; Blonk, Roland W B; Sluiter, Judith K; Frings-Dresen, Monique H W

    2005-02-01

    Monitoring psychological job strain may help occupational physicians to take preventive action at the appropriate time. For this purpose, the 10-item trucker strain monitor (TSM), assessing work-related fatigue and sleeping problems in truck drivers, was developed. This study examined (1) the test-retest reliability of the TSM, (2) its criterion validity with respect to future sickness absence due to psychological health complaints and (3) the usefulness of its two-scale structure. The TSM and self-administered questionnaires, providing information about stressful working conditions (job control and job demands) and sickness absence, were sent to a random sample of 2000 drivers in 1998. Of the 1123 responders, 820 returned a completed questionnaire 2 years later (response: 72%). The TSM work-related fatigue scale, the TSM sleeping problems scale and the TSM composite scale showed satisfactory 2-year test-retest reliability (r = 0.62, 0.66 and 0.67, respectively). The work-related fatigue scale, the sleeping problems scale and the composite scale had sensitivities of 61, 65 and 61%, respectively, in identifying drivers with future sickness absence due to psychological health complaints. The specificity and positive predictive value of the TSM composite scale were 77 and 11%, respectively. The work-related fatigue scale and the sleeping problems scale were moderately strongly correlated (r = 0.62). However, stressful working conditions were differentially associated with the two scales. The results support the test-retest reliability, criterion validity and two-factor structure of the TSM. In general, the results suggest that the use of occupation-specific psychological job strain questionnaires is fruitful.
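
    For reference, screening statistics of this kind follow from a 2x2 confusion table. The counts below are invented solely to reproduce figures in the reported range:

    ```python
    # Hypothetical 2x2 screening table (flagged vs. not, absent vs. not):
    tp, fn = 25, 16          # absentees flagged / missed by the composite scale
    fp, tn = 196, 656        # non-absentees flagged / correctly cleared

    print(tp / (tp + fn))    # sensitivity ~0.61
    print(tn / (tn + fp))    # specificity ~0.77
    print(tp / (tp + fp))    # positive predictive value ~0.11
    ```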

  15. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. This time-consumption problem limits applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
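
    The parallelization pattern, distributing independent HRU computations within a time step, translates to any shared-memory or process-based runtime. A Python process-pool analogue of the OpenMP loop; simulate_hru is a hypothetical stand-in, not SWAT code:

    ```python
    import numpy as np
    from multiprocessing import Pool

    def simulate_hru(state):
        # hypothetical stand-in for one HRU's water-balance step
        return float(np.tanh(state).sum())

    if __name__ == '__main__':
        hrus = [np.random.rand(1000) for _ in range(2000)]   # gridded HRU states
        with Pool() as pool:                                 # parallel HRU loop
            fluxes = pool.map(simulate_hru, hrus, chunksize=64)
        print(sum(fluxes))                                   # aggregate per time step
    ```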

  16. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification problem of nonlinear models of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix, as in the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System), which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).

  17. Psychometric properties of the Revised Chen Internet Addiction Scale (CIAS-R) in Chinese adolescents.

    PubMed

    Mak, Kwok-Kei; Lai, Ching-Man; Ko, Chih-Hung; Chou, Chien; Kim, Dong-Il; Watanabe, Hiroko; Ho, Roger C M

    2014-10-01

    The Revised Chen Internet Addiction Scale (CIAS-R) was developed to assess Internet addiction in Chinese populations, but its psychometric properties in adolescents have not been examined. This study aimed to evaluate the factor structure and psychometric properties of the CIAS-R in Hong Kong Chinese adolescents. 860 Grade 7 to 13 students (38% boys) completed the CIAS-R, Young's Internet Addiction Test (IAT), and the Health of the Nation Outcome Scales for Children and Adolescents (HoNOSCA) in a survey. The prevalence of Internet addiction as assessed by the CIAS-R was 18%. High internal consistency and inter-item correlations were reported for the CIAS-R. Results from the confirmatory factor analysis suggested a four-factor structure of Compulsive Use and Withdrawal, Tolerance, Interpersonal and Health-related Problems, and Time Management Problems. Moreover, results of hierarchical multiple regression supported the incremental validity of the CIAS-R in predicting mental health outcomes beyond the effects of demographic differences and self-reported time spent online. The CIAS-R is a reliable and valid measure of Internet addiction problems in Hong Kong adolescents. A future study is warranted to validate the cutoffs of the CIAS-R for identification of adolescents with Internet use problems who may have mental health needs.

  18. Social Media Use and Episodic Heavy Drinking Among Adolescents.

    PubMed

    Brunborg, Geir Scott; Andreas, Jasmina Burdzovic; Kvaavik, Elisabeth

    2017-06-01

    Objectives: Little is known about the consequences of adolescent social media use. The current study estimated the association between the amount of time adolescents spend on social media and the risk of episodic heavy drinking. Methods: A school-based self-report cross-sectional study including 851 Norwegian middle and high school students (46.1% boys). The exposure was the frequency and quantity of social media use; the outcome was the frequency of drinking four or six (girls and boys, respectively) alcoholic drinks during a single day (episodic heavy drinking). Covariates were the MacArthur Scale of Subjective Social Status, the Barratt Impulsiveness Scale - Brief, the Brief Sensation Seeking Scale, the Patient Health Questionnaire-9 items for Adolescents, the Strengths and Difficulties Questionnaire Peer Relationship Problems scale, gender, and school grade. Results: A greater amount of time spent on social media was associated with a greater likelihood of episodic heavy drinking among adolescents (OR = 1.12, 95% CI (1.05, 1.19), p = 0.001), even after adjusting for school grade, impulsivity, sensation seeking, symptoms of depression, and peer relationship problems. Conclusion: The results from the current study indicate that more time spent on social media is related to a greater likelihood of episodic heavy drinking among adolescents.

  19. Breakup Effects on University Students' Perceived Academic Performance

    ERIC Educational Resources Information Center

    Field, Tiffany; Diego, Miguel; Pelaez, Martha; Deeds, Osvelia; Delgado, Jeannette

    2012-01-01

    The Problem: Problems that might be expected to affect perceived academic performance were studied in a sample of 283 university students. Results: Breakup Distress Scale scores, less time since the breakup and no new relationship contributed to 16% of the variance on perceived academic performance. Variables that were related to academic…

  20. DEMONSTRATION OF A MULTI-SCALE INTEGRATED MONITORING AND ASSESSMENT IN NY/NJ HARBOR

    EPA Science Inventory

    The Clean Water Act (CWA) requires states and tribes to assess the overall quality of their waters (Sec 305(b)), determine whether that quality is changing over time, identify problem areas and management actions necessary to resolve those problems, and evaluate the effectiveness...

  1. A Systematic Multi-Time Scale Solution for Regional Power Grid Operation

    NASA Astrophysics Data System (ADS)

    Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.

    2017-10-01

    Many aspects need to be taken into consideration in a regional grid while making scheduling plans. In this paper, a systematic multi-time-scale solution for regional power grid operation, considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission, is proposed. On the time-scale axis, we discuss the problem across monthly, weekly, day-ahead, within-day and day-behind horizons, and the system also covers multiple generator types, including thermal units, hydro plants, wind turbines and pumped storage stations. The 9 subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been deployed in a provincial power grid in Central China, and the operational results further verified the effectiveness of the system.

  2. Advanced computer architecture for large-scale real-time applications.

    DOT National Transportation Integrated Search

    1973-04-01

    Air traffic control automation is identified as a crucial problem which provides a complex, real-time computer application environment. A novel computer architecture in the form of a pipeline associative processor is conceived to achieve greater perf...

  3. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    NASA Astrophysics Data System (ADS)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for a linear multi-agent system with fixed communication topology in the presence of intermittent communication, using time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens on a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to the intermittent information transmissions. Time-scale theory provides a powerful tool to combine the continuous-time and discrete-time cases and to study the consensus protocol under a unified framework. Using this theory, conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
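
    As a rough numerical illustration of the setting (a sketch of plain Laplacian consensus with a communication duty cycle, not the paper's time-scale machinery), the following toy simulation shows agents still reaching agreement when interaction happens only on part of each period; the graph, duty cycle and step size are made-up values.

    ```python
    # Toy consensus with intermittent communication: agents integrate the
    # usual protocol x' = -L x only while communication is active and hold
    # their states otherwise. All parameters are illustrative.
    import numpy as np

    L = np.array([[ 1., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])      # path-graph Laplacian
    x = np.array([0.0, 5.0, 9.0])        # initial agent states
    dt, T = 0.01, 20.0
    for k in range(int(T / dt)):
        t = k * dt
        if (t % 1.0) < 0.6:              # communication active 60% of each second
            x = x - dt * (L @ x)         # Euler step of x' = -L x
    print(x)                             # -> all states near the average 14/3
    ```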

  4. Time-marching multi-grid seismic tomography

    NASA Astrophysics Data System (ADS)

    Tong, P.; Yang, D.; Liu, Q.

    2016-12-01

    From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
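
    The nested structure described above (time windows whose final models seed the next window, and a coarse-to-fine grid hierarchy inside each window) can be sketched on a synthetic 1-D problem. Everything below is a hypothetical stand-in, not the tomoQuake implementation.

    ```python
    # Toy time-marching multi-grid inversion: each time window's data is
    # fit by least-squares-style corrections on progressively finer grids,
    # and the final model of one window seeds the next window.
    import numpy as np

    rng = np.random.default_rng(0)
    x_fine = np.linspace(0, 1, 101)
    true_model = 1.0 + 0.3 * np.sin(2 * np.pi * x_fine)

    model = np.ones_like(x_fine)               # starting model for window 1
    for window in range(3):                    # three time windows of data
        data = true_model + 0.05 * rng.standard_normal(x_fine.size)
        for n_nodes in (5, 11, 41):            # coarse -> fine grids
            x_coarse = np.linspace(0, 1, n_nodes)
            resid = data - model               # misfit on the fine grid
            coarse_fit = np.interp(x_coarse, x_fine, resid)
            # prolongate the coarse correction back to the fine grid
            model = model + 0.5 * np.interp(x_fine, x_coarse, coarse_fit)
        # 'model' now seeds the next window's inversion
    print(np.max(np.abs(model - true_model)))  # error far below the initial guess
    ```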

  5. Strong CP and SUZ2

    NASA Astrophysics Data System (ADS)

    Albaid, Abdelhamid; Dine, Michael; Draper, Patrick

    2015-12-01

    Solutions to the strong CP problem typically introduce new scales associated with the spontaneous breaking of symmetries. Absent any anthropic argument for small θ̄, these scales require stabilization against ultraviolet corrections. Supersymmetry offers a tempting stabilization mechanism, since it can solve the "big" electroweak hierarchy problem at the same time. One family of solutions to strong CP, including generalized parity models, heavy axion models, and heavy η' models, introduces Z₂ copies of (part of) the Standard Model and an associated scale of Z₂ breaking. We review why, without additional structure such as supersymmetry, the Z₂-breaking scale is unacceptably tuned. We then study "SUZ2" models, supersymmetric theories with Z₂ copies of the MSSM. We find that the addition of SUSY typically destroys the Z₂ protection of θ̄ = 0, even at tree level, once SUSY and Z₂ are broken. In theories like supersymmetric completions of the twin Higgs, where Z₂ addresses the little hierarchy problem but not strong CP, two axions can be used to relax θ̄.

  6. Turbulence dissipation challenge: particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.

    2015-12-01

    We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial-range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.

  7. Accuracy of Time Integration Approaches for Stiff Magnetohydrodynamics Problems

    NASA Astrophysics Data System (ADS)

    Knoll, D. A.; Chacon, L.

    2003-10-01

    The simulation of complex physical processes with multiple time scales presents a continuing challenge to the computational plasma physicist due to the co-existence of fast and slow time scales. Within computational plasma physics, practitioners have developed and used linearized methods, semi-implicit methods, and time splitting in an attempt to tackle such problems. All of these methods are understood to generate numerical error. We are currently developing algorithms which remove such error for MHD problems [1,2]. These methods do not rely on linearization or time splitting. We are also attempting to analyze the errors introduced by existing "implicit" methods using modified equation analysis (MEA) [3]. In this presentation we will briefly cover the major findings in [3]. We will then extend this work further into MHD. This analysis will be augmented with numerical experiments with the hope of gaining insight, particularly into how these errors accumulate over many time steps. [1] L. Chacon, D.A. Knoll, J.M. Finn, J. Comput. Phys., vol. 178, pp. 15-36 (2002). [2] L. Chacon and D.A. Knoll, J. Comput. Phys., vol. 188, pp. 573-592 (2003). [3] D.A. Knoll, L. Chacon, L.G. Margolin, V.A. Mousseau, J. Comput. Phys., vol. 185, pp. 583-611 (2003).

  8. Viscous decay of nonlinear oscillations of a spherical bubble at large Reynolds number

    NASA Astrophysics Data System (ADS)

    Smith, W. R.; Wang, Q. X.

    2017-08-01

    The long-time viscous decay of large-amplitude bubble oscillations is considered in an incompressible Newtonian fluid, based on the Rayleigh-Plesset equation. At large Reynolds numbers, this is a multi-scaled problem with a short time scale associated with inertial oscillation and a long time scale associated with viscous damping. A multi-scaled perturbation method is thus employed to solve the problem. The leading-order analytical solution for the bubble radius history is obtained from the Rayleigh-Plesset equation in closed form, including both viscous and surface tension effects. Several important formulae are derived, including: the average energy loss rate of the bubble system during each cycle of oscillation, an explicit formula for the dependence of the oscillation frequency on the energy, and an implicit formula for the amplitude envelope of the bubble radius as a function of the energy. Our theory shows that at leading order the energy of the bubble system and the frequency of oscillation do not change on the inertial time scale, the energy loss rate on the long viscous time scale being inversely proportional to the Reynolds number. These asymptotic predictions remain valid during each cycle of oscillation whether or not compressibility effects are significant. A systematic parametric analysis is carried out using the above formulae for the energy of the bubble system, the frequency of oscillation, and the minimum/maximum bubble radii in terms of the Reynolds number, the dimensionless initial pressure of the bubble gases, and the Weber number. Our results show that the frequency and the decay rate have substantial variations over the lifetime of a decaying oscillation. The results also reveal that large-amplitude bubble oscillations are very sensitive to small changes in the initial conditions, through large changes in the phase shift.
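
    For readers who want to reproduce the qualitative picture, the following minimal sketch integrates the Rayleigh-Plesset equation numerically, with an assumed polytropic gas law and illustrative water-like parameters; it exhibits the fast inertial oscillation decaying on the slow viscous time scale.

    ```python
    # Integrate the Rayleigh-Plesset equation
    #   R*R'' + (3/2) R'^2 = (p_gas - p_inf - 2*sigma/R - 4*mu*R'/R) / rho
    # with a polytropic bubble gas p_gas = p_g0 (R0/R)^(3*kappa).
    import numpy as np
    from scipy.integrate import solve_ivp

    rho, mu, sigma = 1000.0, 1e-3, 0.072      # water-like fluid (SI units)
    p_inf, p_g0 = 101325.0, 2 * 101325.0      # ambient and initial gas pressure
    R0, kappa = 1e-4, 1.4                     # initial radius, polytropic index

    def rhs(t, y):
        R, Rdot = y
        p_gas = p_g0 * (R0 / R) ** (3 * kappa)
        Rddot = ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
                 - 1.5 * Rdot ** 2) / R
        return [Rdot, Rddot]

    sol = solve_ivp(rhs, (0.0, 5e-4), [R0, 0.0], rtol=1e-9, atol=1e-12,
                    max_step=1e-7)
    print(sol.y[0].min(), sol.y[0].max())     # amplitude envelope decays slowly
    ```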

  9. Two time scale output feedback regulation for ill-conditioned systems

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1986-01-01

    Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.

  10. Numerical methods for large-scale, time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Turkel, E.

    1979-01-01

    A survey of numerical methods for time dependent partial differential equations is presented. The emphasis is on practical applications to large scale problems. A discussion of new developments in high order methods and moving grids is given. The importance of boundary conditions is stressed for both internal and external flows. A description of implicit methods is presented including generalizations to multidimensions. Shocks, aerodynamics, meteorology, plasma physics and combustion applications are also briefly described.

  11. The Relation between Cosmological Redshift and Scale Factor for Photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Shuxun, E-mail: tshuxun@mail.bnu.edu.cn; Department of Physics, Wuhan University, Wuhan 430072

    The cosmological constant problem has become one of the most important ones in modern cosmology. In this paper, we try to construct a model that can avoid the cosmological constant problem and has the potential to explain the apparent late-time accelerating expansion of the universe in both the luminosity distance and angular diameter distance measurement channels. The core of our model is to modify the relation between cosmological redshift and scale factor for photons. We point out three ways to test our hypothesis: the supernova time dilation; the gravitational waves and their electromagnetic counterparts emitted by binary neutron star systems; and the Sandage–Loeb effect. All of these methods are feasible now or in the near future.

  12. Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E

    NASA Technical Reports Server (NTRS)

    Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie

    2001-01-01

    In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test; that is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled; that is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. The objective then becomes real-time fault diagnosis using incomplete and inaccurate test results, with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time, so one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performance of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming-distance-based diagnosis, 3) maximum-likelihood-based diagnosis, and 4) Hidden-Markov-model-based diagnosis.
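
    The Hamming-distance variant is simple enough to sketch: match the observed, possibly corrupted and incomplete, test vector against each dictionary row using only the tests that actually ran. The dictionary and observations below are toy data, not from TEAMS-RT.

    ```python
    # Hamming-distance fault isolation on a toy test dictionary.
    import numpy as np

    D = np.array([[1, 0, 1, 0],     # row k = expected test signature of fault k
                  [0, 1, 1, 0],
                  [0, 0, 0, 1]])
    observed = np.array([1, 1, 1, 0])                # noisy results (1 bit flipped)
    available = np.array([True, False, True, True])  # only 3 of 4 tests ran

    # count mismatches over the available tests only
    dist = np.sum(D[:, available] != observed[available], axis=1)
    print("isolated fault:", int(np.argmin(dist)))   # fault 0 is the best match
    ```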

  13. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi

    Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.

  14. Two-machine flow shop scheduling integrated with preventive maintenance planning

    NASA Astrophysics Data System (ADS)

    Wang, Shijin; Liu, Ming

    2016-02-01

    This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop, with the time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of the integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with a total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and of individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results with several large problem sizes and different configurations indicate the potential benefits of the integrated scheduling solution, and also show that the proposed GA-based heuristics are efficient for the integrated problem.

  15. Scalable Preconditioners for Structure Preserving Discretizations of Maxwell Equations in First Order Form

    DOE PAGES

    Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.

    2018-05-01

    Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure-preserving (also termed physics-compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.

  17. Nonlinear and Stochastic Dynamics in the Heart

    PubMed Central

    Qu, Zhilin; Hu, Gang; Garfinkel, Alan; Weiss, James N.

    2014-01-01

    In a normal human life span, the heart beats about 2 to 3 billion times. Under diseased conditions, a heart may lose its normal rhythm and degenerate suddenly into much faster and irregular rhythms, called arrhythmias, which may lead to sudden death. The transition from a normal rhythm to an arrhythmia is a transition from regular electrical wave conduction to irregular or turbulent wave conduction in the heart, and thus this medical problem is also a problem of physics and mathematics. In the last century, clinical, experimental, and theoretical studies have shown that dynamical theories play fundamental roles in understanding the mechanisms of the genesis of the normal heart rhythm as well as lethal arrhythmias. In this article, we summarize in detail the nonlinear and stochastic dynamics occurring in the heart and their links to normal cardiac functions and arrhythmias, providing a holistic view through integrating dynamics from the molecular (microscopic) scale, to the organelle (mesoscopic) scale, to the cellular, tissue, and organ (macroscopic) scales. We discuss what existing problems and challenges are waiting to be solved and how multi-scale mathematical modeling and nonlinear dynamics may be helpful for solving these problems. PMID:25267872

  18. Earlier school start times are associated with higher rates of behavioral problems in elementary schools.

    PubMed

    Keller, Peggy S; Gilbert, Lauren R; Haak, Eric A; Bi, Shuang; Smith, Olivia A

    2017-04-01

    Early school start times may curtail children's sleep and inadvertently promote sleep restriction. The current study examines the potential implications of early school start times for behavioral problems in public elementary schools (student ages 5-12 years) in Kentucky. School start times were obtained from school Web sites or by calling school offices; behavioral and disciplinary problems, along with demographic information about schools, were obtained from the Kentucky Department of Education. Estimated associations controlled for teacher/student ratio, racial composition, school rank, enrollment, and Appalachian location. Associations between early school start time and greater behavioral problems (harassment, in-school removals, suspensions, and expulsions) were observed, although some of these associations were found only for schools serving the non-Appalachian region. Findings support the growing body of research showing that early school start times may contribute to student problems, and extend this research through a large-scale examination of elementary schools, behavioral outcomes, and potential moderators of risk.

  19. Mobile robot motion estimation using Hough transform

    NASA Astrophysics Data System (ADS)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for estimating mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry at an arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. The specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
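
    The core Hough mapping is compact: each 2-D point votes for the line parameters (θ, ρ) satisfying ρ = x cos θ + y sin θ, and peaks in the accumulator identify dominant lines. Below is a minimal sketch on made-up scan data, not the paper's full rotation/scale/translation estimator.

    ```python
    # Hough transform of 2-D points into straight-line parameter space.
    import numpy as np

    points = [(x, 2.0 * x + 1.0) for x in np.linspace(0, 5, 50)]  # toy "scan"
    thetas = np.linspace(0, np.pi, 180, endpoint=False)
    rhos = np.linspace(-10, 10, 200)
    acc = np.zeros((thetas.size, rhos.size), dtype=int)

    for x, y in points:
        for i, th in enumerate(thetas):
            r = x * np.cos(th) + y * np.sin(th)   # rho for this (point, theta)
            j = np.argmin(np.abs(rhos - r))       # nearest accumulator bin
            acc[i, j] += 1

    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    print("theta =", thetas[i], "rho =", rhos[j])  # parameters of y = 2x + 1
    ```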

  20. Progress and Challenges in Subseasonal Prediction

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried

    2003-01-01

    While substantial advances have occurred over the last few decades in both weather and seasonal prediction, progress in improving predictions on subseasonal time scales (approximately 2 weeks to 2 months) has been slow. In this talk I will highlight some of the recent progress that has been made to improve forecasts on subseasonal time scales and outline the challenges that we face both from an observational and modeling perspective. The talk will be based primarily on the results and conclusions of a recent NASA-sponsored workshop that focused on the subseasonal prediction problem. One of the key conclusions of that workshop was that there is compelling evidence for predictability at forecast lead times substantially longer than two weeks, and that much of that predictability is currently untapped. Tropical diabatic heating and soil wetness were singled out as particularly important processes affecting predictability on these time scales. Predictability was also linked to various low-frequency atmospheric phenomena such as the annular modes in high latitudes (including their connections to the stratosphere), the Pacific/North American pattern, and the Madden-Julian Oscillation. I will end the talk by summarizing the recommendations and plans that have been put forward for accelerating progress on the subseasonal prediction problem.

  1. Slowly changing potential problems in Quantum Mechanics: Adiabatic theorems, ergodic theorems, and scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fishman, S., E-mail: fishman@physics.technion.ac.il; Soffer, A., E-mail: soffer@math.rutgers.edu

    2016-07-15

    We employ the recently developed multi-time scale averaging method to study the large time behavior of slowly changing (in time) Hamiltonians. We treat some known cases in a new way, such as the Zener problem, and we give another proof of the adiabatic theorem in the gapless case. We prove a new uniform ergodic theorem for slowly changing unitary operators. This theorem is then used to derive the adiabatic theorem, do the scattering theory for such Hamiltonians, and prove some classical propagation estimates and asymptotic completeness.

  2. Predicting the cosmological constant with the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.

    2008-09-15

    It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  3. Scaling Laws Applied to a Modal Formulation of the Aeroservoelastic Equations

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2002-01-01

    A method of scaling is described that easily converts the aeroelastic equations of motion of a full-sized aircraft into ones of a wind-tunnel model. To implement the method, a set of rules is provided for the conversion process involving matrix operations with scale factors. In addition, a technique for analytically incorporating a spring mounting system into the aeroelastic equations is also presented. As an example problem, a finite element model of a full-sized aircraft is introduced from the High Speed Research (HSR) program to exercise the scaling method. With a set of scale factor values, a brief outline is given of a procedure to generate the first-order aeroservoelastic analytical model representing the wind-tunnel model. To verify the scaling process as applied to the example problem, the root-locus patterns from the full-sized vehicle and the wind-tunnel model are compared to see if the root magnitudes scale with the frequency scale factor value. Selected time-history results are given from a numerical simulation of an active-controlled wind-tunnel model to demonstrate the utility of the scaling process.
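
    A hedged toy version of the matrix-scaling idea (illustrative scale factors, not the paper's specific rule set): scaling the mass matrix by s_m and the stiffness matrix by s_m·s_w² scales every natural frequency by the frequency scale factor s_w, which is the kind of root-magnitude check described above.

    ```python
    # Verify that scaled structural matrices shift natural frequencies by s_w.
    import numpy as np

    M = np.diag([2.0, 1.0])                    # toy full-scale mass matrix
    K = np.array([[ 300.0, -100.0],
                  [-100.0,  100.0]])           # toy full-scale stiffness matrix

    s_m, s_w = 0.001, 4.0                      # mass and frequency scale factors
    M_model = s_m * M
    K_model = s_m * s_w**2 * K                 # stiffness rule implied by s_m, s_w

    w_full = np.sqrt(np.linalg.eigvals(np.linalg.inv(M) @ K).real)
    w_model = np.sqrt(np.linalg.eigvals(np.linalg.inv(M_model) @ K_model).real)
    print(np.sort(w_model) / np.sort(w_full))  # -> [4.0, 4.0], i.e. s_w
    ```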

  4. Signatures Of Coronal Heating Driven By Footpoint Shuffling: Closed and Open Structures.

    NASA Astrophysics Data System (ADS)

    Velli, M. C. M.; Rappazzo, A. F.; Dahlburg, R. B.; Einaudi, G.; Ugarte-Urra, I.

    2017-12-01

    We have previously described the characteristic state of the confined coronal magnetic field as a special case of magnetically dominated magnetohydrodynamic (MHD) turbulence, where the free energy in the transverse magnetic field is continuously cascaded to small scales, even though the overall kinetic energy is small. This coronal turbulence problem is defined by the photospheric boundary conditions: here we discuss recent numerical simulations of the fully compressible 3D MHD equations using the HYPERION code. Loops are forced at their footpoints by random photospheric motions, energizing the field to a state with continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales where heating occurs. Only a fraction of the coronal mass and volume gets heated at any time. Temperature and density are highly structured at scales that, in the solar corona, remain observationally unresolved: the plasma of simulated loops is multithermal, where highly dynamical hotter and cooler plasma strands are scattered throughout the loop at sub-observational scales. We will also compare Reduced MHD simulations with fully compressible simulations and photospheric forcings with different time-scales compared to the Alfvén transit time. Finally, we will discuss the differences between the closed field and open field (solar wind) turbulence heating problem, leading to observational consequences that may be amenable to Parker Solar Probe and Solar Orbiter.

  5. Estimating time-dependent connectivity in marine systems

    USGS Publications Warehouse

    Defne, Zafer; Ganju, Neil K.; Aretxabaleta, Alfredo

    2016-01-01

    Hydrodynamic connectivity describes the sources and destinations of water parcels within a domain over a given time. When combined with biological models, it can be a powerful concept to explain the patterns of constituent dispersal within marine ecosystems. However, providing connectivity metrics for a given domain is a three-dimensional problem: two dimensions in space to define the sources and destinations and a time dimension to evaluate connectivity at varying temporal scales. If the time scale of interest is not predefined, then a general approach is required to describe connectivity over different time scales. For this purpose, we have introduced the concept of a “retention clock” that highlights the change in connectivity through time. Using the example of connectivity between protected areas within Barnegat Bay, New Jersey, we show that a retention clock matrix is an informative tool for multitemporal analysis of connectivity.

  6. Scaling and long-range dependence in option pricing V: Multiscaling hedging and implied volatility smiles under the fractional Black-Scholes model with transaction costs

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Tian

    2011-05-01

    This paper deals with the problem of discrete time option pricing using the fractional Black-Scholes model with transaction costs. Through the ‘anchoring and adjustment’ argument in a discrete time setting, a European call option pricing formula is obtained. The minimal price of an option under transaction costs is obtained. In addition, the relation between scaling and implied volatility smiles is discussed.
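
    One common closed form for the fractional Black-Scholes call (a Necula-type formula; whether it matches this paper's transaction-cost version exactly is an assumption) replaces the usual variance σ²(T−t) by σ²(T^{2H}−t^{2H}), where H is the Hurst exponent.

    ```python
    # Fractional Black-Scholes call price, Necula-type form, transaction
    # costs ignored; H = 0.5 recovers the classical Black-Scholes price.
    import numpy as np
    from scipy.stats import norm

    def frac_bs_call(S, K, r, sigma, H, t, T):
        v = sigma * np.sqrt(T ** (2 * H) - t ** (2 * H))  # effective volatility
        d1 = (np.log(S / K) + r * (T - t)) / v + v / 2
        d2 = d1 - v
        return S * norm.cdf(d1) - K * np.exp(-r * (T - t)) * norm.cdf(d2)

    print(frac_bs_call(S=100, K=100, r=0.03, sigma=0.2, H=0.7, t=0.0, T=1.0))
    ```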

  7. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the simulation of dendritic growth, computational efficiency and problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and to expand the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model introduced here, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model obviously improves the computational efficiency of the three-dimensional phase-field model, with a speedup of 13 over a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization performs better, with a speedup of 1.7 over the basic multi-GPU model when 21 GPUs are used.

  8. A gravitational puzzle.

    PubMed

    Caldwell, Robert R

    2011-12-28

    The challenge to understand the physical origin of the cosmic acceleration is framed as a problem of gravitation: specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering, are shown to provide good tests of the proposed solution. Current data are very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale free above galactic scales.

  9. Concurrent systems and time synchronization

    NASA Astrophysics Data System (ADS)

    Burgin, Mark; Grathoff, Annette

    2018-05-01

    In the majority of scientific fields, system dynamics is described assuming existence of unique time for the whole system. However, it is established theoretically, for example, in relativity theory or in the system theory of time, and validated experimentally that there are different times and time scales in a variety of real systems - physical, chemical, biological, social, etc. In spite of this, there are no wide-ranging scientific approaches to exploration of such systems. Therefore, the goal of this paper is to study systems with this property. We call them concurrent systems because processes in them can go, events can happen and actions can be performed in different time scales. The problem of time synchronization is specifically explored.

  10. Deviations from uniform power law scaling in nonstationary time series

    NASA Technical Reports Server (NTRS)

    Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.

    1997-01-01

    A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
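
    Detrended fluctuation analysis itself is short enough to sketch: integrate the series, detrend it in boxes of size n, and read the scaling exponent off the slope of log F(n) versus log n. The version below runs on synthetic white noise (α ≈ 0.5), not the cardiac data.

    ```python
    # Minimal detrended fluctuation analysis (DFA).
    import numpy as np

    def dfa(x, box_sizes):
        y = np.cumsum(x - np.mean(x))            # integrated profile
        F = []
        for n in box_sizes:
            n_boxes = len(y) // n
            sq = []
            for b in range(n_boxes):
                seg = y[b * n:(b + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
                sq.append(np.mean((seg - trend) ** 2))
            F.append(np.sqrt(np.mean(sq)))       # fluctuation at scale n
        return np.array(F)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(10000)               # uncorrelated noise
    sizes = np.array([16, 32, 64, 128, 256])
    alpha = np.polyfit(np.log(sizes), np.log(dfa(x, sizes)), 1)[0]
    print(alpha)                                 # ~0.5 for white noise
    ```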

  11. The interactions between vegetation and climate seasonality, topography on different time scales under the Budyko framework: case study in China's Loess Plateau

    NASA Astrophysics Data System (ADS)

    Liu, W.; Ning, T.; Shen, H.; Li, Z.

    2017-12-01

    Vegetation, climate seasonality and topography are the main factors controlling the water and heat balance over a catchment, and they are usually empirically formulated into the controlling parameter of a Budyko model. However, their interactions on different time scales have not been fully addressed. Taking 30 catchments in China's Loess Plateau as an example, on the annual scale vegetation coverage was found to be poorly correlated with the climate seasonality index; therefore, both could be parameterized into the Budyko model. On the long-term scale, vegetation coverage tended to have close relationships with topographic conditions and climate seasonality, which was confirmed by multi-collinearity diagnostics; in that sense, vegetation information alone could fit the controlling parameter. By identifying the dominant controlling factors over different time scales, this study simplifies the empirical parameterization of the Budyko formula, though the above relationships require further investigation over other regions and catchments.
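
    As a concrete instance of the framework, Fu's equation is one widely used parameterized Budyko curve (the paper's exact formulation may differ); its single parameter w is where vegetation, seasonality and topography information is absorbed.

    ```python
    # Fu's equation: E/P = 1 + PET/P - (1 + (PET/P)**w)**(1/w).
    import numpy as np

    def fu_evaporation_ratio(pet_over_p, w):
        phi = np.asarray(pet_over_p, dtype=float)   # aridity index PET/P
        return 1.0 + phi - (1.0 + phi ** w) ** (1.0 / w)

    phi = np.linspace(0.1, 3.0, 6)
    for w in (1.5, 2.0, 2.6):                       # illustrative parameter values
        print(w, np.round(fu_evaporation_ratio(phi, w), 3))
    ```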

  12. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.

  13. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-02-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.

  15. Long time existence from interior gluing

    NASA Astrophysics Data System (ADS)

    Chruściel, Piotr T.

    2017-07-01

    We prove completeness-to-the-future of null hypersurfaces emanating outwards from large spheres, in vacuum space-times evolving from general asymptotically flat data with well-defined energy-momentum. The proof uses scaling and a gluing construction to reduce the problem to Bieri’s stability theorem.

  16. Asymptotic analysis of online algorithms and improved scheme for the flow shop scheduling problem with release dates

    NASA Astrophysics Data System (ADS)

    Bai, Danyu

    2015-08-01

    This paper discusses the flow shop scheduling problem to minimise the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation is focused on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale is sufficiently large. To further enhance the quality of the original solutions, the improvement scheme is provided for these algorithms. A new lower bound with performance guarantee is provided, and computational experiments show the effectiveness of these heuristics. Moreover, several results of the single-machine TQCT problem with release dates are also obtained for the deduction of the main theorem.
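
    The rule underlying both online algorithms can be sketched on a single machine (a simplification of the flow shop setting): whenever the machine becomes free, start the shortest job that has already been released, then sum the squared completion times. The instance below is a toy example.

    ```python
    # Shortest-Processing-Time-among-Available (SPTA) rule with release dates,
    # evaluated by the total quadratic completion time (TQCT).
    def spta_total_quadratic_completion_time(jobs):
        """jobs: list of (release_date, processing_time)."""
        pending = sorted(jobs)                    # sorted by release date
        t, total = 0.0, 0.0
        while pending:
            ready = [j for j in pending if j[0] <= t]
            if not ready:                         # idle until the next release
                t = pending[0][0]
                continue
            job = min(ready, key=lambda j: j[1])  # shortest available job
            pending.remove(job)
            t += job[1]                           # completion time of this job
            total += t ** 2
        return total

    print(spta_total_quadratic_completion_time([(0, 3), (1, 1), (4, 2)]))  # 61.0
    ```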

  17. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimation. Another set considers a simplified case and provides a clear picture on the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
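
    The three simplified formulations can be written down directly (f is the sequential fraction, p the number of processors, and g(p) the factor by which the parallel workload grows when memory scales with p); this sketch follows the standard statements of the three laws.

    ```python
    # Fixed-size, fixed-time, and memory-bounded speedup models.
    def amdahl(f, p):                       # fixed-size speedup (Amdahl's law)
        return 1.0 / (f + (1.0 - f) / p)

    def gustafson(f, p):                    # fixed-time (scaled) speedup
        return f + (1.0 - f) * p

    def memory_bounded(f, p, g):            # Sun-Ni memory-bounded speedup
        return (f + (1.0 - f) * g(p)) / (f + (1.0 - f) * g(p) / p)

    f, p = 0.05, 64
    print(amdahl(f, p))
    print(gustafson(f, p))
    print(memory_bounded(f, p, lambda q: 1.0))  # g(p)=1 recovers Amdahl
    print(memory_bounded(f, p, lambda q: q))    # g(p)=p recovers Gustafson
    ```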

  18. Impact of spatio-temporal scale of adjustment on variational assimilation of hydrologic and hydrometeorological data in operational distributed hydrologic models

    NASA Astrophysics Data System (ADS)

    Lee, H.; Seo, D.; McKee, P.; Corby, R.

    2009-12-01

    One of the large challenges in data assimilation (DA) into distributed hydrologic models is to reduce the large degrees of freedom involved in the inverse problem to avoid overfitting. To assess the sensitivity of the performance of DA to the dimensionality of the inverse problem, we design and carry out real-world experiments in which the control vector in variational DA (VAR) is solved at different scales in space and time, e.g., lumped, semi-distributed, and fully distributed in space, and hourly, 6-hourly, etc., in time. The size of the control vector is related to the degrees of freedom in the inverse problem. For the assessment, we use the prototype 4-dimensional variational data assimilator (4DVAR) that assimilates streamflow, precipitation and potential evaporation data into the NWS Hydrology Laboratory's Research Distributed Hydrologic Model (HL-RDHM). In this talk, we present the initial results for a number of basins in Oklahoma and Texas.

  19. MapReduce in the Cloud: A Use Case Study for Efficient Co-Occurrence Processing of MEDLINE Annotations with MeSH.

    PubMed

    Kreuzthaler, Markus; Miñarro-Giménez, Jose Antonio; Schulz, Stefan

    2016-01-01

    Big data resources are difficult to process without a scaled hardware environment that is specifically adapted to the problem. The emergence of flexible cloud-based virtualization techniques promises solutions to this problem. This paper demonstrates how a billion lines can be processed in a reasonable amount of time in a cloud-based environment. Our use case addresses the accumulation of concept co-occurrence data in MEDLINE annotations as a series of MapReduce jobs, which can be scaled and executed in the cloud. Besides showing an efficient way of solving this problem, we generated an additional resource for the scientific community to be used for advanced text mining approaches.
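
    A single-process toy of the MapReduce job structure (the actual pipeline runs on distributed cloud workers, and the annotation data below is made up): map each citation's MeSH term list to term pairs, then reduce by summing counts per pair.

    ```python
    # Toy map/reduce for MeSH term co-occurrence counting.
    from collections import Counter
    from itertools import combinations

    citations = [                              # hypothetical MeSH annotations
        ["Humans", "Neoplasms", "Mutation"],
        ["Humans", "Neoplasms"],
        ["Mice", "Mutation"],
    ]

    def mapper(terms):
        # emit (pair, 1) for every unordered term pair in one citation
        for pair in combinations(sorted(set(terms)), 2):
            yield pair, 1

    counts = Counter()                         # the "reduce" step: sum per key
    for doc in citations:
        for pair, n in mapper(doc):
            counts[pair] += n

    print(counts.most_common(3))               # ('Humans', 'Neoplasms') -> 2
    ```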

  20. A new framework for climate sensitivity and prediction: a modelling perspective

    NASA Astrophysics Data System (ADS)

    Ragone, Francesco; Lucarini, Valerio; Lunkeit, Frank

    2016-03-01

    The sensitivity of climate models to increasing CO2 concentration and the climate response at decadal time-scales are still major factors of uncertainty for the assessment of the long and short term effects of anthropogenic climate change. While the relatively slow progress on these issues is partly due to the inherent inaccuracies of numerical climate models, this also hints at the need for stronger theoretical foundations to the problem of studying climate sensitivity and performing climate change predictions with numerical models. Here we demonstrate that it is possible to use Ruelle's response theory to predict the impact of an arbitrary CO2 forcing scenario on the global surface temperature of a general circulation model. Response theory puts the concept of climate sensitivity on firm theoretical grounds, and addresses rigorously the problem of predictability at different time-scales. Conceptually, these results show that performing climate change experiments with general circulation models is a well defined problem from a physical and mathematical point of view. Practically, they show that considering one single CO2 forcing scenario is enough to construct operators able to predict the response of climatic observables to any other CO2 forcing scenario, without the need to perform additional numerical simulations. We also introduce a general relationship between climate sensitivity and climate response at different time scales, thus providing an explicit definition of the inertia of the system at different time scales. This technique also allows for studying systematically, for a large variety of forcing scenarios, the time horizon at which the climate change signal (in an ensemble sense) becomes statistically significant. While what we report here refers to the linear response, the general theory allows for treating nonlinear effects as well. These results pave the way for redesigning and interpreting climate change experiments from a radically new perspective.
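
    The operator idea can be sketched with a toy linear-response calculation (assumed Green's function and arbitrary units, not the paper's GCM-derived operator): once G is known, the response to any forcing f follows from the convolution dT(t) = sum over s of G(t-s) f(s).

    ```python
    # Linear response: one Green's function predicts any forcing scenario.
    import numpy as np

    t = np.arange(200)                          # time in arbitrary steps
    G = 0.05 * np.exp(-t / 30.0)                # assumed response (Green's) function

    f_step = np.ones(200)                       # scenario 1: step CO2 forcing
    f_ramp = t / 200.0                          # scenario 2: ramp CO2 forcing

    dT_step = np.convolve(G, f_step)[:200]      # response to the step
    dT_ramp = np.convolve(G, f_ramp)[:200]      # predicted response to the ramp
    print(dT_step[-1], dT_ramp[-1])             # same G serves both scenarios
    ```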

  1. Behavioral Problems and Childhood Epilepsy: Parent vs Child Perspectives.

    PubMed

    Eom, Soyong; Caplan, Rochelle; Berg, Anne T

    2016-12-01

    To test whether the reported association between pediatric epilepsy and behavioral problems may be distorted by the use of parental proxy report instruments. Children in the Connecticut Study of Epilepsy were assessed 8-9 years after their epilepsy diagnosis (time-1) with the parent-proxy Child Behavior Check List (CBCL) (ages 6-18 years) or the Young Adult Self-Report (≥18 years of age). For children <18 years of age, parents also completed the Child Health Questionnaire, which contains scales for impact of child's illness on the parents. The same study subjects completed the Adult Self-Report 6-8 years later (time-2). Sibling controls were also tested. Case-control differences were examined for evidence suggesting more behavioral problems in cases with epilepsy than in controls based on proxy- vs self-report measures. At time-1, parent-proxy CBCL scores were significantly higher (worse) for cases than controls (n = 140 matched pairs). After adjustment for Child Health Questionnaire scales reflecting parent emotional and time impact, only 1 case-control difference on the CBCL remained significant. Self-reported Young Adult Self-Report scores did not differ between cases and controls (n = 42 pairs). At time-2, there were no significant self-reported case-control differences on the Adult Self-Report (n = 105 pairs). Parent-proxy behavior measures appear to be influenced by the emotional impact of epilepsy on parents. This may contribute to apparent associations between behavioral problems and childhood epilepsy. Self-report measures in older adolescents (>18 years of age) and young adults do not confirm parental perceptions. Evidence suggesting more behavioral problems in children with epilepsy should be interpreted in light of the source of information.

  2. Cove benchmark calculations using SAGUARO and FEMTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Martinez, M.J.

    1986-10-01

    Three small-scale, time-dependent benchmarking calculations have been made using the finite element codes SAGUARO, to determine hydraulic head and water velocity profiles, and FEMTRAN, to predict the solute transport. Sand and hard-rock porous materials were used. Time scales for the problems, which ranged from tens of hours to thousands of years, have posed no particular difficulty for the two codes. Studies have been performed to determine the effects of computational mesh, boundary conditions, velocity formulation and SAGUARO/FEMTRAN code-coupling on water and solute transport. Results showed that mesh refinement improved mass conservation. Varying the drain-tile size in COVE 1N had a weak effect on the rate at which the tile field drained. Excellent agreement with published COVE 1N data was obtained for the hydrological field and reasonable agreement for the solute-concentration predictions. The question remains whether these types of calculations can be carried out on repository-scale problems using material characteristic curves representing tuff with fractures.

  3. Critical Problems in Very Large Scale Computer Systems

    DTIC Science & Technology

    1988-09-30

    [Garbled OCR of the report's investigator contact listing omitted.] Publications include: W. J. Dally, J. Keen, P. Nuth, J. Larivee, and B. Totty, "Message-Driven Processor Architecture," MIT VLSI Memo No. 88-468, August 1988; W. J. Dally and A. A... (losses and gains), which are the first polynomial-time combinatorial algorithms for this problem. One algorithm runs in O(n²m² lg²n lg B) time and the

  4. School readiness among children with behavior problems at entrance into kindergarten: results from a US national study.

    PubMed

    Montes, Guillermo; Lotyczewski, Bohdan S; Halterman, Jill S; Hightower, Alan D

    2012-03-01

    The impact of behavior problems on kindergarten readiness is not known. Our objective was to estimate the association between behavior problems and kindergarten readiness on a US national sample. In the US educational system, kindergarten is a natural point of entry into formal schooling at age 5 because fewer than half of the children enter kindergarten with prior formal preschool education. Parents of 1,200 children who were scheduled to enter kindergarten for the first time and were members of the Harris Interactive online national panel were surveyed. We defined behavior problems as an affirmative response to the question, "Has your child ever had behavior problems?" We validated this against attention deficit hyperactivity disorder diagnosis, scores on a reliable socioemotional scale, and child's receipt of early intervention services. We used linear, tobit, and logistic regression analyses to estimate the association between having behavior problems and scores in reliable scales of motor, play, speech and language, and school skills and an overall kindergarten readiness indicator. The sample included 176 children with behavior problems for a national prevalence of 14% (confidence interval, 11.5-17.5). Children with behavior problems were more likely to be male and live in households with lower income and parental education. We found that children with behavior problems entered kindergarten with lower speech and language, motor, play, and school skills, even after controlling for demographics and region. Delays were 0.6-1 SD below scores of comparable children without behavior problems. Parents of children with behavior problems were 5.2 times more likely to report their child was not ready for kindergarten. Childhood behavior problems are associated with substantial delays in motor, language, play, school, and socioemotional skills before entrance into kindergarten. Early screening and intervention is recommended.

  5. Topics in geophysical fluid dynamics: Atmospheric dynamics, dynamo theory, and climate dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghil, M.; Childress, S.

    1987-01-01

    This text is the first study to apply systematically the successive bifurcations approach to complex time-dependent processes in large scale atmospheric dynamics, geomagnetism, and theoretical climate dynamics. The presentation of recent results on planetary-scale phenomena in the earth's atmosphere, ocean, cryosphere, mantle and core provides an integral account of mathematical theory and methods together with physical phenomena and processes. The authors address a number of problems in rapidly developing areas of geophysics, bringing into closer contact the modern tools of nonlinear mathematics and the novel problems of global change in the environment.

  6. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

    Prediction errors in renewable generation such as wind and solar make power system dispatch difficult. In this paper, a multi-time scale robust scheduling method is proposed to solve this problem. It reduces the impact of clean-energy prediction errors on the power grid by scheduling over multiple time scales (day-ahead, intraday, real time) and coordinating the dispatched output of various power sources such as hydropower, thermal power, wind power, and gas power. The method adopts robust scheduling to ensure the robustness of the scheduling scheme. By pricing the cost of wind curtailment and load shedding, it converts robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.

  7. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  8. The Development of a Sport-Based Life Skills Scale for Youth to Young Adults, 11-23 Years of Age

    ERIC Educational Resources Information Center

    Cauthen, Hillary Ayn

    2013-01-01

    The purpose of this study was to develop a sport-based life skills scale that assesses 20 life skills: goal setting, time management, communication, coping, problem solving, leadership, critical thinking, teamwork, self-discipline, decision making, planning, organizing, resiliency, motivation, emotional control, patience, assertiveness, empathy,…

  9. Can Management Potential Be Revealed in Groups?

    ERIC Educational Resources Information Center

    Chartrand, P. J.; Jackson, D.

    1971-01-01

    Videotaping small group problem solving sessions and applying the Bales Social Interaction scale can give valuable insight into areas where people (particularly managers) can profitably spend time developing themselves. (Author/EB)

  10. Global change and conservation triage on National Wildlife Refuges

    USGS Publications Warehouse

    Johnson, Fred A.; Eaton, Mitchell; McMahon, Gerard; Raye Nilius,; Mike Bryant,; Dave Case,; Martin, Julien; Wood, Nathan J.; Laura Taylor,

    2015-01-01

    National Wildlife Refuges (NWRs) in the United States play an important role in the adaptation of social-ecological systems to climate change, land-use change, and other global-change processes. Coastal refuges are already experiencing threats from sea-level rise and other change processes that are largely beyond their ability to influence, while at the same time facing tighter budgets and reduced staff. We engaged in workshops with NWR managers along the U.S. Atlantic coast to understand the problems they face from global-change processes and began a multidisciplinary collaboration to use decision science to help address them. We are applying a values-focused approach to base management decisions on the resource objectives of land managers, as well as those of stakeholders who may benefit from the goods and services produced by a refuge. Two insights that emerged from our workshops were a conspicuous mismatch between the scale at which management can influence outcomes and the scale of environmental processes, and the need to consider objectives related to ecosystem goods and services that traditionally have not been explicitly considered by refuges (e.g., protection from storm surge). The broadening of objectives complicates the decision-making process, but also provides opportunities for collaboration with stakeholders who may have agendas different from those of the refuge, as well as an opportunity for addressing problems across scales. From a practical perspective, we recognized the need to (1) efficiently allocate limited staff time and budgets for short-term management of existing programs and resources under the current refuge design and (2) develop long-term priorities for acquiring or protecting new land/habitat to supplement or replace the existing refuge footprint and thus sustain refuge values as the system evolves over time. Structuring the decision-making problem in this manner facilitated a better understanding of the issues of scale and suggested that a long-term solution will require a significant reassessment of objectives to better reflect the comprehensive values of refuges to society. We discuss some future considerations to integrate these two problems into a single framework by developing novel optimization approaches for dynamic problems that account for uncertainty in future conditions.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya

    The sudden release of toxic contaminants that reach indoor spaces can be hazardous to building occupants. To respond effectively, the contaminant release must be quickly detected and characterized to determine unobserved parameters, such as release location and strength. Characterizing the release requires solving an inverse problem. Designing a robust real-time sensor system that solves the inverse problem is challenging because the fate and transport of contaminants is complex, sensor information is limited and imperfect, and real-time estimation is computationally constrained. This dissertation uses a system-level approach, based on a Bayes Monte Carlo framework, to develop sensor-system design concepts and methods. I describe three investigations that explore complex relationships among sensors, network architecture, interpretation algorithms, and system performance. The investigations use data obtained from tracer gas experiments conducted in a real building. The influence of individual sensor characteristics on the sensor-system performance for binary-type contaminant sensors is analyzed. Performance tradeoffs among sensor accuracy, threshold level, and response time are identified; these attributes could not be inferred without a system-level analysis. For example, more accurate but slower sensors are found to outperform less accurate but faster sensors. Secondly, I investigate how the sensor-system performance can be understood in terms of contaminant transport processes and the model representation that is used to solve the inverse problem. The determination of release location and mass is shown to be related to and constrained by transport and mixing time scales. These time scales explain performance differences among different sensor networks. For example, the effect of longer sensor response times is comparably less for releases with longer mixing time scales. The third investigation explores how information fusion from heterogeneous sensors may improve the sensor-system performance and offset the need for more contaminant sensors. Physics- and algorithm-based frameworks are presented for selecting and fusing information from noncontaminant sensors. The frameworks are demonstrated with door-position sensors, which are found to be more useful in natural airflow conditions, but which cannot compensate for poor placement of contaminant sensors. The concepts and empirical findings have the potential to help in the design of sensor systems for more complex building systems. The research has broader relevance to additional environmental monitoring problems, fault detection and diagnostics, and system design.
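
    As a concrete illustration of the Bayes Monte Carlo idea described above, the sketch below samples candidate release scenarios from a prior, scores them against synthetic sensor readings, and forms posterior estimates of release location and strength. The zone grid, the placeholder forward model, and the Gaussian error model are all invented for illustration; they are not the dissertation's actual transport model or sensor network.

    # A minimal Bayes Monte Carlo sketch (all names and models are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 10_000

    # Prior over the unobserved release parameters: zone index and strength.
    locations = rng.integers(0, 20, size=n_samples)    # 20 candidate zones
    strengths = rng.uniform(0.1, 10.0, size=n_samples) # release mass (g)

    def forward_model(loc, strength):
        """Placeholder transport model: predicted readings at 3 sensors."""
        dist = np.abs(np.arange(3) * 7 - loc)          # crude zone distance
        return strength * np.exp(-0.3 * dist)

    observed = np.array([2.1, 0.7, 0.2])               # sensor readings
    sigma = 0.5                                        # assumed sensor noise

    # Likelihood weight of each sampled scenario under a Gaussian error model.
    pred = np.array([forward_model(l, s) for l, s in zip(locations, strengths)])
    log_w = -0.5 * np.sum(((pred - observed) / sigma) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Posterior summaries: most probable release zone and mean strength.
    post_loc = np.bincount(locations, weights=w).argmax()
    post_strength = np.sum(w * strengths)
    print(f"posterior mode location: zone {post_loc}, mean strength: {post_strength:.2f} g")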

  12. Detecting and characterizing high-frequency oscillations in epilepsy: a case study of big data analysis

    NASA Astrophysics Data System (ADS)

    Huang, Liang; Ni, Xuan; Ditto, William L.; Spano, Mark; Carney, Paul R.; Lai, Ying-Cheng

    2017-01-01

    We develop a framework to uncover and analyse dynamical anomalies from massive, nonlinear and non-stationary time series data. The framework consists of three steps: preprocessing of massive datasets to eliminate erroneous data segments, application of the empirical mode decomposition and Hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales, and statistical/scaling analysis of the components. As a case study, we apply our framework to detecting and characterizing high-frequency oscillations (HFOs) from a big database of rat electroencephalogram recordings. We find a striking phenomenon: HFOs exhibit on-off intermittency that can be quantified by algebraic scaling laws. Our framework can be generalized to big data-related problems in other fields such as large-scale sensor data and seismic data analysis.
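
    The Hilbert-transform step of the framework can be sketched in a few lines. Empirical mode decomposition needs a dedicated implementation, so the snippet below assumes a single intrinsic mode function is already in hand (here a synthetic burst) and shows only how instantaneous amplitude and frequency are extracted, from which on-off intermittency statistics such as "off" durations could then be computed.

    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0                                # sampling rate (Hz), assumed
    t = np.arange(0, 2.0, 1.0 / fs)
    # Toy intrinsic mode function: an 80 Hz oscillation in a Gaussian burst.
    imf = np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 1.0) ** 2) / 0.05)

    analytic = hilbert(imf)                    # analytic signal x + i*H[x]
    amplitude = np.abs(analytic)               # instantaneous amplitude envelope
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency (Hz)

    # Flag "on" (burst) states wherever the envelope exceeds a threshold;
    # the durations of the complementary "off" states quantify intermittency.
    on = amplitude > 0.2
    print(f"fraction of time in 'on' state: {on.mean():.3f}")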

  13. Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Constantinescu, Emil M.

    2016-06-23

    Here, this paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
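
    As a minimal stand-in for the high-order additive Runge-Kutta schemes used in the paper, the sketch below applies first-order IMEX Euler to a model ODE with a stiff linear part (playing the role of the acoustic component, treated implicitly) and a nonstiff part (the advective analogue, treated explicitly), so the step size is set by the slow dynamics alone. The model system and coefficients are invented for illustration.

    import numpy as np

    A = np.array([[0.0, 1e4], [-1e4, 0.0]])  # stiff skew part (fast oscillation)
    def g(u):                                # slow, nonstiff forcing
        return np.array([0.1 * u[1], -0.1 * u[0] ** 3])

    u = np.array([1.0, 0.0])
    dt = 1e-2                                # chosen on the slow scale only
    I = np.eye(2)

    for _ in range(100):
        # IMEX Euler: (I - dt*A) u_{n+1} = u_n + dt * g(u_n);
        # the stiff term is implicit, the slow term explicit.
        u = np.linalg.solve(I - dt * A, u + dt * g(u))

    print("state after 1 time unit:", u)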

  14. Characteristic time scales for diffusion processes through layers and across interfaces

    NASA Astrophysics Data System (ADS)

    Carr, Elliot J.

    2018-04-01

    This paper presents a simple tool for characterizing the time scale for continuum diffusion processes through layered heterogeneous media. This mathematical problem is motivated by several practical applications such as heat transport in composite materials, flow in layered aquifers, and drug diffusion through the layers of the skin. In such processes, the physical properties of the medium vary across layers and internal boundary conditions apply at the interfaces between adjacent layers. To characterize the time scale, we use the concept of mean action time, which provides the mean time scale at each position in the medium by utilizing the fact that the transition of the transient solution of the underlying partial differential equation model, from initial state to steady state, can be represented as a cumulative distribution function of time. Using this concept, we define the characteristic time scale for a multilayer diffusion process as the maximum value of the mean action time across the layered medium. For given initial conditions and internal and external boundary conditions, this approach leads to simple algebraic expressions for characterizing the time scale that depend on the physical and geometrical properties of the medium, such as the diffusivities and lengths of the layers. Numerical examples demonstrate that these expressions provide useful insight into explaining how the parameters in the model affect the time it takes for a multilayer diffusion process to reach steady state.
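
    In the notation commonly used in the mean-action-time literature (a sketch, not the paper's exact statement): for a field c(x,t) relaxing from c_0(x) = c(x,0) to the steady state c_inf(x), one forms

    F(x,t) \;=\; 1 \;-\; \frac{c(x,t) - c_{\infty}(x)}{c_{0}(x) - c_{\infty}(x)},
    \qquad
    T(x) \;=\; \int_{0}^{\infty} t\,\frac{\partial F}{\partial t}\,dt
    \;=\; \int_{0}^{\infty}\bigl[1 - F(x,t)\bigr]\,dt,

    where F(x,t) is a cumulative distribution function in t and T(x) is the mean action time at position x; the characteristic time scale of the layered medium is then taken as the maximum of T(x) over the medium.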

  15. Characteristic time scales for diffusion processes through layers and across interfaces.

    PubMed

    Carr, Elliot J

    2018-04-01

    This paper presents a simple tool for characterizing the time scale for continuum diffusion processes through layered heterogeneous media. This mathematical problem is motivated by several practical applications such as heat transport in composite materials, flow in layered aquifers, and drug diffusion through the layers of the skin. In such processes, the physical properties of the medium vary across layers and internal boundary conditions apply at the interfaces between adjacent layers. To characterize the time scale, we use the concept of mean action time, which provides the mean time scale at each position in the medium by utilizing the fact that the transition of the transient solution of the underlying partial differential equation model, from initial state to steady state, can be represented as a cumulative distribution function of time. Using this concept, we define the characteristic time scale for a multilayer diffusion process as the maximum value of the mean action time across the layered medium. For given initial conditions and internal and external boundary conditions, this approach leads to simple algebraic expressions for characterizing the time scale that depend on the physical and geometrical properties of the medium, such as the diffusivities and lengths of the layers. Numerical examples demonstrate that these expressions provide useful insight into explaining how the parameters in the model affect the time it takes for a multilayer diffusion process to reach steady state.

  16. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) highorder CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  17. Evaluation of scaling invariance embedded in short time series.

    PubMed

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and consequently invalidate currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a narrow confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.
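
    For orientation, the snippet below sketches plain diffusion entropy analysis: window sums of the series define a diffusion process, the Shannon entropy of the displacement distribution is estimated at each window length, and the scaling exponent delta is the slope of entropy against the logarithm of the window length. This is the standard estimator, not the correlation-dependent balanced variant the paper develops; the bin count and window lengths are arbitrary choices.

    import numpy as np

    def diffusion_entropy(x, windows, bins=50):
        """Shannon entropy S(l) of windowed sums; slope of S vs ln(l) ~ delta."""
        S = []
        for l in windows:
            # Overlapping windowed displacements y_l(i) = sum of x[i:i+l].
            y = np.convolve(x, np.ones(l), mode="valid")
            p, edges = np.histogram(y, bins=bins, density=True)
            dx = edges[1] - edges[0]
            p = p[p > 0]
            S.append(-np.sum(p * np.log(p) * dx))
        return np.array(S)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(300)             # short series, length of order 10^2
    windows = np.array([2, 4, 8, 16, 32])
    S = diffusion_entropy(x, windows)
    delta = np.polyfit(np.log(windows), S, 1)[0]
    print(f"estimated scaling exponent delta ~ {delta:.2f}")  # ~0.5 for white noise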

  18. Evaluation of Scaling Invariance Embedded in Short Time Series

    PubMed Central

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and consequently invalidate currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a narrow confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records. PMID:25549356

  19. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, and the mode is estimated using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
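
    The dual decomposition step can be illustrated on a toy admission problem: a single coupling capacity constraint is priced into the objective by a multiplier, the per-route subproblems then decouple, and the multiplier is updated by projected subgradient ascent. All numbers are invented, and the oscillating primal iterates echo the abstract's remark that integer-valued subproblems complicate recovery of a feasible schedule.

    import numpy as np

    c = np.array([4.0, 2.0, 3.0, 1.0])   # value of admitting each route's flow
    d = np.array([5, 7, 3, 6])           # demand upper bound per route
    C = 12                               # shared en-route capacity

    lam = 0.0
    for k in range(1, 500):
        # Subproblem per route: max (c_i - lam) * x_i over 0 <= x_i <= d_i,
        # solved independently: admit full demand whenever c_i exceeds the price.
        x = np.where(c > lam, d, 0)
        # Master update: projected subgradient step on the capacity violation,
        # with a diminishing step size to damp oscillation of the price.
        lam = max(0.0, lam + (1.0 / k) * (x.sum() - C))

    print("admitted flow per route:", x, "| shadow price lam ~", round(lam, 2))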

  20. A scale-invariant internal representation of time.

    PubMed

    Shankar, Karthik H; Howard, Marc W

    2012-01-01

    We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
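
    A sketch of the two-stage construction, in the spirit of the paper's notation (the exact conventions may differ): a bank of leaky integrators indexed by decay rate s maintains a running Laplace transform of the stimulus f, and a k-th derivative in s (Post's inversion formula) approximately inverts it,

    \frac{\partial F(s,t)}{\partial t} = -s\,F(s,t) + f(t)
    \quad\Longrightarrow\quad
    F(s,t) = \int_{-\infty}^{t} e^{-s\,(t-t')}\, f(t')\,dt',

    \tilde{f}(\overset{*}{\tau},t) = \frac{(-1)^{k}}{k!}\; s^{k+1}\,
    \frac{\partial^{k} F(s,t)}{\partial s^{k}},
    \qquad \overset{*}{\tau} = -\,k/s .

    Each reconstructed node peaks for stimuli that occurred about k/s in the past, with a spread proportional to that delay, which is precisely the scale invariance described above; as k grows the inversion becomes exact.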

  1. Crack propagation monitoring in a full-scale aircraft fatigue test based on guided wave-Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Qiu, Lei; Yuan, Shenfang; Bao, Qiao; Mei, Hanfei; Ren, Yuanqiang

    2016-05-01

    For aerospace application of structural health monitoring (SHM) technology, the problem of reliable damage monitoring under time-varying conditions must be addressed, and the SHM technology has to be fully validated on real aircraft structures under realistic load conditions on the ground before it can reach the status of flight test. In this paper, the guided wave (GW) based SHM method is applied to a full-scale aircraft fatigue test, which is among the test conditions most similar to a flight test. To deal with the time-varying problem, a GW-Gaussian mixture model (GW-GMM) is proposed. The probability characteristics of GW features introduced by time-varying conditions are modeled by the GW-GMM. The weak cumulative variation trend of crack propagation, which is mixed with the time-varying influence, can be tracked through GW-GMM migration during the on-line damage monitoring process. A best-match-based Kullback-Leibler divergence is proposed to measure the degree of GW-GMM migration and thereby reveal crack propagation. The method is validated in the full-scale aircraft fatigue test. The validation results indicate that reliable crack propagation monitoring of the left landing gear spar and the right wing panel under realistic load conditions is achieved.

  2. A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms

    NASA Astrophysics Data System (ADS)

    Hasbestan, Jaber J.; Senocak, Inanc

    2017-12-01

    Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three dimensional visualization. Lewiner et al. [1] provide a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
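
    A minimal sketch of those two ingredients, using a plain Python dict as the hash table and a standard 3-D Morton (Z-order) bit-interleaving; the key width, level encoding, and stored payload are arbitrary choices. Parent and child keys follow from shifting by three bits, so no pointers need to be stored.

    def part1by2(n: int) -> int:
        """Spread the low 10 bits of n so they occupy every third bit."""
        n &= 0x3FF
        n = (n | (n << 16)) & 0xFF0000FF
        n = (n | (n << 8)) & 0x0300F00F
        n = (n | (n << 4)) & 0x030C30C3
        n = (n | (n << 2)) & 0x09249249
        return n

    def morton3(i: int, j: int, k: int) -> int:
        """Interleave the bits of (i, j, k) into a single Z-order key."""
        return part1by2(i) | (part1by2(j) << 1) | (part1by2(k) << 2)

    octree = {}                               # hash table: (level, key) -> data
    level, i, j, k = 3, 5, 2, 7               # an example refined cell
    key = morton3(i, j, k)
    octree[(level, key)] = {"rho": 1.2}

    parent = (level - 1, key >> 3)            # parent: drop 3 low bits
    children = [(level + 1, (key << 3) | c) for c in range(8)]
    print(f"cell key={key:o} (octal), parent key={parent[1]:o}, "
          f"{len(children)} child keys")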

  3. GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts

    NASA Astrophysics Data System (ADS)

    Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.

    2007-12-01

    The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because their properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle to the problem is having sufficient computer hardware that is able to handle the disparate temporal and spatial scale sizes. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude computing speed over CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented and used to provide new insight into radiation belt dynamics.

  4. Development of the intoxicated personality scale.

    PubMed

    Ward, Rose Marie; Brinkman, Craig S; Miller, Ashlin; Doolittle, James J

    2015-01-01

    To develop the Intoxicated Personality Scale (IPS). Data were collected from 436 college students via an online survey. Through an iterative measurement development process, the resulting IPS was created. The 5 subscales (Good Time, Risky Choices, Risky Sex, Emotional, and Introvert) of the IPS were positively related to alcohol consumption, alcohol problems, drinking motives, alcohol expectancies, and personality. The results suggest that the Intoxicated Personality Scale may be a useful tool for predicting problematic alcohol consumption, alcohol expectancies, and drinking motives.

  5. Scaling fixed-field alternating gradient accelerators with a small orbit excursion.

    PubMed

    Machida, Shinji

    2009-10-16

    A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.

  6. The intergenerational transmission of problem gambling: The mediating role of parental psychopathology.

    PubMed

    Dowling, N A; Shandley, K; Oldenhof, E; Youssef, G J; Thomas, S A; Frydenberg, E; Jackson, A C

    2016-08-01

    The present study investigated the intergenerational transmission of problem gambling and the potential mediating role of parental psychopathology (problem drinking, drug use problems, and mental health issues). The study comprised 3953 participants (1938 males, 2015 females) recruited from a large-scale Australian community telephone survey of adults retrospectively reporting on parental problem gambling and psychopathology during their childhood. Overall, 4.0% [95%CI 3.0, 5.0] (n=157) of participants reported paternal problem gambling and 1.7% [95%CI 1.0, 2.0] (n=68) reported maternal problem gambling. Compared to their peers, participants reporting paternal problem gambling were 5.1 times more likely to be moderate risk gamblers and 10.7 times more likely to be problem gamblers. Participants reporting maternal problem gambling were 1.7 times more likely to be moderate risk gamblers and 10.6 times more likely to be problem gamblers. The results revealed that the relationships between paternal-and-participant and maternal-and-participant problem gambling were significant, but that only the relationship between paternal-and-participant problem gambling remained statistically significant after controlling for maternal problem gambling and sociodemographic factors. Paternal problem drinking and maternal drug use problems partially mediated the relationship between paternal-and-participant problem gambling, and fully mediated the relationship between maternal-and-participant problem gambling. In contrast, parental mental health issues failed to significantly mediate the transmission of gambling problems by either parent. When parental problem gambling was the mediator, there was full mediation of the effect between parental psychopathology and offspring problem gambling for fathers but not mothers. Overall, the study highlights the vulnerability of children from problem gambling households and suggests that it would be of value to target prevention and intervention efforts towards this cohort. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Frontal Neurons Modulate Memory Retrieval across Widely Varying Temporal Scales

    ERIC Educational Resources Information Center

    Zhang, Wen-Hua; Williams, Ziv M.

    2015-01-01

    Once a memory has formed, it is thought to undergo a gradual transition within the brain from short- to long-term storage. This putative process, however, also poses a unique problem to the memory system in that the same learned items must also be retrieved across broadly varying time scales. Here, we find that neurons in the ventrolateral…

  8. DEM GPU studies of industrial scale particle simulations for granular flow civil engineering applications

    NASA Astrophysics Data System (ADS)

    Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine

    2017-06-01

    The use of the Discrete Element Method (DEM) for industrial civil engineering applications is currently limited due to the computational demands when large numbers of particles are considered. The graphics processing unit (GPU), with its highly parallelized hardware architecture, shows potential to enable solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment to simulate industrial-scale granular problems. As an illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial-scale problems on a single portable computer within a day for 30 seconds of process time.

  9. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand sides.

  10. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.

  11. Anthropogenic effects on forest ecosystems at various spatio-temporal scales.

    PubMed

    Bredemeier, Michael

    2002-03-27

    The focus in this review of long-term effects on forest ecosystems is on human impact. As a classification of this differentiated and complex matter, three domains of long-term effects with different scales in space and time are distinguished: (1) the exploitation and conversion history of forests in areas of extended human settlement; (2) long-range air pollution and acid deposition in industrialized regions; and (3) the current global loss of forests and soil degradation. There is an evident link between the first and the third point in the list. Cultivation of primary forestland--with its tremendous effects on land cover--took place in Europe many centuries ago and continued for centuries. Deforestation today is a phenomenon predominantly observed in the developing countries, yet it threatens biotic and soil resources on a global scale. Acidification of forest soils caused by long-range air pollution from anthropogenic emission sources is a regional to continental problem in industrialized parts of the world. As a result of emission reduction legislation, atmospheric acid deposition is currently on the retreat in the richer industrialized regions (e.g., Europe, U.S., Japan); however, because many other regions of the world are at present rapidly developing their polluting industries (e.g., China and India), "acid rain" will most probably remain a serious ecological problem on regional scales. It is believed to have caused considerable destabilization of forest ecosystems, adding to the strong structural and biogeochemical impacts resulting from exploitation history. Deforestation and soil degradation cause the most pressing ecological problems for the time being, at least on the global scale. In many of those regions where loss of forests and soils is now high, it may be extremely difficult or impossible to restore forest ecosystems and soil productivity. Moreover, the driving forces, which are predominantly of a demographic and socioeconomic nature, do not yet seem to be lessening in strength. It can only be hoped that a wise policy of international cooperation and shared aims can cope with this problem in the future.

  12. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput can be found in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.

  13. Intellectual, behavioral, and emotional functioning in children with syndromic craniosynostosis.

    PubMed

    Maliepaard, Marianne; Mathijssen, Irene M J; Oosterlaan, Jaap; Okkerse, Jolanda M E

    2014-06-01

    To examine intellectual, behavioral, and emotional functioning of children who have syndromic craniosynostosis and to explore differences between diagnostic subgroups. A national sample of children who have syndromic craniosynostosis participated in this study. Intellectual, behavioral, and emotional outcomes were assessed by using standardized measures: Wechsler Intelligence Scale for Children, Third Edition, Child Behavior Checklist (CBCL)/6-18, Disruptive Behavior Disorder rating scale (DBD), and the National Institute of Mental Health Diagnostic Interview Schedule for Children. We included 82 children (39 boys) aged 6 to 13 years who have syndromic craniosynostosis. Mean Full-Scale IQ (FSIQ) was in the normal range (M = 96.6; SD = 21.6). However, children who have syndromic craniosynostosis had a 1.9 times higher risk for developing intellectual disability (FSIQ < 85) compared with the normative population (P < .001) and had more behavioral and emotional problems compared with the normative population, including higher scores on the CBCL/6-18, DBD Total Problems (P < .001), Internalizing (P < .01), social problems (P < .001), attention problems (P < .001), and the DBD Inattention (P < .001). Children who have Apert syndrome had lower FSIQs (M = 76.7; SD = 13.3), and children who have Muenke syndrome had more social problems (P < .01), attention problems (P < .05), and inattention problems (P < .01) than the normative population and the other diagnostic subgroups. Although children who have syndromic craniosynostosis have FSIQs similar to the normative population, they are at increased risk for developing intellectual disability, internalizing, social, and attention problems. Higher levels of behavioral and emotional problems were related to lower levels of intellectual functioning.

  14. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
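
    A minimal dense sketch of the forward block Gauss-Seidel sweep on a generic block tridiagonal system with invertible diagonal blocks. The sizes and matrices are synthetic and chosen diagonally dominant, so the sweep happens to converge here; as the paper notes, on real DTOC systems it may diverge on its own and is instead used as a preconditioner for Krylov methods.

    import numpy as np

    rng = np.random.default_rng(2)
    N, m = 6, 4                               # 6 time subintervals, block size 4
    D = [np.eye(m) * 4 + rng.standard_normal((m, m)) * 0.3 for _ in range(N)]
    L = [rng.standard_normal((m, m)) * 0.3 for _ in range(N - 1)]  # sub-diagonal
    U = [rng.standard_normal((m, m)) * 0.3 for _ in range(N - 1)]  # super-diagonal
    b = [rng.standard_normal(m) for _ in range(N)]

    x = [np.zeros(m) for _ in range(N)]
    for sweep in range(20):
        for i in range(N):                    # forward sweep over time blocks
            r = b[i].copy()
            if i > 0:
                r -= L[i - 1] @ x[i - 1]      # uses the already-updated block
            if i < N - 1:
                r -= U[i] @ x[i + 1]          # uses the previous-sweep block
            x[i] = np.linalg.solve(D[i], r)

    # Residual check against the assembled block tridiagonal system.
    A = np.zeros((N * m, N * m))
    for i in range(N):
        A[i*m:(i+1)*m, i*m:(i+1)*m] = D[i]
        if i > 0:
            A[i*m:(i+1)*m, (i-1)*m:i*m] = L[i - 1]
        if i < N - 1:
            A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = U[i]
    xv, bv = np.concatenate(x), np.concatenate(b)
    print("relative residual:", np.linalg.norm(A @ xv - bv) / np.linalg.norm(bv))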

  15. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, as the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth pointing dual spin satellite, is designed in terms of the slow equation.

  16. Naturalness of Electroweak Symmetry Breaking

    NASA Astrophysics Data System (ADS)

    Espinosa, J. R.

    2007-02-01

    After revisiting the hierarchy problem of the Standard Model and its implications for the scale of New Physics, I consider the fine tuning problem of electroweak symmetry breaking in two main scenarios beyond the Standard Model: SUSY and Little Higgs models. The main conclusions are that New Physics should appear within the reach of the LHC; that some SUSY models can solve the hierarchy problem with acceptable residual fine tuning; and, finally, that Little Higgs models generically suffer from large tunings, many times hidden.

  17. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
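
    The generic template behind such segregated solvers (a sketch, not the specific operators of this work) is the block LDU factorization of a 2-by-2 partitioned system, whose exact inverse requires only solves with A and with the Schur complement S:

    \begin{pmatrix} A & B \\ C & D \end{pmatrix}
    =
    \begin{pmatrix} I & 0 \\ C A^{-1} & I \end{pmatrix}
    \begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}
    \begin{pmatrix} I & A^{-1} B \\ 0 & I \end{pmatrix},
    \qquad
    S = D - C A^{-1} B .

    The preconditioner replaces the solves with A and S by cheap multilevel approximations and approximates the off-diagonal coupling inside S, which is where the segregation by discretization type enters.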

  18. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability in an underwater network. Robust point-to-point communication in underwater networks has become increasingly critical in several military and civilian applications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, a simplified underwater channel model is considered in which the time scale distortion on each path is assumed to be the same (a single-scale channel model, in contrast to a more general multi-scale model). A novel technique called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time scale distortion, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi-scale model for the underwater channel and assume that single-scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused by single-scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of sensor node cooperation among rational nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel, investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, and show that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Toward studying the problem of receiver cooperation for a broadcast channel, we propose a game theoretic model for the broadcast channel, derive a game theoretic duality between the multiple access and broadcast channels, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  19. A Constructive Mean-Field Analysis of Multi-Population Neural Networks with Random Synaptic Weights and Stochastic Inputs

    PubMed Central

    Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno

    2008-01-01

    We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631

  20. Assessing the impact of caring for a child with Dravet syndrome: Results of a caregiver survey.

    PubMed

    Campbell, Jonathan D; Whittington, Melanie D; Kim, Chong H; VanderVeen, Gina R; Knupp, Kelly G; Gammaitoni, Arnold

    2018-03-01

    The objective of this study was to describe and quantify the impact of caring for a child with Dravet syndrome (DS) on caregivers. We surveyed DS caregivers at a single institution with a large population of patients with DS. Survey domains included time spent/difficulty performing caregiving tasks (Oberst Caregiving Burden Scale, OCBS); caregiver health-related quality of life (EuroQoL 5D-5L, EQ-5D); and work/activity impairment (Work Productivity and Activity Impairment questionnaire, WPAI). Modified National Health Interview Survey (NHIS) questions were included to assess logistical challenges associated with coordinating medical care. Thirty-four primary caregivers responded, and 30/34 respondents completed the survey. From the OCBS, providing transportation, personal care, and additional household tasks required the greatest caregiver time commitment; arranging for child care, communication, and managing behavioral problems presented the greatest difficulty. EuroQoL 5D-5L domains with the greatest impact on caregivers (0=none, 5=unable/extreme) were anxiety/depression (70% of respondents≥slight problems, 34%≥moderate) and discomfort/pain (57% of respondents≥slight problems, 23%≥moderate). The mean EQ-5D general health visual analogue scale (VAS) score (0=death; 100=perfect health) was 67 (range, 11-94). Respondents who scored <65 were two- to fourfold more likely to report ≥moderate time spent and difficulty managing child behavior problems and assisting with walking, suggesting that children with DS with high degrees of motor or neurodevelopmental problems have an especially high impact on caregiver health. On the WPAI, 26% of caregivers missed >1 day of work in the previous week, with 43% reporting substantial impact (≥6, scale=1-10) on work productivity; 65% reported switching jobs, quitting jobs, or losing a job due to caregiving responsibilities. National Health Interview Survey responses indicated logistical burdens beyond the home; 50% of caregivers made ≥10 outpatient visits in the past year with their child with DS. Caring for patients with DS exerts physical, emotional, and time burdens on caregivers. Supportive services for DS families are identified to highlight an unmet need for DS treatments. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Self-efficacy: a means of identifying problems in nursing education and career progress.

    PubMed

    Harvey, V; McMurray, N

    1994-10-01

    Two nursing self-efficacy scales (academic and clinical) were developed and refined for use in identifying problems in progress in undergraduate nurses. Emergent factors within each scale contained items representing important aspects of nursing education. Both measures showed good internal consistency, test-retest reliability, and construct validity. Sensitivity to content and focus of tuition at time of completion was shown with some changes in factor structure over samples of first year nursing students. Academic self-efficacy (but not clinical self-efficacy) was predictive of course withdrawal. Applications to nursing education, progress in pursuing a nursing career and attrition are discussed.

  2. The dynamics of oceanic fronts. Part 1: The Gulf Stream

    NASA Technical Reports Server (NTRS)

    Kao, T. W.

    1970-01-01

    The establishment and maintenance of the mean hydrographic properties of large scale density fronts in the upper ocean is considered. The dynamics is studied by posing an initial value problem starting with a near surface discharge of buoyant water with a prescribed density deficit into an ambient stationary fluid of uniform density. The full time dependent diffusion and Navier-Stokes equations for a constant Coriolis parameter are used in this study. Scaling analysis reveals three independent length scales of the problem, namely a radius of deformation or inertial length scale, Lo, a buoyancy length scale, ho, and a diffusive length scale, hv. Two basic dimensionless parameters are then formed from these length scales, the thermal (or more precisely, the densimetric) Rossby number, Ro = Lo/ho, and the Ekman number, E = hv/ho. The governing equations are then suitably scaled and the resulting normalized equations are shown to depend on E alone for problems of oceanic interest. Under this scaling, the solutions are similar for all Ro. It is also shown that 1/Ro is a measure of the frontal slope. The governing equations are solved numerically and the scaling analysis is confirmed. The solution indicates that an equilibrium state is established. The front can then be rendered stationary by a barotropic current from a larger scale along-front pressure gradient. In that quasisteady state, and for small values of E, the main thermocline and the inclined isopycnics forming the front have evolved, together with the along-front jet. Conservation of potential vorticity is also obtained in the light water pool. The surface jet exhibits anticyclonic shear in the light water pool and cyclonic shear across the front.

  3. Nonlinear zero-sum differential game analysis by singular perturbation methods

    NASA Technical Reports Server (NTRS)

    Sinar, J.; Farber, N.

    1982-01-01

    A class of nonlinear, zero-sum differential games exhibiting time-scale separation properties can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach are proposed. The accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.

  4. Backpropagation and ordered derivatives in the time scales calculus.

    PubMed

    Seiffertt, John; Wunsch, Donald C

    2010-08-01

    Backpropagation is the most widely used neural network learning technique. It is based on the mathematical notion of an ordered derivative. In this paper, we present a formulation of ordered derivatives and the backpropagation training algorithm using the important emerging area of mathematics known as the time scales calculus. This calculus, with its potential for application to a wide variety of inter-disciplinary problems, is becoming a key area of mathematics. It is capable of unifying continuous and discrete analysis within one coherent theoretical framework. Using this calculus, we present here a generalization of backpropagation which is appropriate for cases beyond the specifically continuous or discrete. We develop a new multivariate chain rule of this calculus, define ordered derivatives on time scales, prove a key theorem about them, and derive the backpropagation weight update equations for a feedforward multilayer neural network architecture. By drawing together the time scales calculus and the area of neural network learning, we present the first connection of two major fields of research.
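
    As a point of reference for the generalization described above, the purely discrete special case reduces to the familiar backpropagation weight updates. The sketch below is a textbook illustration of that discrete case only (the two-layer architecture, variable names, and learning rate are assumptions, not the paper's time-scales formulation):

        import numpy as np

        # Textbook discrete-time backpropagation for a tiny 2-layer network:
        # the special case that the time-scales calculus generalizes.
        def backprop_step(x, y, W1, W2, lr=0.1):
            h = np.tanh(W1 @ x)                         # hidden layer
            y_hat = W2 @ h                              # linear output
            e = y_hat - y                               # output error
            dW2 = np.outer(e, h)                        # ordered derivative wrt W2
            dW1 = np.outer((W2.T @ e) * (1 - h**2), x)  # chain rule back to W1
            return W1 - lr * dW1, W2 - lr * dW2

        rng = np.random.default_rng(0)
        W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
        x, y = np.array([0.5, -0.2]), np.array([1.0])
        for _ in range(100):
            W1, W2 = backprop_step(x, y, W1, W2)
        print(W2 @ np.tanh(W1 @ x))                     # approaches [1.0]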

  5. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
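
    The Jacobian-free Newton-Krylov idea mentioned above is compact enough to sketch: the Krylov solver only ever needs Jacobian-vector products, which can be approximated by differencing the nonlinear residual, so the Jacobian is never formed. A minimal generic sketch with an invented toy residual (the physics-based preconditioner from the abstract is omitted):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        # Minimal Jacobian-free Newton-Krylov loop: J(u) v is approximated by a
        # finite difference of the residual F, so J is never formed explicitly.
        def jfnk(F, u, tol=1e-8, eps=1e-7, max_newton=20):
            for _ in range(max_newton):
                r = F(u)
                if np.linalg.norm(r) < tol:
                    break
                Jv = LinearOperator((u.size, u.size),
                                    matvec=lambda v: (F(u + eps * v) - r) / eps)
                du, _ = gmres(Jv, -r)          # inexact inner Krylov solve
                u = u + du
            return u

        # Toy residual: F(u) = u**3 - 1 componentwise; the root is u = [1, 1].
        print(jfnk(lambda u: u**3 - 1.0, np.array([2.0, 0.5])))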

  6. Time reversal imaging, Inverse problems and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Montagner, J.; Larmat, C. S.; Capdeville, Y.; Kawakatsu, H.; Fink, M.

    2010-12-01

    With the increasing power of computers and numerical techniques (such as spectral element methods), it is possible to address a new class of seismological problems. The propagation of seismic waves in heterogeneous media is simulated more and more accurately, and new applications are developed, in particular time reversal methods and adjoint tomography in the three-dimensional Earth. Since the pioneering work of J. Claerbout, theorized by A. Tarantola, many similarities have been found between time-reversal methods, cross-correlation techniques, inverse problems, and adjoint tomography. By using normal mode theory, we generalize the scalar approach of Draeger and Fink (1999) and Lobkis and Weaver (2001) to the 3D elastic Earth, in order to understand the time-reversal method theoretically on the global scale. It is shown how to relate time-reversal methods, on one hand, with auto-correlations of seismograms for source imaging and, on the other hand, with cross-correlations between receivers for structural imaging and retrieving the Green function. Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics, and non-destructive testing, and to seismic waves in seismology for earthquake imaging. In the case of source imaging, time reversal techniques enable automatic location in time and space, as well as retrieval of the focal mechanism, of earthquakes or unknown environmental sources. We present here some applications at the global scale of these techniques on synthetic tests and on real data, such as Sumatra-Andaman (Dec. 2004), Haiti (Jan. 2010), as well as glacial earthquakes and seismic hum.

  7. The influence of chronic health problems on work ability and productivity at work: a longitudinal study among older employees.

    PubMed

    Leijten, Fenna R M; van den Heuvel, Swenne G; Ybema, Jan Fekke; van der Beek, Allard J; Robroek, Suzan J W; Burdorf, Alex

    2014-09-01

    This study aimed to assess the influence of chronic health problems on work ability and productivity at work among older employees, using different methodological approaches in the analysis of longitudinal studies. Data from employees, aged 45-64, of the longitudinal Study on Transitions in Employment, Ability and Motivation were used (N=8411). Using three annual online questionnaires, we assessed the presence of seven chronic health problems, work ability (scale 0-10), and productivity at work (scale 0-10). Three linear regression generalized estimating equations (GEE) models were used. The time-lag model analyzed the relation of health problems with work ability and productivity at work after one year; the autoregressive model adjusted for work ability and productivity in the preceding year; and the third model assessed the relation of incidence and recovery with changes in work ability and productivity at work within the same year. Workers with health problems had lower work ability at one-year follow-up than workers without these health problems, varying from a 2.0% reduction with diabetes mellitus to a 9.5% reduction with psychological health problems relative to the overall mean (time-lag). Work ability of persons with health problems decreased slightly more during one-year follow-up than that of persons without these health problems, ranging from 1.4% with circulatory to 5.9% with psychological health problems (autoregressive). The incidence of health problems was related to larger decreases in work ability, from 0.6% with diabetes mellitus to 19.0% with psychological health problems, than recovery was to changes in work ability, which ranged from a 1.8% decrease with circulatory to an 8.5% increase with psychological health problems (incidence-recovery). Only workers with musculoskeletal and psychological health problems had lower productivity at work at one-year follow-up than workers without those health problems (1.2% and 5.6%, respectively, time-lag). All methodological approaches indicated that chronic health problems were associated with decreased work ability and, to a much lesser extent, lower productivity at work. The choice of a particular methodological approach considerably influenced the strength of the associations, with the incidence of health problems resulting in the largest decreases in work ability and productivity at work.
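
    The time-lag GEE approach described above can be illustrated in a few lines. The sketch below runs on invented synthetic data (variable names and effect sizes are assumptions, not the study's), using statsmodels' GEE with an exchangeable working correlation over repeated measures per worker:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical long-format panel: one row per worker-year.
        rng = np.random.default_rng(0)
        n = 200
        df = pd.DataFrame({
            "worker_id": np.repeat(np.arange(n // 2), 2),
            "psych_problem": rng.integers(0, 2, n),
            "age": rng.integers(45, 65, n),
        })
        df["work_ability_next"] = 8 - 0.8 * df["psych_problem"] + rng.normal(0, 1, n)

        # Time-lag model: next year's work ability regressed on this year's
        # health problem, with repeated measures clustered within workers.
        model = smf.gee("work_ability_next ~ psych_problem + age",
                        groups="worker_id", data=df,
                        cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())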

  8. Analysis of DNA Sequences by an Optical Time-Integrating Correlator: Proposal

    DTIC Science & Technology

    1991-11-01

    [Extraction fragments only: this record preserves pieces of the report's table of contents and figure/table captions. Recoverable content: the proposal concerns a time-integrating optical correlator for DNA sequence analysis, with representations of the DNA bases in which each base is encoded as a 7-bit pseudorandom sequence, a DNA analysis strategy built on such a system, and results plotted on logarithmic and linear scales.]

  9. Exponential bound in the quest for absolute zero

    NASA Astrophysics Data System (ADS)

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  10. Exponential bound in the quest for absolute zero.

    PubMed

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  11. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which leads to a great improvement in system fault tolerance: a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by a distributed algorithm in a multi-agent system. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast under bandwidth limitations. Distributed algorithms have been applied to a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
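
    The evacuation-path project above names a Bellman-Ford dual-subgradient method; its shortest-path core is the classic Bellman-Ford relaxation, sketched below (the dual-subgradient congestion terms and the distributed implementation are omitted, and the example graph is invented):

        # Classic Bellman-Ford relaxation: the shortest-path building block of
        # the evacuation planner named above, not the proposed distributed scheme.
        def bellman_ford(n, edges, source):
            """edges: list of (u, v, weight); returns distances, or None if a
            negative cycle is reachable."""
            dist = [float("inf")] * n
            dist[source] = 0.0
            for _ in range(n - 1):               # at most n-1 relaxation rounds
                for u, v, w in edges:
                    if dist[u] + w < dist[v]:
                        dist[v] = dist[u] + w
            for u, v, w in edges:                # one extra pass detects cycles
                if dist[u] + w < dist[v]:
                    return None
            return dist

        print(bellman_ford(4, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)], 0))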

  12. Time-Domain Evaluation of Fractional Order Controllers’ Direct Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ma, Chengbin; Hori, Yoichi

    Fractional Order Control (FOC), in which the controlled systems and/or controllers are described by fractional order differential equations, has been applied to various control problems. Though FOC's theoretical superiority is not difficult to appreciate, its realization remains somewhat problematic. Since fractional order systems are infinite-dimensional, a proper approximation by a finite difference equation is needed to realize the designed fractional order controllers. In this paper, the existing direct discretization methods are evaluated by their convergence and by time-domain comparison with a baseline case. The proposed sampling-time scaling property is used to calculate the baseline case with full memory length; this novel discretization method is based on the classical trapezoidal rule but with scaled sampling time. Comparative studies show that good performance and a simple algorithm make the Short Memory Principle method the most practical of the approaches. FOC research is still at an early stage, but its applications in modeling and its robustness against non-linearities are promising. Parallel to the development of FOC theories, applying FOC to various control problems is also crucially important and a top-priority issue.
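
    The Short Memory Principle singled out above truncates the (in principle unbounded) history that a fractional derivative carries. A hedged generic sketch of the standard Grünwald-Letnikov discretization with an optional memory cutoff (my illustration, not a reproduction of the paper's evaluated discretizations):

        import numpy as np

        def gl_fractional_derivative(f, alpha, h, memory=None):
            """Grünwald-Letnikov approximation of the order-alpha derivative of
            samples f (spacing h). 'memory' truncates the history per the Short
            Memory Principle; None keeps the full history."""
            n = len(f)
            L = n if memory is None else min(n, memory)
            # Coefficients (-1)^j * C(alpha, j) via the standard recurrence.
            c = np.empty(L)
            c[0] = 1.0
            for j in range(1, L):
                c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
            d = np.zeros(n)
            for k in range(n):
                m = min(k + 1, L)
                d[k] = h ** (-alpha) * np.dot(c[:m], f[k::-1][:m])
            return d

        t = np.linspace(0, 1, 101)
        # Half-derivative of f(t) = t at t = 1 is 2*sqrt(1/pi) ~ 1.128.
        print(gl_fractional_derivative(t, 0.5, t[1] - t[0], memory=30)[-1])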

  13. Associations among types of impulsivity, substance use problems and neurexin-3 polymorphisms.

    PubMed

    Stoltenberg, Scott F; Lehmann, Melissa K; Christ, Christa C; Hersrud, Samantha L; Davies, Gareth E

    2011-12-15

    Some of the genetic vulnerability for addiction may be mediated by impulsivity. This study investigated relationships among impulsivity, substance use problems and six neurexin-3 (NRXN3) polymorphisms. Neurexins (NRXNs) are presynaptic transmembrane proteins that play a role in the development and function of synapses. Impulsivity was assessed with the Barratt Impulsiveness Scale Version 11 (BIS-11), the Boredom Proneness Scale (BPS) and the TIME paradigm; alcohol problems with the Michigan Alcoholism Screening Test (MAST); drug problems with the Drug Abuse Screening Test (DAST-20); and regular tobacco use with a single question. Participants (n=439 Caucasians, 64.7% female) donated buccal cells for genotyping. Six NRXN3 polymorphisms were genotyped: rs983795, rs11624704, rs917906, rs1004212, rs10146997 and rs8019381. A dual luciferase assay was conducted to determine whether allelic variation at rs917906 regulated gene expression. In general, impulsivity was significantly higher in those who regularly used tobacco and/or had alcohol or drug problems. In men, there were modest associations between rs11624704 and attentional impulsivity (p=0.005) and between rs1004212 and alcohol problems (p=0.009). In women, there were weak associations between rs10146997 and TIME estimation (p=0.03); and between rs1004212 and drug problems (p=0.03). The dual luciferase assay indicated that C and T alleles of rs917906 did not differentially regulate gene expression in vitro. Associations between impulsivity, substance use problems and polymorphisms in NRXN3 may be gender specific. Impulsivity is associated with substance use problems and may provide a useful intermediate phenotype for addiction. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Harmonizing Screening for Gambling Problems in Epidemiological Surveys – Development of the Rapid Screener for Problem Gambling (RSPG)

    PubMed Central

    Challet-Bouju, Gaëlle; Perrot, Bastien; Romo, Lucia; Valleur, Marc; Magalon, David; Fatséas, Mélina; Chéreau-Boudet, Isabelle; Luquiens, Amandine; Grall-Bronnec, Marie; Hardouin, Jean-Benoit

    2016-01-01

    Background and aims The aim of this study was to test the screening properties of several combinations of items from gambling scales, in order to harmonize screening of gambling problems in epidemiological surveys. The objective was to propose two brief screening tools (three items or less) for a use in interviews and self-administered questionnaires. Methods We tested the screening properties of combinations of items from several gambling scales, in a sample of 425 gamblers (301 non-problem gamblers and 124 disordered gamblers). Items tested included interview-based items (Pathological Gambling section of the DSM-IV, lifetime history of problem gambling, monthly expenses in gambling, and abstinence of 1 month or more) and self-report items (South Oaks Gambling Screen, Gambling Attitudes, and Beliefs Survey). The gold standard used was the diagnosis of a gambling disorder according to the DSM-5. Results Two versions of the Rapid Screener for Problem Gambling (RSPG) were developed: the RSPG-Interview (RSPG-I), being composed of two interview items (increasing bets and loss of control), and the RSPG-Self-Assessment (RSPG-SA), being composed of three self-report items (chasing, guiltiness, and perceived inability to stop). Discussion and conclusions We recommend using the RSPG-SA/I for screening problem gambling in epidemiological surveys, with the version adapted for each purpose (RSPG-I for interview-based surveys and RSPG-SA for self-administered surveys). This first triage of potential problem gamblers must be supplemented by further assessment, as it may overestimate the proportion of problem gamblers. However, a first triage has the great advantage of saving time and energy in large-scale screening for problem gambling. PMID:27348558

  15. Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell W

    This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems; the subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.

  16. Set-free Markov state model building

    NASA Astrophysics Data System (ADS)

    Weber, Marcus; Fackeldey, Konstantin; Schütte, Christof

    2017-03-01

    Molecular dynamics (MD) simulations face challenging problems since the time scales of interest often are much longer than what is possible to simulate; and even if sufficiently long simulations are possible the complex nature of the resulting simulation data makes interpretation difficult. Markov State Models (MSMs) help to overcome these problems by making experimentally relevant time scales accessible via coarse grained representations that also allow for convenient interpretation. However, standard set-based MSMs exhibit some caveats limiting their approximation quality and statistical significance. One of the main caveats results from the fact that typical MD trajectories repeatedly re-cross the boundary between the sets used to build the MSM which causes statistical bias in estimating the transition probabilities between these sets. In this article, we present a set-free approach to MSM building utilizing smooth overlapping ansatz functions instead of sets and an adaptive refinement approach. This kind of meshless discretization helps to overcome the recrossing problem and yields an adaptive refinement procedure that allows us to improve the quality of the model while exploring state space and inserting new ansatz functions into the MSM.
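
    As a toy illustration of the meshless idea (a sketch under simplifying assumptions, not the authors' algorithm): replace crisp sets with overlapping Gaussian ansatz functions that form a partition of unity, and estimate transitions from the soft memberships along a trajectory, which removes the hard set boundaries that trajectories repeatedly re-cross:

        import numpy as np

        # Soft memberships from overlapping Gaussian ansatz functions that are
        # normalized to a partition of unity (rows sum to one).
        def soft_memberships(x, centers, width):
            w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)
            return w / w.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(1)
        traj = np.cumsum(rng.normal(size=5000)) * 0.05      # toy 1-D trajectory
        chi = soft_memberships(traj, centers=np.linspace(-3, 3, 7), width=1.0)

        lag = 10
        C = chi[:-lag].T @ chi[lag:]                        # soft count matrix
        T = C / C.sum(axis=1, keepdims=True)                # row-stochastic MSM
        print(np.round(T, 2))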

  17. Monge-Ampère simulation of fourth order PDEs in two dimensions with application to elastic-electrostatic contact problems

    NASA Astrophysics Data System (ADS)

    DiPietro, Kelsey L.; Lindsay, Alan E.

    2017-11-01

    We present an efficient moving mesh method for the simulation of fourth order nonlinear partial differential equations (PDEs) in two dimensions using the Parabolic Monge-Ampère (PMA) equation. PMA methods have been successfully applied to the simulation of second order problems, but not on systems with higher order equations which arise in many topical applications. Our main application is the resolution of fine scale behavior in PDEs describing elastic-electrostatic interactions. The PDE system considered has multiple parameter dependent singular solution modalities, including finite time singularities and sharp interface dynamics. We describe how to construct a dynamic mesh algorithm for such problems which incorporates known self similar or boundary layer scalings of the underlying equation to locate and dynamically resolve fine scale solution features in these singular regimes. We find that a key step in using the PMA equation for mesh generation in fourth order problems is the adoption of a high order representation of the transformation from the computational to physical mesh. We demonstrate the efficacy of the new method on a variety of examples and establish several new results and conjectures on the nature of self-similar singularity formation in higher order PDEs.

  18. Convective organization in the Pacific ITCZ: Merging OLR, TOVS, and SSM/I information

    NASA Technical Reports Server (NTRS)

    Hayes, Patrick M.; Mcguirk, James P.

    1993-01-01

    One of the most striking features of the planet's long-time average cloudiness is the zonal band of concentrated convection lying near the equator. Large-scale variability of the Intertropical Convergence Zone (ITCZ) has been well documented in studies of the planetary spatial scales and seasonal/annual/interannual temporal cycles of convection. Smaller-scale variability is difficult to study over the tropical oceans for several reasons. Conventional surface and upper-air data are virtually non-existent in some regions; diurnal and annual signals overwhelm fluctuations on other time scales; and analyses of variables such as geopotential and moisture are generally less reliable in the tropics. These problems make the use of satellite data an attractive alternative and the preferred means to study variability of tropical weather systems.

  19. Investigations on the hierarchy of reference frames in geodesy and geodynamics

    NASA Technical Reports Server (NTRS)

    Grafarend, E. W.; Mueller, I. I.; Papo, H. B.; Richter, B.

    1979-01-01

    Problems related to reference directions were investigated. Space and time variant angular parameters are illustrated in hierarchic structures or towers. Using least squares techniques, model towers of triads are presented which allow the formation of linear observation equations. Translational and rotational degrees of freedom (origin and orientation) are discussed along with the notion of length and scale degrees of freedom. According to the notion of scale parallelism, scale factors with respect to a unit length are given. Three-dimensional geodesy was constructed from the set of three base vectors (gravity, earth-rotation and the ecliptic normal vector). Space and time variations are given with respect to a polar and singular value decomposition or in terms of changes in translation, rotation, deformation (shear, dilatation or angular and scale distortions).

  20. A global time-dependent model of thunderstorm electricity. I - Mathematical properties of the physical and numerical models

    NASA Technical Reports Server (NTRS)

    Browning, G. L.; Tzur, I.; Roble, R. G.

    1987-01-01

    A time-dependent model is introduced that can be used to simulate the interaction of a thunderstorm with its global electrical environment. The model solves the continuity equation of the Maxwell current, which is assumed to be composed of the conduction, displacement, and source currents. Boundary conditions which can be used in conjunction with the continuity equation to form a well-posed initial-boundary value problem are determined. Properties of various components of solutions of the initial-boundary value problem are analytically determined. The results indicate that the problem has two time scales, one determined by the background electrical conductivity and the other by the time variation of the source function. A numerical method for obtaining quantitative results is introduced, and its properties are studied. Some simulation results on the evolution of the displacement and conduction currents during the electrification of a storm are presented.

  1. Variable Generation Power Forecasting as a Big Data Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haupt, Sue Ellen; Kosovic, Branko

    To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  2. Variable Generation Power Forecasting as a Big Data Problem

    DOE PAGES

    Haupt, Sue Ellen; Kosovic, Branko

    2016-10-10

    To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  3. Performance of Quantum Annealers on Hard Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Pokharel, Bibek; Venturelli, Davide; Rieffel, Eleanor

    Quantum annealers have been employed to attack a variety of optimization problems. We compared the performance of the current D-Wave 2X quantum annealer to that of the previous generation D-Wave Two quantum annealer on scheduling-type planning problems. Further, we compared the effect of different anneal times, embeddings of the logical problem, and different settings of the ferromagnetic coupling JF across the logical vertex-model on the performance of the D-Wave 2X quantum annealer. Our results show that at the best settings, the scaling of expected anneal time to solution for the D-Wave 2X is better than that of the D-Wave Two, but still inferior to that of state-of-the-art classical solvers on these problems. We discuss the implications of our results for the design and programming of future quantum annealers. Supported by NASA Ames Research Center.

  4. IAU resolutions on reference systems and time scales in practice

    NASA Astrophysics Data System (ADS)

    Brumberg, V. A.; Groten, E.

    2001-03-01

    To be consistent with the IAU/IUGG (1991) resolutions, the ICRS and ITRS should be treated as four-dimensional reference systems with TCB and TCG time scales, respectively, interrelated by a four-dimensional general relativistic transformation. This two-way transformation is given in a form adapted for actual application. The use of TB and TT instead of TCB and TCG, respectively, involves scaling factors that complicate the use of this transformation in practice. The new IAU B1 (2000) resolution is commented on, bearing in mind some points of possible confusion in its practical application. The problem of the relationship between the theory of reference systems and the parameters of common relevance to astronomy, geodesy, and geodynamics is briefly outlined.

  5. Robust preview control for a class of uncertain discrete-time systems with time-varying delay.

    PubMed

    Li, Li; Liao, Fucheng

    2018-02-01

    This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This transforms the tracking problem into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and the preview controller design method is proposed based on the scaled small gain theorem and the linear matrix inequality (LMI) technique. The method proposed in this paper not only resolves the difficulty of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
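
    The LMI machinery used above can be illustrated with the simplest member of the family. The sketch below (a generic discrete-time Lyapunov stability LMI in cvxpy, not the paper's preview-control conditions; A and the tolerance are invented) searches for a certificate P > 0 with A'PA - P < 0:

        import cvxpy as cp
        import numpy as np

        # A discrete-time system x+ = A x is asymptotically stable iff there
        # exists P > 0 with A' P A - P < 0; we pose this as an LMI feasibility
        # problem. Generic illustration only, not the paper's conditions.
        A = np.array([[0.9, 0.2], [0.0, 0.7]])
        P = cp.Variable((2, 2), symmetric=True)
        eps = 1e-6
        cons = [P >> eps * np.eye(2),
                A.T @ P @ A - P << -eps * np.eye(2)]
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve(solver=cp.SCS)
        print(prob.status, np.round(P.value, 3))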

  6. Development of a scale to measure problem use of short message service: the SMS Problem Use Diagnostic Questionnaire.

    PubMed

    Rutland, J Brian; Sheets, Tilman; Young, Tony

    2007-12-01

    This exploratory study examines a subset of mobile phone use, the compulsive use of short message service (SMS) text messaging. A measure of SMS use, the SMS Problem Use Diagnostic Questionnaire (SMS-PUDQ), was developed and found to possess acceptable reliability and validity when compared to other measures such as self-reports of time spent using SMS and scores on a survey of problem mobile phone use. Implications for the field of addiction research, technological and behavioral addictions in particular, are discussed, and directions for future research are suggested.

  7. Asymptotic analysis of SPTA-based algorithms for no-wait flow shop scheduling problem with release dates.

    PubMed

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
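
    Reading SPTA as a shortest-processing-time-available list rule (an assumption on the acronym; the papers' exact algorithms for the no-wait flow shop are not reproduced here), a single-machine sketch with release dates looks as follows: whenever the machine frees up, start the shortest already-released job, idling to the next release if none is available.

        import heapq

        # SPTA-style list schedule on one machine with release dates: a hedged
        # single-machine simplification, not the papers' no-wait flow shop code.
        def spta_total_completion_time(jobs):
            """jobs: list of (release_date, processing_time)."""
            pending = sorted(jobs)                    # by release date
            heap, t, i, total = [], 0.0, 0, 0.0
            while i < len(pending) or heap:
                while i < len(pending) and pending[i][0] <= t:
                    heapq.heappush(heap, pending[i][1])   # released; keyed by p
                    i += 1
                if not heap:
                    t = pending[i][0]                 # idle until next release
                    continue
                t += heapq.heappop(heap)              # run shortest released job
                total += t                            # accumulate completion times
            return total

        print(spta_total_completion_time([(0, 3), (1, 1), (2, 4)]))  # 15.0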

  8. Asymptotic Analysis of SPTA-Based Algorithms for No-Wait Flow Shop Scheduling Problem with Release Dates

    PubMed Central

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774

  9. Combined aerodynamic and structural dynamic problem emulating routines (CASPER): Theory and implementation

    NASA Technical Reports Server (NTRS)

    Jones, William H.

    1985-01-01

    The Combined Aerodynamic and Structural Dynamic Problem Emulating Routines (CASPER) is a collection of data-base modification computer routines that can be used to simulate Navier-Stokes flow through realistic, time-varying internal flow fields. The Navier-Stokes equation used involves calculations in all three dimensions and retains all viscous terms. The only term neglected in the current implementation is gravitation. The solution approach is of an iterative, time-marching nature. Calculations are based on Lagrangian aerodynamic elements (aeroelements). It is assumed that the relationships between a particular aeroelement and its five nearest neighbor aeroelements are sufficient to make a valid simulation of Navier-Stokes flow on a small scale and that the collection of all small-scale simulations makes a valid simulation of a large-scale flow. In keeping with these assumptions, it must be noted that CASPER produces an imitation or simulation of Navier-Stokes flow rather than a strict numerical solution of the Navier-Stokes equation. CASPER is written to operate under the Parallel, Asynchronous Executive (PAX), which is described in a separate report.

  10. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed the inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of the Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175

  11. Adaptive LES Methodology for Turbulent Flow Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleg V. Vasilyev

    2008-06-12

    Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003) and as cost effective accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near future computational resources that exist and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost for the DNS of three dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three dimensional DNS scales as Re^{9/4} due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic turbulence have recently been completed at the Japanese Earth Simulator (Yokokawa et al. 2002, Kaneda et al. 2003) using a resolution of 4096^3 (approximately 10^11) grid points with a Taylor-scale Reynolds number of 1217 (Re ~ 10^6). Impressive as these calculations are, performed on one of the world's fastest super computers, more brute computational power would be needed to simulate the flow over the fuselage of a commercial aircraft at cruising speed. Such a calculation would require on the order of 10^16 grid points and would have a Reynolds number in the range of 10^8. Such a calculation would take several thousand years to simulate one minute of flight time on today's fastest super computers (Moin & Kim 1997). Even using state-of-the-art zonal approaches, which allow DNS calculations that resolve the necessary range of scales within predefined 'zones' in the flow domain, this calculation would take far too long for the result to be of engineering interest when it is finally obtained. Since computing power, memory, and time are all scarce resources, the problem of simulating turbulent flows has become one of how to abstract or simplify the complexity of the physics represented in the full Navier-Stokes (NS) equations in such a way that the 'important' physics of the problem is captured at a lower cost. To do this, a portion of the modes of the turbulent flow field needs to be approximated by a low order model that is cheaper than the full NS calculation. This model can then be used along with a numerical simulation of the 'important' modes of the problem that cannot be well represented by the model. The decision of what part of the physics to model and what kind of model to use has to be based on what physical properties are considered 'important' for the problem. It should be noted that 'nothing is free', so any use of a low order model will by definition lose some information about the original flow.

  12. Beyond Self-Report: Tools to Compare Estimated and Real-World Smartphone Use

    PubMed Central

    Andrews, Sally; Ellis, David A.; Shaw, Heather; Piwek, Lukasz

    2015-01-01

    Psychologists typically rely on self-report data when quantifying mobile phone usage, despite little evidence of its validity. In this paper we explore the accuracy of using self-reported estimates when compared with actual smartphone use. We also include source code to process and visualise these data. We compared 23 participants’ actual smartphone use over a two-week period with self-reported estimates and the Mobile Phone Problem Use Scale. Our results indicate that estimated time spent using a smartphone may be an adequate measure of use, unless a greater resolution of data is required. Estimates concerning the number of times an individual used their phone across a typical day did not correlate with actual smartphone use. Neither estimated duration nor number of uses correlated with the Mobile Phone Problem Use Scale. We conclude that estimated smartphone use should be interpreted with caution in psychological research. PMID:26509895

  13. A Reliable and Real-Time Tracking Method with Color Distribution

    PubMed Central

    Zhao, Zishu; Han, Yuqi; Xu, Tingfa; Li, Xiangmin; Song, Haiping; Luo, Jiqiang

    2017-01-01

    Occlusion is a challenging problem in visual tracking. In recent years, many trackers have been explored to address it, but most of them cannot track the target in real time because of their heavy computational cost. The spatio-temporal context (STC) tracker was proposed to accelerate the task by calculating context information in the Fourier domain, but its performance in handling occlusion is limited. In this paper, we take advantage of the high efficiency of the STC tracker and employ salient prior model information based on color distribution to improve the robustness. Furthermore, we exploit a scale pyramid for accurate scale estimation. In particular, a new high-confidence update strategy and a re-searching mechanism are used to avoid model corruption and handle occlusion. Extensive experimental results demonstrate that our algorithm outperforms several state-of-the-art algorithms on the OTB2015 dataset. PMID:28994748
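
    The Fourier-domain trick that makes STC-style trackers fast is that a dense response map over all translations costs only a few FFTs. The sketch below shows that single step on toy data (generic template correlation; the paper's color prior, scale pyramid, and update strategy are not reproduced):

        import numpy as np

        # One Fourier-domain correlation step in the spirit of STC-style
        # trackers: the response over every shift costs two FFTs and one
        # inverse FFT instead of a sliding-window loop.
        def response_peak(search_patch, template):
            F_p = np.fft.fft2(search_patch)
            F_t = np.fft.fft2(template, s=search_patch.shape)  # zero-padded
            resp = np.real(np.fft.ifft2(F_p * np.conj(F_t)))
            return np.unravel_index(np.argmax(resp), resp.shape)

        patch = np.random.default_rng(2).random((64, 64))
        template = patch[10:26, 20:36]          # toy "target" cut from the patch
        print(response_peak(patch, template))   # recovers the offset (10, 20)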

  14. Beyond Self-Report: Tools to Compare Estimated and Real-World Smartphone Use.

    PubMed

    Andrews, Sally; Ellis, David A; Shaw, Heather; Piwek, Lukasz

    2015-01-01

    Psychologists typically rely on self-report data when quantifying mobile phone usage, despite little evidence of its validity. In this paper we explore the accuracy of using self-reported estimates when compared with actual smartphone use. We also include source code to process and visualise these data. We compared 23 participants' actual smartphone use over a two-week period with self-reported estimates and the Mobile Phone Problem Use Scale. Our results indicate that estimated time spent using a smartphone may be an adequate measure of use, unless a greater resolution of data is required. Estimates concerning the number of times an individual used their phone across a typical day did not correlate with actual smartphone use. Neither estimated duration nor number of uses correlated with the Mobile Phone Problem Use Scale. We conclude that estimated smartphone use should be interpreted with caution in psychological research.

  15. Priority Scale of Drainage Rehabilitation of Cilacap City

    NASA Astrophysics Data System (ADS)

    Rudiono, Jatmiko

    2018-03-01

    The terrain of Cilacap City is relatively flat and low-lying (approximately 6 m above sea level), so relatively heavy rainfall results in inundation at several locations. Inundation is a serious problem when it occurs in dense residential areas or in publicly-used infrastructure, such as roads and settlements. These problems require improved management, including planning an urban drainage system that is sustainable and environmentally friendly. Cilacap City is developing rapidly, and the drainage system based on the Drainage Masterplan Cilacap made in 2006 can no longer accommodate the rain water, so the drainage masterplan must be evaluated for subsequent rehabilitation. A priority scale for rehabilitating the drainage sections serves as a guideline for which rehabilitation is most urgently needed in the next time period.

  16. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
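
    For reference, the conventional serial KMC step that SPKMC parallelizes is tiny (the textbook residence-time algorithm, not the authors' synchronous parallel scheme; the rates below are invented). Note how the expected time increment shrinks as the total rate, and hence the system size, grows, which is exactly the scalability problem described above:

        import numpy as np

        # Standard serial KMC (residence-time) step: pick an event with
        # probability proportional to its rate, then advance the clock by an
        # exponential waiting time drawn from the total rate.
        def kmc_step(rates, rng):
            total = rates.sum()
            k = np.searchsorted(np.cumsum(rates), rng.random() * total)
            dt = -np.log(rng.random()) / total     # exponential waiting time
            return k, dt

        rng = np.random.default_rng(3)
        rates = np.array([0.1, 2.0, 0.5])          # hypothetical event rates
        t = 0.0
        for _ in range(5):
            k, dt = kmc_step(rates, rng)
            t += dt
            print(f"event {k} fired, t = {t:.3f}")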

  17. The Time on Task Effect in Reading and Problem Solving Is Moderated by Task Difficulty and Skill: Insights from a Computer-Based Large-Scale Assessment

    ERIC Educational Resources Information Center

    Goldhammer, Frank; Naumann, Johannes; Stelter, Annette; Tóth, Krisztina; Rölke, Heiko; Klieme, Eckhard

    2014-01-01

    Computer-based assessment can provide new insights into behavioral processes of task completion that cannot be uncovered by paper-based instruments. Time presents a major characteristic of the task completion process. Psychologically, time on task has 2 different interpretations, suggesting opposing associations with task outcome: Spending more…

  18. Dynamic decision making for dam-break emergency management - Part 1: Theoretical framework

    NASA Astrophysics Data System (ADS)

    Peng, M.; Zhang, L. M.

    2013-02-01

    An evacuation decision for dam breaks is a very serious issue. A late decision may lead to loss of lives and properties, but a very early evacuation will incur unnecessary expenses. This paper presents a risk-based framework of dynamic decision making for dam-break emergency management (DYDEM). The dam-break emergency management in both time scale and space scale is introduced first to define the dynamic decision problem. The probability of dam failure is taken as a stochastic process and estimated using a time-series analysis method. The flood consequences are taken as functions of warning time and evaluated with a human risk analysis model (HURAM) based on Bayesian networks. A decision criterion is suggested to decide whether to evacuate the population at risk (PAR) or to delay the decision. The optimum time for evacuating the PAR is obtained by minimizing the expected total loss, which integrates the time-related probabilities and flood consequences. When a delayed decision is chosen, the decision making can be updated with available new information. A specific dam-break case study is presented in a companion paper to illustrate the application of this framework to complex dam-breaching problems.

  19. Coping, problem solving, depression, and health-related quality of life in patients receiving outpatient stroke rehabilitation.

    PubMed

    Visser, Marieke M; Heijenbrok-Kal, Majanka H; Spijker, Adriaan Van't; Oostra, Kristine M; Busschbach, Jan J; Ribbers, Gerard M

    2015-08-01

    To investigate whether patients with high and low depression scores after stroke use different coping strategies and problem-solving skills and whether these variables are related to psychosocial health-related quality of life (HRQOL) independent of depression. Cross-sectional study. Two rehabilitation centers. Patients participating in outpatient stroke rehabilitation (N=166; mean age, 53.06±10.19y; 53% men; median time poststroke, 7.29mo). Not applicable. Coping strategy was measured using the Coping Inventory for Stressful Situations; problem-solving skills were measured using the Social Problem Solving Inventory-Revised: Short Form; depression was assessed using the Center for Epidemiologic Studies Depression Scale; and HRQOL was measured using the five-level EuroQol five-dimensional questionnaire and the Stroke-Specific Quality of Life Scale. Independent samples t tests and multivariable regression analyses, adjusted for patient characteristics, were performed. Compared with patients with low depression scores, patients with high depression scores used less positive problem orientation (P=.002) and emotion-oriented coping (P<.001) and more negative problem orientation (P<.001) and avoidance style (P<.001). Depression score was related to all domains of both general HRQOL (visual analog scale: β=-.679; P<.001; utility: β=-.009; P<.001) and stroke-specific HRQOL (physical HRQOL: β=-.020; P=.001; psychosocial HRQOL: β=-.054, P<.001; total HRQOL: β=-.037; P<.001). Positive problem orientation was independently related to psychosocial HRQOL (β=.086; P=.018) and total HRQOL (β=.058; P=.031). Patients with high depression scores use different coping strategies and problem-solving skills than do patients with low depression scores. Independent of depression, positive problem-solving skills appear to be most significantly related to better HRQOL. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  20. Eigenvalue Solvers for Modeling Nuclear Reactors on Leadership Class Machines

    DOE PAGES

    Slaybaugh, R. N.; Ramirez-Zweiger, M.; Pandya, Tara; ...

    2018-02-20

    In this paper, three complementary methods have been implemented in the code Denovo that accelerate neutral particle transport calculations with methods that use leadership-class computers fully and effectively: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy (MGE) preconditioner. The MG Krylov solver converges more quickly than Gauss-Seidel and enables energy decomposition such that Denovo can scale to hundreds of thousands of cores. RQI should converge in fewer iterations than power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned matrices. The MGE preconditioner reduces iteration count significantly when used with RQI and takes advantage of the new energy decomposition such that it can scale efficiently. Each individual method has been described before, but this is the first time they have been demonstrated to work together effectively. The combination of solvers enables the RQI eigenvalue solver to work better than the other available solvers for large reactor problems on leadership-class machines. Using these methods together, RQI converged in fewer iterations and in less time than PI for a full pressurized water reactor core. These solvers also performed better than an Arnoldi eigenvalue solver for a reactor benchmark problem when energy decomposition is needed. The MG Krylov, MGE preconditioner, and RQI solver combination also scales well in energy. Finally, this solver set is a strong choice for very large and challenging problems.
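
    Textbook Rayleigh quotient iteration, named above, is easy to state; the sketch below (a generic dense-matrix illustration, not the Denovo implementation) makes visible why each iteration needs a solve with a shifted, increasingly ill-conditioned matrix, which is what the MG Krylov solver and MGE preconditioner make tractable at scale:

        import numpy as np

        # Rayleigh quotient iteration: each step solves a shifted linear system
        # whose shift converges (cubically) to an eigenvalue, so the system
        # becomes nearly singular; hence the need for strong inner solvers.
        def rqi(A, x, iters=10):
            x = x / np.linalg.norm(x)
            for _ in range(iters):
                mu = x @ A @ x                     # Rayleigh quotient
                try:
                    y = np.linalg.solve(A - mu * np.eye(len(x)), x)
                except np.linalg.LinAlgError:
                    break                          # shift hit an eigenvalue exactly
                x = y / np.linalg.norm(y)
            return x @ A @ x, x

        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        lam, vec = rqi(A, np.array([1.0, 0.0]))
        print(lam)   # one of the eigenvalues (5 +/- sqrt(5))/2, ~1.382 here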

  1. Eigenvalue Solvers for Modeling Nuclear Reactors on Leadership Class Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaybaugh, R. N.; Ramirez-Zweiger, M.; Pandya, Tara

    In this paper, three complementary methods have been implemented in the code Denovo that accelerate neutral particle transport calculations with methods that use leadership-class computers fully and effectively: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy (MGE) preconditioner. The MG Krylov solver converges more quickly than Gauss-Seidel and enables energy decomposition such that Denovo can scale to hundreds of thousands of cores. RQI should converge in fewer iterations than power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned matrices. The MGE preconditioner reduces iteration count significantly when used with RQI and takes advantage of the new energy decomposition such that it can scale efficiently. Each individual method has been described before, but this is the first time they have been demonstrated to work together effectively. The combination of solvers enables the RQI eigenvalue solver to work better than the other available solvers for large reactor problems on leadership-class machines. Using these methods together, RQI converged in fewer iterations and in less time than PI for a full pressurized water reactor core. These solvers also performed better than an Arnoldi eigenvalue solver for a reactor benchmark problem when energy decomposition is needed. The MG Krylov, MGE preconditioner, and RQI solver combination also scales well in energy. Finally, this solver set is a strong choice for very large and challenging problems.

  2. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
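
    The randomized SVD at the heart of the compression step can be sketched in a few lines, in the spirit of Halko et al. (a generic sketch on an invented low-rank matrix, not the paper's MRF pipeline): project the matrix onto a small random subspace, then take an exact SVD of the small projection.

        import numpy as np

        def randomized_svd(A, rank, oversample=10, rng=None):
            """Approximate truncated SVD: only a (rank+oversample)-column
            sketch of A is ever decomposed exactly."""
            rng = np.random.default_rng(rng)
            Omega = rng.standard_normal((A.shape[1], rank + oversample))
            Q, _ = np.linalg.qr(A @ Omega)          # orthonormal range estimate
            Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

        rng = np.random.default_rng(4)
        A = rng.standard_normal((2000, 25)) @ rng.standard_normal((25, 300))
        U, s, Vt = randomized_svd(A, rank=25)
        print(np.allclose(U @ np.diag(s) @ Vt, A))  # True: rank-25 recovered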

  3. A Stabilized Sparse-Matrix U-D Square-Root Implementation of a Large-State Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Boggs, D.; Ghil, M.; Keppenne, C.

    1995-01-01

    The full nonlinear Kalman filter sequential algorithm is, in theory, well-suited to four-dimensional data assimilation in large-scale atmospheric and oceanic problems. However, it was later discovered that this algorithm can be very sensitive to computer roundoff, and that results may cease to be meaningful as time advances. Implementations of a modified Kalman filter are given.
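
    The roundoff sensitivity noted above is the classic motivation for square-root and U-D implementations. As a small, hedged illustration of the general idea (the Joseph-stabilized update, a standard remedy with invented matrices, not the paper's U-D factorization):

        import numpy as np

        # Joseph-form measurement update: algebraically equivalent to the
        # textbook P = (I - K H) P, but it preserves symmetry and positive
        # definiteness of P much better under floating-point roundoff.
        def joseph_update(x, P, z, H, R):
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (z - H @ x)
            I_KH = np.eye(len(x)) - K @ H
            P = I_KH @ P @ I_KH.T + K @ R @ K.T    # Joseph stabilized form
            return x, P

        x, P = np.zeros(2), np.eye(2)
        H, R = np.array([[1.0, 0.0]]), np.array([[0.1]])
        print(joseph_update(x, P, np.array([0.5]), H, R))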

  4. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adapted to a single signal rather than to a training set, as in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or noise pollution, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).
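
    The ALM iteration pattern can be fixed in mind with a generic toy (an equality-constrained quadratic solved with SciPy; the paper's inner minimization instead uses fast wavelet transforms over an adaptive dictionary):

        import numpy as np
        from scipy.optimize import minimize

        # Toy: minimize f(x) = ||x||^2 subject to c(x) = x0 + x1 - 1 = 0.
        f = lambda x: x @ x
        c = lambda x: x[0] + x[1] - 1.0

        x, lam, mu = np.zeros(2), 0.0, 10.0
        for _ in range(20):
            # Augmented Lagrangian with current multiplier lam and penalty mu.
            L = lambda x, lam=lam, mu=mu: f(x) + lam * c(x) + 0.5 * mu * c(x) ** 2
            x = minimize(L, x).x        # inner minimization step
            lam += mu * c(x)            # multiplier update
        print(x)                        # -> approximately [0.5, 0.5]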

  5. Galaxy Zoo: evidence for diverse star formation histories through the green valley

    NASA Astrophysics Data System (ADS)

    Smethurst, R. J.; Lintott, C. J.; Simmons, B. D.; Schawinski, K.; Marshall, P. J.; Bamford, S.; Fortson, L.; Kaviraj, S.; Masters, K. L.; Melvin, T.; Nichol, R. C.; Skibba, R. A.; Willett, K. W.

    2015-06-01

    Does galaxy evolution proceed through the green valley via multiple pathways or as a single population? Motivated by recent results highlighting radically different evolutionary pathways between early- and late-type galaxies, we present results from a simple Bayesian approach to this problem wherein we model the star formation history (SFH) of a galaxy with two parameters, [t, τ], and compare the predicted and observed optical and near-ultraviolet colours. We use a novel method to investigate the morphological differences between the most probable SFHs for both disc-like and smooth-like populations of galaxies, using a sample of 126 316 galaxies (0.01 < z < 0.25) with probabilistic estimates of morphology from Galaxy Zoo. We find a clear difference between the quenching time-scales preferred by smooth- and disc-like galaxies, with three possible routes through the green valley dominated by smooth- (rapid time-scales, attributed to major mergers), intermediate- (intermediate time-scales, attributed to minor mergers and galaxy interactions) and disc-like (slow time-scales, attributed to secular evolution) galaxies. We hypothesize that morphological changes occur in systems which have undergone quenching with an exponential time-scale τ < 1.5 Gyr, in order for the evolution of galaxies in the green valley to match the ratio of smooth to disc galaxies observed in the red sequence. These rapid time-scales are instrumental in the formation of the red sequence at earlier times; however, we find that galaxies currently passing through the green valley typically do so at intermediate time-scales.

  6. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from the real world is shown as a proof of concept.
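
    The discrete-continuous product construction is easy to sketch for the independent-occurrence case (a minimal Python illustration; the lognormal depth distribution is an assumption chosen for the example, not the paper's choice):

        import numpy as np

        def mixed_process(n, p_wet, rng):
            """Independent-occurrence mixed-type rainfall process: a Bernoulli
            intermittency indicator times a continuous rainfall depth."""
            wet = rng.random(n) < p_wet                          # discrete component
            depth = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # continuous component
            return wet * depth                                   # zeros on dry steps

        # E[X] = p_wet * E[depth]; the paper's framework extends this product
        # construction to dependent occurrences and gives the ACF analytically.
        rng = np.random.default_rng(1)
        x = mixed_process(1000, 0.3, rng)
        print(x.mean(), 0.3 * np.exp(0.5))   # sample mean vs theoretical mean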

  7. Prospects for Improved Forecasts of Weather and Short-Term Climate Variability on Subseasonal (2-Week to 2-Month) Time Scales

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried; Dole, Randall; vandenDool, Huug; Suarez, Max; Waliser, Duane

    2002-01-01

    This workshop, held in April 2002, brought together various Earth Sciences experts to focus on the subseasonal prediction problem. While substantial advances have occurred over the last few decades in both weather and seasonal prediction, progress in improving predictions on these intermediate time scales (time scales ranging from about two weeks to two months) has been slow. The goals of the workshop were to get an assessment of the "state of the art" in predictive skill on these time scales, to determine the potential sources of "untapped" predictive skill, and to make recommendations for a course of action that will accelerate progress in this area. One of the key conclusions of the workshop was that there is compelling evidence for predictability at forecast lead times substantially longer than two weeks. Tropical diabatic heating and soil wetness were singled out as particularly important processes affecting predictability on these time scales. Predictability was also linked to various low-frequency atmospheric "phenomena" such as the annular modes in high latitudes (including their connections to the stratosphere), the Pacific/North American (PNA) pattern, and the Madden Julian Oscillation (MJO). The latter, in particular, was highlighted as a key source of untapped predictability in the tropics and subtropics, including the Asian and Australian monsoon regions.

  8. Guidance and flight control law development for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Markopoulos, N.

    1993-01-01

    During the third reporting period our efforts were focused on a reformulation of the optimal control problem involving active state-variable inequality constraints. In the reformulated problem the optimization is carried out not with respect to all controllers, but only with respect to asymptotic controllers leading to the state constraint boundary. Intimately connected with the traditional formulation is the fact that when the reduced solution for such problems lies on a state constraint boundary, the corresponding boundary layer transitions are of finite time in the stretched time scale. Thus, it has been impossible so far to apply the classical asymptotic boundary layer theory to such problems. Moreover, the traditional formulation leads to optimal controllers that are one-sided, that is, they break down when a disturbance throws the system on the prohibited side of the state constraint boundary.

  9. Multiple-time-scale motion in molecularly linked nanoparticle arrays.

    PubMed

    George, Christopher; Szleifer, Igal; Ratner, Mark

    2013-01-22

    We explore the transport of electrons between electrodes that encase a two-dimensional array of metallic quantum dots linked by molecular bridges (such as α,ω-alkanedithiols). Because the molecules can move at finite temperatures, the entire transport structure comprising the quantum dots and the molecules is in dynamical motion while the charge is being transported. There are then several physical processes (physical excursions of molecules and quantum dots, electronic migration, ordinary vibrations), all of which influence electronic transport. Each can occur on a different time scale. It is therefore not appropriate to use standard approaches to this sort of electron transfer problem. Instead, we present a treatment in which three different theoretical approaches-kinetic Monte Carlo, classical molecular dynamics, and quantum transport-are all employed. In certain limits, some of the dynamical effects are unimportant. But in general, the transport seems to follow a sort of dynamic bond percolation picture, an approach originally introduced as a formal model and later applied to polymer electrolytes. Different rate-determining steps occur in different limits. This approach offers a powerful scheme for dealing with multiple time scale transport problems, as will exist in many situations with several pathways through molecular arrays or even individual molecules that are dynamically disordered.
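
    The kinetic Monte Carlo component, at least, is compact to sketch (a generic Gillespie-style step in Python; the rates for molecular excursions versus electron hops are left abstract):

        import numpy as np

        def kmc_step(rates, rng):
            """One kinetic Monte Carlo (Gillespie) step over competing processes.

            rates: array of rate constants, one per possible event (electron hop,
            molecular move, ...). Returns (chosen event index, time increment)."""
            total = rates.sum()
            dt = rng.exponential(1.0 / total)            # waiting time ~ Exp(total)
            event = rng.choice(len(rates), p=rates / total)
            return event, dt

        # Events with widely separated rates naturally advance the clock on
        # widely separated time scales, which is the regime discussed above.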

  10. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this could cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on reconstruction results.
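
    A minimal sketch of the thresholding idea, using SciPy's lsqr (a numerically stable relative of CGLS, standing in for the paper's block-wise parallel solver; the threshold value and matrix layout are assumptions):

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import lsqr

        def sparsify_jacobian(J, rel_threshold=1e-3):
            """Zero out entries below a fraction of the largest |J| entry and
            store the result in sparse format to save memory."""
            J = J.copy()
            J[np.abs(J) < rel_threshold * np.abs(J).max()] = 0.0
            return csr_matrix(J)

        # J_s = sparsify_jacobian(J)
        # sigma = lsqr(J_s, dv)[0]   # conductivity update from voltage data dv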

  11. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE PAGES

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; ...

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. We also analyze a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers to maximize the social welfare in large-scale power grids.

  12. Changing Schools from the inside out: Small Wins in Hard Times. Third Edition

    ERIC Educational Resources Information Center

    Larson, Robert

    2011-01-01

    At any time, public schools labor under great economic, political, and social pressures that make it difficult to create large-scale, "whole school" change. But current top-down mandates require that schools close achievement gaps while teaching more problem solving, inquiry, and research skills--with fewer resources. Failure to meet test-based…

  13. The Atwood machine: Two special cases

    NASA Astrophysics Data System (ADS)

    West, Joseph O.; Weliver, Barry N.

    1999-02-01

    The effects of the variation of Earth's gravitational field on a simple Atwood's machine with identical masses are considered. Starting from rest, the time required for one of the masses to reach the ground is independent of the scale of the problem.

  14. Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen

    2016-03-31

    In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from the significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is to ensure economic and efficient utilization of DERs while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. An illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.

  15. Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in combining a detailed representation of the actors in the natural gas markets and the transport options, detailed regional and global coverage, a multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, seasonal variation in demand, and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum (www.gecforum.org), a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices, and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and in the depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive forms. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP.

  16. The Impact of Competing Time Delays in Stochastic Coordination Problems

    NASA Astrophysics Data System (ADS)

    Korniss, G.; Hunt, D.; Szymanski, B. K.

    2011-03-01

    Coordinating, distributing, and balancing resources in coupled systems is a complex task as these operations are very sensitive to time delays. Delays are present in most real communication and information systems, including info-social and neuro-biological networks, and can be attributed to both non-zero transmission times between different units of the system and to non-zero times it takes to process the information and execute the desired action at the individual units. Here, we investigate the importance and impact of these two types of delays in a simple coordination (synchronization) problem in a noisy environment. We establish the scaling theory for the phase boundary of synchronization and for the steady-state fluctuations in the synchronizable regime. Further, we provide the asymptotic behavior near the boundary of the synchronizable regime. Our results also imply the potential for optimization and trade-offs in stochastic synchronization and coordination problems with time delays. Supported in part by DTRA, ARL, and ONR.

  17. Initial test of large panels of structural flakeboard from southern hardwoods

    Treesearch

    Eddie W. Price

    1975-01-01

    A strong structural exterior flakeboard from mixed southern hardwoods has been developed on a laboratory scale; the problem is transfer of the technique to pilot-plant scale in the manufacture of 4- by 8-ft panels. From the pilot-plant trial here reported, it is concluded that a specific platen pressure of at least 575 psi and a hot press closing time of about 45...

  18. Wave induced density modification in RF sheaths and close to wave launchers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Eester, D., E-mail: d.van.eester@fz-juelich.de; Crombé, K.; Department of Applied Physics, Ghent University, Ghent

    2015-12-10

    With the return to full metal walls - a necessary step towards viable fusion machines - and due to the high power densities of current-day ICRH (Ion Cyclotron Resonance Heating) or RF (radio frequency) antennas, there is ample renewed interest in exploring the reasons for wave-induced sputtering and the formation of hot spots. Moreover, there is experimental evidence on various machines that RF waves influence the density profile close to the wave launchers, so that waves indirectly influence their own coupling efficiency. The present study presents a return to first principles and describes the wave-particle interaction using a two-time-scale model involving the equation of motion, the continuity equation and the wave equation on each of the time scales. Through the changing density pattern, the fast time scale dynamics is affected by the slow time scale events. In turn, the slow time scale density and flows are modified by the presence of the RF waves through quasilinear terms. Although finite zero order flows are identified, the usual cold plasma dielectric tensor - ignoring such flows - is adopted as a first approximation to describe the wave response to the RF driver. The resulting set of equations is composed of linear and nonlinear equations and is tackled in 1D in the present paper. Whereas the former can be solved using standard numerical techniques, the latter require special handling. At the price of multiple iterations, a simple 'derivative switch-on' procedure allows one to reformulate the nonlinear problem as a sequence of linear problems. Analytical expressions allow a first crude assessment - revealing that the ponderomotive potential plays a role similar to that of the electrostatic potential arising from charge separation - but numerical implementation is required to get a feel for the full dynamics. A few tentative examples are provided to illustrate the phenomena involved.

  19. Factors affecting the social problem-solving ability of baccalaureate nursing students.

    PubMed

    Lau, Ying

    2014-01-01

    The hospital environment is characterized by time pressure, uncertain information, conflicting goals, high stakes, stress, and dynamic conditions. These demands mean there is a need for nurses with social problem-solving skills. This study set out to (1) investigate the social problem-solving ability of Chinese baccalaureate nursing students in Macao and (2) identify the association between communication skill, clinical interaction, interpersonal dysfunction, and social problem-solving ability. All nursing students were recruited in one public institute through the census method. The research design was exploratory, cross-sectional, and quantitative. The study used the Chinese version of the Social Problem Solving Inventory short form (C-SPSI-R), Communication Ability Scale (CAS), Clinical Interactive Scale (CIS), and Interpersonal Dysfunction Checklist (IDC). Macao nursing students were more likely to use the two constructive or adaptive dimensions rather than the three dysfunctional dimensions of the C-SPSI-R to solve their problems. Multiple linear regression analysis revealed that communication ability (β=.305, p<.0001), clinical interaction (β=.129, p=.047), and interpersonal dysfunction (β=-.402, p<.0001) were associated with social problem-solving after controlling for covariates. Macao has had no problem-solving training in its educational curriculum; effective problem-solving training should be implemented as part of the curriculum. With so many changes in healthcare today, nurses must be good social problem-solvers in order to deliver holistic care. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Matching problem for primary and secondary signals in dual-phase TPC detectors

    NASA Astrophysics Data System (ADS)

    Radics, B.; Burjons, E.; Rubbia, A.

    2018-05-01

    The problem of matching primary and secondary light signals belonging to the same event is presented in the context of dual-phase time projection chambers. In large scale detectors the secondary light emission can be delayed by up to the order of milliseconds, which, combined with high signal rates, can make the matching of the signals challenging. A possible approach is offered in the framework of the Stable Marriage and College Admission problems, for both of which solutions are given by the Gale-Shapley algorithm.
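
    The Gale-Shapley algorithm itself is compact; a minimal Python sketch for matching primary (S1) to secondary (S2) signals follows (how preferences are built, e.g. from drift-time likelihoods, is an assumption, and complete preference lists are required):

        def gale_shapley(s1_prefs, s2_prefs):
            """Stable matching of primary (S1) to secondary (S2) signals.

            s1_prefs[i]: list of S2 indices in preference order for S1 signal i;
            s2_prefs[j]: dict mapping S1 index -> rank (lower = preferred)."""
            free = list(s1_prefs)              # S1 signals not yet matched
            nxt = {i: 0 for i in s1_prefs}     # next proposal index per S1 signal
            match = {}                         # S2 index -> S1 index
            while free:
                i = free.pop()
                j = s1_prefs[i][nxt[i]]        # best S2 not yet proposed to
                nxt[i] += 1
                if j not in match:
                    match[j] = i
                elif s2_prefs[j][i] < s2_prefs[j][match[j]]:   # j prefers i
                    free.append(match[j])
                    match[j] = i
                else:
                    free.append(i)             # rejected, i proposes again later
            return match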

  1. Dark matter self-interactions and small scale structure

    NASA Astrophysics Data System (ADS)

    Tulin, Sean; Yu, Hai-Bo

    2018-02-01

    We review theories of dark matter (DM) beyond the collisionless paradigm, known as self-interacting dark matter (SIDM), and their observable implications for astrophysical structure in the Universe. Self-interactions are motivated, in part, due to the potential to explain long-standing (and more recent) small scale structure observations that are in tension with collisionless cold DM (CDM) predictions. Simple particle physics models for SIDM can provide a universal explanation for these observations across a wide range of mass scales spanning dwarf galaxies, low and high surface brightness spiral galaxies, and clusters of galaxies. At the same time, SIDM leaves intact the success of ΛCDM cosmology on large scales. This report covers the following topics: (1) small scale structure issues, including the core-cusp problem, the diversity problem for rotation curves, the missing satellites problem, and the too-big-to-fail problem, as well as recent progress in hydrodynamical simulations of galaxy formation; (2) N-body simulations for SIDM, including implications for density profiles, halo shapes, substructure, and the interplay between baryons and self-interactions; (3) semi-analytic Jeans-based methods that provide a complementary approach for connecting particle models with observations; (4) merging systems, such as cluster mergers (e.g., the Bullet Cluster) and minor infalls, along with recent simulation results for mergers; (5) particle physics models, including light mediator models and composite DM models; and (6) complementary probes for SIDM, including indirect and direct detection experiments, particle collider searches, and cosmological observations. We provide a summary and critical look for all current constraints on DM self-interactions and an outline for future directions.

  2. Double Resonances and Spectral Scaling in the Weak Turbulence Theory of Rotating and Stratified Turbulence

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert

    1999-01-01

    In rotating turbulence, stably stratified turbulence, and in rotating stratified turbulence, heuristic arguments concerning the turbulent time scale suggest that the inertial range energy spectrum scales as k^(-2). From the viewpoint of weak turbulence theory, there are three possibilities which might invalidate these arguments: four-wave interactions could dominate three-wave interactions leading to a modified inertial range energy balance, double resonances could alter the time scale, and the energy flux integral might not converge. It is shown that although double resonances exist in all of these problems, they do not influence overall energy transfer. However, the resonance conditions cause the flux integral for rotating turbulence to diverge logarithmically when evaluated for a k^(-2) energy spectrum; therefore, this spectrum requires logarithmic corrections. Finally, the role of four-wave interactions is briefly discussed.

  3. Numerical viscosity and resolution of high-order weighted essentially nonoscillatory schemes for compressible flows with high Reynolds numbers.

    PubMed

    Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye

    2003-10-01

    A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.

  4. Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems

    NASA Technical Reports Server (NTRS)

    Morris, C. I.

    2013-01-01

    The last two decades have witnessed tremendous growth in computational power, the development of computational fluid dynamics (CFD) codes which scale well over thousands of processors, and the refinement of unstructured grid-generation tools which facilitate rapid surface and volume gridding of complex geometries. Thus, engineering calculations of 10^7 - 10^8 finite-volume cells have become routine for some types of problems. Although the Reynolds Averaged Navier Stokes (RANS) approach to modeling turbulence is still in extensive and wide use, increasingly large-eddy simulation (LES) and hybrid RANS-LES approaches are being applied to resolve the largest scales of turbulence in many engineering problems. However, it has also become evident that LES places different requirements on the numerical approaches for both the spatial and temporal discretization of the Navier Stokes equations than does RANS. In particular, LES requires high time accuracy and minimal intrinsic numerical dispersion and dissipation over a wide spectral range. In this paper, the performance of both central-difference and upwind-biased spatial discretizations is examined for a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem, and the turbulent channel flow problem.

  5. Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems

    NASA Technical Reports Server (NTRS)

    Morris, Christopher I.

    2013-01-01

    The last two decades have witnessed tremendous growth in computational power, the development of computational fluid dynamics (CFD) codes which scale well over thousands of processors, and the refinement of unstructured grid-generation tools which facilitate rapid surface and volume gridding of complex geometries. Thus, engineering calculations of 10^7 - 10^8 finite-volume cells have become routine for some types of problems. Although the Reynolds Averaged Navier Stokes (RANS) approach to modeling turbulence is still in extensive and wide use, increasingly large-eddy simulation (LES) and hybrid RANS-LES approaches are being applied to resolve the largest scales of turbulence in many engineering problems. However, it has also become evident that LES places different requirements on the numerical approaches for both the spatial and temporal discretization of the Navier Stokes equations than does RANS. In particular, LES requires high time accuracy and minimal intrinsic numerical dispersion and dissipation over a wide spectral range. In this paper, the performance of both central-difference and upwind-biased spatial discretizations is examined for a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem, and the turbulent channel flow problem.

  6. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods, such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models, based on observable large-scale features. Yet, real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging problems that require optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol allows solving the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time efficiently on large-scale machines. In this study, the influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, in terms of rupture inherent length scales, below which the rupture ceases to be sensitive to it. Finally, the effect of fault geometry on ground motions in the near field is considered. Our simulations feature a classical linear slip-weakening friction law on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  7. Detecting oscillatory patterns and time lags from proxy records with non-uniform sampling: Some pitfalls and possible solutions

    NASA Astrophysics Data System (ADS)

    Donner, Reik

    2013-04-01

    Time series analysis offers a rich toolbox for deciphering information from high-resolution geological and geomorphological archives and linking the thus obtained results to distinct climate and environmental processes. Specifically, on various time-scales from inter-annual to multi-millennial, underlying driving forces exhibit more or less periodic oscillations, the detection of which in proxy records often allows linking them to specific mechanisms by which the corresponding drivers may have affected the archive under study. A persistent problem in geomorphology is that available records do not present a clear signal of the variability of environmental conditions, but exhibit considerable uncertainties of both the measured proxy variables and the associated age model. Particularly, time-scale uncertainty as well as the heterogeneity of sampling in the time domain are sources of severe conceptual problems that may lead to false conclusions about the presence or absence of oscillatory patterns and their mutual phasing in different archives. In my presentation, I will discuss how one can cope with non-uniformly sampled proxy records to detect and quantify oscillatory patterns in one or more data sets. For this purpose, correlation analysis is reformulated using kernel estimates, which are found superior to classical estimators based on interpolation or Fourier transform techniques. In order to characterize non-stationary or noisy periodicities and their relative phasing between different records, an extension of continuous wavelet transform is utilized. The performance of both methods is illustrated for different case studies. An extension to explicitly considering time-scale uncertainties by means of Bayesian techniques is briefly outlined.
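
    A sketch in the spirit of such kernel estimators for irregularly sampled series (the Gaussian kernel choice, names, and normalization assumptions are illustrative, not the presenter's exact estimator):

        import numpy as np

        def kernel_correlation(tx, x, ty, y, lag, h):
            """Gaussian-kernel correlation estimate at a given lag for two
            irregularly sampled, zero-mean, unit-variance series (tx, x), (ty, y)."""
            dt = ty[None, :] - tx[:, None] - lag      # all pairwise time offsets
            w = np.exp(-0.5 * (dt / h) ** 2)          # kernel weight per sample pair
            return np.sum(w * np.outer(x, y)) / np.sum(w)

        # Example usage with bandwidth h of roughly the mean sampling interval:
        # rho0 = kernel_correlation(t1, (x - x.mean()) / x.std(),
        #                           t2, (y - y.mean()) / y.std(), lag=0.0, h=h)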

  8. Fluid limit of nonintegrable continuous-time random walks in terms of fractional differential equations.

    PubMed

    Sánchez, R; Carreras, B A; van Milligen, B Ph

    2005-01-01

    The fluid limit of a recently introduced family of nonintegrable (nonlinear) continuous-time random walks is derived in terms of fractional differential equations. In this limit, it is shown that the formalism allows for the modeling of the interaction between multiple transport mechanisms with not only disparate spatial scales but also different temporal scales. For this reason, the resulting fluid equations may find application in the study of a large number of nonlinear multiscale transport problems, ranging from the study of self-organized criticality to the modeling of turbulent transport in fluids and plasmas.

  9. Concise calculation of the scaling function, exponents, and probability functional of the Edwards-Wilkinson equation with correlated noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y.; Pang, N.; Halpin-Healy, T.

    1994-12-01

    The linear Langevin equation proposed by Edwards and Wilkinson [Proc. R. Soc. London A 381, 17 (1982)] is solved in closed form for noise of arbitrary space and time correlation. Furthermore, the temporal development of the full probability functional describing the height fluctuations is derived exactly, exhibiting an interesting evolution between two distinct Gaussian forms. We determine explicitly the dynamic scaling function for the interfacial width for any given initial condition, isolate the early-time behavior, and discover an invariance that was unsuspected in this problem of arbitrary spatiotemporal noise.
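
    For reference, the Edwards-Wilkinson equation with a general noise correlator reads (a standard form in assumed notation, not quoted from the paper):

        \partial_t h(\mathbf{x},t) = \nu \nabla^2 h(\mathbf{x},t) + \eta(\mathbf{x},t),
        \qquad
        \langle \eta(\mathbf{x},t)\,\eta(\mathbf{x}',t') \rangle
            = 2D\,R\!\left(|\mathbf{x}-\mathbf{x}'|,\,|t-t'|\right)

    where R is the (arbitrary) space-time correlation of the noise; uncorrelated white noise corresponds to R ∝ δ(x-x')δ(t-t').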

  10. Scaling in non-stationary time series. (II). Teen birth phenomenon

    NASA Astrophysics Data System (ADS)

    Ignaccolo, M.; Allegrini, P.; Grigolini, P.; Hamilton, P.; West, B. J.

    2004-05-01

    This paper is devoted to the problem of statistical mechanics raised by the analysis of an issue of sociological interest: the teen birth phenomenon. It is expected that these data are characterized by correlated fluctuations, reflecting the cooperative properties of the process. However, the assessment of the anomalous scaling generated by these correlations is made difficult, and ambiguous as well, by the non-stationary nature of the data, which show a clear dependence on seasonal periodicity (periodic component) as well as an average changing slowly in time (slow component). We use the detrending techniques described in the earlier companion paper to safely remove all the biases and to derive the genuine scaling of the teen birth phenomenon.

  11. 3-D imaging of large scale buried structure by 1-D inversion of very early time electromagnetic (VETEM) data

    USGS Publications Warehouse

    Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2001-01-01

    A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.

  12. Controllability of multiplex, multi-time-scale networks

    NASA Astrophysics Data System (ADS)

    Pósfai, Márton; Gao, Jianxi; Cornelius, Sean P.; Barabási, Albert-László; D'Souza, Raissa M.

    2016-09-01

    The paradigm of layered networks is used to describe many real-world systems, from biological networks to social organizations and transportation systems. While recently there has been much progress in understanding the general properties of multilayer networks, our understanding of how to control such systems remains limited. One fundamental aspect that makes this endeavor challenging is that each layer can operate at a different time scale; thus, we cannot directly apply standard ideas from structural control theory of individual networks. Here we address the problem of controlling multilayer and multi-time-scale networks focusing on two-layer multiplex networks with one-to-one interlayer coupling. We investigate the practically relevant case when the control signal is applied to the nodes of one layer. We develop a theory based on disjoint path covers to determine the minimum number of inputs (N_i) necessary for full control. We show that if both layers operate on the same time scale, then the network structure of both layers equally affect controllability. In the presence of time-scale separation, controllability is enhanced if the controller interacts with the faster layer: N_i decreases as the time-scale difference increases up to a critical time-scale difference, above which N_i remains constant and is completely determined by the faster layer. We show that the critical time-scale difference is large if layer I is easy and layer II is hard to control in isolation. In contrast, control becomes increasingly difficult if the controller interacts with the layer operating on the slower time scale and increasing time-scale separation leads to increased N_i, again up to a critical value, above which N_i still depends on the structure of both layers. This critical value is largely determined by the longest path in the faster layer that does not involve cycles. By identifying the underlying mechanisms that connect time-scale difference and controllability for a simplified model, we provide crucial insight into disentangling how our ability to control real interacting complex systems is affected by a variety of sources of complexity.
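
    For a single layer, the minimum number of inputs follows from a maximum matching, the standard structural-control result; a sketch using networkx is below (the multiplex, multi-time-scale case treated in the paper needs the disjoint-path-cover generalization, which this toy does not implement):

        import networkx as nx

        def minimum_driver_nodes(edges, n):
            """Minimum inputs N_i for structural controllability of one directed
            layer with nodes 0..n-1, via maximum bipartite matching."""
            B = nx.Graph()
            out_nodes = [("out", u) for u in range(n)]
            B.add_nodes_from(out_nodes, bipartite=0)
            B.add_nodes_from([("in", v) for v in range(n)], bipartite=1)
            B.add_edges_from((("out", u), ("in", v)) for u, v in edges)
            m = nx.bipartite.maximum_matching(B, top_nodes=out_nodes)
            matched = sum(1 for node in m if node[0] == "out")   # matching size
            return max(n - matched, 1)

        # Example: a directed chain 0 -> 1 -> 2 needs a single driver node.
        # minimum_driver_nodes([(0, 1), (1, 2)], 3)  ->  1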

  13. Fabrication of electron beam deposited tip for atomic-scale atomic force microscopy in liquid.

    PubMed

    Miyazawa, K; Izumi, H; Watanabe-Nakayama, T; Asakawa, H; Fukuma, T

    2015-03-13

    Recently, possibilities of improving operation speed and force sensitivity in atomic-scale atomic force microscopy (AFM) in liquid using a small cantilever with an electron beam deposited (EBD) tip have been intensively explored. However, the structure and properties of an EBD tip suitable for such an application have not been well understood, and hence its fabrication process has not been established. In this study, we perform atomic-scale AFM measurements with a small cantilever and clarify two major problems: contamination from the cantilever and tip surfaces, and insufficient mechanical strength of an EBD tip having a high aspect ratio. To solve these problems, here we propose a fabrication process of an EBD tip, where we attach a 2 μm silica bead at the cantilever end and fabricate a 500-700 nm EBD tip on the bead. The bead height ensures sufficient cantilever-sample distance and enables suppression of long-range interaction between them even with a short EBD tip having high mechanical strength. After the tip fabrication, we coat the whole cantilever and tip surface with Si (30 nm) to prevent the generation of contamination. We perform atomic-scale AFM imaging and hydration force measurements at a mica-water interface using the fabricated tip and demonstrate its applicability to such an atomic-scale application. With repeated use of the proposed process, we can reuse a small cantilever for atomic-scale measurements several times. Therefore, the proposed method solves the two major problems and enables the practical use of a small cantilever in atomic-scale studies on various solid-liquid interfacial phenomena.

  14. Fine-Scale Structure Design for 3D Printing

    NASA Astrophysics Data System (ADS)

    Panetta, Francis Julian

    Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating--let alone optimizing--the full-resolution geometry sent to the printer requires orders of magnitude more computational power than currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization. The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.

  15. REAL TIME CONTROL OF SEWERS: US EPA MANUAL

    EPA Science Inventory

    The problem of sewage spills and local flooding has traditionally been addressed by large scale capital improvement programs that focus on construction alternatives such as sewer separation or construction of storage facilities. The cost of such projects is often high, especiall...

  16. Homogenization of locally resonant acoustic metamaterials towards an emergent enriched continuum.

    PubMed

    Sridhar, A; Kouznetsova, V G; Geers, M G D

    This contribution presents a novel homogenization technique for modeling heterogeneous materials with micro-inertia effects such as locally resonant acoustic metamaterials. Linear elastodynamics is used to model the micro- and macroscale problems, and an extended first order Computational Homogenization framework is used to establish the coupling. Craig-Bampton Mode Synthesis is then applied to solve and eliminate the microscale problem, resulting in a compact closed form description of the microdynamics that accurately captures the Local Resonance phenomena. The resulting equations represent an enriched continuum in which additional kinematic degrees of freedom emerge to account for Local Resonance effects which would otherwise be absent in a classical continuum. Such an approach retains the accuracy and robustness offered by a standard Computational Homogenization implementation, whereby the problem and the computational time are reduced to the on-line solution of one scale only.

  17. An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH

    NASA Astrophysics Data System (ADS)

    Lee, D.; Gopal, S.; Mohapatra, P.

    2012-07-01

    We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce a preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
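
    The JFNK idea is conveniently illustrated with SciPy's newton_krylov on a toy stiff problem (a generic 1-D Bratu-type reaction-diffusion system, not FLASH's equations or preconditioner):

        import numpy as np
        from scipy.optimize import newton_krylov

        # Toy boundary-value problem: u'' + exp(u) = 0 on (0, 1), u(0) = u(1) = 0.
        def residual(u):
            h = 1.0 / (len(u) + 1)
            lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / h**2
            lap[0] = (u[1] - 2 * u[0]) / h**2      # left boundary, u(0) = 0
            lap[-1] = (u[-2] - 2 * u[-1]) / h**2   # right boundary, u(1) = 0
            return lap + np.exp(u)                 # nonlinear source term

        # Newton outer iteration; the Jacobian is never formed, only its action
        # on vectors is approximated via finite differences inside LGMRES.
        u = newton_krylov(residual, np.zeros(100), method="lgmres")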

  18. A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.

    2016-02-01

    The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.

  19. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  20. Late-time cosmological phase transitions

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1991-01-01

    It is shown that the potential galaxy formation and large scale structure problems of objects existing at high redshifts (Z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10^-5) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of approximately 100 Mpc for large scale structure as well as approximately 1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.

  1. Present status of aircraft instruments

    NASA Technical Reports Server (NTRS)

    1932-01-01

    This report gives a brief description of the present state of development and of the performance characteristics of instruments included in the following group: speed instruments, altitude instruments, navigation instruments, power-plant instruments, oxygen instruments, instruments for aerial photography, fog-flying instruments, general problems, summary of instrument and research problems. The items considered under performance include sensitivity, scale errors, effects of temperature and pressure, effects of acceleration and vibration, time lag, damping, leaks, elastic defects, and friction.

  2. ENERGY DISSIPATION PROCESSES IN SOLAR WIND TURBULENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y.; Wei, F. S.; Feng, X. S.

    Turbulence is a chaotic flow regime filled by irregular flows. The dissipation of turbulence is a fundamental problem in the realm of physics. Theoretically, dissipation ultimately cannot be achieved without collisions, and so how turbulent kinetic energy is dissipated in the nearly collisionless solar wind is a challenging problem. Wave-particle interactions and magnetic reconnection (MR) are two possible dissipation mechanisms, but which mechanism dominates is still a controversial topic. Here we analyze the dissipation region scaling around a solar wind MR region. We find that the MR region shows unique multifractal scaling in the dissipation range, while the ambient solar wind turbulence reveals a monofractal dissipation process most of the time. These results provide the first observational evidence for intermittent multifractal dissipation region scaling around a MR site, and they also have significant implications for the fundamental energy dissipation process.

  3. Perspectives on scaling and multiscaling in passive scalar turbulence

    NASA Astrophysics Data System (ADS)

    Banerjee, Tirthankar; Basu, Abhik

    2018-05-01

    We revisit the well-known problem of multiscaling in substances passively advected by homogeneous and isotropic turbulent flows, or passive scalar turbulence. To that end we propose a two-parameter continuum hydrodynamic model for an advected substance concentration θ, parametrized jointly by y and ȳ, which characterize the spatial scaling behavior of the variances of the advecting stochastic velocity and the stochastic additive driving force, respectively. We analyze it within a one-loop dynamic renormalization group method to calculate the multiscaling exponents of the equal-time structure functions of θ. We show how the interplay between the advective velocity and the additive force may lead to simple scaling or multiscaling. In one limit, our results reduce to the well-known results from the Kraichnan model for a passive scalar. Our framework of analysis should be of help for analytical approaches to the still intractable problem of fluid turbulence itself.

  4. Estimation of Time Scales in Unsteady Flows in a Turbomachinery Rig

    NASA Technical Reports Server (NTRS)

    Lewalle, Jacques; Ashpis, David E.

    2004-01-01

    Time scales in turbulent and transitional flow provide a link between experimental data and modeling, both in terms of physical content and for quantitative assessment. The problem of interest here is the definition of time scales in an unsteady flow. Using representative samples of data from a GEAE low-pressure turbine experiment in a low-speed research turbine facility with wake-induced transition, we document several methods to extract dominant frequencies and compare the results. We show that conventional methods of time scale evaluation (based on autocorrelation functions and on Fourier spectra) and wavelet-based methods provide similar information when applied to stationary signals. We also show the greater flexibility of the wavelet-based methods when dealing with intermittent or strongly modulated data, as are encountered in transitioning boundary layers and in flows with unsteady forcing associated with wake passing. We define phase-averaged dominant frequencies that characterize the turbulence associated with freestream conditions and with the passing wakes downstream of a rotor. The relevance of these results for modeling is discussed in the paper.
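
    The two conventional estimates mentioned above (Fourier spectrum and autocorrelation) are easy to sketch. The code below is illustrative only; the signal is synthetic, not the turbine-rig data.

    ```python
    # Two classical dominant-frequency estimates on a synthetic noisy sinusoid.
    import numpy as np

    def dominant_freq_fft(x, fs):
        x = np.asarray(x, float) - np.mean(x)
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin

    def dominant_freq_autocorr(x, fs):
        x = np.asarray(x, float) - np.mean(x)
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..n-1
        zero = np.argmax(ac < 0)            # first negative lag past the main peak
        lag = zero + np.argmax(ac[zero:])   # first strong maximum after it
        return fs / lag

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    sig = np.sin(2 * np.pi * 85 * t) + 0.3 * np.random.default_rng(1).standard_normal(t.size)
    print(dominant_freq_fft(sig, fs), dominant_freq_autocorr(sig, fs))
    ```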

  5. [Reliability and validity of warning signs checklist for screening psychological, behavioral and developmental problems of children].

    PubMed

    Huang, X N; Zhang, Y; Feng, W W; Wang, H S; Cao, B; Zhang, B; Yang, Y F; Wang, H M; Zheng, Y; Jin, X M; Jia, M X; Zou, X B; Zhao, C X; Robert, J; Jing, Jin

    2017-06-02

    Objective: To evaluate the reliability and validity of the warning signs checklist developed by the National Health and Family Planning Commission of the People's Republic of China (NHFPC), so as to determine the screening effectiveness of warning signs for developmental problems of early childhood. Method: A stratified random sampling method was used to assess the reliability and validity of the warning signs checklist; 2 110 children aged 0 to 6 years (1 513 low-risk subjects and 597 high-risk subjects) were recruited from 11 provinces of China. The reliability evaluation for the warning signs included test-retest reliability and interrater reliability. With the use of the Age and Stage Questionnaire (ASQ) and the Gesell Development Diagnosis Scale (GESELL) as the criterion scales, criterion validity was assessed by determining the correlation and consistency between the screening results of the warning signs and the criterion scales. Result: For the warning signs, the screening positive rates at different ages ranged from 10.8% (21/141) to 26.2% (51/137). The median (interquartile) testing time for each subject was 1 (0.6) minute. Both the test-retest reliability and the interrater reliability of the warning signs reached 0.7 or above, indicating good stability. In terms of validity, there was remarkable consistency between the ASQ and the warning signs, with a Kappa value of 0.63. With GESELL as the criterion, the sensitivity of the warning signs in children with suspected developmental delay was 82.2%, and the specificity was 77.7%. The overall Youden index was 0.6. Conclusion: The reliability and validity of the warning signs checklist for screening early childhood developmental problems meet the basic requirements of psychological screening scales, with the advantages of short testing time and easy administration. Thus, this warning signs checklist can be used for screening psychological and behavioral problems of early childhood, especially in community settings.
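
    As a small worked check of the reported screening statistics, the Youden index is J = sensitivity + specificity - 1, which with the stated 82.2% and 77.7% indeed comes out near the reported 0.6. The counts below are illustrative, not the study's raw data.

    ```python
    # Sensitivity, specificity, and Youden index from a confusion matrix
    # (tp/fn/tn/fp counts are invented to match the reported rates).
    def screening_stats(tp, fn, tn, fp):
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        return sens, spec, sens + spec - 1   # (sensitivity, specificity, Youden J)

    sens, spec, youden = screening_stats(tp=822, fn=178, tn=777, fp=223)
    print(f"sensitivity={sens:.1%} specificity={spec:.1%} Youden={youden:.2f}")
    # -> sensitivity=82.2% specificity=77.7% Youden=0.60
    ```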

  6. Maximizing algebraic connectivity in air transportation networks

    NASA Astrophysics Data System (ADS)

    Wei, Peng

    In air transportation networks, the robustness of a network with respect to node and link failures is a key design factor. An experiment based on a real air transportation network is performed to show that algebraic connectivity is a good measure of network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is formulated first. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution with longer running time. Relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find feasible solutions. The simulation results present the trade-off among the three methods. Case studies on the air transportation networks of Virgin America and Southwest Airlines show that the developed methods can be applied to real-world large-scale networks. The algebraic connectivity maximization problem is then extended by adding a leg-number constraint, which accounts for travelers' tolerance for the total number of connecting stops. Binary Semi-Definite Programming (BSDP) with a cutting-plane method provides the optimal solution. The tabu search and 2-opt search heuristics find the optimal solution in small-scale networks and near-optimal solutions in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is then formulated. When the total operating cost budget is given, the number of edges to be added is not fixed, and each edge weight needs to be calculated instead of being pre-determined. It is shown that the edge addition and the weight assignment cannot be studied separately for the problem with an operating cost constraint; therefore a relaxed SDP method with golden section search is developed to solve both at the same time. Cluster decomposition is utilized to solve large-scale networks.
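
    For readers unfamiliar with the objective, the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian. The sketch below computes it and performs a naive single-edge greedy addition, loosely in the spirit of the perturbation heuristics described; the toy graph and the brute-force search are illustrative assumptions, not the dissertation's MGP algorithm.

    ```python
    # Algebraic connectivity (Fiedler value) and a naive greedy edge addition.
    import numpy as np
    from itertools import combinations

    def algebraic_connectivity(A):
        L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
        return np.sort(np.linalg.eigvalsh(L))[1]  # second-smallest eigenvalue

    def greedy_add_edge(A):
        """Find the single non-edge whose addition maximizes lambda_2 (brute force)."""
        n = A.shape[0]
        best, best_pair = -np.inf, None
        for i, j in combinations(range(n), 2):
            if A[i, j] == 0:
                A[i, j] = A[j, i] = 1.0
                lam = algebraic_connectivity(A)
                if lam > best:
                    best, best_pair = lam, (i, j)
                A[i, j] = A[j, i] = 0.0
        return best_pair, best

    A = np.zeros((5, 5))
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:   # a path graph: weakly connected
        A[i, j] = A[j, i] = 1.0
    print(algebraic_connectivity(A), greedy_add_edge(A))
    ```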

  7. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft that uses onboard computers to calculate a near-optimal trajectory in real time, thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The computational load of the optimal control problem used to maximize the reachable domain is too large for current computers to handle in real time. Thus, the optimal control problem is divided into two problems: a gliding-distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in the horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some features of the optimal solution are derived in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation rather than an iterative one. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.

  8. a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
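
    The abstract credits much of the efficiency to matrix scaling methods for the transport linear program. A minimal sketch of that idea is entropy-regularized optimal transport solved by Sinkhorn matrix scaling, shown below on a toy 1-D Bayesian update; this is an assumed simplification, not the authors' AET implementation.

    ```python
    # Map equally weighted prior samples to importance-weighted posterior samples
    # via Sinkhorn matrix scaling (entropy-regularized optimal transport).
    import numpy as np

    def sinkhorn_transport(X, w, eps=0.1, iters=500):
        """X: (n,d) prior samples; w: posterior importance weights (sum to 1)."""
        n = len(X)
        C = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared cost
        K = np.exp(-C / eps)
        u = np.ones(n) / n
        a, b = np.ones(n) / n, w                                   # marginals
        for _ in range(iters):                                     # matrix scaling
            v = b / (K.T @ u)
            u = a / (K @ v)
        T = u[:, None] * K * v[None, :]      # doubly scaled transport plan
        return n * T @ X                     # barycentric map of each prior sample

    rng = np.random.default_rng(2)
    X = rng.normal(0.0, 1.0, size=(200, 1))           # prior samples
    loglik = -0.5 * ((X[:, 0] - 1.0) / 0.5) ** 2      # toy Gaussian likelihood
    w = np.exp(loglik - loglik.max()); w /= w.sum()
    Y = sinkhorn_transport(X, w)
    print(X.mean(), Y.mean())   # transported mean shifts toward the posterior mean
    ```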

  9. 3-Dimensional Root Cause Diagnosis via Co-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Ziming; Lan, Zhiling; Yu, Li

    2012-01-01

    With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of a failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, the RAS log contains only limited diagnosis information. Moreover, manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis pinpoints the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.
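
    A hypothetical sketch of the co-analysis step: given a failure record, scan another log for events at the same location within a lookback window. The record fields and window length below are invented for illustration; the paper's actual mechanism is more sophisticated.

    ```python
    # Candidate triggering events for a failure, by location and lookback window.
    from datetime import datetime, timedelta

    def candidate_triggers(failure, logs, window_hours=240):
        lo = failure["time"] - timedelta(hours=window_hours)
        return [e for e in logs
                if lo <= e["time"] <= failure["time"]
                and e["location"] == failure["location"]]

    logs = [{"time": datetime(2012, 3, 1, 4), "layer": "hardware", "location": "R12-M0"},
            {"time": datetime(2012, 3, 5, 9), "layer": "kernel",   "location": "R12-M0"}]
    failure = {"time": datetime(2012, 3, 10, 2), "location": "R12-M0"}
    for e in candidate_triggers(failure, logs):
        print(e["layer"], e["time"], e["location"])   # (layer, time, location) triple
    ```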

  10. Trends in modern system theory

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1976-01-01

    The topics considered are related to linear control system design, adaptive control, failure detection, control under failure, system reliability, and large-scale systems and decentralized control. It is pointed out that the design of a linear feedback control system which regulates a process about a desirable set point or steady-state condition in the presence of disturbances is a very important problem. The linearized dynamics of the process are used for design purposes. The typical linear-quadratic design involving the solution of the optimal control problem of a linear time-invariant system with respect to a quadratic performance criterion is considered along with gain reduction theorems and the multivariable phase margin theorem. The stumbling block in many adaptive design methodologies is associated with the amount of real time computation which is necessary. Attention is also given to the desperate need to develop good theories for large-scale systems, the beginning of a microprocessor revolution, the translation of the Wiener-Hopf theory into the time domain, and advances made in dynamic team theory, dynamic stochastic games, and finite memory stochastic control.

  11. Does the Hall Effect Solve the Flux Pileup Saturation Problem?

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2010-01-01

    It is well known that magnetic flux pileup can significantly speed up the rate of magnetic reconnection in high-Lundquist-number resistive MHD, allowing reconnection to proceed at a rate which is insensitive to the plasma resistivity over a wide range of Lundquist numbers. Hence, pileup is a possible solution to the Sweet-Parker time scale problem. Unfortunately, pileup tends to saturate above a critical value of the Lundquist number, S_c, where the value of S_c depends on initial and boundary conditions, with Sweet-Parker scaling returning above S_c. It has been argued (see Dorelli and Birn [2003] and Dorelli [2003]) that the Hall effect can allow flux pileup to saturate (when the scale of the current sheet approaches the ion inertial scale, d_i) before the reconnection rate begins to stall. However, the resulting saturated reconnection rate, while insensitive to the plasma resistivity, was found to depend strongly on d_i. In this presentation, we revisit the problem of magnetic island coalescence (which is a well-known example of flux pileup reconnection), addressing the dependence of the maximum coalescence rate on d_i in the "large island" limit, in which the following inequality is always satisfied: l_eta << d_i << lambda, where l_eta is the resistive diffusion length and lambda is the island wavelength.

  12. An Extended, Problem-Based Learning Laboratory Exercise on the Diagnosis of Infectious Diseases Suitable for Large Level 1 Undergraduate Biology Classes

    ERIC Educational Resources Information Center

    Tatner, Mary; Tierney, Anne

    2016-01-01

    The development and evaluation of a two-week laboratory class, based on the diagnosis of human infectious diseases, is described. It can easily be scaled up or down to suit class sizes from 50 to 600, completed on a shorter time scale, and adapted to different audiences as desired. Students employ a range of techniques to solve a real-life and…

  13. Triplet supertree heuristics for the tree of life

    PubMed Central

    Lin, Harris T; Burleigh, J Gordon; Eulenstein, Oliver

    2009-01-01

    Background There is much interest in developing fast and accurate supertree methods to infer the tree of life. Supertree methods combine smaller input trees with overlapping sets of taxa to make a comprehensive phylogenetic tree that contains all of the taxa in the input trees. The intrinsically hard triplet supertree problem takes a collection of input species trees and seeks a species tree (supertree) that maximizes the number of triplet subtrees that it shares with the input trees. However, the utility of this supertree problem has been limited by a lack of efficient and effective heuristics. Results We introduce fast hill-climbing heuristics for the triplet supertree problem that perform a step-wise search of the tree space, where each step is guided by an exact solution to an instance of a local search problem. To realize time-efficient heuristics we designed the first nontrivial algorithms for two standard search problems, which greatly improve on the time complexity of the best known (naïve) solutions by factors of n and n^2 (where n is the number of taxa in the supertree). These algorithms enable large-scale supertree analyses based on the triplet supertree problem that were previously not possible. We implemented hill-climbing heuristics that are based on our new algorithms, and in analyses of two published supertree data sets, we demonstrate that our new heuristics outperform other standard supertree methods in maximizing the number of triplets shared with the input trees. Conclusion With our new heuristics, the triplet supertree problem is now computationally more tractable for large-scale supertree analyses, and it provides a potentially more accurate alternative to existing supertree methods. PMID:19208181

  14. Initiatives (Part 1): Keystone; Cycle Time Puzzle, A Full-Scale Challenge; Foot to Foot, Face to Face; Change Management; Multi-Element Team Challenge.

    ERIC Educational Resources Information Center

    Schoel, Jim; Butler, Steve; Murray, Mark; Gass, Mike; Carrick, Moe

    2001-01-01

    Presents five group problem-solving initiatives for use in adventure and experiential settings, focusing on conflict resolution, corporate workplace issues, or adjustment to change. Includes target group, group size, time and space needs, activity level, overview, goals, props, instructions, and suggestions for framing and debriefing the…

  15. Parallel methodology to capture cyclic variability in motored engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2016-07-28

    Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations to the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
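
    A hedged sketch of the perturbation strategy (illustrative fields, not the engine LES setup): generate N perturbed copies of an initial velocity field, with the perturbation amplitude tied to the local turbulence intensity, so each copy can seed one independent single-cycle run.

    ```python
    # Generate an ensemble of perturbed initial velocity fields.
    import numpy as np

    def perturbed_ensemble(u0, turb_intensity, n_members, seed=0):
        """u0: mean initial velocity field; turb_intensity: rms field, same shape."""
        rng = np.random.default_rng(seed)
        return [u0 + turb_intensity * rng.standard_normal(u0.shape)
                for _ in range(n_members)]

    u0 = np.ones((64, 64))          # placeholder mean flow
    ti = 0.1 * np.ones((64, 64))    # 10% turbulence intensity
    members = perturbed_ensemble(u0, ti, n_members=8)
    # Each member seeds one single-cycle simulation run in parallel; cycle
    # statistics are gathered across members instead of consecutive cycles.
    print(len(members), members[0].std())
    ```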

  16. Fault tolerance of artificial neural networks with applications in critical systems

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.

    1992-01-01

    This paper investigates the fault tolerance characteristics of time-continuous recurrent artificial neural networks (ANN) that can be used to solve optimization problems. The principle of operation and performance of these networks are first illustrated using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effect of these faults is demonstrated and the cause of the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN in the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault-tolerant network are discussed.

  17. A prospective study to evaluate a residential community reintegration program for patients with chronic acquired brain injury.

    PubMed

    Geurtsen, Gert J; van Heugten, Caroline M; Martina, Juan D; Rietveld, Antonius C; Meijer, Ron; Geurts, Alexander C

    2011-05-01

    To examine the effects of a residential community reintegration program on independent living, societal participation, emotional well-being, and quality of life in patients with chronic acquired brain injury and psychosocial problems hampering societal participation. A prospective cohort study with a 3-month waiting-list control period and 1-year follow-up. A tertiary rehabilitation center for acquired brain injury. Patients (N=70) with acquired brain injury (46 men; mean age, 25.1y; mean time post-onset, 5.2y; at follow-up n=67). A structured residential treatment program was offered, directed at improving independence in domestic life, work, leisure time, and social interactions. Community Integration Questionnaire (CIQ), Employability Rating Scale, living situation, school, work situation, work hours, Center for Epidemiological Studies Depression Scale, EuroQOL quality of life scale (2 scales), World Health Organization Quality of Life Scale Abbreviated (WHOQOL-BREF; 5 scales), and the Global Assessment of Functioning (GAF) scale. There was an overall significant time effect for all outcome measures (multivariate analysis of variance: T^2=26.16; F(36,557)=134.9; P=.000). There was no spontaneous recovery during the waiting-list period. The effect sizes for the CIQ, Employability Rating Scale, work hours, and GAF were large (partial η^2=0.25, 0.35, 0.22, and 0.72, respectively). The effect sizes were moderate for 7 of the 8 emotional well-being and quality of life (sub)scales (partial η^2=0.11-0.20). The WHOQOL-BREF environment subscale showed a small effect size (partial η^2=0.05). Living independently rose from 25.4% before treatment to 72.4% after treatment and was still 65.7% at follow-up. This study shows that a residential community reintegration program leads to significant and relevant improvements in independent living, societal participation, emotional well-being, and quality of life in patients with chronic acquired brain injury and psychosocial problems hampering societal participation. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  18. Ground-water flow in low permeability environments

    USGS Publications Warehouse

    Neuzil, Christopher E.

    1986-01-01

    Certain geologic media are known to have small permeability; subsurface environments composed of these media and lacking well-developed secondary permeability have groundwater flow systems with many distinctive characteristics. Moreover, groundwater flow in these environments appears to influence the evolution of certain hydrologic, geologic, and geochemical systems, may affect the accumulation of petroleum and ores, and probably has a role in the structural evolution of parts of the crust. Such environments are also important in the context of waste disposal. This review attempts to synthesize the diverse contributions of various disciplines to the problem of flow in low-permeability environments. Problems hindering analysis are enumerated together with suggested approaches to overcoming them. A common thread running through the discussion is the significance of size- and time-scale limitations of the ability to directly observe flow behavior and make measurements of parameters. These limitations have resulted in rather distinct small- and large-scale approaches to the problem. The first part of the review considers experimental investigations of low-permeability flow, including in situ testing; these are generally conducted on temporal and spatial scales which are relatively small compared with those of interest. Results from this work have provided increasingly detailed information about many aspects of the flow but leave certain questions unanswered. Recent advances in laboratory and in situ testing techniques have permitted measurements of permeability and storage properties in progressively "tighter" media and investigation of transient flow under these conditions. However, very large hydraulic gradients are still required for the tests; an observational gap exists for typical in situ gradients. The applicability of Darcy's law in this range is therefore untested, although claims of observed non-Darcian behavior appear flawed. Two important nonhydraulic flow phenomena, osmosis and ultrafiltration, are experimentally well established in prepared clays but have been incompletely investigated, particularly in undisturbed geologic media. Small-scale experimental results form much of the basis for analyses of flow in low-permeability environments which occurs on scales of time and size too large to permit direct observation. Such large-scale flow behavior is the focus of the second part of the review. Extrapolation of small-scale experimental experience becomes an important and sometimes controversial problem in this context. In large flow systems under steady state conditions the regional permeability can sometimes be determined, but systems with transient flow are more difficult to analyze. The complexity of the problem is enhanced by the sensitivity of large-scale flow to the effects of slow geologic processes. One-dimensional studies have begun to elucidate how simple burial or exhumation can generate transient flow conditions by changing the state of stress and temperature and by burial metamorphism. Investigation of the more complex problem of the interaction of geologic processes and flow in two and three dimensions is just beginning.
Because these transient flow analyses have largely been based on flow in experimental-scale systems or in relatively permeable systems, deformation in response to effective stress changes is generally treated as linearly elastic; however, this treatment creates difficulties for the long periods of interest because viscoelastic deformation is probably significant. Also, large-scale flow simulations in argillaceous environments have generally neglected osmosis and ultrafiltration, in part because extrapolation of laboratory experience with coupled flow to large scales under in situ conditions is controversial. Nevertheless, the effects are potentially quite important because the coupled flow might cause ultra-long-lived transient conditions. The difficulties associated with analysis are matched by those of characterizing hydrologic conditions in tight environments; measurements of hydraulic head and sampling of pore fluids have been done only rarely because of the practical difficulties involved. These problems are also discussed in the second part of this paper.

  19. Sex Differences in Mathematics Attainment at GCE Ordinary Level

    ERIC Educational Resources Information Center

    Wood, Robert

    1976-01-01

    In a comparison of mathematical abilities of boys and girls, after allowing for school effects, boys are seen to excel on problems involving scale or measurement, probability, and space-time relationships. Possible explanations for the observed differences are made. (Author/AV)

  20. Networked high-speed auroral observations combined with radar measurements for multi-scale insights

    NASA Astrophysics Data System (ADS)

    Hirsch, M.; Semeter, J. L.

    2015-12-01

    Networks of ground-based instruments have been established to study the terrestrial aurora and to analyze the particle precipitation characteristics that drive it. Additional funding is flowing into future ground-based auroral observation networks consisting of combinations of tossable, portable, and fixed-installation ground-based legacy equipment. Our approach to this problem, the High Speed Tomography (HiST) system, combines tightly synchronized, filtered auroral optical observations capturing temporal features of order 10 ms with supporting measurements from incoherent scatter radar (ISR). ISR provides a broader spatial context, up to order 100 km laterally, on one-minute time scales, while our camera field of view (FOV) is chosen to be of order 10 km at auroral altitudes in order to capture 100 m scale lateral auroral features. The dual-scale combination of ISR and fine-scale HiST optical observations may be coupled through a physical model using linear basis functions to estimate important ionospheric quantities such as the electron number density in 3-D (time, perpendicular and parallel to the geomagnetic field). Field measurements and analysis using HiST and PFISR are presented from experiments conducted at the Poker Flat Research Range in central Alaska. Other multiscale configuration candidates include supplementing networks of all-sky cameras such as THEMIS with co-located HiST-like instruments to fuse wide-FOV measurements with the fine-scale HiST precipitation characteristic estimates. Candidate models for this coupling include GLOW and TRANSCAR. Future extensions of this work may include incorporating line-of-sight total electron content estimates from ground-based networks of GPS receivers in a sensor fusion problem.

  1. Echoes from the abyss: Tentative evidence for Planck-scale structure at black hole horizons

    NASA Astrophysics Data System (ADS)

    Abedi, Jahed; Dykaar, Hannah; Afshordi, Niayesh

    2017-10-01

    In classical general relativity (GR), an observer falling into an astrophysical black hole is not expected to experience anything dramatic as she crosses the event horizon. However, tentative resolutions to problems in quantum gravity, such as the cosmological constant problem or the black hole information paradox, invoke significant departures from classicality in the vicinity of the horizon. It was recently pointed out that such near-horizon structures can lead to late-time echoes in black hole merger gravitational wave signals that are otherwise indistinguishable from GR. We search for observational signatures of these echoes in the gravitational wave data released by the advanced Laser Interferometer Gravitational-Wave Observatory (LIGO), following the three black hole merger events GW150914, GW151226, and LVT151012. In particular, we look for repeating damped echoes with time delays of 8 M log M (+spin corrections, in Planck units), corresponding to Planck-scale departures from GR near their respective horizons. Accounting for the "look elsewhere" effect due to uncertainty in the echo template, we find tentative evidence for Planck-scale structure near black hole horizons at a false detection probability of 1% (corresponding to 2.5σ significance).

  2. Mesoscale Models of Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.

    During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.

  3. Electron acceleration by an obliquely propagating electromagnetic wave in the regime of validity of the Fokker-Planck-Kolmogorov approach

    NASA Technical Reports Server (NTRS)

    Hizanidis, Kyriakos; Vlahos, L.; Polymilis, C.

    1989-01-01

    The relativistic motion of an ensemble of electrons in an intense monochromatic electromagnetic wave propagating obliquely in a uniform external magnetic field is studied. The problem is formulated from the viewpoint of Hamiltonian theory and the Fokker-Planck-Kolmogorov approach analyzed by Hizanidis (1989), leading to a one-dimensional diffusive acceleration along paths of constant zeroth-order generalized Hamiltonian. For values of the wave amplitude and the propagation angle inside the analytically predicted stochastic region, the numerical results suggest that the diffusion process proceeds in stages. In the first stage, the electrons are accelerated to relatively high energies by sampling the first few overlapping resonances one by one. During that stage, the ensemble-averaged square deviations of the variables involved scale quadratically with time; during the second stage, they scale linearly with time. For much longer times, deviation from linear scaling slowly sets in.

  4. Oscillation criteria for half-linear dynamic equations on time scales

    NASA Astrophysics Data System (ADS)

    Hassan, Taher S.

    2008-09-01

    This paper is concerned with oscillation of the second-order half-linear dynamic equation (r(t)(x^Δ)^γ)^Δ + p(t)x^γ(t) = 0 on a time scale T, where γ is a quotient of odd positive integers and r(t) and p(t) are positive rd-continuous functions on T. Our results solve a problem posed by [R.P. Agarwal, D. O'Regan, S.H. Saker, Philos-type oscillation criteria for second-order half-linear dynamic equations, Rocky Mountain J. Math. 37 (2007) 1085-1104; S.H. Saker, Oscillation criteria of second-order half-linear dynamic equations on time scales, J. Comput. Appl. Math. 177 (2005) 375-387]. In the special cases T = R and T = Z, our results involve and improve some oscillation results for second-order differential and difference equations; for other time scales such as T = hZ and T = q^(N_0), our oscillation results are essentially new. Some examples illustrating the importance of our results are also included.

  5. Overview of Sea-Ice Properties, Distribution and Temporal Variations, for Application to Ice-Atmosphere Chemical Processes.

    NASA Astrophysics Data System (ADS)

    Moritz, R. E.

    2005-12-01

    The properties, distribution and temporal variation of sea-ice are reviewed for application to problems of ice-atmosphere chemical processes. Typical vertical structure of sea-ice is presented for different ice types, including young ice, first-year ice and multi-year ice, emphasizing factors relevant to surface chemistry and gas exchange. Time average annual cycles of large scale variables are presented, including ice concentration, ice extent, ice thickness and ice age. Spatial and temporal variability of these large scale quantities is considered on time scales of 1-50 years, emphasizing recent and projected changes in the Arctic pack ice. The amount and time evolution of open water and thin ice are important factors that influence ocean-ice-atmosphere chemical processes. Observations and modeling of the sea-ice thickness distribution function are presented to characterize the range of variability in open water and thin ice.

  6. Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration

    NASA Astrophysics Data System (ADS)

    Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel

    2017-11-01

    In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
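
    A minimal two-rate Adams-Bashforth sketch conveys the core idea of sub-cycling only the fast component while the slow component takes one large step. The toy ODE system and the frozen-slow coupling are assumptions for illustration, not the MRAB scheme of the talk.

    ```python
    # Two-rate AB2: fast variable y stepped with h = H/m, slow variable z with H.
    def mrab2(f_fast, f_slow, y0, z0, H, m, n_steps):
        h = H / m
        y, z = y0, z0
        # first macro step effectively falls back to forward Euler (AB2 startup)
        fy_prev, fz_prev = f_fast(y, z), f_slow(y, z)
        for _ in range(n_steps):
            fz = f_slow(y, z)
            z_new = z + H * (1.5 * fz - 0.5 * fz_prev)   # AB2 for the slow part
            for _ in range(m):                           # sub-cycle the fast part
                fy = f_fast(y, z)                        # slow value held frozen
                y = y + h * (1.5 * fy - 0.5 * fy_prev)
                fy_prev = fy
            z, fz_prev = z_new, fz
        return y, z

    # Stiffly coupled toy system: fast decay toward a slowly relaxing mean.
    f_fast = lambda y, z: -50.0 * (y - z)
    f_slow = lambda y, z: -1.0 * z
    print(mrab2(f_fast, f_slow, y0=1.0, z0=1.0, H=0.01, m=20, n_steps=100))
    ```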

  7. Trapping in scale-free networks with hierarchical organization of modularity.

    PubMed

    Zhang, Zhongzhi; Lin, Yuan; Gao, Shuyang; Zhou, Shuigeng; Guan, Jihong; Li, Mo

    2009-11-01

    A wide variety of real-life networks share two remarkable generic topological properties: scale-free behavior and modular organization, and it is natural and important to study how these two features affect the dynamical processes taking place on such networks. In this paper, we investigate a simple stochastic process, the trapping problem: a random walk with a perfect trap fixed at a given location, performed on a family of hierarchical networks that simultaneously exhibit striking scale-free and modular structure. We focus on a particular case with the immobile trap positioned at the hub node having the largest degree. Using a method based on generating functions, we determine explicitly the mean first-passage time (MFPT) for the trapping problem, which is the mean of the node-to-trap first-passage time over the entire network. The exact expression for the MFPT is calculated through recurrence relations derived from the special construction of the hierarchical networks. The obtained rigorous formula, corroborated by extensive direct numerical calculations, shows that the MFPT grows algebraically with the network order. Concretely, the MFPT increases as a power-law function of the number of nodes with an exponent much less than 1. We demonstrate that the hierarchical networks under consideration have a more efficient structure for transport by diffusion than other analytically soluble media, including some previously studied scale-free networks. We argue that the scale-free and modular topologies are responsible for the high efficiency of the trapping process on the hierarchical networks.
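
    For intuition, the MFPT to a trap can be computed exactly on any small graph by a linear solve, rather than by the paper's generating-function recurrences: remove the trap row/column from the transition matrix and solve (I - Q)t = 1. The star graph below is an assumed toy case, with the trap at the hub.

    ```python
    # Exact mean first-passage time to a trap for an unbiased random walk.
    import numpy as np

    def mfpt_to_trap(A, trap):
        """A: adjacency matrix; returns MFPT averaged over non-trap start nodes."""
        P = A / A.sum(axis=1, keepdims=True)         # transition probabilities
        keep = [i for i in range(len(A)) if i != trap]
        Q = P[np.ix_(keep, keep)]                    # walk restricted to non-trap nodes
        t = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
        return t.mean()

    # Star graph with the trap at the hub: every leaf reaches the hub in one step.
    n = 6
    A = np.zeros((n, n))
    A[0, 1:] = A[1:, 0] = 1.0
    print(mfpt_to_trap(A, trap=0))   # -> 1.0
    ```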

  8. P-Hint-Hunt: a deep parallelized whole genome DNA methylation detection tool.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Gao, Ming; Liao, Xiangke; Liu, Jie; Yang, Canqun; Wu, Chengkun; Yu, Wenqiang

    2017-03-14

    An increasing number of studies have used whole-genome DNA methylation detection, one of the most important tools of epigenetics research, to find significant relationships between DNA methylation and several typical diseases, such as cancers and diabetes. In many of those studies, mapping bisulfite-treated sequences to the whole genome has been the main method for studying DNA cytosine methylation. However, today's tools commonly suffer from inaccuracy and long run times. In our study, we designed a new DNA methylation prediction tool ("Hint-Hunt") to solve the problem. By means of an optimized complex alignment computation and Smith-Waterman matrix dynamic programming, Hint-Hunt can analyze and predict DNA methylation status. But when Hint-Hunt is used to predict DNA methylation status on large-scale datasets, speed and temporal-spatial efficiency remain problems. In order to solve the problems of Smith-Waterman dynamic programming and low temporal-spatial efficiency, we further designed a deeply parallelized whole-genome DNA methylation detection tool ("P-Hint-Hunt") for the Tianhe-2 (TH-2) supercomputer. To the best of our knowledge, P-Hint-Hunt is the first parallel DNA methylation detection tool with a high speed-up for processing large-scale datasets, and it can run both on CPUs and on Intel Xeon Phi coprocessors. Moreover, we deploy and evaluate Hint-Hunt and P-Hint-Hunt on the TH-2 supercomputer at different scales. The experimental results show that our tools eliminate the deviation caused by bisulfite treatment in the mapping procedure and that the multi-level parallel program yields a 48-fold speed-up with 64 threads. P-Hint-Hunt gains a deep acceleration on the CPU and Intel Xeon Phi heterogeneous platform, which gives full play to the advantages of multi-core (CPU) and many-core (Phi) processors.
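
    A hedged sketch of the alignment core such a tool needs: Smith-Waterman local alignment with a scorer that does not penalize the C-to-T changes introduced by bisulfite treatment. This is illustrative only, not Hint-Hunt's actual code; the scoring values are assumptions.

    ```python
    # Bisulfite-aware Smith-Waterman: a read T matching a reference C scores
    # as a match, so bisulfite conversion does not count against the alignment.
    import numpy as np

    def sw_bisulfite(read, ref, match=2, mismatch=-1, gap=-2):
        def score(r, g):
            if r == g or (r == "T" and g == "C"):   # C->T conversion tolerated
                return match
            return mismatch
        H = np.zeros((len(read) + 1, len(ref) + 1))
        for i in range(1, len(read) + 1):
            for j in range(1, len(ref) + 1):
                H[i, j] = max(0,
                              H[i - 1, j - 1] + score(read[i - 1], ref[j - 1]),
                              H[i - 1, j] + gap,
                              H[i, j - 1] + gap)
        return H.max()

    print(sw_bisulfite("TTGA", "CTGA"))   # scores as a perfect 4-base local match
    ```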

  9. Rehabilitation needs and participation restriction in patients with cognitive disorder in the chronic phase of traumatic brain injury

    PubMed Central

    Sashika, Hironobu; Takada, Kaoruko; Kikuchi, Naohisa

    2017-01-01

    The purpose of this study was to clarify psychosocial factors/problems, social participation, quality of life (QOL), and rehabilitation needs in chronic-phase traumatic brain injury (TBI) patients with cognitive disorder discharged from a level-1 trauma center (L1-TC), and to examine the effects of rehabilitation intervention in these subjects. A mixed-method study (cross-sectional and qualitative) was conducted at an outpatient rehabilitation department. Inclusion criteria were transfer to the L1-TC due to TBI; acute-stage rehabilitation treatment received in the L1-TC from November 2006 to October 2011; age of ≥18 and <70 years at the time of injury; and a score of 0–3 on the Modified Rankin Scale at discharge, or of 4–5 due to physical or severe aggressive behavioral comorbid disorders. Study details were sent, via mail, to 84 suitable candidates, of whom 36 replied. Thirty-one subjects (median age: 33.4 years; male: 17; average time since injury: 48.1 months) consented to participate in the study. Cognitive function, social participation, QOL, psychosocial factors/problems, rehabilitation needs, and chronic-phase rehabilitation outcomes were evaluated using the Wechsler Adult Intelligence Scale, Third Edition, the Wechsler Memory Scale-Revised, the Zung Self-Rating Depression Scale, the Sydney Psychosocial Reintegration Scale, Version 2, the Short Form 36, Version 2, qualitative analysis of semistructured interviews, etc. Participants were classified into achieved-social-participation (n = 11; employed: 8), difficult-social-participation (n = 12; unemployed: 8), and no-cognitive-dysfunction groups (n = 8; no social participation restriction). Relative to the achieved-social-participation group, the difficult-social-participation group showed greater injury severity and cognitive dysfunction and lower Sydney Psychosocial Reintegration Scale and Short Form 36 role/social component summary scores (64.9/49.1 vs 44.3/30.4, respectively, P < 0.05). Linear regression analysis showed that social participation status was strongly affected by later cognitive disorders and psychosocial factors/problems rather than by the severity of TBI. No changes were observed in these scores following chronic-phase rehabilitation intervention. Chronic-phase TBI with cognitive disorder led to rehabilitation needs, and improvement of subjects' psychosocial problems and QOL was difficult. PMID:28121947

  10. Rehabilitation needs and participation restriction in patients with cognitive disorder in the chronic phase of traumatic brain injury.

    PubMed

    Sashika, Hironobu; Takada, Kaoruko; Kikuchi, Naohisa

    2017-01-01

    The purpose of this study was to clarify psychosocial factors/problems, social participation, quality of life (QOL), and rehabilitation needs in chronic-phase traumatic brain injury (TBI) patients with cognitive disorder discharged from a level-1 trauma center (L1-TC), and to examine the effects of rehabilitation intervention in these subjects. A mixed-method study (cross-sectional and qualitative) was conducted at an outpatient rehabilitation department. Inclusion criteria were transfer to the L1-TC due to TBI; acute-stage rehabilitation treatment received in the L1-TC from November 2006 to October 2011; age of ≥18 and <70 years at the time of injury; and a score of 0-3 on the Modified Rankin Scale at discharge, or of 4-5 due to physical or severe aggressive behavioral comorbid disorders. Study details were sent, via mail, to 84 suitable candidates, of whom 36 replied. Thirty-one subjects (median age: 33.4 years; male: 17; average time since injury: 48.1 months) consented to participate in the study. Cognitive function, social participation, QOL, psychosocial factors/problems, rehabilitation needs, and chronic-phase rehabilitation outcomes were evaluated using the Wechsler Adult Intelligence Scale, Third Edition, the Wechsler Memory Scale-Revised, the Zung Self-Rating Depression Scale, the Sydney Psychosocial Reintegration Scale, Version 2, the Short Form 36, Version 2, qualitative analysis of semistructured interviews, etc. Participants were classified into achieved-social-participation (n = 11; employed: 8), difficult-social-participation (n = 12; unemployed: 8), and no-cognitive-dysfunction groups (n = 8; no social participation restriction). Relative to the achieved-social-participation group, the difficult-social-participation group showed greater injury severity and cognitive dysfunction and lower Sydney Psychosocial Reintegration Scale and Short Form 36 role/social component summary scores (64.9/49.1 vs 44.3/30.4, respectively, P < 0.05). Linear regression analysis showed that social participation status was strongly affected by later cognitive disorders and psychosocial factors/problems rather than by the severity of TBI. No changes were observed in these scores following chronic-phase rehabilitation intervention. Chronic-phase TBI with cognitive disorder led to rehabilitation needs, and improvement of subjects' psychosocial problems and QOL was difficult.

  11. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds.

    PubMed

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei

    2018-04-13

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geostatistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the 150 m × 150 m scale, the areas of minor, medium, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
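
    The subsampling design is easy to sketch: take every s-th point of the fine grid (s = 2 corresponds to 150 m → 300 m spacing, and so on) and track how the mean and coefficient of variation respond. The field below is synthetic, not the Houzhai data.

    ```python
    # Subsample a fine grid at coarser spacings and report mean and CV.
    import numpy as np

    def coarsen_stats(field, steps):
        out = {}
        for s in steps:
            sub = field[::s, ::s]                          # every s-th grid point
            out[s] = (sub.mean(), sub.std() / sub.mean())  # (mean, CV)
        return out

    rng = np.random.default_rng(3)
    fine = np.clip(rng.gamma(2.0, 10.0, size=(120, 120)), 0, 100)  # % rock exposure
    for step, (m, cv) in coarsen_stats(fine, steps=[1, 2, 3, 4, 5, 6]).items():
        print(f"spacing x{step}: mean={m:.1f}%  CV={cv:.2f}")
    ```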

  12. Puberty suppression in adolescents with gender identity disorder: a prospective follow-up study.

    PubMed

    de Vries, Annelou L C; Steensma, Thomas D; Doreleijers, Theo A H; Cohen-Kettenis, Peggy T

    2011-08-01

    Puberty suppression by means of gonadotropin-releasing hormone analogues (GnRHa) is used for young transsexuals between 12 and 16 years of age. The purpose of this intervention is to relieve the suffering caused by the development of secondary sex characteristics and to provide time to make a balanced decision regarding actual gender reassignment. To compare psychological functioning and gender dysphoria before and after puberty suppression in gender dysphoric adolescents. In the first 70 eligible candidates who received puberty suppression between 2000 and 2008, psychological functioning and gender dysphoria were assessed twice: at T0, when attending the gender identity clinic, before the start of GnRHa; and at T1, shortly before the start of cross-sex hormone treatment. Behavioral and emotional problems (Child Behavior Checklist and the Youth Self-Report), depressive symptoms (Beck Depression Inventory), anxiety and anger (the Spielberger Trait Anxiety and Anger Scales), general functioning (the clinician-rated Children's Global Assessment Scale), gender dysphoria (the Utrecht Gender Dysphoria Scale), and body satisfaction (the Body Image Scale) were assessed. Behavioral and emotional problems and depressive symptoms decreased, while general functioning improved significantly during puberty suppression. Feelings of anxiety and anger did not change between T0 and T1. While changes over time were equal for both sexes, compared with natal males, natal females were older when they started puberty suppression and showed more problem behavior at both T0 and T1. Gender dysphoria and body satisfaction did not change between T0 and T1. No adolescent withdrew from puberty suppression, and all started cross-sex hormone treatment, the first step of actual gender reassignment. Puberty suppression may be considered a valuable contribution to the clinical management of gender dysphoria in adolescents. © 2010 International Society for Sexual Medicine.

  13. Relationship of corporal punishment and antisocial behavior by neighborhood.

    PubMed

    Grogan-Kaylor, Andrew

    2005-10-01

    To examine the relationship of corporal punishment with children's behavior problems while accounting for neighborhood context and while using stronger statistical methods than previous literature in this area, and to examine whether different levels of corporal punishment have different effects in different neighborhood contexts. Longitudinal cohort study. General community. 1943 mother-child pairs from the National Longitudinal Survey of Youth. Internalizing and externalizing behavior problem scales of the Behavior Problems Index. Parental use of corporal punishment was associated with a 0.71 increase (P<.05) in children's externalizing behavior problems even when several parenting behaviors, neighborhood quality, and all time-invariant variables were accounted for. The association of corporal punishment and children's externalizing behavior problems was not dependent on neighborhood context. The research found no discernible relationship between corporal punishment and internalizing behavior problems.

  14. Solute-defect interactions in Al-Mg alloys from diffusive variational Gaussian calculations

    NASA Astrophysics Data System (ADS)

    Dontsova, E.; Rottler, J.; Sinclair, C. W.

    2014-11-01

    Resolving atomic-scale defect topologies and energetics with accurate atomistic interaction models provides access to the nonlinear phenomena inherent at atomic length and time scales. Coarse graining the dynamics of such simulations to look at the migration of, e.g., solute atoms, while retaining the rich atomic-scale detail required to properly describe defects, is a particular challenge. In this paper, we present an adaptation of the recently developed "diffusive molecular dynamics" model to describe the energetics and kinetics of binary alloys on diffusive time scales. The potential of the technique is illustrated by applying it to the classic problems of solute segregation to a planar boundary (stacking fault) and edge dislocation in the Al-Mg system. Our approach provides fully dynamical solutions in situations with an evolving energy landscape in a computationally efficient way, where atomistic kinetic Monte Carlo simulations are difficult or impractical to perform.

  15. Parallel-In-Time For Moving Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High-performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  16. Development of a nursing care problems coping scale for male caregivers for people with dementia living at home.

    PubMed

    Nishio, Midori; Ono, Mitsu

    2015-01-01

    The number of male caregivers has increased, but male caregivers face several problems that reduce their quality of life and worsen their psychological condition. This study focused on the coping problems of men who care for people with dementia at home. It aimed to develop a coping scale for male caregivers so that they can continue caring for people with dementia at home and improve their own quality of life. The study also aimed to verify the reliability and validity of the scale. The subjects were 759 men caring for people with dementia at home. The Care Problems Coping Scale consists of 21 questions based on elements of questions extracted from a pilot study. Additionally, subjects completed three self-administered questionnaires: the Japanese version of the Zarit Caregiver Burden Scale, the Depressive Symptoms and Self-esteem Emotional Scale, and the Rosenberg Self-Esteem Scale. There were 274 valid responses (36.1% response rate). Regarding the answer distribution, the average value of each of the 21 items ranged from 1.56 to 2.68. The median of the answer distribution of the 21 items was 39 (SD = 6.6). Five items had a ceiling effect, and two items had a floor effect. The scale stability was about 50%, and Cronbach's α was 0.49. There were significant correlations between the Care Problems Coping Scale and the total scores of the Japanese version of the Zarit Caregiver Burden Scale, the Depressive Symptoms and Self-esteem Emotional Scale, and the Rosenberg Self-Esteem Scale. The answers provided on the Care Problems Coping Scale questionnaire indicated that male caregivers experience care problems. In terms of validity, there were significant correlations between the external questionnaires and 19 of the 21 items in this scale. This scale can therefore be used to measure coping problems for male caregivers who care for people with dementia at home.

  17. Negative urgency partially accounts for the relationship between major depressive disorder and marijuana problems.

    PubMed

    Gunn, Rachel L; Jackson, Kristina M; Borsari, Brian; Metrik, Jane

    2018-01-01

    The goal of this study was to better understand mechanisms underlying associations between Major Depressive Disorder (MDD) and marijuana use and problems. Specifically, it was hypothesized that negative urgency (NU), the tendency to act rashly while experiencing negative mood states, would uniquely (compared to other impulsivity traits: positive urgency, sensation seeking, premeditation, and perseverance) account for the relationship between MDD and marijuana use and problems. Data were collected from a sample (N = 357) of veterans (mean age = 33.63) recruited from a Veterans Affairs hospital who had used marijuana at least once in their lifetime. Participants completed the SCID-NP to assess MDD, a marijuana problems scale, a Time-Line Follow-Back to assess six-month marijuana use, and the UPPS-P Impulsive Behavior Scale for impulsivity. Path analysis was conducted using bootstrapped (k = 20,000) and bias-corrected 95% confidence intervals (CIs) to estimate mediation (indirect) effects, controlling for age, sex, and race. Analyses revealed a significant direct effect of MDD on NU and of NU on marijuana problems. Regarding the mediational analyses, there was a significant indirect effect of MDD on marijuana problems via NU. The direct effect of MDD on marijuana problems was reduced, but remained significant, suggesting partial mediation. No other impulsivity scales accounted for the relationship between MDD and marijuana problems. In predicting marijuana use, there were no significant indirect effects for any impulsivity traits, including NU, despite significant bivariate associations between use and both NU and MDD. Results suggest that high levels of NU may partially explain associations between MDD and marijuana problems, but not marijuana use. No other facets of impulsivity accounted for the relationship between MDD and marijuana use or problems, underscoring the specificity of NU as a putative mechanism and the importance of assessing NU in treatment settings.

  18. Scalable implicit incompressible resistive MHD with stabilized FE and fully-coupled Newton–Krylov-AMG

    DOE PAGES

    Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; ...

    2016-02-10

    The computational solution of the governing balance equations for mass, momentum, heat transfer and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena, as well as the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton–Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order-of-accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems that include MHD duct flows, an unstable hydromagnetic Kelvin–Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results that explore the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.

  19. Towards a rigorous mesoscale modeling of reactive flow and transport in an evolving porous medium and its applications to soil science

    NASA Astrophysics Data System (ADS)

    Ray, Nadja; Rupp, Andreas; Knabner, Peter

    2016-04-01

    Soil is arguably the most prominent example of a natural porous medium, composed of a porous matrix and a pore space. Within this framework, and in terms of soil's heterogeneity, we first consider transport and fluid flow at the pore scale. From there, we develop a mechanistic model and upscale it mathematically to transfer our model from the small scale to the mesoscale (laboratory scale). The mathematical framework of (periodic) homogenization in principle rigorously facilitates such processes by exactly computing the effective coefficients/parameters by means of the pore geometry and processes. In our model, various small-scale soil processes may be taken into account: molecular diffusion, convection, drift emerging from electric forces, and homogeneous reactions of chemical species in a solvent. Additionally, our model may consider heterogeneous reactions at the porous matrix, thus altering both the porosity and the matrix. Moreover, our model may additionally address biophysical processes, such as the growth of biofilms and how this affects the shape of the pore space. Both of the latter processes result in an intrinsically variable soil structure in space and time. Upscaling such models under the assumption of a locally periodic setting must be performed meticulously to preserve information regarding the complex coupling of processes in the evolving heterogeneous medium. Generally, a micro-macro model emerges that is comprised of several levels of couplings: macroscopic equations that describe the transport and fluid flow at the scale of the porous medium (mesoscale) include averaged time- and space-dependent coefficient functions. These functions may be explicitly computed by means of auxiliary cell problems (microscale). Finally, the pore space in which the cell problems are defined is time- and space-dependent, and its geometry inherits information from the transport equation's solutions. Numerical computations using mixed finite elements and potentially random initial data, e.g., for porosity, complement our theoretical results. Our investigations contribute to the theoretical understanding of the link between soil formation and soil functions. This general framework may be applied to various problems in soil science across a range of scales, such as the formation and turnover of microaggregates or soil remediation.

  20. Scaling and dimensional analysis of acoustic streaming jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moudjed, B.; Botton, V.; Henry, D.

    2014-09-15

    This paper focuses on acoustic streaming free jets; that is, progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations, in the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem, is made on the basis of a time-scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbation expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem, though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; both are observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.

  1. Bullying, sleep/wake patterns and subjective sleep disorders: findings from a cross-sectional survey.

    PubMed

    Kubiszewski, Violaine; Fontaine, Roger; Potard, Catherine; Gimenes, Guillaume

    2014-05-01

    The aim of this study was to explore: (a) sleep patterns and disorders possibly associated with adolescent bullying profiles (pure bully, pure victim, bully/victim and neutral) and (b) the effect of sleep on psychosocial problems (externalized and internalized) related to bullying. The sample consisted of 1422 students aged 10-18 (mean = 14.3, SD = 2.7; 57% male) from five socioeconomically diverse schools in France. Bullying profiles were obtained using the revised Bully-Victim Questionnaire. Subjective sleep disorders were assessed using the Athens Insomnia Scale. School-week and weekend sleep/wake patterns were recorded. Internalizing problems were investigated using a Perceived Social Disintegration Scale and a Psychological Distress Scale. Externalizing behaviors were assessed using a General Aggressiveness Scale and an Antisocial Behavior Scale. These questionnaires were administered during individual interviews at school. After controlling for effects of gender and age, victims of bullying showed significantly more subjective sleep disturbances than the pure-bully or neutral groups (p < 0.001). Bullies' sleep schedules were more irregular (p < 0.001 for bedtime irregularity and p < 0.01 for wake-up time irregularity) and their sleep duration was shorter than that of their schoolmates (p < 0.001 for the school week and p < 0.05 for the weekend). There was an effect of sleep on psychosocial problems related to bullying, and our results indicate that sleep has a moderating effect on aggression in bullies (p < 0.001). This would suggest a higher vulnerability of bullies to sleep deprivation. These results show differences in sleep problems and patterns across school-bullying profiles. Findings of this study open up new perspectives for understanding and preventing bullying in schools, with implications for research and clinical applications.

  2. Naturalness of Electroweak Symmetry Breaking while Waiting for the LHC

    NASA Astrophysics Data System (ADS)

    Espinosa, J. R.

    2007-06-01

    After revisiting the hierarchy problem of the Standard Model and its implications for the scale of New Physics, I consider the fine-tuning problem of electroweak symmetry breaking in several scenarios beyond the Standard Model: SUSY, Little Higgs and "improved naturalness" models. The main conclusions are that: New Physics should appear within the reach of the LHC; some SUSY models can solve the hierarchy problem with acceptable residual tuning; Little Higgs models generically suffer from large tunings, often hidden; and, finally, that "improved naturalness" models do not generically improve the naturalness of the SM.

  3. Development and Initial Psychometric Evaluation of the Sport Interference Checklist

    ERIC Educational Resources Information Center

    Donohue, Brad; Silver, N. Clayton; Dickens, Yani; Covassin, Tracey; Lancer, Kevin

    2007-01-01

    The Sport Interference Checklist (SIC) was developed in 141 athletes to assist in the concurrent assessment of cognitive and behavioral problems experienced by athletes in both training (Problems in Sports Training Scale, PSTS) and competition (Problems in Sports Competition Scale, PSCS). An additional scale (Desire for Sport Psychology Scale,…

  4. The asymptotic homogenization elasticity tensor properties for composites with material discontinuities

    NASA Astrophysics Data System (ADS)

    Penta, Raimondo; Gerisch, Alf

    2017-01-01

    The classical asymptotic homogenization approach for linear elastic composites with discontinuous material properties is considered as a starting point. The sharp length scale separation between the fine periodic structure and the whole material formally leads to anisotropic elastic-type balance equations on the coarse scale, where the arising fourth rank operator is to be computed solving single periodic cell problems on the fine scale. After revisiting the derivation of the problem, which here explicitly points out how the discontinuity in the individual constituents' elastic coefficients translates into stress jump interface conditions for the cell problems, we prove that the gradient of the cell problem solution is minor symmetric and that its cell average is zero. This property holds for perfect interfaces only (i.e., when the elastic displacement is continuous across the composite's interface) and can be used to assess the accuracy of the computed numerical solutions. These facts are further exploited, together with the individual constituents' elastic coefficients and the specific form of the cell problems, to prove a theorem that characterizes the fourth rank operator appearing in the coarse-scale elastic-type balance equations as a composite material effective elasticity tensor. We both recover known facts, such as minor and major symmetries and positive definiteness, and establish new facts concerning the Voigt and Reuss bounds. The latter are shown for the first time without assuming any equivalence between coarse and fine-scale energies (Hill's condition), which, in contrast to the case of representative volume elements, does not identically hold in the context of asymptotic homogenization. We conclude with instructive three-dimensional numerical simulations of a soft elastic matrix with an embedded cubic stiffer inclusion to show the profile of the physically relevant elastic moduli (Young's and shear moduli) and Poisson's ratio at increasing (up to 100%) inclusion volume fraction, thus providing a proxy for the design of artificial elastic composites.
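
    As a concrete illustration of the Voigt and Reuss bounds discussed above, the following sketch computes the classical scalar versions (arithmetic and harmonic averages) for a two-phase composite. The paper's results concern the full fourth-rank tensor; this is only the simplest special case, with illustrative moduli.

```python
def voigt_reuss_bounds(moduli, fractions):
    """Scalar Voigt (arithmetic) and Reuss (harmonic) averages of phase moduli.

    The tensorial bounds proved in the paper reduce to these classical
    expressions for a scalar modulus; fractions must sum to 1.
    """
    v = sum(f * m for m, f in zip(moduli, fractions))        # Voigt upper bound
    r = 1.0 / sum(f / m for m, f in zip(moduli, fractions))  # Reuss lower bound
    return r, v

# Soft matrix (1 GPa) with a stiff inclusion (100 GPa) at 30% volume fraction
print(voigt_reuss_bounds([1.0, 100.0], [0.7, 0.3]))  # ~ (1.42, 30.7) GPa
```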

  5. Long-term wave measurements in a climate change perspective.

    NASA Astrophysics Data System (ADS)

    Pomaro, Angela; Bertotti, Luciana; Cavaleri, Luigi; Lionello, Piero; Portilla-Yandun, Jesus

    2017-04-01

    At present, multi-decadal time series of wave data needed for climate studies are generally provided by long-term model simulations (hindcasts) covering the area of interest. Examples, among many, at different scales are wave hindcasts adopting the wind fields of the ERA-Interim reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF, Reading, U.K.) at the global level and regional re-analyses such as for the Mediterranean Sea (Lionello and Sanna, 2006). Valuable as they are, these estimates are necessarily affected by the approximations involved, the more so because of the problems encountered when modelling small basins using coarse-resolution wind fields (Cavaleri and Bertotti, 2004). On the contrary, multi-decadal observed time series are rare. They have the evident advantage of representing the real evolution of the waves, without the shortcomings associated with the limitations of models in reproducing the actual processes and the real variability within the wave fields. Obviously, observed wave time series are not exempt from problems. They represent very local information, hence their use to describe the wave evolution at large scale is sometimes arguable and, in general, needs the support of model simulations assessing to what extent the local value is representative of a large-scale evolution. Local effects may prevent the identification of trends that are indeed present at large scale. Moreover, regular maintenance, accurate monitoring and metadata information are crucial issues when considering the reliability of a time series for climate applications. Of course, where available, especially if spanning several decades, measured data are of great value and can provide valuable clues for delving further into the physics of the processes of interest, especially since waves, as an integrated product of the local climate, can provide compact and meaningful information when measured in an area sensitive to even limited changes of the large-scale pattern. In addition, the availability for the area of interest of a 20-year-long dataset of directional spectra (in frequency and direction) offers an independent, theoretically corresponding and significantly long dataset, allowing the wave problem to be examined from different perspectives. In particular, we investigate the contribution of the individual wave systems that modulate the variability of waves in the Adriatic Sea. A characterization of wave conditions based on wave spectra in fact brings out a more detailed description of the different wave regimes, their associated meteorological conditions and their variation in time and geographical space.

  6. Modeling Time-Dependent Behavior of Concrete Affected by Alkali Silica Reaction in Variable Environmental Conditions.

    PubMed

    Alnaggar, Mohammed; Di Luzio, Giovanni; Cusatis, Gianluca

    2017-04-28

    Alkali Silica Reaction (ASR) is known to be a serious problem for concrete worldwide, especially in high humidity and high temperature regions. ASR is a slow process that develops over years to decades and it is influenced by changes in environmental and loading conditions of the structure. The problem becomes even more complicated if one recognizes that other phenomena like creep and shrinkage are coupled with ASR. This results in synergistic mechanisms that cannot be easily understood without a comprehensive computational model. In this paper, coupling between creep, shrinkage and ASR is modeled within the Lattice Discrete Particle Model (LDPM) framework. In order to achieve this, a multi-physics formulation is used to compute the evolution of temperature, humidity, cement hydration, and ASR in both space and time, which is then used within physics-based formulations of cracking, creep and shrinkage. The overall model is calibrated and validated on the basis of experimental data available in the literature. Results show that even during free expansions (zero macroscopic stress), a significant degree of coupling exists because ASR induced expansions are relaxed by meso-scale creep driven by self-equilibrated stresses at the meso-scale. This explains and highlights the importance of considering ASR and other time-dependent aging and deterioration phenomena at an appropriate length scale in coupled modeling approaches.

  7. Modeling Time-Dependent Behavior of Concrete Affected by Alkali Silica Reaction in Variable Environmental Conditions

    PubMed Central

    Alnaggar, Mohammed; Di Luzio, Giovanni; Cusatis, Gianluca

    2017-01-01

    Alkali Silica Reaction (ASR) is known to be a serious problem for concrete worldwide, especially in high humidity and high temperature regions. ASR is a slow process that develops over years to decades and it is influenced by changes in environmental and loading conditions of the structure. The problem becomes even more complicated if one recognizes that other phenomena like creep and shrinkage are coupled with ASR. This results in synergistic mechanisms that cannot be easily understood without a comprehensive computational model. In this paper, coupling between creep, shrinkage and ASR is modeled within the Lattice Discrete Particle Model (LDPM) framework. In order to achieve this, a multi-physics formulation is used to compute the evolution of temperature, humidity, cement hydration, and ASR in both space and time, which is then used within physics-based formulations of cracking, creep and shrinkage. The overall model is calibrated and validated on the basis of experimental data available in the literature. Results show that even during free expansions (zero macroscopic stress), a significant degree of coupling exists because ASR induced expansions are relaxed by meso-scale creep driven by self-equilibrated stresses at the meso-scale. This explains and highlights the importance of considering ASR and other time-dependent aging and deterioration phenomena at an appropriate length scale in coupled modeling approaches. PMID:28772829

  8. Large-Scale CTRW Analysis of Push-Pull Tracer Tests and Other Transport in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Berkowitz, B.

    2014-12-01

    Recently, we developed an alternative CTRW formulation which uses a "latching" upscaling scheme to rigorously map continuous or fine-scale stochastic solute motion onto discrete transitions on an arbitrarily coarse lattice (with spacing potentially on the meter scale or more). This approach enables model simplification, among many other things. Under advection, for example, we see that many relevant anomalous transport problems may be mapped into 1D, with latching to a sequence of successive, uniformly spaced planes. In this formulation (which we term RP-CTRW), the spatial transition vector may generally be made deterministic, with CTRW waiting time distributions encapsulating all the stochastic behavior. We demonstrate the excellent performance of this technique alongside Pareto-distributed waiting times in explaining experiments across a variety of scales using only two degrees of freedom. An interesting new application of the RP-CTRW technique is the analysis of radial (push-pull) tracer tests. Given modern computational power, random walk simulations are a natural fit for the inverse problem of inferring subsurface parameters from push-pull test data, and we propose them as an alternative to the classical type curve approach. In particular, we explore the visibility of heterogeneity through non-Fickian behavior in push-pull tests, and illustrate the ability of a radial RP-CTRW technique to encapsulate this behavior using a sparse parameterization which has predictive value.
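
    A minimal sketch of the kind of walk the RP-CTRW formulation describes: deterministic unit steps between successive planes with Pareto-distributed waiting times. The parameters (alpha, t0) are illustrative, not fitted values from the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw_arrival_times(n_planes, alpha=0.8, t0=1.0):
    """First-passage times at successive planes for a 1D CTRW with
    deterministic unit spatial transitions and Pareto(alpha) waiting times.
    alpha < 1 gives the heavy tails associated with anomalous transport.
    """
    # Pareto waiting times via inverse-CDF sampling: t = t0 * U**(-1/alpha)
    waits = t0 * rng.random(n_planes) ** (-1.0 / alpha)
    return np.cumsum(waits)

# Arrival-time statistics over many walkers at the 10th plane
arrivals = np.array([ctrw_arrival_times(10)[-1] for _ in range(10000)])
print("median arrival:", np.median(arrivals))  # mean diverges for alpha <= 1
```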

  9. Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology

    NASA Astrophysics Data System (ADS)

    Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki

    2017-03-01

    Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, modeling interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method in renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
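
    The energy-minimization idea can be illustrated with a much simpler greedy scheme than the authors' graph-based solver. The sketch below minimizes a Potts-style energy (data term plus neighbor-disagreement penalty) by iterated conditional modes on a toy grayscale image; it is a stand-in for, not a reproduction of, the paper's method, and uses periodic boundaries via np.roll for brevity.

```python
import numpy as np

def potts_icm(img, k=3, beta=0.5, iters=10, seed=0):
    """Greedy iterated-conditional-modes minimization of a Potts-style energy
    E = sum_i (img_i - mu_{s_i})**2 + beta * sum_{<i,j>} [s_i != s_j].
    """
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, img.shape)
    for _ in range(iters):
        # class means under the current labeling (inf if a class is empty)
        mu = np.array([img[labels == c].mean() if (labels == c).any() else np.inf
                       for c in range(k)])
        # per-pixel cost of each candidate label: data term ...
        cost = (img[None, :, :] - mu[:, None, None]) ** 2
        # ... plus the Potts term: count disagreeing 4-neighbors (periodic)
        for c in range(k):
            same = np.zeros_like(img)
            for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                same += (np.roll(labels, shift, axis=axis) != c)
            cost[c] += beta * same
        labels = cost.argmin(axis=0)
    return labels

# Toy image: two intensity plateaus plus noise
rng = np.random.default_rng(1)
img = np.where(np.arange(64)[None, :] < 32, 0.2, 0.8) \
      + 0.05 * rng.normal(size=(64, 64))
print(np.unique(potts_icm(img, k=2)))
```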

  10. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    NASA Astrophysics Data System (ADS)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometers resolution and is quickly turning into a new technology for studying the petrophysical properties of materials. An important step is the upscaling of these properties, since imaging at micron or sub-micron resolution can only be performed on samples of millimeter scale or less. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples, biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and their subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at pore-scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at pore-scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from a research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.

  11. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute-force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem expressed as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
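
    One common way to hand a weighted k-clique instance to a quantum annealer is to encode it as a QUBO. The sketch below builds such a matrix (the penalty constants A and B, the toy graph, and its weights are all illustrative) and checks it by brute force, which is only feasible at this tiny size.

```python
import itertools
import numpy as np

def kclique_qubo(weights, edges, k, A=10.0, B=10.0):
    """QUBO for the weighted k-clique problem (one common formulation):
    penalty A enforces exactly k chosen vertices, penalty B forbids
    choosing non-adjacent pairs, and vertex weights are rewarded.
    Returns Q such that the energy is x^T Q x (constant A*k^2 dropped).
    """
    n = len(weights)
    Q = np.zeros((n, n))
    # (k - sum x)^2 = k^2 - (2k - 1) sum_i x_i + 2 sum_{i<j} x_i x_j
    for i in range(n):
        Q[i, i] += A * (1 - 2 * k) - weights[i]
        for j in range(i + 1, n):
            Q[i, j] += 2 * A
            if (i, j) not in edges and (j, i) not in edges:
                Q[i, j] += B      # penalize non-edges
    return Q

def brute_force(Q):
    n = Q.shape[0]
    return min((np.array(x) @ Q @ np.array(x), x)
               for x in itertools.product((0, 1), repeat=n))

# 5 vertices; weights and edges hypothetical; look for the best 3-clique
weights = [1.0, 2.0, 3.0, 1.5, 2.5]
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4), (1, 4)}
energy, x = brute_force(kclique_qubo(weights, edges, k=3))
print(x)  # maximum-weight triangle: vertices 1, 2, 4
```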

  12. Coordinated control of active and reactive power of distribution network with distributed PV cluster via model predictive control

    NASA Astrophysics Data System (ADS)

    Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng

    2018-02-01

    A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear, and therefore hard to solve directly, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and the effectiveness of the proposed control method.
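
    A generic sketch of the multi-step, second-order-cone flavor of such a controller is shown below using cvxpy. The voltage-sensitivity model, horizon, and limits are hypothetical toy values; the paper's 33-bus network model is not reproduced here.

```python
import cvxpy as cp
import numpy as np

# Toy receding-horizon problem with a second-order cone constraint: choose
# active/reactive injections (p, q) over a horizon T so that a linearized
# voltage deviation is driven toward zero while sqrt(p^2 + q^2) <= s_max.
# All numbers are illustrative, not the paper's model.
T, s_max = 6, 1.0
a_p, a_q = 0.05, 0.08      # hypothetical voltage sensitivities
v0 = 0.04                  # initial voltage deviation (p.u.)

p, q = cp.Variable(T), cp.Variable(T)
v = v0 - cp.cumsum(a_p * p + a_q * q)       # multi-step predicted deviation

constraints = [cp.SOC(s_max * np.ones(T), cp.vstack([p, q]))]  # apparent power
objective = cp.Minimize(cp.sum_squares(v)
                        + 1e-3 * (cp.sum_squares(p) + cp.sum_squares(q)))
cp.Problem(objective, constraints).solve()
print("first-step dispatch:", float(p.value[0]), float(q.value[0]))
# In closed loop only the first step is applied; the problem is re-solved
# at the next sampling instant (model predictive control).
```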

  13. Singular perturbation, state aggregation and nonlinear filtering

    NASA Technical Reports Server (NTRS)

    Hijab, O.; Sastry, S.

    1981-01-01

    Consideration is given to a state process evolving in R(n), whose motion is that of a pure jump process in R(n) on the O(1) time scale, upon which is superimposed a continuous motion along the orbits of a gradient-like vector field g in R(n) on the O(1/ε) time scale. The infinitesimal generator of the state process is, in other words, of the form L + (1/ε)g. It follows from the main results presented that the projected filters converge to the finite state Wonham filter corresponding to the problem of estimating the finite state process in the presence of additive white noise.

  14. Multifractality and heteroscedastic dynamics: An application to time series analysis

    NASA Astrophysics Data System (ADS)

    Nascimento, C. M.; Júnior, H. B. N.; Jennings, H. D.; Serva, M.; Gleria, Iram; Viswanathan, G. M.

    2008-01-01

    An increasingly important problem in physics concerns scale invariance symmetry in diverse complex systems, often characterized by heteroscedastic dynamics. We investigate the nature of the relationship between the heteroscedastic and fractal aspects of the dynamics of complex systems, by analyzing the sensitivity to heteroscedasticity of the scaling properties of weakly nonstationary time series. By using multifractal detrended fluctuation analysis, we study the singularity spectra of currency exchange rate fluctuations, after partially or completely eliminating n-point correlations via data shuffling techniques. We conclude that heteroscedasticity can significantly increase multifractality and interpret these findings in the context of self-organizing and adaptive complex systems.
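
    A compact sketch of the q = 2 backbone of the detrended fluctuation analysis mentioned above, together with the shuffling step used to isolate distributional (heteroscedastic) contributions; the series and scales are synthetic.

```python
import numpy as np

def dfa_fluctuation(x, scales, order=1):
    """Detrended fluctuation function F(s); the slope of log F vs log s
    estimates the scaling exponent. The paper uses the multifractal
    generalization (MF-DFA); this shows only the q = 2 backbone.
    """
    profile = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
               for seg in segs]
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)
scales = np.array([16, 32, 64, 128, 256])
h = np.polyfit(np.log(scales), np.log(dfa_fluctuation(x, scales)), 1)[0]
print(f"estimated exponent {h:.2f} (about 0.5 for white noise)")
# Shuffling destroys temporal correlations but keeps the distribution,
# isolating the multifractality due to heteroscedasticity, as in the paper.
h_shuf = np.polyfit(np.log(scales),
                    np.log(dfa_fluctuation(rng.permutation(x), scales)), 1)[0]
```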

  15. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an ongoing problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°×2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°×2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.

  16. Modified relativistic dynamics

    NASA Astrophysics Data System (ADS)

    Qadir, Asghar; Lee, Hyung Won; Kim, Kyoung Yee

    One of the major problems in Cosmology is the fact that there is no good candidate for dark matter in the Standard Model of Particle Physics or any experimentally supported modification of it. At the same time, one of the major problems of General Relativity is that it cannot be unified with Quantum Theory. Here, we present a program to see whether there is not a common source of both problems. The idea is that an interaction term between matter fields and the gravitational field in the total Lagrangian, analogous to that for Electromagnetism, could possibly provide, on the one hand, the dynamical effect for which dark matter is postulated and, on the other, a Quantum Field Theory (QFT) incorporating Gravity that does not have unmanageable divergences. One could first check whether the modified relativistic dynamics, fitted to the dark matter in individual galaxies, also fits systems and clusters of galaxies at all scales. If there is no problem with explaining the dynamics usually attributed to dark matter at all scales, we could check whether it leads to a workable QFT of Relativity.

  17. [Emotional distress in elderly people with heart disease].

    PubMed

    Martínez Santamaría, Emilia; Lameiras Fernández, María; González Lorenzo, Manuel; Rodríguez Castro, Yolanda

    2006-06-30

    To analyse the emotional distress associated with ageing, and its prevalence among elderly people who suffer from heart disease. Personal interviews with elderly people with and without heart problems. Interviews were conducted in public hospitals and old people's homes in the south of Galicia, Spain. The sample was made up of 130 elderly people (65 with heart problems and 65 without). The instruments were the Inventory of Coping Strategies of Holroyd and Reynolds (1984); the Scheier, Carver, and Bridges test (1984); the Life Satisfaction Scale of Diener, Emmons, Larsen, and Griffin (1985); Rosenberg's Self-Esteem Scale (1965); and an instrument to measure associated symptoms (SCL-90; Derogatis, 1975). Elderly people with heart problems experienced greater anxiety and had lower self-esteem than those without such problems. Heart patients also tended to suffer more phobic anxiety and to withdraw from social interaction more. With the passing of time, heart patients over 60 showed more anxiety, irritability and psychosomatic disorders. This study clearly shows the existence of emotional distress in elderly heart patients. This makes it particularly important to conduct risk-prevention programmes, since a lot of heart disease is brought on by unhealthy behaviour.

  18. Effects of prenatal methamphetamine exposure on behavioral and cognitive findings at 7.5 years of age.

    PubMed

    Diaz, Sabrina D; Smith, Lynne M; LaGasse, Linda L; Derauf, Chris; Newman, Elana; Shah, Rizwan; Arria, Amelia; Huestis, Marilyn A; Della Grotta, Sheri; Dansereau, Lynne M; Neal, Charles; Lester, Barry M

    2014-06-01

    To examine child behavioral and cognitive outcomes after prenatal exposure to methamphetamine. We enrolled 412 mother-infant pairs (204 methamphetamine-exposed and 208 unexposed matched comparisons) in the Infant Development, Environment, and Lifestyle study. The 151 children exposed to methamphetamine and 147 comparisons who attended the 7.5-year visit were included. Exposure was determined by maternal self-report and/or positive meconium toxicology. Maternal interviews assessed behavioral and cognitive outcomes using the Conners' Parent Rating Scale-Revised: Short Form. After adjusting for covariates, children exposed to methamphetamine had significantly higher cognitive problems subscale scores than comparisons and were 2.8 times more likely to have cognitive problems scores that were above average on the Conners' Parent Rating Scale-Revised: Short Form. No association between prenatal methamphetamine exposure and behavioral problems, as measured by the oppositional, hyperactivity, and attention-deficit/hyperactivity disorder index subscales, was found. Prenatal methamphetamine exposure was associated with increased cognitive problems, which may affect academic achievement and lead to increased negative behavioral outcomes.

  19. [Knowledge of Emotion Regulation Strategies, Problem Behavior, and Prosocial Behavior in Preschool Age].

    PubMed

    Gust, Nicole; Koglin, Ute; Petermann, Franz

    2015-01-01

    The present study examines the relation between knowledge of emotion regulation strategies and social behavior in preschoolers. Knowledge of emotion regulation strategies of 210 children (mean age 55 months) was assessed. Teachers rated children's social behavior with the Strengths and Difficulties Questionnaire (SDQ). Linear regression analysis examined how knowledge of emotion regulation strategies influenced the social behavior of children. Significant effects of gender on the SDQ scales "prosocial behavior", "hyperactivity", "behavior problems", and the SDQ total problem scale were identified. Age was a significant predictor of the SDQ scales "prosocial behavior", "hyperactivity", "problems with peers" and the SDQ total problem scale. Knowledge of emotion regulation strategies predicted SDQ total problem scores. Results suggest that deficits in knowledge of emotion regulation strategies are linked with increased problem behavior.

  20. SCALE PROBLEMS IN REPORTING LANDSCAPE PATTERN AT THE REGIONAL SCALE

    EPA Science Inventory

    Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the g...

  1. Principles for problem aggregation and assignment in medium scale multiprocessors

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs are studied between balanced workload and the communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.

  2. A minimal scale invariant axion solution to the strong CP-problem

    NASA Astrophysics Data System (ADS)

    Tokareva, Anna

    2018-05-01

    We present a scale-invariant extension of the Standard Model allowing for the Kim-Shifman-Vainshtein-Zakharov (KSVZ) axion solution of the strong CP problem in QCD. We add the minimal number of new particles and show that the Peccei-Quinn scalar might be identified with the complex dilaton field. Scale invariance, together with the Peccei-Quinn symmetry, is broken spontaneously near the Planck scale before inflation, which is driven by the Standard Model Higgs field. We present a set of general conditions which makes this scenario viable and an explicit example of an effective theory possessing spontaneous breaking of scale invariance. We show that this description works both for inflation and for low-energy physics in the electroweak vacuum. This scenario can provide a self-consistent inflationary stage and, at the same time, successfully avoid the cosmological bounds on the axion. Our general predictions are the existence of a colored TeV-mass fermion and the QCD axion. The latter has all the properties of the KSVZ axion but does not contribute to dark matter. This axion can be searched for via its mixing with a photon in an external magnetic field.

  3. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. The Earth Mover's Distance in particular is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large-scale transportation problems in viable time. In addition, we describe a novel method for finding an initial feasible solution, which we coin the Modified Russell's Method.
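
    For orientation, the underlying LP can be written down and solved directly with an off-the-shelf solver; the sketch below does so with scipy's HiGHS backend on a tiny balanced instance. The Shortlist Method accelerates the simplex pivoting itself, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import linprog

def transport_lp(cost, supply, demand):
    """Solve the classical transportation problem (the LP underlying the
    Earth Mover's Distance): minimize sum_ij cost_ij * plan_ij subject to
    row sums = supply and column sums = demand, plan >= 0.
    """
    m, n = cost.shape
    A_eq, b_eq = [], []
    for i in range(m):                 # supply constraints
        row = np.zeros((m, n)); row[i, :] = 1
        A_eq.append(row.ravel()); b_eq.append(supply[i])
    for j in range(n):                 # demand constraints
        col = np.zeros((m, n)); col[:, j] = 1
        A_eq.append(col.ravel()); b_eq.append(demand[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(m, n)

# Balanced toy instance (supplies and demands both sum to 1)
cost = np.array([[4.0, 2.0, 5.0], [3.0, 1.0, 6.0]])
emd, plan = transport_lp(cost, supply=[0.5, 0.5], demand=[0.3, 0.4, 0.3])
print(emd, plan, sep="\n")
```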

  4. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. The Earth Mover's Distance in particular is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large-scale transportation problems in viable time. In addition, we describe a novel method for finding an initial feasible solution, which we coin the Modified Russell's Method. PMID:25310106

  5. Scheduling algorithm for flow shop with two batch-processing machines and arbitrary job sizes

    NASA Astrophysics Data System (ADS)

    Cheng, Bayi; Yang, Shanlin; Hu, Xiaoxuan; Li, Kai

    2014-03-01

    This article considers the problem of scheduling two batch-processing machines in a flow shop where the jobs have arbitrary sizes and the machines have limited capacity. The jobs are processed in batches, and the total size of the jobs in each batch cannot exceed the machine capacity. Once a batch is being processed, no interruption is allowed until all the jobs in it are completed. The problem of minimising makespan is NP-hard in the strong sense. First, we present a mathematical model of the problem as an integer program. We show the scale of the feasible solution space and provide optimality properties. Then, we propose a polynomial-time algorithm with running time O(n log n). The jobs are first assigned to feasible batches and then scheduled on machines. For the general case, we prove that the proposed algorithm has a performance guarantee of 4. For the special case where the processing times of each job on the two machines satisfy p_{1j} = a·p_{2j}, the performance guarantee is ? for a > 0.

  6. Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals

    NASA Astrophysics Data System (ADS)

    Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie

    2018-06-01

    Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension to elasto-viscoplastic polycrystals, efficient and robust Fourier solvers and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.
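
    To make the Fourier-solver ingredient concrete, the sketch below implements a basic Moulinec–Suquet-style fixed-point scheme for the scalar (conductivity) analogue of the cell problem on a periodic pixel grid. The elasto-viscoplastic constitutive model and the FE coupling of the paper are far richer; this is only the bare fixed-point kernel, with an illustrative two-phase microstructure.

```python
import numpy as np

def fft_homogenize(k, E=(1.0, 0.0), tol=1e-8, max_iter=500):
    """Fixed-point FFT solver for the scalar (thermal-conductivity)
    analogue of the periodic unit-cell problem: find the gradient field e
    with prescribed mean E such that the flux q = k(x) e is divergence-free.
    Returns the effective (cell-averaged) flux.
    """
    N = k.shape[0]
    k0 = 0.5 * (k.min() + k.max())             # reference medium
    freq = np.fft.fftfreq(N)
    xi = np.stack(np.meshgrid(freq, freq, indexing="ij"))  # (2, N, N)
    xi2 = (xi ** 2).sum(0); xi2[0, 0] = 1.0    # avoid divide-by-zero at mean
    e = np.zeros((2, N, N)); e[0] += E[0]; e[1] += E[1]
    for _ in range(max_iter):
        q_hat = np.fft.fft2(k * e)             # local flux, Fourier space
        # Green operator of the reference medium applied to the flux
        corr = xi * (xi * q_hat).sum(0) / (k0 * xi2)
        corr[:, 0, 0] = 0.0                    # keep the mean gradient fixed
        e_new = e - np.real(np.fft.ifft2(corr))
        if np.abs(e_new - e).max() < tol:
            e = e_new
            break
        e = e_new
    return (k * e).reshape(2, -1).mean(axis=1)

# Random two-phase microstructure with contrast 10 (illustrative)
rng = np.random.default_rng(0)
k = np.where(rng.random((64, 64)) < 0.5, 1.0, 10.0)
print(fft_homogenize(k))  # first component ~ effective conductivity
```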

  7. Temporal scaling in information propagation.

    PubMed

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-18

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
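
    The power-law decay can be illustrated, and its exponent recovered, with a simple binned log-log fit; the interaction data below are synthetic with an assumed exponent of -0.8, not the website dataset used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic interaction data: time since the latest interaction of each
# (sender, receiver) pair, and whether a message later propagated.
latency = rng.uniform(1.0, 1000.0, size=200_000)          # hours
propagated = rng.random(latency.size) < 0.5 * latency ** -0.8

# Bin by latency and fit log p versus log t; the slope estimates the exponent.
bins = np.logspace(0, 3, 20)
idx = np.digitize(latency, bins)
t_mid, p_hat = [], []
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 100 and propagated[mask].any():
        t_mid.append(np.sqrt(bins[b - 1] * bins[b]))      # geometric center
        p_hat.append(propagated[mask].mean())
slope = np.polyfit(np.log(t_mid), np.log(p_hat), 1)[0]
print(f"estimated decay exponent: {slope:.2f} (true -0.8)")
```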

  8. Temporal scaling in information propagation

    NASA Astrophysics Data System (ADS)

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-01

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.

  9. Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals

    NASA Astrophysics Data System (ADS)

    Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie

    2017-09-01

    Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension to elasto-viscoplastic polycrystals, efficient and robust Fourier solvers and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.

  10. The Systems Revolution

    ERIC Educational Resources Information Center

    Ackoff, Russell L.

    1974-01-01

    The major organizational and social problems of our time do not lend themselves to the reductionism of traditional analytical and disciplinary approaches. They must be attacked holistically, with a comprehensive systems approach. The effective study of large-scale social systems requires the synthesis of science with the professions that use it.…

  11. Simulating the Resonant Acoustic Mixer

    DTIC Science & Technology

    2013-08-02

    [Equations (19), (45) and (46) are garbled in the extracted text and are omitted.] Equation (19) introduces the nondimensionalization, where L, T, L/T and ρ0 are the length, time, velocity and density scales for the problem... The half-index notation used in (45) and (46) allows flexibility in

  12. Scale-invariant fluctuations from Galilean genesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yi; Brandenberger, Robert, E-mail: wangyi@physics.mcgill.ca, E-mail: rhb@physics.mcgill.ca

    2012-10-01

    We study the spectrum of cosmological fluctuations in scenarios such as Galilean Genesis (Nicolis et al.) in which a spectator scalar field acquires a scale-invariant spectrum of perturbations during an early phase which asymptotes in the far past to Minkowski space-time. In the case of minimal coupling to gravity and a standard scalar field Lagrangian, the induced curvature fluctuations depend quadratically on the spectator field and are hence non-scale-invariant and highly non-Gaussian. We show that if higher-dimensional operators (the same operators that lead to the η-problem for inflation) are considered, a linear coupling between background and spectator field fluctuations is induced which leads to scale-invariant and Gaussian curvature fluctuations.

  13. Efficient Storage Scheme of Covariance Matrix during Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Mao, D.; Yeh, T. J.

    2013-12-01

    During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty with the incorporation of observed data. For large-scale problems, its storage and update cost excessive memory and computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can specify how much data to store based on the correlation scales, since data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g., 0.95, every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to experiment. This new scheme is tested with 1D examples first. The estimated results and uncertainty are compared with those of the traditional full-storage method. In the end, a large-scale numerical model is utilized to validate this new scheme.
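
    A minimal sketch of the storage idea, assuming a 1D grid and an exponential covariance model: entries beyond a few correlation lengths are dropped and the matrix is held in CSC form, and the per-iteration update rebuilds off-diagonals with a shrunken correlation length. All parameter values are illustrative.

```python
import numpy as np
from scipy.sparse import csc_matrix

def truncated_exp_covariance(x, sigma2=1.0, corr_len=10.0, cutoff=3.0):
    """Exponential covariance on a 1D grid, dropping entries farther apart
    than `cutoff` correlation lengths and storing the result in Compressed
    Sparse Column form.
    """
    rows, cols, vals = [], [], []
    for i, xi in enumerate(x):
        for j, xj in enumerate(x):
            d = abs(xi - xj)
            if d <= cutoff * corr_len:
                rows.append(i)
                cols.append(j)
                vals.append(sigma2 * np.exp(-d / corr_len))
    n = len(x)
    return csc_matrix((vals, (rows, cols)), shape=(n, n))

x = np.arange(200.0)
C = truncated_exp_covariance(x)
print(C.nnz, "stored entries instead of", C.shape[0] ** 2)

# Per-iteration update in the spirit of the abstract: keep the freshly
# updated diagonal, rebuild off-diagonals with a correlation length
# shrunk by a factor such as 0.95.
C_next = truncated_exp_covariance(x, corr_len=0.95 * 10.0)
```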

  14. LES-based generation of high-frequency fluctuation in wind turbulence obtained by meteorological model

    NASA Astrophysics Data System (ADS)

    Tamura, Tetsuro; Kawaguchi, Masaharu; Kawai, Hidenori; Tao, Tao

    2017-11-01

    The connection between a meso-scale model and a micro-scale large eddy simulation (LES) is essential for simulating micro-scale meteorological problems, such as strong convective events due to typhoons or tornadoes, using LES. In these problems the mean velocity profiles and the mean wind directions change with time according to the movement of the typhoons or tornadoes. However, a fine-grid micro-scale LES cannot be connected directly to a coarse-grid meso-scale WRF. In LES, when the grid is suddenly refined at the interface of nested grids normal to the mean advection, the resolved shear stresses decrease due to interpolation errors and the delayed generation of the smaller-scale turbulence that can be resolved on the finer mesh. For the estimation of wind gust disasters, the peak wind acting on buildings and structures has to be correctly predicted. In the case of meteorological models, the velocity fluctuations tend to vary diffusively, lacking the high-frequency components, due to numerical filtering effects. In order to predict the peak value of the wind velocity with good accuracy, this paper proposes an LES-based method for generating the higher-frequency components of the velocity and temperature fields obtained by a meteorological model.

  15. Cross-scale interactions: Quantifying multi-scaled cause–effect relationships in macrosystems

    USGS Publications Warehouse

    Soranno, Patricia A.; Cheruvelil, Kendra S.; Bissell, Edward G.; Bremigan, Mary T.; Downing, John A.; Fergus, Carol E.; Filstrup, Christopher T.; Henry, Emily N.; Lottig, Noah R.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2014-01-01

    Ecologists are increasingly discovering that ecological processes are made up of components that are multi-scaled in space and time. Some of the most complex of these processes are cross-scale interactions (CSIs), which occur when components interact across scales. When undetected, such interactions may cause errors in extrapolation from one region to another. CSIs, particularly those that include a regional scaled component, have not been systematically investigated or even reported because of the challenges of acquiring data at sufficiently broad spatial extents. We present an approach for quantifying CSIs and apply it to a case study investigating one such interaction, between local and regional scaled land-use drivers of lake phosphorus. Ultimately, our approach for investigating CSIs can serve as a basis for efforts to understand a wide variety of multi-scaled problems such as climate change, land-use/land-cover change, and invasive species.

  16. Prospective associations of depressive rumination and social problem solving with depression: a 6-month longitudinal study.

    PubMed

    Hasegawa, Akira; Hattori, Yosuke; Nishimura, Haruki; Tanno, Yoshihiko

    2015-06-01

    The main purpose of this study was to examine whether depressive rumination and social problem solving are prospectively associated with depressive symptoms. Nonclinical university students (N = 161, 64 men, 97 women; M age = 19.7 yr., SD = 3.6, range = 18-61) recruited from three universities in Japan completed the Beck Depression Inventory-Second Edition (BDI-II), the Ruminative Responses Scale, the Social Problem-Solving Inventory-Revised Short Version (SPSI-R:S), and the Means-Ends Problem-Solving Procedure at baseline, and the BDI-II again 6 mo. later. A stepwise multiple regression analysis with the BDI-II and all subscales of the rumination and social problem solving measures as independent variables indicated that only the BDI-II scores and the Impulsivity/carelessness style subscale of the SPSI-R:S at Time 1 were significantly associated with BDI-II scores at Time 2 (β = 0.73, 0.12, respectively; independent variables accounted for 58.8% of the variance). These findings suggest that in Japan an impulsive and careless problem-solving style was prospectively associated with depressive symptomatology 6 mo. later, as contrasted with previous findings of a cycle of rumination and avoidance problem-solving style.

  17. Stability region maximization by decomposition-aggregation method. [Skylab stability]

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Cuk, S. M.

    1974-01-01

    The aim of this work is to improve estimates of stability regions by formulating and solving a suitable maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability considerations of large-scale linear as well as nonlinear systems.

  18. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    NASA Astrophysics Data System (ADS)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iteration (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for an increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick ones. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practicality of these problems is much lower, further limiting the instances in which it would be beneficial to select the ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of the scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of the number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times. 
Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)
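
    For readers unfamiliar with the block Jacobi idea referenced above, the serial sketch below shows the core iteration: each block (sub-domain) is relaxed independently using values of the other blocks from the previous sweep, which is what makes the scheme naturally parallel. It is a generic linear-algebra illustration, not the ITMM implementation.

        import numpy as np

        def block_jacobi(A, b, nblocks, tol=1e-8, max_iter=500):
            # Solve A x = b: each block solves A_ii x_i = b_i - sum_{j != i} A_ij x_j(old).
            n = A.shape[0]
            blocks = np.array_split(np.arange(n), nblocks)
            x = np.zeros(n)
            for _ in range(max_iter):
                x_new = np.empty_like(x)
                for i in blocks:
                    Aii = A[np.ix_(i, i)]
                    r = b[i] - A[i, :] @ x + Aii @ x[i]   # drop own-block coupling
                    x_new[i] = np.linalg.solve(Aii, r)
                if np.linalg.norm(x_new - x) < tol * np.linalg.norm(b):
                    return x_new
                x = x_new
            return x

        rng = np.random.default_rng(1)
        A = rng.random((200, 200)) + 200.0 * np.eye(200)   # diagonally dominant
        b = rng.random(200)
        x = block_jacobi(A, b, nblocks=8)
        print(np.linalg.norm(A @ x - b))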

  19. Reaching extended length-scales with accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Hubartt, Bradley; Shim, Yunsic; Amar, Jacques

    2012-02-01

    While temperature-accelerated dynamics (TAD) has been quite successful in extending the time-scales for non-equilibrium simulations of small systems, the computational time increases rapidly with system size. One possible solution to this problem, which we refer to as parTAD,^1 is to use spatial decomposition combined with our previously developed semi-rigorous synchronous sublattice algorithm.^2 However, while such an approach leads to significantly better scaling as a function of system size, it also artificially limits the size of activated events and is not completely rigorous. Here we discuss progress we have made in developing an alternative approach in which localized saddle-point searches are combined with parallel GPU-based molecular dynamics in order to improve the scaling behavior. By using this method, along with an adaptive method to determine the optimal high temperature,^3 we have been able to significantly increase the range of time- and length-scales over which accelerated dynamics simulations may be carried out. [1] Y. Shim et al, Phys. Rev. B 76, 205439 (2007); ibid, Phys. Rev. Lett. 101, 116101 (2008). [2] Y. Shim and J.G. Amar, Phys. Rev. B 71, 125432 (2005). [3] Y. Shim and J.G. Amar, J. Chem. Phys. 134, 054127 (2011).

  20. Forecasts, warnings and social response to flash floods: Is temporality a major problem? The case of the September 2005 flash flood in the Gard region (France)

    NASA Astrophysics Data System (ADS)

    Lutoff, C.; Anquetin, S.; Ruin, I.; Chassande, M.

    2009-09-01

    Flash floods are complex phenomena. The atmospheric and hydrological mechanisms generating them are not completely understood, leading to highly uncertain forecasts of, and warnings for, these events. On the other hand, warning and crisis response to such violent and fast events is not a straightforward process. In both the social and the physical aspects of the problem, the space and time scales involved in hydrometeorology, human behavior, and the science of social organizations are of crucial importance. Forecasters, emergency managers, mayors, school superintendents, school transportation managers, first responders, and road users all have different time and space frameworks that they use to make emergency decisions for themselves, their group, or their community. Integrating the space and time scales of both the phenomenon and human activities is therefore necessary to better deal with questions such as forecasting lead time and warning efficiency. The aim of this oral presentation is to focus on the spatio-temporal aspects of flash floods to improve our understanding of the event dynamics in relation to the different scales of the social response. The authors propose a framework of analysis to compare the temporality of: i) the forecasts (from Météo-France and from EFAS (Thielen et al., 2008)), ii) the meteorological and hydrological parameters, and iii) the social response at different scales. The September 2005 event is particularly interesting for such an analysis. The rainfall episode lasted nearly a week, with two distinct phases separated by low-intensity precipitation, so the Météo-France vigilance bulletins were somewhat disconnected from the local flood impacts. Our analysis focuses on the timing of different types of local response, including the delicate issue of school transportation, with regard to the forecasts and the actual dynamics of the event.

  1. Association of screen time with self-perceived attention problems and hyperactivity levels in French students: a cross-sectional study

    PubMed Central

    Guichard, Elie; Kurth, Tobias

    2016-01-01

    Objective To investigate whether high levels of screen time exposure are associated with self-perceived levels of attention problems and hyperactivity in higher education students. Design Cross-sectional study among participants of the i-Share cohort. Setting French-speaking students of universities and higher education institutions. Participants 4816 graduate students who were at least 18 years old. Exposure Screen time was assessed by self-report of the average time spent on five different screen activities on smartphone, television, computer and tablet and categorised into quartiles. Main outcome measure We used the Attention Deficit Hyperactivity Disorder Self-Report Scale (ASRS-v1.1) concerning students' behaviour over the past 6 months to measure self-perceived levels of attention problems and hyperactivity. Responses were summarised into a global score as well as scores for attention problems and hyperactivity. Results The 4816 participants of this study had a mean age of 20.8 years and 75.5% were female. Multivariable ordinal regression models showed significant associations of screen time exposure with quintiles of the total score of self-perceived attention problems and hyperactivity levels as well as the individual domains. Compared with the lowest screen time exposure category, the ORs (95% CI) were 1.58 (1.37 to 1.82) for each increasing quintile of the global score, 1.57 (1.36 to 1.81) for increasing quintiles of attention levels, and 1.25 (1.09 to 1.44) for increasing quintiles of hyperactivity. Conclusions Results of this large cross-sectional study among French university and higher education students show dose-dependent associations between screen time and self-perceived levels of attention problems and hyperactivity. Further studies are warranted to evaluate whether interventions could positively influence these associations. PMID:26920440

  2. Self-similar space-time evolution of an initial density discontinuity

    NASA Astrophysics Data System (ADS)

    Rekaa, V. L.; Pécseli, H. L.; Trulsen, J. K.

    2013-07-01

    The space-time evolution of an initial step-like plasma density variation is studied. We give particular attention to formulating the problem in a way that opens the possibility of realizing the conditions experimentally. After a short transient time interval of the order of the electron plasma period, the solution is self-similar, as illustrated by a video in which the space-time evolution reduces to a function of the ratio x/t. Solutions of this form are usually found for problems without characteristic length and time scales, in our case the quasi-neutral limit. By introducing ion collisions with neutrals into the numerical analysis, we introduce a length scale, the collisional mean free path. We study the breakdown of the self-similarity of the solution as the mean free path is made shorter than the system length. Analytical results are presented for charge-exchange collisions, demonstrating a short-time collisionless evolution with an ensuing long-time diffusive relaxation of the initial perturbation. For large times, we find a diffusion equation as the limiting analytical form for a charge-exchange collisional plasma, with a diffusion coefficient defined as the square of the ion sound speed divided by the (constant) ion collision frequency. The ion-neutral collision frequency acts as a parameter that allows a collisionless result to be obtained in one limit, while the solution of a diffusion equation is recovered in the opposite limit of large collision frequencies.
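
    In the notation of the abstract, the stated long-time limit can be written compactly (with $n$ the plasma density, $c_s$ the ion sound speed, and $\nu_{in}$ the constant ion-neutral collision frequency; the symbol choices are ours):

        $$\partial_t n \;=\; D\,\partial_x^2 n, \qquad D \;=\; \frac{c_s^2}{\nu_{in}}.$$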

  3. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; Mcmanus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
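
    A minimal sketch of the kind of fully explicit update such a scheme builds on, here a 1-D forward-time centred-space heat-conduction step (illustrative values; not the paper's coupled solver). Each new value depends only on old neighbour values, which is why the scheme maps naturally onto massively parallel hardware.

        import numpy as np

        alpha, dx, dt = 1.0e-5, 1.0e-3, 2.0e-2    # diffusivity, grid step, time step
        r = alpha * dt / dx ** 2                   # explicit stability needs r <= 0.5
        assert r <= 0.5

        T = np.full(101, 300.0)                    # initial temperature field [K]
        T[0] = 1000.0                              # fixed heated boundary
        for _ in range(5000):
            T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        print(T[:5])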

  4. K-State Problem Identification Rating Scales for College Students

    ERIC Educational Resources Information Center

    Robertson, John M.; Benton, Stephen L.; Newton, Fred B.; Downey, Ronald G.; Marsh, Patricia A.; Benton, Sheryl A.; Tseng, Wen-Chih; Shin, Kang-Hyun

    2006-01-01

    The K-State Problem Identification Rating Scales, a new screening instrument for college counseling centers, gathers information about clients' presenting symptoms, functioning levels, and readiness to change. Three studies revealed 7 scales: Mood Difficulties, Learning Problems, Food Concerns, Interpersonal Conflicts, Career Uncertainties,…

  5. Side effects of problem-solving strategies in large-scale nutrition science: towards a diversification of health.

    PubMed

    Penders, Bart; Vos, Rein; Horstman, Klasien

    2009-11-01

    Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.

  6. The Effect of Inappropriate Calibration: Three Case Studies in Molecular Ecology

    PubMed Central

    Ho, Simon Y. W.; Saarma, Urmas; Barnett, Ross; Haile, James; Shapiro, Beth

    2008-01-01

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events. PMID:18286172

  8. Dynamic Flow Management Problems in Air Transportation

    NASA Technical Reports Server (NTRS)

    Patterson, Sarah Stock

    1997-01-01

    In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large-scale linear programming problems. Thus, the computation times are reasonably small for large-scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer programming formulation, the solution of which generates feasible and near-optimal routes for individual flights. The algorithm, termed the Lagrangian Generation Algorithm, is used to solve practical problems in the southwestern portion of the United States, in which the solutions are within 1% of the corresponding lower bounds.
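
    A toy version of the deterministic core of such flow-management models, a single-airport ground-holding assignment solved with the PuLP library (flights, periods, and capacities are made up; the thesis' TFMP formulation is far richer):

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

        flights = ["F1", "F2", "F3", "F4"]
        sched = {"F1": 0, "F2": 0, "F3": 1, "F4": 1}    # scheduled departure period
        periods = range(4)
        capacity = {0: 1, 1: 1, 2: 2, 3: 2}             # arrivals allowed per period

        # x[f, t] = 1 if flight f is released in period t (no early departures)
        x = {(f, t): LpVariable(f"x_{f}_{t}", cat=LpBinary)
             for f in flights for t in periods if t >= sched[f]}

        prob = LpProblem("ground_holding", LpMinimize)
        prob += lpSum((t - sched[f]) * x[f, t] for (f, t) in x)   # total delay
        for f in flights:                                          # fly exactly once
            prob += lpSum(x[f, t] for t in periods if (f, t) in x) == 1
        for t in periods:                                          # period capacity
            prob += lpSum(x[f, t] for f in flights if (f, t) in x) <= capacity[t]

        prob.solve()
        plan = {f: next(t for t in periods
                        if (f, t) in x and value(x[f, t]) > 0.5) for f in flights}
        print(plan)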

  9. Assessing the Quality of Problems in Problem-Based Learning

    ERIC Educational Resources Information Center

    Sockalingam, Nachamma; Rotgans, Jerome; Schmidt, Henk

    2012-01-01

    This study evaluated the construct validity and reliability of a newly devised 32-item problem quality rating scale intended to measure the quality of problems in problem-based learning. The rating scale measured the following five characteristics of problems: the extent to which the problem (1) leads to learning objectives, (2) is familiar, (3)…

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  11. High performance computing aspects of a dimension independent semi-Lagrangian discontinuous Galerkin code

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas

    2016-05-01

    The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.
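
    For reference, the scaling metrics reported in such studies are simple functions of measured wall-clock times; a small sketch with illustrative numbers:

        # Weak scaling: work per core fixed, ideal time constant -> E = T(1)/T(P).
        # Strong scaling: total work fixed                       -> E = T(1)/(P*T(P)).
        weak_times = {1: 100.0, 64: 104.0, 1024: 112.0, 8192: 121.0}  # seconds
        t1 = weak_times[1]
        for p in sorted(weak_times):
            print(f"P={p:5d}  weak efficiency={t1 / weak_times[p]:.2f}")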

  12. Ultracompact Minihalos as Probes of Inflationary Cosmology.

    PubMed

    Aslanyan, Grigor; Price, Layne C; Adams, Jenni; Bringmann, Torsten; Clark, Hamish A; Easther, Richard; Lewis, Geraint F; Scott, Pat

    2016-09-30

    Cosmological inflation generates primordial density perturbations on all scales, including those far too small to contribute to the cosmic microwave background. At these scales, isolated ultracompact minihalos of dark matter can form well before standard structure formation, if the perturbations have sufficient amplitude. Minihalos affect pulsar timing data and are potentially bright sources of gamma rays. The resulting constraints significantly extend the observable window of inflation in the presence of cold dark matter, coupling two of the key problems in modern cosmology.

  13. Fine Scale Baleen Whale Behavior Observed Via Tagging Over Daily Time Scales

    DTIC Science & Technology

    2014-09-30

    During late fall 2013 and winter 2014, I built a data logger for the optical plankton counter (OPC) to facilitate its continued use on the NOAA Ship Gordon Gunter. This ship has a very long (>5 km) conducting sea cable, and we had communication issues with the OPC and the manufacturer's telemetry system over this long sea cable. To solve this problem, I adapted an existing data logger to provide power and log data from the OPC locally on

  14. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE PAGES

    Nicholson, Bethany; Siirola, John

    2017-11-11

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.
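
    A minimal sketch of the high-level dynamic-optimization constructs referred to above, using Pyomo with its pyomo.dae extension (a generic first-order system with an integral cost, not the semibatch reactor or adsorber test cases; assumes the ipopt solver is installed):

        from pyomo.environ import (ConcreteModel, Constraint, Objective,
                                   SolverFactory, TransformationFactory, Var,
                                   minimize)
        from pyomo.dae import ContinuousSet, DerivativeVar, Integral

        m = ConcreteModel()
        m.t = ContinuousSet(bounds=(0.0, 1.0))
        m.x = Var(m.t)                          # state
        m.u = Var(m.t, bounds=(-4.0, 4.0))      # control
        m.dxdt = DerivativeVar(m.x, wrt=m.t)

        m.ode = Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t] + m.u[t])
        m.x[0].fix(1.0)

        # Quadratic tracking cost, written directly as an integral over time
        m.cost = Integral(m.t, wrt=m.t,
                          rule=lambda m, t: m.x[t] ** 2 + 0.1 * m.u[t] ** 2)
        m.obj = Objective(expr=m.cost, sense=minimize)

        # Discretize the continuous problem, then hand it to an NLP solver
        TransformationFactory("dae.finite_difference").apply_to(m, nfe=20)
        SolverFactory("ipopt").solve(m)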

  16. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection, etc.), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. Aimed at this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in-first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by glitches on the clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a 3-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute a multi-scale 2-D Gaussian filtering within one pixel clock period and is therefore suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter, the Sobel operator, and so on.
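
    The multiplier saving from separability is easy to demonstrate in software: a 2-D Gaussian blur equals two 1-D passes, costing roughly 2k instead of k^2 multiplies per pixel for a kernel of width k, which is the same saving the FPGA design exploits. A NumPy/SciPy sketch (illustrative; obviously not the hardware implementation):

        import numpy as np
        from scipy.ndimage import convolve1d

        def gaussian_kernel1d(sigma):
            # Sampled 1-D Gaussian, truncated at 3*sigma, normalized to unit sum.
            radius = int(3 * sigma)
            x = np.arange(-radius, radius + 1)
            k = np.exp(-0.5 * (x / sigma) ** 2)
            return k / k.sum()

        def gaussian_filter_separable(img, sigma):
            # Two 1-D convolutions reproduce the full 2-D Gaussian blur.
            k = gaussian_kernel1d(sigma)
            tmp = convolve1d(img, k, axis=0, mode="reflect")
            return convolve1d(tmp, k, axis=1, mode="reflect")

        img = np.random.rand(256, 256)
        pyramid = [gaussian_filter_separable(img, s) for s in (1.0, 2.0, 4.0)]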

  17. On Instability of Geostrophic Current with Linear Vertical Shear at Length Scales of Interleaving

    NASA Astrophysics Data System (ADS)

    Kuzmina, N. P.; Skorokhodov, S. L.; Zhurbas, N. V.; Lyzhkov, D. A.

    2018-01-01

    The instability of long-wave disturbances of a geostrophic current with linear velocity shear is studied with allowance for the diffusion of buoyancy. A detailed derivation of the model problem in dimensionless variables is presented, which is used for analyzing the dynamics of disturbances in a vertically bounded layer and for describing the formation of large-scale intrusions in the Arctic basin. The problem is solved numerically based on a high-precision method developed for solving fourth-order differential equations. It is established that there is an eigenvalue in the spectrum of eigenvalues that corresponds to unstable (growing with time) disturbances, which are characterized by a phase velocity exceeding the maximum velocity of the geostrophic flow. A discussion is presented to explain some features of the instability.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
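
    A typical inertia-free curvature test of the kind described can be sketched as follows (notation assumed, not copied from the report: $W_k$ the Hessian of the Lagrangian, $d_k$ the computed step, $\alpha > 0$ a fixed parameter). The step is accepted when

        $$d_k^{\top} W_k\, d_k \;\geq\; \alpha\, d_k^{\top} d_k,$$

    and otherwise a convexification term $\delta I$ is added to $W_k$ and the step is recomputed.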

  19. Application of a flexible lattice Boltzmann method based simulation tool for modelling physico-chemical processes at different scales

    NASA Astrophysics Data System (ADS)

    Patel, Ravi A.; Perko, Janez; Jacques, Diederik

    2017-04-01

    Often, especially in disciplines related to natural porous media such as vadose-zone or aquifer hydrology and contaminant transport, the spatial and temporal scales on which we need to provide information are larger than the scales at which the processes actually occur. The usual techniques for dealing with these problems assume the existence of a REV. However, in order to understand the behavior at larger scales it is important to downscale the problem to the relevant scale of the processes. Owing to limited resources (time, memory), downscaling can only be carried out to a certain lower scale, and at that lower scale several scales may still coexist: the scale that can be described explicitly, and a scale that must be conceptualized by effective properties. Hence, models that are supposed to provide effective properties at the relevant scales should be flexible enough to represent a complex pore structure by explicit geometry on the one hand, and differently defined processes (e.g., via effective properties) that emerge at the lower scale on the other. In this work we present a state-of-the-art lattice Boltzmann method based simulation tool applicable to the advection-diffusion equation coupled to geochemical processes. The lattice Boltzmann transport solver can be coupled with an external geochemical solver, which makes it possible to account for a wide range of geochemical reaction networks through thermodynamic databases. Extension to multiphase systems is ongoing. We provide several examples related to the calculation of effective diffusion properties, permeability, and effective reaction rates on a continuum scale, starting from the pore-scale geometry.
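
    To make the approach concrete, the following is a minimal D2Q9 lattice Boltzmann diffusion solver (BGK collision, zero advection velocity, periodic boundaries). It is an illustrative toy rather than the coupled reactive-transport tool described above; in lattice units the diffusivity follows from the relaxation time as D = c_s^2 (tau - 1/2).

        import numpy as np

        w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
        ex = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
        ey = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
        cs2, tau = 1.0 / 3.0, 0.8
        D = cs2 * (tau - 0.5)                     # lattice diffusion coefficient

        nx = ny = 64
        C = np.zeros((ny, nx)); C[ny // 2, nx // 2] = 1.0   # initial solute spot
        f = w[:, None, None] * C[None, :, :]                 # start at equilibrium

        for _ in range(200):
            C = f.sum(axis=0)                                # concentration moment
            feq = w[:, None, None] * C[None, :, :]           # zero-velocity equilibrium
            f += (feq - f) / tau                             # BGK collision
            for i in range(9):                               # streaming step
                f[i] = np.roll(np.roll(f[i], ey[i], axis=0), ex[i], axis=1)

        print("mass conserved:", np.isclose(f.sum(), 1.0))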

  20. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular- porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.

  1. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  2. Singular perturbation of smoothly evolving Hele-Shaw solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siegel, M.; Tanveer, S.

    1996-01-01

    We present analytical scaling results, confirmed by accurate numerics, to show that there exists a class of smoothly evolving zero-surface-tension solutions to the Hele-Shaw problem that are significantly perturbed by an arbitrarily small amount of surface tension in order-one time. © 1996 The American Physical Society.

  3. Metadata and annotations for multi-scale electrophysiological data.

    PubMed

    Bower, Mark R; Stead, Matt; Brinkmann, Benjamin H; Dufendach, Kevin; Worrell, Gregory A

    2009-01-01

    The increasing use of high-frequency (kHz), long-duration (days) intracranial monitoring from multiple electrodes during pre-surgical evaluation for epilepsy produces large amounts of data that are challenging to store and maintain. Descriptive metadata and clinical annotations of these large data sets also pose challenges to simple, often manual, methods of data analysis. The problems of reliable communication of metadata and annotations between programs, the maintenance of the meanings within that information over long time periods, and the flexibility to re-sort data for analysis place differing demands on data structures and algorithms. Solutions to these individual problem domains (communication, storage and analysis) can be configured to provide easy translation and clarity across the domains. The Multi-scale Annotation Format (MAF) provides an integrated metadata and annotation environment that maximizes code reuse, minimizes error probability and encourages future changes by reducing the tendency to over-fit information technology solutions to current problems. An example of a graphical utility for generating and evaluating metadata and annotations for "big data" files is presented.

  4. A validity and reliability study of the coping self-efficacy scale

    PubMed Central

    Chesney, Margaret A.; Neilands, Torsten B.; Chambers, Donald B.; Taylor, Jonelle M.; Folkman, Susan

    2006-01-01

    Objectives Investigate the psychometric characteristics of the coping self-efficacy (CSE) scale, a 26-item measure of one's confidence in performing coping behaviors when faced with life challenges. Design Data came from two randomized clinical trials (N1 = 149, N2 = 199) evaluating a theory-based Coping Effectiveness Training (CET) intervention in reducing psychological distress and increasing positive mood in persons coping with chronic illness. Methods The 348 participants were HIV-seropositive men who have sex with men and who had depressed mood. Participants were randomly assigned to intervention and comparison conditions and assessed pre- and post-intervention. Outcome variables included the CSE scale, ways of coping, and measures of social support and psychological distress and well-being. Results Exploratory (EFA) and confirmatory factor analyses (CFA) revealed a 13-item reduced form of the CSE scale with three factors: use problem-focused coping (6 items, α = .91), stop unpleasant emotions and thoughts (4 items, α = .91), and get support from friends and family (3 items, α = .80). Internal consistency and test-retest reliability are strong for all three factors. Concurrent validity analyses showed these factors assess self-efficacy for different types of coping. Predictive validity analyses showed that residualized change scores in using problem- and emotion-focused coping skills were predictive of reduced psychological distress and increased psychological well-being over time. Conclusions The CSE scale provides a measure of a person's perceived ability to cope effectively with life challenges, as well as a way to assess changes in CSE over time in intervention research. PMID:16870053
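
    The internal-consistency figures quoted above are Cronbach's alpha values; for reference, alpha is straightforward to compute from an item-score matrix (generic sketch with made-up scores, not the study data):

        import numpy as np

        def cronbach_alpha(items):
            # alpha = k/(k-1) * (1 - sum of item variances / variance of total)
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1.0) * (1.0 - item_var / total_var)

        scores = np.array([[3, 4, 3, 4],     # rows = respondents,
                           [2, 2, 3, 2],     # columns = scale items
                           [4, 5, 4, 4],
                           [1, 2, 1, 2]])
        print(round(cronbach_alpha(scores), 2))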

  5. The assessment of depressive patients' involvement in decision making in audio-taped primary care consultations.

    PubMed

    Loh, Andreas; Simon, Daniela; Hennig, Katrin; Hennig, Benjamin; Härter, Martin; Elwyn, Glyn

    2006-11-01

    In primary care of depression, treatment options such as antidepressants, counseling and psychotherapy are all reasonable. Patient involvement could foster adherence and clinical outcome. However, there is a lack of empirical information about the extent to which general practitioners involve patients in decision making processes in this condition, and about the consultation time spent on distinct decision making tasks. Twenty general practice consultations with depressive patients prior to a treatment decision were audio-taped and transcribed. Patient involvement in decision making was assessed with the OPTION scale, and the durations of the decision making stages were measured. The mean duration of the consultations was 16 min, 6 s. The means of the OPTION items were between 0.0 and 26.9, on a scale ranging from 0 to 100. Overall, 78.6% of the consultation time was spent on the step "problem definition" (12 min, 42 s). Very low levels of patient involvement in medical decisions were observed in consultations about depression. Physicians used the majority of their time for the definition of the patient's medical problem. To improve treatment decision making in this condition, general practitioners should enhance their decision making competences and be more aware of the time spent in each decision making stage.

  6. Electrodynamical Model of Quasi-Efficient Financial Markets

    NASA Astrophysics Data System (ADS)

    Ilinski, Kirill N.; Stepanenko, Alexander S.

    The modelling of financial markets presents a problem which is both theoretically challenging and practically important. The theoretical aspects concern the issue of market efficiency, which may even have political implications [1], whilst the practical side of the problem has clear relevance to portfolio management [2] and derivative pricing [3]. Up to now, all market models have contained "smart money" traders and "noise" traders whose joint activity constitutes the market [4, 5]. On a short time scale this traditional separation does not seem to be realistic, and is hardly acceptable since all high-frequency market participants are professional traders and cannot be separated into "smart" and "noisy." In this paper we present a "microscopic" model with homogeneous quasi-rational behaviour of traders, aiming to describe short-time market behaviour. To construct the model we use an analogy between "screening" in quantum electrodynamics and an equilibration process in a market with temporal mispricing [6, 7]. As a result, we obtain the time-dependent distribution function of the returns, which is in quantitative agreement with real market data and obeys the anomalous scaling relations recently reported for high-frequency exchange rates [8], the S&P500 [9] and other stock market indices [10, 11].

  7. Social Milieu Oriented Routing: A New Dimension to Enhance Network Security in WSNs.

    PubMed

    Liu, Lianggui; Chen, Li; Jia, Huiling

    2016-02-19

    In large-scale wireless sensor networks (WSNs), in order to enhance network security, it is crucial for a trustor node to perform social milieu oriented routing to a target trustee node to carry out trust evaluation. This challenging social milieu oriented routing with more than one end-to-end Quality of Trust (QoT) constraint has proved to be NP-complete. Heuristic algorithms with polynomial and pseudo-polynomial-time complexities are often used to deal with this challenging problem. However, existing solutions cannot guarantee the efficiency of searching; that is, they can hardly avoid obtaining partially optimal solutions during a searching process. Quantum annealing (QA) uses delocalization and tunneling to avoid falling into local minima without sacrificing execution time. This has been shown to be a promising approach to many optimization problems in the recent literature. In this paper, for the first time, with the help of a novel approach, namely configuration path-integral Monte Carlo (CPIMC) simulations, a QA-based optimal social trust path (QA_OSTP) selection algorithm is applied to the extraction of the optimal social trust path in large-scale WSNs. Extensive experiments have been conducted, and the experiment results demonstrate that QA_OSTP outperforms its heuristic opponents.

  8. Chip Scale Ultra-Stable Clocks: Miniaturized Phonon Trap Timing Units for PNT of CubeSats

    NASA Technical Reports Server (NTRS)

    Rais-Zadeh, Mina; Altunc, Serhat; Hunter, Roger C.; Petro, Andrew

    2016-01-01

    The Chip Scale Ultra-Stable Clocks (CSUSC) project aims to provide a superior alternative to current solutions for low size, weight, and power timing devices. Currently available quartz-based clocks have problems adjusting to the high temperature and extreme acceleration found in space applications, especially when scaled down to match small spacecraft size, weight, and power requirements. The CSUSC project aims to utilize dual-mode resonators on an ovenized platform to achieve the exceptional temperature stability required for these systems. The dual-mode architecture utilizes a temperature sensitive and temperature stable mode simultaneously driven on the same device volume to eliminate ovenization error while maintaining extremely high performance. Using this technology it is possible to achieve parts-per-billion (ppb) levels of temperature stability with multiple orders of magnitude smaller size, weight, and power.

  9. A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains

    NASA Astrophysics Data System (ADS)

    Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.

    2018-02-01

    A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.

  10. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.

  11. Modal resonant dynamics of cables with a flexible support: A modulated diffraction problem

    NASA Astrophysics Data System (ADS)

    Guo, Tieding; Kang, Houjun; Wang, Lianhua; Liu, Qijian; Zhao, Yueyu

    2018-06-01

    Modal resonant dynamics of cables with a flexible support is defined as a modulated (wave) diffraction problem, and investigated by asymptotic expansions of the cable-support coupled system. The support-cable mass ratio, which is usually very large, turns out to be the key parameter for characterizing cable-support dynamic interactions. By treating the mass ratio's inverse as a small perturbation parameter and scaling the cable tension properly, both cable's modal resonant dynamics and the flexible support dynamics are asymptotically reduced by using multiple scale expansions, leading finally to a reduced cable-support coupled model (i.e., on a slow time scale). After numerical validations of the reduced coupled model, cable-support coupled responses and the flexible support induced coupling effects on the cable, are both fully investigated, based upon the reduced model. More explicitly, the dynamic effects on the cable's nonlinear frequency and force responses, caused by the support-cable mass ratio, the resonant detuning parameter and the support damping, are carefully evaluated.

  12. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in technical-system diagnostic problems based on full-scale modeling experiments. The use of the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), makes it possible to considerably reduce the dimensionality of the field-calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. In addition, much attention is given to the calculation accuracy: errors occur when the distance between the charges is chosen incorrectly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculations, and examples of this approach are given. The article presents the results of this research, which allow the authors to recommend the use of this approach in the method of fundamental solutions for full-scale modeling tests of technical systems.
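
    The gridless character of the method of fundamental solutions is easy to see in a toy setting: fictitious sources are placed outside the domain and their strengths fitted to the boundary data by least squares, with no mesh at all. The sketch below does this for the 2-D Laplace equation on the unit disk, using scalar point sources rather than the paper's vector magnetic dipoles:

        import numpy as np

        n_src, n_col = 40, 80
        th_s = np.linspace(0, 2 * np.pi, n_src, endpoint=False)
        th_c = np.linspace(0, 2 * np.pi, n_col, endpoint=False)
        src = 1.5 * np.exp(1j * th_s)        # fictitious charges on radius 1.5
        col = np.exp(1j * th_c)              # collocation points on the boundary

        # Fundamental solution of the 2-D Laplacian: G(r) = -ln(r) / (2*pi)
        G = -np.log(np.abs(col[:, None] - src[None, :])) / (2 * np.pi)
        u_bc = np.real(col) ** 2 - np.imag(col) ** 2    # boundary data u = x^2 - y^2
        q, *_ = np.linalg.lstsq(G, u_bc, rcond=None)    # fitted source strengths

        z = 0.3 + 0.4j                        # interior evaluation point
        u = (-np.log(np.abs(z - src)) / (2 * np.pi)) @ q
        print(u, np.real(z) ** 2 - np.imag(z) ** 2)     # MFS value vs exact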

  13. A Coarse-to-Fine Model for Airplane Detection from Large Remote Sensing Images Using Saliency Model and Deep Learning

    NASA Astrophysics Data System (ADS)

    Song, Z. N.; Sui, H. G.

    2018-04-01

    High-resolution remote sensing images carry important strategic information, and finding time-sensitive targets such as airplanes, ships, and cars quickly is of particular interest. Often the first problem we face is how to rapidly judge whether a particular target is present anywhere in a large remote sensing image, rather than detecting it on a given image chip. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false alarm rates in tiny-object detection in large-scale images; 2) unlike traditional image retrieval, the task is not just to compare the similarity of image blocks, but to quickly find specific targets in a huge image. In this paper, taking the airplane target as an example, we present an effective method for searching for aircraft targets in large-scale optical remote sensing images. Firstly, an improved visual attention model that combines saliency detection with a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation, without a region proposal step, is adopted to search for small airplane objects. Unlike sliding-window and region-proposal-based techniques, we process the entire image (region) during training and test time, so the network implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method can quickly identify airplanes in large-scale images.

  14. Emotional and behavioral problems associated with attachment security and parenting style in adopted and non-adopted children.

    PubMed

    Altınoğlu Dikmeer, Ilkiz; Erol, Neşe; Gençöz, Tülin

    2014-01-01

    This study aimed to investigate and compare emotional and behavioral problems in Turkish adoptees and non-adopted peers raised by their biological parents. The study included 61 adopted children (34 female and 27 male) aged 6-18 years and 62 age- and gender-matched non-adopted children (35 female and 27 male). Parents rated their children's problem behaviors using the Child Behavior Checklist/6-18, temperament characteristics using the School Age Temperament Inventory, their own personality traits using the Basic Personality Traits Inventory, and their parenting styles using the Measure of Child Rearing Styles. Children rated their parents' availability and reliability as attachment figures using the Kerns Security Scale and parenting styles using the Measure of Child Rearing Styles. Adolescents aged 11-18 years self-rated their problem behaviors using the Youth Self Report. Group differences and correlations were analyzed. There were no significant differences in any scale scores between the adopted and non-adopted groups. In contrast to the literature, the age of the children at the time of adoption was not associated with problem behaviors or attachment relationships. On the other hand, the findings indicate that emotional and behavioral problems increased as the age at which the children learned they had been adopted increased. Adoption alone could not explain the problem behaviors observed in the adopted children; the observed problem behaviors should be considered within the context of the developmental process.

  15. Superadiabatic driving of a three-level quantum system

    NASA Astrophysics Data System (ADS)

    Theisen, M.; Petiziol, F.; Carretta, S.; Santini, P.; Wimberger, S.

    2017-07-01

    We study superadiabatic quantum control of a three-level quantum system whose energy spectrum exhibits multiple avoided crossings. In particular, we investigate the possibility of treating the full control task in terms of independent two-level Landau-Zener problems. We first show that the time profiles of the elements of the full control Hamiltonian are characterized by peaks centered around the crossing times. These peaks decay algebraically for large times. In principle, such a power-law scaling invalidates the hypothesis of perfect separability. Nonetheless, we address the problem from a pragmatic point of view by studying the fidelity obtained through separate control as a function of the intercrossing separation. This procedure may be a good approach to achieve approximate adiabatic driving of a specific instantaneous eigenstate in realistic implementations.
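
    For reference, the textbook two-level Landau-Zener result underlying the separability question (standard form, not specific to this paper): for $H(t) = \tfrac{1}{2} v t\,\sigma_z + \tfrac{1}{2}\Delta\,\sigma_x$, the probability of a diabatic transition after a single crossing is

        $$P_{\mathrm{LZ}} = \exp\!\left(-\frac{\pi\,\Delta^{2}}{2\hbar v}\right),$$

    which decays exponentially in the gap squared over the sweep rate, so well-separated crossings with slow sweeps remain nearly adiabatic.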

  16. Software environment for implementing engineering applications on MIMD computers

    NASA Technical Reports Server (NTRS)

    Lopez, L. A.; Valimohamed, K. A.; Schiff, S.

    1990-01-01

    In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.

  17. Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body

    NASA Astrophysics Data System (ADS)

    Wang, Xijing; Li, Jisheng

    With modern satellites trending toward both macro-scale and micro-scale designs, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions, and advances in computer technology continuously enable new optimal algorithms, providing a powerful tool for solving the problem. Many papers on attitude adjustment have been published; the spacecraft configurations considered are rigid bodies with flexible parts or gyrostat-type systems, and the objective function usually involves minimum time or minimum fuel. During earlier satellite missions, attitude acquisition was achieved using momentum exchange devices and performed by a sequential single-axis slewing strategy. Recently, simultaneous three-axis minimum-time maneuver (reorientation) problems have been studied by many researchers. Research on the minimum-time maneuver of a rigid spacecraft within onboard power limits is important both for potential space applications, such as surveying multiple targets, and for its academic value. The minimum-time maneuver of a rigid spacecraft is a basic problem because solutions for maneuvering flexible spacecraft build on the solution of the rigid-body slew problem. A new method for the open-loop solution of a rigid spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for transferring the spacecraft from one state to another can be determined. The single-axis and multi-axis cases differ: for a single axis an analytical solution is possible, and the switching curve passing through the state-space origin is parabolic; for multiple axes an analytical solution is impossible due to the dynamic coupling between the axes, and the problem must be solved numerically. Research has shown that Euler-axis rotations are, in general, only quasi-time-optimal. On the basis of the minimum principle, the problem of reorienting an inertially symmetric spacecraft with a time cost function from an initial state of rest to a final state of rest is formulated, and its solution proceeds as follows. First, the necessary conditions for optimality are derived from the minimum principle; they yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case the solution is a bang-bang maneuver, with the control profile characterized by saturated controls for the entire maneuver. Singular control may exist, but it is singular only in a mathematical sense; physically, the larger the magnitude of the control torque, the shorter the time, so saturated controls are also used in the singular case. Second, since the controls are always at their maximum, the key problem is to determine the switching points; the original problem thus becomes one of finding the switching times. By adjusting the switch on/off times, a genetic algorithm, a robust optimization method, is used to determine the switching structure in the absence of gyroscopic coupling, with improvements made to the traditional GA in this research. The homotopy method for solving the resulting nonlinear algebraic equations rests on rigorous topological continuum theory; based on the homotopy idea, relaxation parameters are introduced and the switching points are computed with simulated annealing. Computer simulation results for a rigid body show that the new method is feasible and efficient. A practical method of computing approximate solutions to the time-optimal control switch times for rigid-body reorientation has been developed.
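
    For the single-axis case mentioned above, the switch time has a closed form. A minimal sketch (illustrative inertia and torque values, not taken from the paper):

        import numpy as np

        # Minimum-time rest-to-rest slew about one axis: I*theta'' = u, |u| <= u_max.
        # The time-optimal control is bang-bang with a single switch at the
        # half-angle point; the switching curve in state space is parabolic.
        I_zz, u_max = 10.0, 1.0                 # inertia (kg m^2) and torque bound (N m)
        theta_f = np.radians(60.0)              # slew angle
        t_s = np.sqrt(I_zz * theta_f / u_max)   # switch time (accelerate -> decelerate)
        t_f = 2.0 * t_s                         # total maneuver time

        # Verify by forward simulation of the bang-bang profile.
        dt, theta, omega = 1e-4, 0.0, 0.0
        for t in np.arange(0.0, t_f, dt):
            u = u_max if t < t_s else -u_max
            omega += (u / I_zz) * dt
            theta += omega * dt
        print(np.degrees(theta), omega)         # ~60 degrees, ~0 rad/s

    For three coupled axes no such closed form exists, which is why the paper locates the switching times numerically with a genetic algorithm refined by simulated annealing.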

  18. Variance fluctuations in nonstationary time series: a comparative study of music genres

    NASA Astrophysics Data System (ADS)

    Jennings, Heather D.; Ivanov, Plamen Ch.; De Martins, Allan M.; da Silva, P. C.; Viswanathan, G. M.

    2004-05-01

    An important problem in physics concerns the analysis of audio time series generated by transduced acoustic phenomena. Here, we develop a new method to quantify the scaling properties of the local variance of nonstationary time series. We apply this technique to analyze audio signals obtained from selected genres of music. We find quantitative differences in the correlation properties of high art music, popular music, and dance music. We discuss the relevance of these objective findings in relation to the subjective experience of music.
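
    A minimal sketch of the windowed-variance idea (a generic estimator, not necessarily the authors' exact procedure): compute the variance in non-overlapping windows and fit the scaling of its mean against the window size.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.cumsum(rng.standard_normal(2**14))    # toy signal: a random walk
        scales = [2**k for k in range(4, 11)]
        F = []
        for n in scales:
            m = len(x) // n
            v = x[:m*n].reshape(m, n).var(axis=1)    # local variance per window
            F.append(v.mean())
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print(alpha)   # ~2H for a self-affine signal; ~1.0 for this H = 0.5 walk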

  19. A hybrid Dantzig-Wolfe, Benders decomposition and column generation procedure for multiple diet production planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Udomsungworagul, A.; Charnsethikul, P.

    2018-03-01

    This article introduces a methodology for solving large-scale two-phase linear programs, applied to a multiple-time-period animal diet problem under uncertainty in both the nutrient content of raw materials and the demand for finished products. The model allows multiple product formulas to be manufactured in the same time period and allows raw materials and finished products to be held in inventory. Dantzig-Wolfe decomposition, Benders decomposition and column generation techniques are combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used to test the procedure in terms of efficiency and effectiveness trade-offs.
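
    A minimal sketch of the column-generation loop that such hybrid procedures build on, shown here for the classic cutting-stock problem rather than the paper's diet model, and assuming SciPy's HiGHS interface (which exposes constraint duals via res.ineqlin.marginals):

        import numpy as np
        from scipy.optimize import linprog

        L = 10                            # stock length
        sizes  = np.array([3, 4, 5])      # piece lengths
        demand = np.array([30, 20, 12])   # pieces required
        patterns = [np.eye(3)[i] * (L // s) for i, s in enumerate(sizes)]

        def price(duals):
            # Pricing subproblem: unbounded knapsack maximizing duals . a
            # subject to sizes . a <= L, solved by dynamic programming.
            val = np.zeros(L + 1)
            best = np.zeros((L + 1, len(sizes)), dtype=int)
            for cap in range(1, L + 1):
                for i, s in enumerate(sizes):
                    if s <= cap and val[cap - s] + duals[i] > val[cap]:
                        val[cap] = val[cap - s] + duals[i]
                        best[cap] = best[cap - s]
                        best[cap, i] += 1
            return val[L], best[L]

        while True:
            A = np.column_stack(patterns)         # restricted master LP
            res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
            duals = -res.ineqlin.marginals        # duals of the covering constraints
            value, col = price(duals)
            if value <= 1 + 1e-9:                 # no column with negative reduced cost
                break
            patterns.append(col.astype(float))
        print(res.fun, [p.astype(int) for p in patterns])

    Benders decomposition plays the complementary role in the hybrid, generating rows (cuts) rather than columns.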

  20. Target detection and localization in shallow water: an experimental demonstration of the acoustic barrier problem at the laboratory scale.

    PubMed

    Marandet, Christian; Roux, Philippe; Nicolas, Barbara; Mars, Jérôme

    2011-01-01

    This study demonstrates experimentally at the laboratory scale the detection and localization of a wavelength-sized target in a shallow ultrasonic waveguide between two source-receiver arrays at 3 MHz. In the framework of the acoustic barrier problem, at the 1/1000 scale, the waveguide represents a 1.1-km-long, 52-m-deep ocean acoustic channel in the kilohertz frequency range. The two coplanar arrays record in the time-domain the transfer matrix of the waveguide between each pair of source-receiver transducers. Invoking the reciprocity principle, a time-domain double-beamforming algorithm is simultaneously performed on the source and receiver arrays. This array processing projects the multireverberated acoustic echoes into an equivalent set of eigenrays, which are defined by their launch and arrival angles. Comparison is made between the intensity of each eigenray without and with a target for detection in the waveguide. Localization is performed through tomography inversion of the acoustic impedance of the target, using all of the eigenrays extracted from double beamforming. The use of the diffraction-based sensitivity kernel for each eigenray provides both the localization and the signature of the target. Experimental results are shown in the presence of surface waves, and methodological issues are discussed for detection and localization.
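
    The core array operation is delay-and-sum beamforming; double beamforming applies it jointly on the source and receiver arrays. A one-sided sketch on synthetic plane-wave data (illustrative geometry and frequencies, steered to the nearest sample):

        import numpy as np

        c, fs, f0 = 1500.0, 50e3, 5e3           # sound speed, sampling rate, tone
        z = np.arange(16) * 0.1                 # element depths of a 16-element array
        true_angle = np.radians(12.0)
        t = np.arange(0, 0.02, 1/fs)
        delays = z * np.sin(true_angle) / c     # plane-wave inter-element delays
        sig = np.array([np.cos(2*np.pi*f0*(t - d)) for d in delays])

        angles = np.radians(np.linspace(-30, 30, 241))
        power = []
        for a in angles:                        # steer, align, and stack
            shifts = (z * np.sin(a) / c * fs).round().astype(int)
            stack = sum(np.roll(sig[i], -shifts[i]) for i in range(len(z)))
            power.append((stack**2).mean())
        print(np.degrees(angles[int(np.argmax(power))]))   # ~12 degrees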

  1. Superframe Duration Allocation Schemes to Improve the Throughput of Cluster-Tree Wireless Sensor Networks

    PubMed Central

    Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-01-01

    The use of Wireless Sensor Network (WSN) technologies is an attractive option to support wide-scale monitoring applications, such as the ones that can be found in precision agriculture, environmental monitoring and industrial automation. The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable topology to build wide-scale WSNs. Despite some of its known advantages, including timing synchronisation and duty-cycle operation, cluster-tree networks may suffer from severe network congestion problems due to the convergecast pattern of their communication traffic. Therefore, the careful adjustment of transmission opportunities (superframe durations) allocated to the cluster-heads is an important research issue. This paper proposes a set of proportional Superframe Duration Allocation (SDA) schemes, based on well-defined protocol and timing models, and on the message load imposed by child nodes (Load-SDA scheme), or by the number of descendant nodes (Nodes-SDA scheme) of each cluster-head. The underlying reasoning is to adequately allocate transmission opportunities (superframe durations) and parametrize buffer sizes, in order to improve the network throughput and avoid typical problems, such as network congestion, high end-to-end communication delays and discarded messages due to buffer overflows. Simulation assessments show how the proposed allocation schemes may clearly improve the operation of wide-scale cluster-tree networks. PMID:28134822
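
    The proportional allocation itself is simple arithmetic; a sketch of the Load-SDA flavour (function and parameter names are illustrative, not from the paper):

        def allocate_superframes(loads, cycle, d_min=0.0):
            # Give each cluster-head a minimum duration d_min, then split the
            # remaining schedule in proportion to the load of its child nodes.
            spare = cycle - d_min * len(loads)
            total = sum(loads)
            return [d_min + spare * load / total for load in loads]

        # Example: four cluster-heads sharing a 100 ms multi-superframe cycle.
        print(allocate_superframes([120, 60, 15, 5], cycle=100.0))

    The Nodes-SDA variant simply replaces the load vector with each cluster-head's descendant count.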

  2. Better, Cheaper, Faster Molecular Dynamics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    Recent, revolutionary progress in genomics and structural, molecular and cellular biology has created new opportunities for molecular-level computer simulations of biological systems by providing vast amounts of data that require interpretation. These opportunities are further enhanced by the increasing availability of massively parallel computers. For many problems, the method of choice is classical molecular dynamics (iterative solving of Newton's equations of motion). It focuses on two main objectives. One is to calculate the relative stability of different states of the system. A typical problem that has such an objective is computer-aided drug design. Another common objective is to describe evolution of the system towards a low energy (possibly the global minimum energy), "native" state. Perhaps the best example of such a problem is protein folding. Both types of problems share the same difficulty. Often, different states of the system are separated by high energy barriers, which implies that transitions between these states are rare events. This, in turn, can greatly impede exploration of phase space. In some instances this can lead to "quasi non-ergodicity", whereby a part of phase space is inaccessible on time scales of the simulation. To overcome this difficulty and to extend molecular dynamics to "biological" time scales (millisecond or longer) new physical formulations and new algorithmic developments are required. To be efficient they should account for natural limitations of multi-processor computer architecture. I will present work along these lines done in my group. In particular, I will focus on a new approach to calculating the free energies (stability) of different states and to overcoming "the curse of rare events". I will also discuss algorithmic improvements to multiple time step methods and to the treatment of slowly decaying, long-ranged electrostatic effects.
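
    At the core of any such simulation sits the integrator for Newton's equations. A minimal velocity Verlet sketch (a 1-D harmonic bond; production MD adds force fields, thermostats, and the multiple-time-step and electrostatics machinery mentioned above):

        # Velocity Verlet for a 1-D harmonic oscillator (k = m = 1).
        k, m, dt, steps = 1.0, 1.0, 0.01, 1000
        x, v = 1.0, 0.0
        force = lambda q: -k * q
        a = force(x) / m
        for _ in range(steps):
            x += v * dt + 0.5 * a * dt**2
            a_new = force(x) / m
            v += 0.5 * (a + a_new) * dt
            a = a_new
        energy = 0.5 * m * v**2 + 0.5 * k * x**2
        print(x, v, energy)   # total energy stays ~0.5: good long-time stability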

  3. GEE-WIS Anchored Problem Solving Using Real-Time Authentic Water Quality Data

    NASA Astrophysics Data System (ADS)

    Young, M.; Wlodarczyk, M. S.; Branco, B.; Torgersen, T.

    2002-05-01

    GEE-WIS scientific problem solving consists of observing, hypothesizing, synthesis, argument building and reasoning, in the context of analysis, representation, modeling and sense-making of real-time authentic water quality data. Geoscience Environmental Education - Web-accessible Instrumented Systems, or GEE-WIS, an NSF Geoscience Education grant, has established a set of companion websites that stream real-time data from two campus retention ponds for research and use in secondary and undergraduate water quality lessons. We have targeted scientific problem solving skills because of the nature of the GEE-WIS environment, but further because they are central to state and federal efforts to establish science education curriculum standards and are at the core of performance-based testing. We have used a design experiment process to create and test two Anchored Instruction scenario problems. Customization, such as that done through a design process, is acknowledged to be a fundamental component of educational research from an ecological psychology perspective. Our efforts have shared core design elements with other NSF water quality projects. Our method involves the analysis of student written scenario responses for level of scientific problem solving using a qualitative scoring rubric designed from participation in a related NSF project, SCALE (Synergy Communities: Aggregating Learning about Education). Student solutions of GEE-WIS anchor problems from Fall 2001 and Spring 2002 will be summarized. Implications are drawn for those interested in making secondary and higher-education geoscience more realistic and more motivating for students through the use of real-time authentic data via the Internet.

  4. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.

  5. Two reference time scales for studying the dynamic cavitation of liquid films

    NASA Technical Reports Server (NTRS)

    Sun, D. C.; Brewe, D. E.

    1992-01-01

    Two formulas, one for the characteristic time of filling a void with the vapor of the surrounding liquid, and one for filling the void by diffusion of the dissolved gas in the liquid, are derived. By comparing these time scales with that of the dynamic operation of oil film bearings, it is concluded that the evaporation process is usually fast enough to fill the cavitation bubble with oil vapor, whereas the diffusion process is much too slow for the dissolved air to liberate itself and enter the cavitation bubble. These results imply that the formation of a two-phase fluid in dynamically loaded bearings, as often reported in the literature, is caused by air entrainment. They further indicate a way to simplify the treatment of the dynamic problem of bubble evolution.

  6. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points, which improves solution efficiency. The set of nonlinear constraints (termed complicating constraints) that makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in a single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful when computation time is a critical factor for obtaining an optimized solution in due time.
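
    A toy sketch of the two-step idea (illustrative objective and constraint, not the conjunctive-use model): solve the model with the complicating constraint left out, then warm-start the complete model from that solution.

        from scipy.optimize import minimize

        obj = lambda z: (z[0] - 2.0)**2 + (z[1] - 1.0)**2
        disk = {"type": "ineq", "fun": lambda z: 1.0 - z[0]**2 - z[1]**2}

        step1 = minimize(obj, x0=[0.0, 0.0])               # simplified model
        step2 = minimize(obj, x0=step1.x, method="SLSQP",
                         constraints=[disk])               # complete model, warm start
        print(step1.x, step2.x)   # step2 lands on the unit circle toward (2, 1)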

  7. Chopping Time of the FPU α-Model

    NASA Astrophysics Data System (ADS)

    Carati, A.; Ponno, A.

    2018-03-01

    We study, both numerically and analytically, the time needed to observe the breaking of an FPU α-chain in two or more pieces, starting from an unbroken configuration at a given temperature. It is found that such a "chopping" time is given by a formula that, at low temperatures, is of the Arrhenius-Kramers form, so that the chain does not break up on an observable time-scale. The result explains why the study of the FPU problem is meaningful also in the ill-posed case of the α-model.

  8. Effectiveness of the Treatment Readiness and Induction Program for Increasing Adolescent Motivation for Change

    PubMed Central

    Becan, Jennifer E.; Knight, Danica K.; Crawley, Rachel D.; Joe, George W.; Flynn, Patrick M.

    2014-01-01

    Success in substance abuse treatment is improved by problem recognition, desire to seek help, and readiness to engage in treatment, all of which are important aspects of motivation. Interventions that facilitate these at treatment induction for adolescents are especially needed. The purpose of this study is to assess the effectiveness of TRIP (Treatment Readiness and Induction Program) in promoting treatment motivation. Data represent 519 adolescents from 6 residential programs who completed assessments at treatment intake (Time 1) and 35 days after admission (Time 2). The design consisted of a comparison sample (n = 281) that had enrolled in treatment prior to implementation of TRIP (standard operating practice) and a sample of clients that had entered treatment after TRIP began and received standard operating practice enhanced by TRIP (n = 238). Repeated measures ANCOVAs were conducted using each Time 2 motivation scale as a dependent measure. Motivation scales were conceptualized as representing sequential stages of change. LISREL was used to test a structural model involving TRIP participation, gender, drug use severity, juvenile justice involvement, age, race-ethnicity, prior treatment, and urgency as predictors of the stages of treatment motivation. Compared to standard practice, adolescents receiving TRIP demonstrated greater gains in problem recognition, even after controlling for the other variables in the model. The model fit was adequate, with TRIP directly affecting problem recognition and indirectly affecting later stages of change (desire for help and treatment readiness). Future studies should examine which specific components of TRIP affect change in motivation. PMID:25456094

  9. Thermal instability in post-flare plasmas

    NASA Technical Reports Server (NTRS)

    Antiochos, S. K.

    1976-01-01

    The cooling of post-flare plasmas is discussed and the formation of loop prominences is explained as due to a thermal instability. A one-dimensional model was developed for active loop prominences. Only the motion and heat fluxes parallel to the existing magnetic fields are considered. The relevant size scales and time scales are such that single-fluid MHD equations are valid. The effects of gravity, the geometry of the field and conduction losses to the chromosphere are included. A computer code was constructed to solve the model equations. Basically, the system is treated as an initial value problem (with certain boundary conditions at the chromosphere-corona transition region), and a two-step time differencing scheme is used.

  10. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as of portable, parallel programming environments. The second effort was to implement the MSO methodology for a sample problem using the portable parallel programming environment Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be applied effectively to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.

  11. Enabling Functional Neural Circuit Simulations with Distributed Computing of Neuromodulated Plasticity

    PubMed Central

    Potjans, Wiebke; Morrison, Abigail; Diesmann, Markus

    2010-01-01

    A major puzzle in the field of computational neuroscience is how to relate system-level learning in higher organisms to synaptic plasticity. Recently, plasticity rules depending not only on pre- and post-synaptic activity but also on a third, non-local neuromodulatory signal have emerged as key candidates to bridge the gap between the macroscopic and the microscopic level of learning. Crucial insights into this topic are expected to be gained from simulations of neural systems, as these allow the simultaneous study of the multiple spatial and temporal scales that are involved in the problem. In particular, synaptic plasticity can be studied during the whole learning process, i.e., on a time scale of minutes to hours and across multiple brain areas. Implementing neuromodulated plasticity in large-scale network simulations where the neuromodulatory signal is dynamically generated by the network itself is challenging, because the network structure is commonly defined purely by the connectivity graph without explicit reference to the embedding of the nodes in physical space. Furthermore, the simulation of networks with realistic connectivity entails the use of distributed computing. A neuromodulated synapse must therefore be informed in an efficient way about the neuromodulatory signal, which is typically generated by a population of neurons located on different machines than either the pre- or post-synaptic neuron. Here, we develop a general framework to solve the problem of implementing neuromodulated plasticity in a time-driven distributed simulation, without reference to a particular implementation language, neuromodulator, or neuromodulated plasticity mechanism. We implement our framework in the simulator NEST and demonstrate excellent scaling up to 1024 processors for simulations of a recurrent network incorporating neuromodulated spike-timing dependent plasticity. PMID:21151370

  12. Shock-induced thermal wave propagation and response analysis of a viscoelastic thin plate under transient heating loads

    NASA Astrophysics Data System (ADS)

    Li, Chenlin; Guo, Huili; Tian, Xiaogeng

    2018-04-01

    This paper is devoted to the thermal shock analysis of viscoelastic materials under transient heating loads. The governing coupled equations with a time-delay parameter and a nonlocal scale parameter are derived based on the generalized thermo-viscoelasticity theory. The problem of a thin plate composed of viscoelastic material, subjected to a sudden temperature rise at the boundary plane, is solved by employing Laplace transformation techniques. The transient responses, i.e., temperature, displacement, stress, heat flux and strain, are obtained and discussed. The effects of the time-delay and nonlocal scale parameters on the transient responses are analyzed and discussed. It can be observed that the propagation of the thermal wave is dynamically smoothed and changed with variation of the time-delay, while the displacement, strain, and stress can be rapidly reduced by the nonlocal scale parameter, which can be viewed as an important indicator for predicting the stiffness-softening behavior of viscoelastic materials.

  13. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.

    PubMed

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-09-07

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional time cost. Our algorithm preserves the properties of KCF besides the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs advantageously compared with the top-ranked trackers.
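
    A common proxy for the motion state of an image patch is a sharpness score such as the variance of a discrete Laplacian response, which collapses under blur (an illustrative stand-in; the paper defines its own point sharpness function):

        import numpy as np

        def sharpness(patch):
            # 5-point discrete Laplacian, periodic boundaries for brevity.
            lap = (-4.0 * patch
                   + np.roll(patch, 1, 0) + np.roll(patch, -1, 0)
                   + np.roll(patch, 1, 1) + np.roll(patch, -1, 1))
            return lap.var()

        rng = np.random.default_rng(1)
        sharp_patch = rng.random((64, 64))
        blurred = sum(np.roll(sharp_patch, s, axis=1) for s in range(-3, 4)) / 7.0
        print(sharpness(sharp_patch), sharpness(blurred))  # blurred scores far lower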

  14. Reusable Launch Vehicle Control in Multiple Time Scale Sliding Modes

    NASA Technical Reports Server (NTRS)

    Shtessel, Yuri

    1999-01-01

    A reusable launch vehicle control problem during ascent is addressed via multiple-time-scaled continuous sliding mode control. The proposed sliding mode controller utilizes a two-loop structure and provides robust, de-coupled tracking of both orientation angle command profiles and angular rate command profiles in the presence of bounded external disturbances and plant uncertainties. Sliding mode control causes the angular rate and orientation angle tracking error dynamics to be constrained to linear, de-coupled, homogeneous, vector-valued differential equations with desired eigenvalue placement. The dual-time-scale sliding mode controller was designed for the X-33 technology demonstration sub-orbital launch vehicle in the launch mode. 6DOF simulation results show that the designed controller provides robust, accurate, de-coupled tracking of the orientation angle command profiles in the presence of external disturbances and vehicle inertia uncertainties. This creates the possibility of operating the X-33 vehicle in an aircraft-like mode with reduced pre-launch adjustment of the control system.
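
    A scalar sketch of the sliding mode mechanism used in each loop (illustrative gains, not the X-33 design): the switching control forces the tracking error onto a sliding surface, on which the error dynamics are linear and insensitive to the bounded disturbance.

        import numpy as np

        # Double integrator theta'' = u + d with bounded disturbance d.
        # Sliding surface s = e' + lam*e; on s = 0 the error decays as exp(-lam*t).
        lam, K, dt = 1.0, 3.0, 1e-3
        theta, omega, theta_cmd = 0.0, 0.0, 1.0
        for k in range(8000):
            e, de = theta - theta_cmd, omega
            s = de + lam * e
            u = -K * np.sign(s)               # switching (bang-bang) control
            d = 0.5 * np.sin(0.01 * k)        # bounded external disturbance
            omega += (u + d) * dt
            theta += omega * dt
        print(theta)   # ~1.0: the command is tracked despite the disturbance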

  15. Particle dynamics in a viscously decaying cat's eye: The effect of finite Schmidt numbers

    NASA Astrophysics Data System (ADS)

    Newton, P. K.; Meiburg, Eckart

    1991-05-01

    The dynamics and mixing of passive marker particles for the model problem of a decaying cat's eye flow is studied. The flow field corresponds to Stuart's one-parameter family of solutions [J. Fluid Mech. 29, 417 (1967)]. It is time dependent as a result of viscosity, which is modeled by allowing the free parameter to depend on time according to the self-similar solution of the Navier-Stokes equations for an isolated point vortex. Particle diffusion is numerically simulated by a random walk model. While earlier work had shown that, for small values of time over Reynolds number t/Re ≪ 1, the interval length characterizing the formation of lobes of fluid escaping from the cat's eye scales as Re^(-1/2), the present study shows that, for the case of diffusive effects and t/Pe ≪ 1, the scaling follows Pe^(-1/4). A simple argument, taking into account streamline convergence and divergence in different parts of the flow field, explains the Pe^(-1/4) scaling.

  16. A robust, finite element model for hydrostatic surface water flows

    USGS Publications Warehouse

    Walters, R.A.; Casulli, V.

    1998-01-01

    A finite element scheme is introduced for the 2-dimensional shallow water equations using semi-implicit methods in time. A semi-Lagrangian method is used to approximate the effects of advection. A wave equation is formed at the discrete level such that the equations decouple into an equation for surface elevation and a momentum equation for the horizontal velocity. The convergence rates and relative computational efficiency are examined with the use of three test cases representing various degrees of difficulty. A test with a polar-quadrant grid investigates the response to local grid-scale forcing and the presence of spurious modes, a channel test case establishes convergence rates, and a field-scale test case examines problems with highly irregular grids.

  17. Catalytic ignition model in a monolithic reactor with in-depth reaction

    NASA Technical Reports Server (NTRS)

    Tien, Ta-Ching; Tien, James S.

    1990-01-01

    Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.

  18. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2017-07-01

    Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Also, different cost scenarios were designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas the GAMS software failed to reach an optimal solution even within much longer times.

  20. Associations of Patient Health-Related Problem Solving with Disease Control, Emergency Department Visits, and Hospitalizations in HIV and Diabetes Clinic Samples

    PubMed Central

    Gemmell, Leigh; Kulkarni, Babul; Klick, Brendan; Brancati, Frederick L.

    2007-01-01

    Background Patient problem solving and decision making are recognized as essential to effective self-management across multiple chronic diseases. However, a health-related problem-solving instrument that demonstrates sensitivity to disease control parameters in multiple diseases has not been established. Objectives To determine, in two disease samples, internal consistency and associations with disease control of the Health Problem-Solving Scale (HPSS), a 50-item measure with 7 subscales assessing effective and ineffective problem-solving approaches, learning from past experiences, and motivation/orientation. Design Cross-sectional study. Participants Outpatients from university-affiliated medical center HIV (N = 111) and diabetes mellitus (DM, N = 78) clinics. Measurements HPSS, CD4, hemoglobin A1c (HbA1c), and number of hospitalizations in the previous year and Emergency Department (ED) visits in the previous 6 months. Results Administration time for the HPSS ranged from 5 to 10 minutes. Cronbach’s alpha for the total HPSS was 0.86 and 0.89 for HIV and DM, respectively. Higher total scores (better problem solving) were associated with higher CD4 and fewer hospitalizations in HIV and lower HbA1c and fewer ED visits in DM. Health Problem-Solving Scale subscales representing negative problem-solving approaches were consistently associated with more hospitalizations (HIV, DM) and ED visits (DM). Conclusions The HPSS may identify problem-solving difficulties with disease self-management and assess effectiveness of interventions targeting patient decision making in self-care. PMID:17443373

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Kuo -Ling; Mehrotra, Sanjay

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  2. Internet use and video gaming predict problem behavior in early adolescence.

    PubMed

    Holtz, Peter; Appel, Markus

    2011-02-01

    In early adolescence, the time spent using the Internet and video games is higher than in any other present-day age group. Due to age-inappropriate web and gaming content, the impact of new media use on teenagers is a matter of public and scientific concern. Based on current theories on inappropriate media use, a study was conducted that comprised 205 adolescents aged 10-14 years (Md = 13). Individuals were identified who showed clinically relevant problem behavior according to the problem scales of the Youth Self Report (YSR). Online gaming, communicational Internet use, and playing first-person shooters were predictive of externalizing behavior problems (aggression, delinquency). Playing online role-playing games was predictive of internalizing problem behavior (including withdrawal and anxiety). Parent-child communication about Internet activities was negatively related to problem behavior. Copyright © 2010 The Association for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  3. Sensitivity of the breastfeeding motivational measurement scale: a known group analysis of first time mothers.

    PubMed

    Stockdale, Janine; Sinclair, Marlene; Kernohan, George; McCrum-Gardner, Evie; Keller, John

    2013-01-01

    Breastfeeding has immense public health value for mothers, babies, and society, but there is an undesirably large gap between the number of new mothers who undertake and persist in breastfeeding and what would be a preferred level of accomplishment. This gap reflects the many obstacles, both physical and psychological, that confront new mothers. Previous research has illuminated many of these concerns, but research on this problem is limited in part by the unavailability of a research instrument that can measure the key differences between first-time mothers and experienced mothers with regard to the challenges they face when breastfeeding and the instructional advice they require. An instrument was designed to measure the motivational complexity associated with sustained breastfeeding behaviour: the Breastfeeding Motivational Measurement (BMM) Scale. It contains 51 self-report items (7-point Likert scale) that cluster into four categories related to the perceived value of breastfeeding, confidence to succeed, factors that influence success or failure, and strength of intentions, or goal. However, this scale had not been validated in terms of its sensitivity in profiling the motivation of new versus experienced mothers. This issue was investigated by having 202 breastfeeding mothers (100 first-time mothers) complete the scale. The analysis reported in this paper yielded a three-factor solution, consisting of value, midwife support, and expectancies for success, that explained the characteristics of first-time mothers as a known group. These results support the validity of the BMM scale as a diagnostic tool for research on first-time mothers who are learning to breastfeed. Further studies are required to test the validity of the scale in additional subgroups.

  4. Time Scale Optimization and the Hunt for Astronomical Cycles in Deep Time Strata

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.

    2016-04-01

    A valuable attribute of astrochronology is the direct link between chronometer and climate change, providing a remarkable opportunity to constrain the evolution of the surficial Earth System. Consequently, the hunt for astronomical cycles in strata has spurred the development of a rich conceptual framework for climatic/oceanographic change, and has allowed exploration of the geologic record with unprecedented temporal resolution. Accompanying these successes, however, has been a persistent skepticism about appropriate astrochronologic testing and circular reasoning: how does one reliably test for astronomical cycles in stratigraphic data, especially when time is poorly constrained? From this perspective, it would seem that the merits and promise of astrochronology (e.g., a geologic time scale measured in ≤400 kyr increments) also serve as its Achilles heel, if the confirmation of such short rhythms defies rigorous statistical testing. To address these statistical challenges in astrochronologic testing, a new approach has been developed that (1) explicitly evaluates time scale uncertainty, (2) is resilient to common problems associated with spectrum confidence level assessment and 'multiple testing', and (3) achieves high statistical power under a wide range of conditions (it can identify astronomical cycles when present in data). Designated TimeOpt (for "time scale optimization"; Meyers 2015), the method employs a probabilistic linear regression model framework to investigate amplitude modulation and frequency ratios (bundling) in stratigraphic data, while simultaneously determining the optimal time scale. This presentation will review the TimeOpt method, and demonstrate how the flexible statistical framework can be further extended to evaluate (and optimize upon) complex sedimentation rate models, enhancing the statistical power of the approach, and addressing the challenge of unsteady sedimentation. Meyers, S. R. (2015), The evaluation of eccentricity-related amplitude modulation and bundling in paleoclimate data: An inverse approach for astrochronologic testing and time scale optimization, Paleoceanography, 30, doi:10.1002/2015PA002850.
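
    A toy version of the optimization step (much simplified relative to TimeOpt's regression framework): scan candidate constant sedimentation rates and keep the one that maximizes spectral power at a target astronomical frequency.

        import numpy as np

        rng = np.random.default_rng(2)
        true_rate = 2.0                        # cm/kyr
        depth = np.arange(0.0, 800.0, 1.0)     # cm
        series = (np.sin(2*np.pi * (depth/true_rate) / 100.0)
                  + 0.5*rng.standard_normal(depth.size))
        target_f = 1.0/100.0                   # 100 kyr eccentricity-like cycle

        def power_at(rate):
            t = depth / rate                   # depth -> time under candidate rate
            c = np.cos(2*np.pi*target_f*t)     # project onto a sine/cosine pair
            s = np.sin(2*np.pi*target_f*t)
            return (series @ c)**2 + (series @ s)**2

        rates = np.linspace(0.5, 4.0, 141)
        print(rates[np.argmax([power_at(r) for r in rates])])   # ~2.0 cm/kyr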

  5. Mothers' problem-solving skill and use of help with infant-related issues: the role of importance and need for action.

    PubMed

    Pridham, K F; Chang, A S; Hansen, M F

    1987-08-01

    Examination was made of the relationship of mothers' appraisal of the importance of and need for action around infant-related issues to maternal experience (parity and time since birth), use of help, and perceived problem-solving competence. Sixty-two mothers (38 primiparae and 24 multiparae) kept for 90 days post-birth a daily log of issues, rated for importance and for need for action, and of help used. Mothers also reported perceived problem-solving competence on an 11-item scale. Findings indicated tentativeness in ratings of importance and action. Ratings of importance were associated with action ratings, except for temperament issues. Action ratings for baby care and illness issues decreased significantly with time. Otherwise, maternal experience had no effect on ratings. More of the variance in perceived competence than use of help was explained by action and importance ratings.

  6. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Lee, Woochan

    Fast electromagnetic analysis in the time and frequency domains is of critical importance to the design of integrated circuits (ICs) and other advanced engineering products and systems. Many IC structures constitute very large-scale modeling and simulation problems whose size continuously grows with the advancement of processing technology, resulting in numerical problems beyond the reach of even the most powerful existing computational resources. Unlike many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of these specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation, but their time step is traditionally restricted by the space step to ensure the stability of a time-domain simulation; making explicit time-domain methods unconditionally stable is therefore important for accelerating the computation. Frequency-domain methods, in turn, have suffered from an indefinite system that makes an iterative solution difficult to converge quickly.

    The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialties of on-chip circuits, such as Manhattan geometry and layered permittivity, are preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, achieving a total reduction in CPU time.

    The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability and deduct them directly from the system matrix resulting from a TDFEM-based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step.

    The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on the matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix-exponential-based TDFEM.

    The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM into a symmetric positive definite one. We deduct the non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures that an iterative solution converges in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
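
    The deduction idea behind the second contribution can be illustrated on a toy matrix (a minimal sketch, not the TDFEM implementation): find the eigenmodes that violate the explicit stability bound for the chosen time step and deduct them from the system matrix before time stepping.

        import numpy as np

        # Central differencing of u'' = -K u is stable only for
        # dt <= 2/sqrt(lambda_max(K)); deducting the offending modes from K
        # lets the explicit scheme run stably with a much larger dt.
        n = 200
        K = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * (n + 1)**2
        dt = 0.01                                   # well above the stability limit

        w, V = np.linalg.eigh(K)
        bad = w > (2.0 / dt)**2                     # modes unstable at this dt
        K_stable = K - V[:, bad] @ np.diag(w[bad]) @ V[:, bad].T

        u_prev = np.sin(np.pi * np.linspace(0, 1, n))   # smooth initial condition
        u = u_prev.copy()
        for _ in range(2000):
            u, u_prev = 2*u - u_prev - dt**2 * (K_stable @ u), u
        print(np.max(np.abs(u)))                    # stays O(1): no blow-up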

  7. Phase of Illness in palliative care: Cross-sectional analysis of clinical data from community, hospital and hospice patients.

    PubMed

    Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss Em

    2018-02-01

    Phase of Illness describes stages of advanced illness according to care needs of the individual, family and suitability of care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider strength of associations between these measures and Phase of Illness. Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function measured using Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs measured using items on Palliative Care Problem Severity Scale. Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in stable phase (65.9, 95% confidence interval = 63.4-68.3) and lowest in dying phase (16.6, 95% confidence interval = 15.3-17.8). Mean pain was highest in unstable phase (1.43, 95% confidence interval = 1.36-1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness (χ² = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in deteriorating phase than unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01-1.49). Forty-nine percent of the variance in Phase of Illness is explained by Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation.

  8. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
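
    A sequential sketch of the underlying annealing process on a random 3-SAT instance (instance size follows the paper's smallest case; the parallel Generalized Speculative Computation scheme reproduces this decision sequence exactly, by construction):

        import numpy as np

        rng = np.random.default_rng(3)
        n_vars, n_clauses = 100, 425
        clauses = (rng.integers(1, n_vars + 1, (n_clauses, 3))
                   * rng.choice([-1, 1], (n_clauses, 3)))   # signed literals

        def unsatisfied(assign):
            vals = assign[np.abs(clauses) - 1]
            lit_true = np.where(clauses > 0, vals, ~vals)
            return np.count_nonzero(~lit_true.any(axis=1))

        assign = rng.random(n_vars) < 0.5
        cost, T = unsatisfied(assign), 2.0
        while T > 0.01:
            for _ in range(200):
                i = rng.integers(n_vars)
                assign[i] = ~assign[i]          # propose a single-variable flip
                new = unsatisfied(assign)
                if new <= cost or rng.random() < np.exp((cost - new) / T):
                    cost = new                  # accept the move
                else:
                    assign[i] = ~assign[i]      # reject: undo the flip
            T *= 0.95                           # cool down
        print(1 - cost / n_clauses)             # fraction of satisfied clauses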

  9. The Geological Grading Scale: Every million Points Counts!

    NASA Astrophysics Data System (ADS)

    Stegman, D. R.; Cooper, C. M.

    2006-12-01

    The concept of geological time, ranging from thousands to billions of years, is naturally quite difficult for students to grasp initially, as it is much longer than the timescales over which they experience everyday life. Moreover, universities operate on a few key timescales (hourly lectures, weekly assignments, mid-term examinations) on which students' attention is focused, largely driven by graded assessment. The geological grading scale exploits the overwhelming interest students have in grades as an opportunity to instill familiarity with geological time. With the geological grading scale, the number of possible points/marks/grades available in the course is scaled to 4.5 billion points --- collapsing the entirety of Earth history into one semester. Alternatively, geological time can be compressed into each assignment, with scores for weekly homeworks not worth 100 points each, but 4.5 billion! Homeworks left incomplete with questions unanswered lose 100's of millions of points - equivalent to missing the Paleozoic era. The expected quality of presentation for problem sets can be established with great impact in the first week by docking assignments an insignificant number of points for handing in messy work, though likely more points than they've lost in their entire schooling history combined. Use this grading scale and your students will gradually begin to appreciate exactly how much time represents a geological blink of the eye.

  10. Wavelet transforms with discrete-time continuous-dilation wavelets

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Rao, Raghuveer M.

    1999-03-01

    Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.
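
    A minimal sketch of the idea (a generic construction, not the authors' wavelets): sample a mother wavelet at an arbitrary real dilation a and correlate it with the discrete-time signal, so the scale is not restricted to a dyadic grid.

        import numpy as np

        def ricker(tau):
            return (1 - 2*tau**2) * np.exp(-tau**2)    # "Mexican hat" mother wavelet

        def cwt_row(x, a):
            n = int(8 * a) | 1                         # odd support, ~8 scale units
            k = np.arange(n) - n // 2
            w = ricker(k / a) / np.sqrt(a)             # dilated, scale-normalized
            return np.convolve(x, w[::-1], mode="same")

        t = np.linspace(0, 1, 1024)
        x = np.sin(2*np.pi*40*t) + np.sin(2*np.pi*90*t)
        for a in [1.7, 2.0, 3.14]:                     # arbitrary, non-dyadic scales
            print(a, np.abs(cwt_row(x, a)).max())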

  11. Managing Network Partitions in Structured P2P Networks

    NASA Astrophysics Data System (ADS)

    Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif

    Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. Consequently, resilience to network partitions and mergers is a crucial requirement for building structured peer-to-peer systems, and the problem is closely tied to fault-tolerance and self-management in large-scale systems. Nevertheless, it has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, as it is similar to massive node failures; yet the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger, and we present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how it is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, it quickly terminates and does not clutter the network with many messages. The algorithm is flexible, as the tradeoff between message complexity and time complexity can be adjusted by a parameter.

  12. International comparisons of behavioral and emotional problems in preschool children: parents' reports from 24 societies.

    PubMed

    Rescorla, Leslie A; Achenbach, Thomas M; Ivanova, Masha Y; Harder, Valerie S; Otten, Laura; Bilenberg, Niels; Bjarnadottir, Gudrun; Capron, Christiane; De Pauw, Sarah S W; Dias, Pedro; Dobrean, Anca; Döpfner, Manfred; Duyme, Michel; Eapen, Valsamma; Erol, Nese; Esmaeili, Elaheh Mohammad; Ezpeleta, Lourdes; Frigerio, Alessandra; Fung, Daniel S S; Gonçalves, Miguel; Guðmundsson, Halldór; Jeng, Suh-Fang; Jusiené, Roma; Ah Kim, Young; Kristensen, Solvejg; Liu, Jianghong; Lecannelier, Felipe; Leung, Patrick W L; Machado, Bárbara César; Montirosso, Rosario; Ja Oh, Kyung; Ooi, Yoon Phaik; Plück, Julia; Pomalima, Rolando; Pranvera, Jetishi; Schmeck, Klaus; Shahini, Mimoza; Silva, Jaime R; Simsek, Zeynep; Sourander, Andre; Valverde, José; van der Ende, Jan; Van Leeuwen, Karla G; Wu, Yen-Tzu; Yurdusen, Sema; Zubrick, Stephen R; Verhulst, Frank C

    2011-01-01

    International comparisons were conducted of preschool children's behavioral and emotional problems as reported on the Child Behavior Checklist for Ages 1½-5 by parents in 24 societies (N = 19,850). Item ratings were aggregated into scores on syndromes; Diagnostic and Statistical Manual of Mental Disorders-oriented scales; a Stress Problems scale; and Internalizing, Externalizing, and Total Problems scales. Effect sizes for scale score differences among the 24 societies ranged from small to medium (3-12%). Although societies differed greatly in language, culture, and other characteristics, Total Problems scores for 18 of the 24 societies were within 7.1 points of the omnicultural mean of 33.3 (on a scale of 0-198). Gender and age differences, as well as gender and age interactions with society, were all very small (effect sizes < 1%). Across all pairs of societies, correlations between mean item ratings averaged .78, and correlations between internal consistency alphas for the scales averaged .92, indicating that the rank orders of mean item ratings and internal consistencies of scales were very similar across diverse societies.

  13. International Comparisons of Behavioral and Emotional Problems in Preschool Children: Parents’ Reports From 24 Societies

    PubMed Central

    Rescorla, Leslie A.; Achenbach, Thomas M.; Ivanova, Masha Y.; Harder, Valerie S.; Otten, Laura; Bilenberg, Niels; Bjarnadottir, Gudrun; Capron, Christiane; De Pauw, Sarah S. W.; Dias, Pedro; Dobrean, Anca; Döpfner, Manfred; Duyme, Michel; Eapen, Valsamma; Erol, Nese; Esmaeili, Elaheh Mohammad; Ezpeleta, Lourdes; Frigerio, Alessandra; Fung, Daniel S. S.; Gonçalves, Miguel; Guðmundsson, Halldór; Jeng, Suh-Fang; Jusiené, Roma; Kim, Young Ah; Kristensen, Solvejg; Liu, Jianghong; Lecannelier, Felipe; Leung, Patrick W. L.; Machado, Bárbara César; Montirosso, Rosario; Oh, Kyung Ja; Ooi, Yoon Phaik; Plück, Julia; Pomalima, Rolando; Pranvera, Jetishi; Schmeck, Klaus; Shahini, Mimoza; Silva, Jaime R.; Simsek, Zeynep; Sourander, Andre; Valverde, José; van der Ende, Jan; Van Leeuwen, Karla G.; Wu, Yen-Tzu; Yurdusen, Sema; Zubrick, Stephen R.; Verhulst, Frank C.

    2014-01-01

    International comparisons were conducted of preschool children’s behavioral and emotional problems as reported on the Child Behavior Checklist for Ages 1½–5 by parents in 24 societies (N =19,850). Item ratings were aggregated into scores on syndromes; Diagnostic and Statistical Manual of Mental Disorders–oriented scales; a Stress Problems scale; and Internalizing, Externalizing, and Total Problems scales. Effect sizes for scale score differences among the 24 societies ranged from small to medium (3–12%). Although societies differed greatly in language, culture, and other characteristics, Total Problems scores for 18 of the 24 societies were within 7.1 points of the omnicultural mean of 33.3 (on a scale of 0–198). Gender and age differences, as well as gender and age interactions with society, were all very small (effect sizes <1%). Across all pairs of societies, correlations between mean item ratings averaged .78, and correlations between internal consistency alphas for the scales averaged .92, indicating that the rank orders of mean item ratings and internal consistencies of scales were very similar across diverse societies. PMID:21534056

  14. Behavior analytic approaches to problem behavior in intellectual disabilities.

    PubMed

    Hagopian, Louis P; Gregory, Meagan K

    2016-03-01

    The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.

  15. Hierarchical analysis of vegetation dynamics over 71 years: Soil-rainfall interactions in a Chihuahuan Desert ecosystem

    USDA-ARS?s Scientific Manuscript database

    Proliferation of woody plants in grasslands and savannas (hereafter, “rangelands”) is a persistent problem globally. This widely-observed shift from grass to shrub dominance in rangelands worldwide has been heterogeneous in space and time largely due to cross-scale interactions between soils, climat...

  16. How Language Limits Our Understanding of Environmental Education.

    ERIC Educational Resources Information Center

    Bowers, Chet

    2001-01-01

    Develops a theory of metaphor that helps explain how environmental education contributes to the double bind of helping to address environmental problems while at the same time reinforcing the use of the language-thought patterns that underlie the digital phase of the Industrial Revolution that we are now entering on a global scale. (Author/SAH)

  17. Modelling atmospheric flows with adaptive moving meshes

    NASA Astrophysics Data System (ADS)

    Kühnlein, Christian; Smolarkiewicz, Piotr K.; Dörnbrack, Andreas

    2012-04-01

    An anelastic atmospheric flow solver has been developed that combines semi-implicit non-oscillatory forward-in-time numerics with a solution-adaptive mesh capability. A key feature of the solver is the unification of a mesh adaptation apparatus, based on moving mesh partial differential equations (PDEs), with the rigorous formulation of the governing anelastic PDEs in generalised time-dependent curvilinear coordinates. The solver development includes an enhancement of the flux-form multidimensional positive definite advection transport algorithm (MPDATA) - employed in the integration of the underlying anelastic PDEs - that ensures full compatibility with mass continuity under moving meshes. In addition, to satisfy the geometric conservation law (GCL) tensor identity under general moving meshes, a diagnostic approach is proposed based on the treatment of the GCL as an elliptic problem. The benefits of the solution-adaptive moving mesh technique for the simulation of multiscale atmospheric flows are demonstrated. The developed solver is verified for two idealised flow problems with distinct levels of complexity: passive scalar advection in a prescribed deformational flow, and the life cycle of a large-scale atmospheric baroclinic wave instability showing fine-scale phenomena of fronts and internal gravity waves.
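
    A one-dimensional toy (not the anelastic MPDATA solver itself) illustrates the compatibility-with-mass-continuity point: if fluxes use the velocity relative to the moving mesh and cell volumes are recomputed from the moved edges, a constant field stays exactly constant, which is what the discrete GCL requires.

      import numpy as np

      nc, u, dt, steps = 50, 1.0, 0.004, 100
      edges = np.linspace(0.0, 1.0, nc + 1)
      q = np.ones(nc)                                # constant field: GCL test
      mass = np.diff(edges) * q

      for n in range(steps):
          w = 0.1 * np.sin(2 * np.pi * edges) * np.cos(0.5 * n * dt)  # mesh motion
          rel = u - w                                # velocity seen by moving edges
          qe = np.concatenate(([q[0]], q, [q[-1]]))  # ghost cells for upwinding
          flux = rel * np.where(rel > 0, qe[:-1], qe[1:])
          mass -= dt * (flux[1:] - flux[:-1])        # conservative update
          edges = edges + dt * w                     # move the mesh; volumes follow
          q = mass / np.diff(edges)

      print(np.max(np.abs(q - 1.0)))                 # ~1e-15: constancy preserved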

  18. Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael

    2014-09-01

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.

  19. Behavioral correlation with television watching and videogame playing among children in the United Arab Emirates.

    PubMed

    Yousef, Said; Eapen, Valsamma; Zoubeidi, Taoufik; Mabrouk, Abdelazim

    2014-08-01

    Television viewing and videogame use (TV/VG) appear to be associated with some childhood behavioral problems. There are no studies addressing this problem in the United Arab Emirates. One hundred ninety-seven school children (mean age, 8.7 ± 2.1 years) were assessed. Child Behavior Checklist (CBCL) subscale scores and socio-demographic characteristics were compared between children who were involved with TV/VG for more than 2 hours/day and those involved for less than 2 hours/day (the upper limit recommended by the American Academy of Pediatrics). Thirty-seven percent of the children were involved with TV/VG for more than 2 hours/day; these children scored significantly higher than their counterparts on the CBCL syndrome scales of withdrawn, social problems, attention problems, delinquent behavior, aggressive behavior, internalizing problems, and externalizing problems, as well as on the CBCL total score. Moreover, these children were younger in birth order and had fewer siblings. After controlling for these confounders using logistic regression, we found that TV/VG time of more than 2 hours/day was positively associated with withdrawn (p = 0.008), attention problems (p = 0.037), externalizing problems (p = 0.007), and CBCL total (p = 0.014) scores. Involvement with TV/VG for more than 2 hours/day is associated with more childhood behavioral problems. Counteracting the negative effects of over-involvement with TV/VG in children requires increased parental awareness.

  20. Efficient hemodynamic event detection utilizing relational databases and wavelet analysis

    NASA Technical Reports Server (NTRS)

    Saeed, M.; Mark, R. G.

    2001-01-01

    Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
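
    A minimal sketch of the idea, assuming nothing about the paper's actual schema (sqlite3 stands in for MySQL so the example is self-contained): one level of Haar coefficients of a parameter trend is stored in a relational table, and a query over large detail coefficients flags abrupt events.

      import sqlite3
      import numpy as np

      def haar_level1(x):
          """One level of the Haar transform: approximation and detail."""
          x = np.asarray(x, dtype=float)
          even, odd = x[0::2], x[1::2]
          return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

      trend = np.array([72, 74, 71, 95, 96, 94, 70, 73], float)  # e.g. heart rate
      approx, detail = haar_level1(trend)

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE coeffs (pos INTEGER, level INTEGER, kind TEXT, val REAL)")
      con.executemany("INSERT INTO coeffs VALUES (?, ?, ?, ?)",
                      [(i, 1, "detail", float(v)) for i, v in enumerate(detail)])

      # Large detail coefficients mark abrupt hemodynamic changes at that position.
      for pos, val in con.execute(
              "SELECT pos, val FROM coeffs WHERE kind = 'detail' AND ABS(val) > 5"):
          print(pos, round(val, 2))  # position 1: the 71 -> 95 jump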

  1. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of monitoring large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved using an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and the evidence indicates that it is amenable to parallel and analog very-large-scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631
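
    The bubble-sort refinement and the mapping to CNN template parameters are specific to the paper and are not reproduced here; the sketch below is only a generic particle swarm optimization loop on a placeholder objective, to show the kind of search being run.

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(p):                             # placeholder objective (sphere)
          return np.sum(p**2, axis=-1)

      n, dim, iters = 30, 4, 100
      pos = rng.uniform(-5, 5, (n, dim))
      vel = np.zeros((n, dim))
      best_p, best_f = pos.copy(), fitness(pos)   # per-particle bests

      for _ in range(iters):
          g = best_p[np.argmin(best_f)]           # swarm-wide best position
          r1, r2 = rng.random((2, n, dim))
          vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
          pos += vel
          f = fitness(pos)
          better = f < best_f
          best_p[better], best_f[better] = pos[better], f[better]

      print(best_f.min())                         # approaches 0 for the sphere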

  2. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of monitoring large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved using an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and the evidence indicates that it is amenable to parallel and analog very-large-scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  3. Effect of music care on depression and behavioral problems in elderly people with dementia in Taiwan: a quasi-experimental, longitudinal study.

    PubMed

    Wang, Su-Chin; Yu, Ching-Len; Chang, Su-Hsien

    2017-02-01

    The purpose was to examine the effectiveness of music care on cognitive function, depression, and behavioral problems among elderly people with dementia in long-term care facilities in Taiwan. The study had a quasi-experimental, longitudinal research design and used two groups of subjects. Subjects were not randomly assigned to the experimental group (n = 90) or the comparison group (n = 56). Based on Bandura's social cognition theory, subjects in the experimental group received Kagayashiki music care (KMC) twice per week for 24 weeks. Subjects in the comparison group were provided with activities as usual. Results showed that, with the baseline score on the Clifton Assessment Procedures for the Elderly Behavior Rating Scale and the time spent attending KMC activities as covariates, the two groups differed significantly on the mini-mental state examination (MMSE). Results also showed that, with the baseline score on the Cornell Scale for Depression in Dementia and the baseline MMSE as covariates, the two groups differed significantly on the Clifton Assessment Procedures for the Elderly Behavior Rating Scale. These findings provide information for staff caregivers in long-term care facilities to develop a non-invasive care model for elderly people with dementia to deal with depression, anxiety, and behavioral problems.

  4. Spatial and temporal patterns of stranded intertidal marine debris: is there a picture of global change?

    PubMed

    Browne, Mark Anthony; Chapman, M Gee; Thompson, Richard C; Amaral Zettler, Linda A; Jambeck, Jenna; Mallos, Nicholas J

    2015-06-16

    Floating and stranded marine debris is widespread. Increasing sea levels and altered rainfall, solar radiation, wind speed, waves, and oceanic currents associated with climatic change are likely to transfer more debris from coastal cities into marine and coastal habitats. Marine debris causes economic and ecological impacts, but understanding the scope of these requires quantitative information on spatial patterns and trends in the amounts and types of debris at a global scale. There are very few large-scale programs to measure debris, but many peer-reviewed and published scientific studies of marine debris describe local patterns. Unfortunately, methods of defining debris, sampling, and interpreting patterns in space or time vary considerably among studies, yet if data could be synthesized across studies, a global picture of the problem might emerge. We analyzed 104 published scientific papers on marine debris to determine whether such a synthesis is possible. Although many studies were well designed to answer specific questions, differences in definitions of what constitutes marine debris, in measurement methods, and in the spatial scales of the studies mean that no general picture can emerge from this wealth of data. These problems are detailed to guide future studies, and guidelines are provided to enable the collection of more comparable data to better manage this growing problem.

  5. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    NASA Astrophysics Data System (ADS)

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; Okui, Takemichi; Tsai, Yuhsin

    2016-12-01

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H_0 and the matter density perturbation σ_8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ_8 problem, while the presence of tightly coupled dark radiation ameliorates the H_0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  6. Beyond the Cell: Using Multiscalar Topics to Bring Interdisciplinarity into Undergraduate Cellular Biology Courses

    PubMed Central

    Weber, Carolyn F.

    2016-01-01

    Western science has grown increasingly reductionistic and, in parallel, the undergraduate life sciences curriculum has become disciplinarily fragmented. While reductionistic approaches have led to landmark discoveries, many of the most exciting scientific advances in the late 20th century have occurred at disciplinary interfaces; work at these interfaces is necessary to manage the world’s looming problems, particularly those that are rooted in cellular-level processes but have ecosystem- and even global-scale ramifications (e.g., nonsustainable agriculture, emerging infectious diseases). Managing such problems requires comprehending whole scenarios and their emergent properties as sums of their multiple facets and complex interrelationships, which usually integrate several disciplines across multiple scales (e.g., time, organization, space). This essay discusses bringing interdisciplinarity into undergraduate cellular biology courses through the use of multiscalar topics. Discussing how cellular-level processes impact large-scale phenomena makes them relevant to everyday life and unites diverse disciplines (e.g., sociology, cell biology, physics) as facets of a single system or problem, emphasizing their connections to core concepts in biology. I provide specific examples of multiscalar topics and discuss preliminary evidence that using such topics may increase students’ understanding of the cell’s position within an ecosystem and how cellular biology interfaces with other disciplines. PMID:27146162

  7. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H_0 and the matter density perturbation σ_8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ_8 problem, while the presence of tightly coupled dark radiation ameliorates the H_0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  8. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    DOE PAGES

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; ...

    2016-12-21

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H_0 and the matter density perturbation σ_8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ_8 problem, while the presence of tightly coupled dark radiation ameliorates the H_0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  9. H2, fixed architecture, control design for large scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1990-01-01

    The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
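
    The coupled matrix equations of the fixed-architecture problem are beyond a short example, but the numerical idea of homotopy continuation can be sketched on a toy system: deform an easy problem G(x)=0 into the target F(x)=0 and track the solution with Newton's method at each step.

      import numpy as np

      def F(x):                        # target nonlinear system (illustrative)
          return np.array([x[0]**3 - x[1] - 1.0, x[1]**3 + x[0] - 1.0])

      def G(x):                        # easy start system with known root x = 0
          return x

      def newton(H, x, tol=1e-12):
          for _ in range(50):
              J = np.empty((2, 2))     # finite-difference Jacobian of H
              h = 1e-7
              for j in range(2):
                  e = np.zeros(2); e[j] = h
                  J[:, j] = (H(x + e) - H(x)) / h
              step = np.linalg.solve(J, -H(x))
              x = x + step
              if np.linalg.norm(step) < tol:
                  break
          return x

      x = np.zeros(2)
      for t in np.linspace(0.0, 1.0, 21):   # gradually switch G off and F on
          x = newton(lambda z: (1 - t) * G(z) + t * F(z), x)
      print(x, F(x))                        # tracked root of F, residual ~ 0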

  10. Neurocognitive dysfunction in problem gamblers with co-occurring antisocial personality disorder.

    PubMed

    Blum, Austin W; Leppink, Eric W; Grant, Jon E

    2017-07-01

    Problem gamblers with symptoms of antisocial personality disorder (ASPD) may represent a distinct problem gambling subtype, but the neurocognitive profile of individuals affected by both disorders is poorly characterized. Non-treatment-seeking young adults (18-29 years) who gambled ≥5 times in the preceding year were recruited from the general community. Problem gamblers (defined as those meeting ≥1 DSM-5 diagnostic criteria for gambling disorder) with a lifetime history of ASPD (N=26) were identified using the Mini International Neuropsychiatric Interview (MINI) and compared with controls (N=266) using questionnaire-based impulsivity scales and objective computerized neuropsychological tasks. Findings were uncorrected for multiple comparisons. Effect sizes were calculated using Cohen's d. Problem gambling with ASPD was associated with significantly elevated gambling disorder symptoms, lower quality of life, greater psychiatric comorbidity, higher impulsivity questionnaire scores on the Barratt Impulsiveness Scale (d=0.4) and Eysenck Impulsivity Questionnaire (d=0.5), and impairments in cognitive flexibility (d=0.4), executive planning (d=0.4), and an aspect of decision-making (d=0.6). Performance on measures of response inhibition, risk adjustment, and quality of decision making did not differ significantly between groups. These preliminary findings, though in need of replication, support the characterization of problem gambling with ASPD as a subtype of problem gambling associated with higher rates of impulsivity and executive function deficits. Taken together, these results may have treatment implications.

  11. The use of modified scaling factors in the design of high-power, non-linear, transmitting rod-core antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Dvorak, Steven L.; Sternberg, Ben K.

    2010-10-01

    In this paper, we develop a technique for designing high-power, non-linear, transmitting rod-core antennas by using simple modified scale factors rather than running labor-intensive numerical models. By using modified scale factors, a designer can predict changes in magnetic moment, inductance, core series loss resistance, etc. We define modified scale factors as the case when all physical dimensions of the rod antenna are scaled by p, except for the cross-sectional area of the individual wires or strips that are used to construct the core. This allows one to make measurements on a scaled-down version of the rod antenna using the same core material that will be used in the final antenna design. The modified scale factors were derived from prolate spheroidal analytical expressions for a finite-length rod antenna and were verified with experimental results. The modified scaling factors can only be used if the magnetic flux densities within the two scaled cores are the same. With the magnetic flux density constant, the two scaled cores will operate with the same complex permeability, thus changing the non-linear problem to a quasi-linear problem. We also demonstrate that by holding the number of turns times the drive current constant, while changing the number of turns, the inductance and core series loss resistance change by the number of turns squared. Experimental measurements were made on rod cores made from varying diameters of black oxide, low carbon steel wires and different widths of Metglas foil. Furthermore, we demonstrate that the modified scale factors work even in the presence of eddy currents within the core material.
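
    The stated N²-scaling rule lends itself to a trivial calculator; the baseline inductance, loss resistance, and turn counts below are hypothetical, not measurements from the paper.

      def scale_antenna(L_henry, R_core_ohm, N_old, N_new):
          """Inductance and core series loss resistance scale as the turns
          ratio squared, assuming N*I is held constant so the core flux
          density (and hence the complex permeability) is unchanged."""
          k = (N_new / N_old) ** 2
          return L_henry * k, R_core_ohm * k

      # Hypothetical prototype: 2 mH and 5 ohms with 100 turns.
      L1, R1 = scale_antenna(2.0e-3, 5.0, N_old=100, N_new=300)
      print(L1, R1)  # 0.018 H and 45.0 ohms at 300 turns (9x each)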

  12. Aerospace plane guidance using geometric control theory

    NASA Technical Reports Server (NTRS)

    Van Buren, Mark A.; Mease, Kenneth D.

    1990-01-01

    A reduced-order method employing decomposition, based on time-scale separation, of the 4-D state space in a 2-D slow manifold and a family of 2-D fast manifolds is shown to provide an excellent approximation to the full-order minimum-fuel ascent trajectory. Near-optimal guidance is obtained by tracking the reduced-order trajectory. The tracking problem is solved as regulation problems on the family of fast manifolds, using the exact linearization methodology from nonlinear geometric control theory. The validity of the overall guidance approach is indicated by simulation.
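
    The vehicle model itself is too large for a short example, but the exact-linearization tracking idea can be sketched on a toy system: for x1' = x2, x2' = -x1**3 + u, the feedback u = x1**3 + v cancels the nonlinearity exactly, and v places linear error dynamics around a reference trajectory.

      import numpy as np

      dt, T = 1e-3, 5.0
      k0, k1 = 25.0, 10.0                 # gains for e'' + k1 e' + k0 e = 0
      x = np.array([0.5, 0.0])            # starts off the reference

      for i in range(int(T / dt)):
          t = i * dt
          r, rd, rdd = np.sin(t), np.cos(t), -np.sin(t)  # reference, derivatives
          v = rdd + k1 * (rd - x[1]) + k0 * (r - x[0])
          u = x[0]**3 + v                 # exact linearization + tracking law
          x = x + dt * np.array([x[1], -x[0]**3 + u])    # forward Euler step

      print(abs(x[0] - np.sin(T)))        # small tracking error at t = T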

  13. Multi-GPU implementation of a VMAT treatment plan optimization algorithm.

    PubMed

    Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B

    2015-06-01

    Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present the detailed techniques employed for GPU implementation. The authors also utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on a CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
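
    As a CPU-side illustration of the beam-angle partitioning and the beamlet-pricing step described above (scipy stands in for the GPU kernels; the matrix sizes and the objective gradient are made up), a sparse DDC matrix can be split into column blocks and priced block by block:

      import numpy as np
      from scipy import sparse

      rng = np.random.default_rng(1)
      n_vox, n_beamlets = 1000, 400
      D = sparse.random(n_vox, n_beamlets, density=0.01,
                        random_state=1, format="csr")   # toy DDC matrix

      # split the beamlets into 4 angle groups, one "device" per group
      groups = np.array_split(np.arange(n_beamlets), 4)
      D_parts = [D[:, g] for g in groups]

      grad = rng.standard_normal(n_vox)   # gradient of the MP objective w.r.t. dose
      price = np.concatenate([Dg.T @ grad for Dg in D_parts])  # gathered prices

      best = int(np.argmin(price))        # most negative price enters the aperture
      print(best, price[best])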

  14. The effect of intermediate-scale motions on line formation. [sawtooth and sine motions in solar atmosphere]

    NASA Technical Reports Server (NTRS)

    Shine, R. A.

    1975-01-01

    The problem of LTE and non-LTE line formation in the presence of nonthermal velocity fields with geometric scales between the microscopic and macroscopic limits is investigated in the cases of periodic sinusoidal and sawtooth waves. For a fixed source function (the LTE case), it is shown that time-averaged line profiles progress smoothly from the microscopic to the macroscopic limits as the geometric scale of the motions increases, that the sinusoidal motions produce symmetric time-averaged profiles, and that the sawtooth motions cause a redshift. In several idealized non-LTE cases, it is found that intermediate-scale velocity fields can significantly increase the surface source functions and line-core intensities. Calculations are made for a two-level atom in an isothermal atmosphere for a range of velocity scales and non-LTE coupling parameters and also for a two-level atom and a four-level representation of Na I line formation in the Harvard-Smithsonian Reference Atmosphere (1971) solar model. It is found that intermediate-scale velocity fields in the solar atmosphere could explain the central intensities of the Na I D lines and other strong absorption lines without invoking previously suggested high electron densities.

  15. Individual Differences in Childhood Sleep Problems Predict Later Cognitive Executive Control

    PubMed Central

    Friedman, Naomi P.; Corley, Robin P.; Hewitt, John K.; Wright, Kenneth P.

    2009-01-01

    Study Objective: To determine whether individual differences in developmental patterns of general sleep problems are associated with 3 executive function abilities—inhibiting, updating working memory, and task shifting—in late adolescence. Participants: 916 twins (465 female, 451 male) and parents from the Colorado Longitudinal Twin Study. Measurements and Results: Parents reported their children's sleep problems at ages 4 years, 5 y, 7 y, and 9–16 y based on a 7-item scale from the Child Behavior Checklist; a subset of children (n = 568) completed laboratory assessments of executive functions at age 17. Latent variable growth curve analyses were used to model individual differences in longitudinal trajectories of childhood sleep problems. Sleep problems declined over time, with ~70% of children having ≥ 1 problem at age 4 and ~33% of children at age 16. However, significant individual differences in both the initial levels of problems (intercept) and changes across time (slope) were observed. When executive function latent variables were added to the model, the intercept did not significantly correlate with the later executive function latent variables; however, the slope variable significantly (P < 0.05) negatively correlated with inhibiting (r = −0.27) and updating (r = −0.21), but not shifting (r = −0.10) abilities. Further analyses suggested that the slope variable predicted the variance common to the 3 executive functions (r = −0.29). Conclusions: Early levels of sleep problems do not seem to have appreciable implications for later executive functioning. However, individuals whose sleep problems decrease more across time show better general executive control in late adolescence. Citation: Friedman NP; Corley RP; Hewitt JK; Wright KP. Individual differences in childhood sleep problems predict later cognitive executive control. SLEEP 2009;32(3):323-333. PMID:19294952

  16. Adaptive learning compressive tracking based on Markov location prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan

    2017-03-01

    Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision, with theoretical and practical value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, in the presence of object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose Markov location prediction to obtain an initial estimate of the object's position. CT is then used to locate the object accurately, and an adaptive strategy for updating the classifier parameters is given based on the confidence map. At the same time, scale features are extracted at the predicted object location, which handles object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.
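
    A minimal sketch of the location-prediction idea, with the object's frame-to-frame motion discretized into a few displacement states and an illustrative transition matrix (the CT appearance model and classifier update are not reproduced):

      import numpy as np

      states = np.array([[-4, 0], [0, 0], [4, 0], [0, 4], [0, -4]])  # dx, dy
      P = np.array([                     # illustrative transition probabilities
          [0.6, 0.2, 0.0, 0.1, 0.1],
          [0.2, 0.4, 0.2, 0.1, 0.1],
          [0.0, 0.2, 0.6, 0.1, 0.1],
          [0.1, 0.2, 0.1, 0.5, 0.1],
          [0.1, 0.2, 0.1, 0.1, 0.5],
      ])

      def predict(center, last_state):
          """Most probable next displacement given the last one; the tracker's
          dense search window is then centered on the returned position."""
          nxt = int(np.argmax(P[last_state]))
          return center + states[nxt], nxt

      center, s = np.array([120, 80]), 2  # last motion: +4 px in x
      center, s = predict(center, s)
      print(center, s)                    # [124  80] 2: keep drifting right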

  17. The Triggering of Large-Scale Waves by CME Initiation

    NASA Astrophysics Data System (ADS)

    Forbes, Terry

    Studies of the large-scale waves generated at the onset of a coronal mass ejection (CME) can provide important information about the processes in the corona that trigger and drive CMEs. The size of the region where the waves originate can indicate the location of the magnetic forces that drive the CME outward, and the rate at which compressive waves steepen into shocks can provide a measure of how the driving forces develop in time. However, in practice it is difficult to separate the effects of wave formation from wave propagation. The problem is particularly acute for the corona because of the multiplicity of wave modes (e.g. slow versus fast MHD waves) and the highly nonuniform structure of the solar atmosphere. At the present time large-scale numerical simulations provide the best hope for deconvolving wave propagation and formation effects from one another.

  18. Vision and academic performance of learning disabled children.

    PubMed

    Wharry, R E; Kirkpatrick, S W

    1986-02-01

    The purpose of this study was to assess differences in academic performance among myopic, hyperopic, and emmetropic children who were learning disabled. More specifically, it was expected that myopic children would perform better on mathematical and spatial tasks than hyperopic ones, and that hyperopic and emmetropic children would perform better on verbal measures than myopic ones. For 439 learning disabled students, visual anomalies were determined via a Generated Retinal Reflex Image Screening System. Test data were obtained from school files. Partial support for the hypothesis was obtained. Myopic learning disabled children outperformed hyperopic and emmetropic children on the Key Math test. Myopic children scored better than hyperopic children on the WRAT Reading subtest; on the Durrell Analysis of Reading Difficulty Oral Reading Comprehension, Oral Rate, Flashword, and Spelling subtests; and on the Key Math Measurement and Total scores. Severity of refractive error significantly affected the Wechsler Intelligence Scale for Children--Revised Full Scale, Performance Scale, Verbal Scale, and Digit Span scores but did not affect any academic test scores. Several other findings were also reported. Those with nonametropic problems scored higher than those without problems on the Key Math Time subtest. Implications supportive of the theories of Benbow and Benbow and of Geschwind and Behan were stated.

  19. Separation and imaging diffractions by a sparsity-promoting model and subspace trust-region algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Wang, Chengxiang; Geng, Weifeng

    2017-03-01

    The small-scale geologic inhomogeneities or discontinuities, such as tiny faults, cavities, or fractures, generally have spatial scales comparable to or even smaller than the seismic wavelength. The seismic responses of these objects are therefore coded in diffractions, and high-resolution imaging can be attempted if the diffractions are appropriately imaged. As the amplitudes of reflections can be several orders of magnitude larger than those of diffractions, one of the key problems of diffraction imaging is to suppress reflections while preserving diffractions. A sparsity-promoting method for separating diffractions in the common-offset domain is proposed that uses the Kirchhoff integral formula to enforce the sparsity of diffractions and the linear Radon transform to model reflections. A subspace trust-region algorithm that provides globally convergent solutions is employed to solve this large-scale computational problem. The method not only allows for separation of diffractions in the case of interfering events but also ensures a high fidelity of the separated diffractions. A numerical experiment and a field application demonstrate the good performance of the proposed method in imaging the small-scale geological features related to the migration channels and storage spaces of carbonate reservoirs.
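
    The Kirchhoff/Radon operators and the subspace trust-region solver are specific to the paper; the generic mechanism of sparsity-promoting separation can nevertheless be sketched with ISTA on a random placeholder operator: gradient steps on the data misfit alternated with soft thresholding recover a sparse component.

      import numpy as np

      rng = np.random.default_rng(3)
      m, n = 60, 120
      A = rng.standard_normal((m, n)) / np.sqrt(m)    # placeholder modeling operator
      x_true = np.zeros(n)
      x_true[[7, 40, 95]] = [2.0, -1.5, 1.0]          # sparse "diffractions"
      d = A @ x_true + 0.01 * rng.standard_normal(m)  # observed data

      lam = 0.05
      step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
      x = np.zeros(n)
      for _ in range(500):
          x = x - step * (A.T @ (A @ x - d))          # gradient step on the misfit
          x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold

      print(np.flatnonzero(np.abs(x) > 0.1))          # support of the sparse part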

  20. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
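
    A minimal sketch of the level-assignment logic behind multilevel LTS, assuming a simple h/c CFL estimate: each element gets its own stable step, and elements are binned into levels dt, dt/2, dt/4, ... so that locally refined elements sub-step without throttling the rest of the mesh.

      import numpy as np

      h = np.array([1.0, 0.9, 0.85, 0.05, 0.04, 0.8, 0.5])  # element sizes (toy)
      c, cfl = 1.0, 0.5                     # wave speed and CFL number
      dt_elem = cfl * h / c                 # per-element stable step
      dt_global = dt_elem.max()             # coarsest admissible step

      # level k: the element is advanced with dt_global / 2**k
      levels = np.ceil(np.log2(dt_global / dt_elem)).astype(int)
      print(levels)   # [0 1 1 5 5 1 1]: the refined elements sub-step 2**5 times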

  1. Effects of time perspective and self-control on procrastination and Internet addiction.

    PubMed

    Kim, Jinha; Hong, Hyeongi; Lee, Jungeun; Hyun, Myoung-Ho

    2017-06-01

    Background and aims College students experiencing stress show tendencies to procrastinate and can develop Internet addiction problems. This study investigated the structural relationship between time perspective and self-control on procrastination and Internet addiction. Methods College students (N = 377) residing in South Korea completed the following questionnaires: the Pathological Internet Use Behavior Symptom Scale for Adults, the Zimbardo Time Perspective Inventory, the Self-Control Rating Scale, and the Aitken Procrastination Inventory. The sample variance-covariance matrix was analyzed using AMOS 20.0. Results Time perspective had a direct effect on self-control and an indirect effect on Internet use and procrastination. In addition, self-control affected procrastination and Internet use. Conclusions Individuals with a present-oriented time perspective tend to evidence poorer self-control, increasing the likelihood of procrastination and Internet addiction. Individuals with a future-oriented time perspective, on the other hand, tend to have stronger self-control, decreasing their risk of procrastination and Internet addiction.

  2. Effects of time perspective and self-control on procrastination and Internet addiction

    PubMed Central

    Kim, Jinha; Hong, Hyeongi; Lee, Jungeun; Hyun, Myoung-Ho

    2017-01-01

    Background and aims College students experiencing stress show tendencies to procrastinate and can develop Internet addiction problems. This study investigated the structural relationship between time perspective and self-control on procrastination and Internet addiction. Methods College students (N = 377) residing in South Korea completed the following questionnaires: the Pathological Internet Use Behavior Symptom Scale for Adults, the Zimbardo Time Perspective Inventory, the Self-Control Rating Scale, and the Aitken Procrastination Inventory. The sample variance–covariance matrix was analyzed using AMOS 20.0. Results Time perspective had a direct effect on self-control and an indirect effect on Internet use and procrastination. In addition, self-control affected procrastination and Internet use. Conclusions Individuals with a present-oriented time perspective tend to evidence poorer self-control, increasing the likelihood of procrastination and Internet addiction. Individuals with a future-oriented time perspective, on the other hand, tend to have stronger self-control, decreasing their risk of procrastination and Internet addiction. PMID:28494615

  3. The Gist of Delay of Gratification: Understanding and Predicting Problem Behaviors

    PubMed Central

    REYNA, VALERIE F.; WILHELMS, EVAN A.

    2017-01-01

    Delay of gratification captures elements of temptation and self-denial that characterize real-life problems with money and other problem behaviors such as unhealthy risk taking. According to fuzzy-trace theory, decision makers mentally represent social values such as delay of gratification in a coarse but meaningful form of memory called “gist.” Applying this theory, we developed a gist measure of delay of gratification that does not involve quantitative trade-offs (as delay discounting does) and hypothesize that this construct explains unique variance beyond sensation seeking and inhibition in accounting for problem behaviors. Across four studies, we examine this Delay-of-gratification Gist Scale by using principal components analyses and evaluating convergent and divergent validity with other potentially related scales such as Future Orientation, Propensity to Plan, Time Perspectives Inventory, Spendthrift-Tightwad, Sensation Seeking, Cognitive Reflection, Barratt Impulsiveness, and the Monetary Choice Questionnaire (delay discounting). The new 12-item measure captured a single dimension of delay of gratification, correlated as predicted with other scales, but accounted for unique variance in predicting such outcomes as overdrawing bank accounts, substance abuse, and overall subjective well-being. Results support a theoretical distinction between reward-related approach motivation, including sensation seeking, and inhibitory faculties, including cognitive reflection. However, individuals’ agreement with the qualitative gist of delay of gratification, as expressed in many cultural traditions, could not be reduced to such dualist distinctions nor to quantitative conceptions of delay discounting, shedding light on mechanisms of self-control and risk taking. PMID:28808356

  4. The Gist of Delay of Gratification: Understanding and Predicting Problem Behaviors.

    PubMed

    Reyna, Valerie F; Wilhelms, Evan A

    2017-04-01

    Delay of gratification captures elements of temptation and self-denial that characterize real-life problems with money and other problem behaviors such as unhealthy risk taking. According to fuzzy-trace theory, decision makers mentally represent social values such as delay of gratification in a coarse but meaningful form of memory called "gist." Applying this theory, we developed a gist measure of delay of gratification that does not involve quantitative trade-offs (as delay discounting does) and hypothesize that this construct explains unique variance beyond sensation seeking and inhibition in accounting for problem behaviors. Across four studies, we examine this Delay-of-gratification Gist Scale by using principal components analyses and evaluating convergent and divergent validity with other potentially related scales such as Future Orientation, Propensity to Plan, Time Perspectives Inventory, Spendthrift-Tightwad, Sensation Seeking, Cognitive Reflection, Barratt Impulsiveness, and the Monetary Choice Questionnaire (delay discounting). The new 12-item measure captured a single dimension of delay of gratification, correlated as predicted with other scales, but accounted for unique variance in predicting such outcomes as overdrawing bank accounts, substance abuse, and overall subjective well-being. Results support a theoretical distinction between reward-related approach motivation, including sensation seeking, and inhibitory faculties, including cognitive reflection. However, individuals' agreement with the qualitative gist of delay of gratification, as expressed in many cultural traditions, could not be reduced to such dualist distinctions nor to quantitative conceptions of delay discounting, shedding light on mechanisms of self-control and risk taking.

  5. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth mantle temperature and element composition are inferred from seismic travel-time and geodetic data.
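
    A tiny numerical example of the hierarchical factorization p(lithology, property, data) = p(lith) · p(prop | lith) · p(data | prop), with made-up discrete lithologies, Gaussian conditionals, and an identity forward map standing in for the physics:

      import numpy as np

      liths = ["shale", "sand"]
      p_lith = np.array([0.6, 0.4])          # primary layer: prior on lithology

      props = np.array([2.2, 2.5])           # mean density per lithotype (g/cc)
      sigma_prop = 0.1                       # spread of property given lithology

      d_obs, sigma_d = 2.45, 0.05            # observed datum and its noise

      grid = np.linspace(1.9, 2.8, 200)      # property values to marginalize over
      dg = grid[1] - grid[0]
      post = np.zeros(len(liths))
      for i in range(len(liths)):
          p_prop = np.exp(-0.5 * ((grid - props[i]) / sigma_prop) ** 2)
          p_data = np.exp(-0.5 * ((d_obs - grid) / sigma_d) ** 2)
          post[i] = p_lith[i] * np.sum(p_prop * p_data) * dg

      post /= post.sum()
      print(dict(zip(liths, post.round(3))))  # the datum strongly favors sand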

  6. Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes

    NASA Astrophysics Data System (ADS)

    Mitra, Sumit

    With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economical incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation. Second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
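
    The flavor of the operational-level scheduling can be shown in a few lines: a flexible continuous load fills the cheapest hours first, subject to an hourly power cap and a daily energy target (for this simple structure the greedy choice is LP-optimal; all numbers are made up).

      import numpy as np

      prices = np.array([42, 38, 35, 33, 31, 30, 36, 48, 55, 60, 58, 52,
                         50, 47, 45, 44, 46, 57, 63, 61, 54, 49, 45, 43], float)
      p_max = 5.0      # MW cap in any hour
      energy = 60.0    # MWh that must be consumed today

      plan = np.zeros(24)
      for h in np.argsort(prices):           # fill cheapest hours first
          plan[h] = min(p_max, energy - plan.sum())
          if plan.sum() >= energy:
              break

      print(plan, (plan * prices).sum())     # hourly schedule and total cost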

  7. Impacts of physical and chemical aquifer heterogeneity on basin-scale solute transport: Vulnerability of deep groundwater to arsenic contamination in Bangladesh

    NASA Astrophysics Data System (ADS)

    Michael, Holly A.; Khan, Mahfuzur R.

    2016-12-01

    Aquifer heterogeneity presents a primary challenge in predicting the movement of solutes in groundwater systems. The problem is particularly difficult on very large scales, across which permeability, chemical properties, and pumping rates may vary by many orders of magnitude and data are often sparse. An example is the fluvio-deltaic aquifer system of Bangladesh, where naturally occurring arsenic (As) exists over tens of thousands of square kilometers in shallow groundwater. Millions of people in As-affected regions rely on deep (≥150 m) groundwater as a safe source of drinking water. The sustainability of this resource has been evaluated with models using effective properties appropriate for a basin-scale contamination problem, but the extent to which preferential flow affects the timescale of downward migration of As-contaminated shallow groundwater is unknown. Here we embed detailed, heterogeneous representations of hydraulic conductivity (K), pumping rates, and sorptive properties (Kd) within a basin-scale numerical groundwater flow and solute transport model to evaluate their effects on vulnerability and deviations from simulations with homogeneous representations in two areas with different flow systems. Advective particle tracking shows that heterogeneity in K does not affect average travel times from shallow zones to 150 m depth, but the travel times of the fastest 10% of particles decrease by a factor of ∼2. Pumping distributions do not strongly affect travel times if irrigation remains shallow, but increases in the deep pumping rate substantially reduce travel times. Simulation of advective-dispersive transport with sorption shows that deep groundwater is protected from contamination over a sustainable timeframe (>1000 y) if the spatial distribution of Kd is uniform. However, if only low-K sediments sorb As, 30% of the aquifer is not protected. Results indicate that sustainable management strategies in the Bengal Basin should consider impacts of both physical and chemical heterogeneity, as well as their correlation. These insights from Bangladesh show that preferential flow strongly influences breakthrough of both conservative and reactive solutes even at large spatial scales, with implications for predicting water supply vulnerability in contaminated heterogeneous aquifers worldwide.
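
    The effect of heterogeneity on the fastest travel times can be illustrated with a minimal Monte Carlo sketch (hypothetical parameter values, purely advective vertical transport, not the paper's model):

      import numpy as np

      # Vertical advective travel times through a layered column with lognormal K.
      # Numbers are illustrative only; they show how lucky (high-K) paths shorten
      # the fastest travel times relative to a homogeneous effective-K column.
      rng = np.random.default_rng(0)
      n_layers, n_paths = 30, 10000
      dz, grad, phi = 5.0, 0.01, 0.3     # layer thickness [m], head gradient, porosity

      K = rng.lognormal(np.log(1e-5), 1.5, size=(n_paths, n_layers))  # [m/s]
      t_het = (dz * phi / (K * grad)).sum(axis=1) / 3.15e7            # years per path

      K_eff = np.exp(np.log(K).mean())   # geometric-mean homogeneous equivalent
      t_hom = n_layers * dz * phi / (K_eff * grad) / 3.15e7

      print(f"homogeneous travel time: {t_hom:,.0f} y")
      print(f"heterogeneous mean:      {t_het.mean():,.0f} y")
      print(f"fastest 10% (mean):      {np.sort(t_het)[:n_paths // 10].mean():,.0f} y")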

  8. Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) predictors of police officer problem behavior and collateral self-report test scores.

    PubMed

    Tarescavage, Anthony M; Fischler, Gary L; Cappo, Bruce M; Hill, David O; Corey, David M; Ben-Porath, Yossef S

    2015-03-01

    The current study examined the predictive validity of Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) scores in police officer screenings. We utilized a sample of 712 police officer candidates (82.6% male) from 2 Midwestern police departments. The sample included 426 hired officers, most of whom had supervisor ratings of problem behaviors and human resource records of civilian complaints. With the full sample, we calculated zero-order correlations between MMPI-2-RF scale scores and scale scores from the California Psychological Inventory (Gough, 1956) and Inwald Personality Inventory (Inwald, 2006) by gender. In the hired sample, we correlated MMPI-2-RF scale scores with the outcome data for males only, owing to the relatively small number of hired women. Several scales demonstrated meaningful correlations with the criteria, particularly in the thought dysfunction and behavioral/externalizing dysfunction domains. After applying a correction for range restriction, the correlation coefficient magnitudes were generally in the moderate to large range. The practical implications of these findings were explored by means of risk ratio analyses, which indicated that officers who produced elevations at cutscores lower than the traditionally used 65 T-score level were as much as 10 times more likely than those scoring below the cutoff to exhibit problem behaviors. Overall, the results supported the validity of the MMPI-2-RF in this setting. Implications and limitations of this study are discussed. 2015 APA, all rights reserved
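
    Two of the analyses named above can be sketched in a few lines. The Thorndike Case II formula for correcting a correlation under direct range restriction is standard; all counts and coefficients below are made up for illustration:

      import numpy as np

      # (1) Range-restriction correction (Thorndike Case II): u is the ratio of the
      # unrestricted to the restricted predictor standard deviation.
      def correct_range_restriction(r, sd_unrestricted, sd_restricted):
          u = sd_unrestricted / sd_restricted
          return r * u / np.sqrt(1.0 + r**2 * (u**2 - 1.0))

      r_obs = 0.15   # hypothetical observed predictor-criterion correlation
      print(f"corrected r = {correct_range_restriction(r_obs, 10.0, 6.0):.2f}")

      # (2) Risk ratio for problem behavior at a cutscore (hypothetical counts):
      # incidence among officers at/above the cutoff vs. those below it.
      def risk_ratio(exposed_cases, exposed_n, unexposed_cases, unexposed_n):
          return (exposed_cases / exposed_n) / (unexposed_cases / unexposed_n)

      print(f"risk ratio = {risk_ratio(8, 40, 10, 386):.1f}")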

  9. Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation

    DOE PAGES

    Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...

    2015-06-01

    Current spectral clustering algorithms suffer from sensitivity to noise and to the scaling parameter, and may not be aware of different density distributions across clusters. If these problems are left untreated, the resulting clusters cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, it not only provides an advanced noise-resistant and density-aware spectral mapping of the original dataset, but also remains stable while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms on data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
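
    A minimal sketch of the aggregated-heat-kernel idea as we read it from the abstract (not the authors' implementation; the Gaussian base affinity and the set of diffusion times are assumptions):

      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from sklearn.cluster import KMeans

      # Sum the heat kernel exp(-t L) over several diffusion times t, then cluster
      # with the leading eigenvectors of the aggregated operator.
      def aggregated_heat_kernel_clustering(X, n_clusters, times=(0.5, 1.0, 2.0, 4.0)):
          W = np.exp(-squareform(pdist(X)) ** 2)             # base Gaussian affinity
          d = W.sum(axis=1)
          L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian
          lam, U = np.linalg.eigh(L)
          H = sum(U @ np.diag(np.exp(-t * lam)) @ U.T for t in times)  # aggregated kernel
          _, V = np.linalg.eigh(H)
          emb = V[:, -n_clusters:]                           # top eigenvectors of H
          return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)

      # usage: labels = aggregated_heat_kernel_clustering(np.random.randn(100, 2), 2)

    Summing over several diffusion times is what makes the affinity less dependent on any single choice of the scaling parameter, which is the stability property claimed above.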

  10. Degradation modeling of high temperature proton exchange membrane fuel cells using dual time scale simulation

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.

    2015-02-01

    HT-PEM fuel cells suffer from performance losses due to degradation effects; the durability of HT-PEM cells is therefore an important focus of research and development. In this paper a novel approach is presented for an integrated short-term and long-term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short-term and long-term effects are commonly modeled separately because of their different time scales; in accelerated lifetime testing, however, long-term degradation effects have a crucial impact on the short-term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open-voltage cycle test on an HT-PEM fuel cell over a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment demonstrates the efficiency of the approach: the presented approach reduces the simulation time by approximately 73% compared to a conventional simulation approach, with little loss of accuracy. The approach promises a comprehensive perspective that considers short-term dynamic behavior together with long-term degradation effects.
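
    The dual time scale idea can be caricatured as an outer loop over slow degradation steps, with the fast electrochemical response evaluated quasi-statically at each step. This sketch uses a generic Tafel-type voltage model and arbitrary parameters, not the paper's model:

      import numpy as np

      # Slow state: loss of electrochemically active surface area (ECSA), advanced
      # in coarse steps of days. Fast state: cell voltage at a given current density,
      # re-evaluated at each slow step using the current degradation state.
      def cell_voltage(i, ecsa, E0=0.95, b=0.06, i0_ref=1e-4, R=0.15):
          i0 = i0_ref * ecsa                       # exchange current scales with area
          return E0 - b * np.log10(i / i0) - R * i # Tafel kinetics + ohmic loss

      ecsa, k_deg = 1.0, 0.005                     # relative area; decay rate [1/day]
      for day in range(0, 36, 5):
          v = cell_voltage(0.2, ecsa)              # fast-scale solve at 0.2 A/cm^2
          print(f"day {day:2d}: ECSA {ecsa:.3f}, voltage {v:.3f} V")
          ecsa *= np.exp(-k_deg * 5)               # slow-scale update over 5 days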

  11. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, K; Seymour, R; Wang, W

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data-locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high-complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).

  12. Detecting effects of the indicated prevention Programme for Externalizing Problem behaviour (PEP) on child symptoms, parenting, and parental quality of life in a randomized controlled trial.

    PubMed

    Hanisch, Charlotte; Freund-Braier, Inez; Hautmann, Christopher; Jänen, Nicola; Plück, Julia; Brix, Gabriele; Eichelberger, Ilka; Döpfner, Manfred

    2010-01-01

    Behavioural parent training is effective in improving child disruptive behavioural problems in preschool children by increasing parenting competence. The indicated Prevention Programme for Externalizing Problem behaviour (PEP) is a group training programme for parents and kindergarten teachers of children aged 3-6 years with externalizing behavioural problems. To evaluate the effects of PEP on child problem behaviour, parenting practices, parent-child interactions, and parental quality of life, parents and kindergarten teachers of 155 children were randomly assigned to an intervention group (n = 91) and a nontreated control group (n = 64). They rated children's problem behaviour before and after PEP training; parents also reported on their parenting practices and quality of life. Standardized play situations were video-taped and rated for parent-child interactions, e.g. parental warmth. In the intention-to-treat analysis, mothers of the intervention group described less disruptive child behaviour and better parenting strategies, and showed more parental warmth during a standardized parent-child interaction. Dosage analyses confirmed these results for parents who attended at least five training sessions. Children were also rated by their kindergarten teachers as showing fewer behaviour problems. Training effects were especially positive for parents who attended at least half of the training sessions. Abbreviations: CBCL: Child Behaviour Checklist; CII: Coder Impressions Inventory; DASS: Depression Anxiety Stress Scale; HSQ: Home-situation Questionnaire; LSS: Life Satisfaction Scale; OBDT: observed behaviour during the test; PCL: Problem Checklist; PEP: Prevention Programme for Externalizing Problem behaviour; PPC: Parent Problem Checklist; PPS: Parent Practices Scale; PS: Parenting Scale; PSBC: Problem Setting and Behaviour Checklist; QJPS: Questionnaire on Judging Parental Strains; SEFS: Self-Efficacy Scale; SSC: Social Support Scale; TRF: Caregiver-Teacher Report Form.

  13. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales: in hydrosystem modelling, monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e. energy production vs. demand) requires a much finer resolution (e.g. hourly). Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the number of function evaluations required to detect the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
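
    Objective (b) can be illustrated with a single-time-step allocation posed as a min-cost network flow (hypothetical node names, capacities and costs; networkx's min_cost_flow stands in for the very fast linear network programming solvers mentioned above):

      import networkx as nx

      # One time step of a linearized water/energy allocation as min-cost flow.
      # Supplies are negative demands; integer arc weights play the role of
      # linearized operating costs (or negated benefits).
      G = nx.DiGraph()
      G.add_node("reservoir", demand=-80)     # available water this step
      G.add_node("irrigation", demand=50)     # water demand node
      G.add_node("hydropower", demand=30)     # flow routed through the turbines
      G.add_edge("reservoir", "irrigation", capacity=60, weight=2)
      G.add_edge("reservoir", "hydropower", capacity=60, weight=1)

      flow = nx.min_cost_flow(G)
      print(flow["reservoir"])   # e.g. {'irrigation': 50, 'hydropower': 30}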

  14. Passive advection of a vector field: Anisotropy, finite correlation time, exact solution, and logarithmic corrections to ordinary scaling

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2015-10-01

    In this work we study the generalization of the problem considered in [Phys. Rev. E 91, 013002 (2015), 10.1103/PhysRevE.91.013002] to the case of finite correlation time of the environment (velocity) field. The model describes a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow. Inertial-range asymptotic behavior is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, with finite correlation time and preassigned pair correlation function. Due to the presence of the distinguished direction n, all the multiloop diagrams in this model vanish, so that the results obtained are exact. The inertial-range behavior of the model is described by two regimes (the limits of vanishing or infinite correlation time) that correspond to the two nontrivial fixed points of the RG equations. Their stability depends on the relation between the exponents in the energy spectrum E ∝ k⊥^(1-ξ) and the dispersion law ω ∝ k⊥^(2-η). In contrast to the well-known isotropic Kraichnan's model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the corrections to ordinary scaling are polynomials of logarithms of the integral turbulence scale L.

  15. Critical spaces for quasilinear parabolic evolution equations and applications

    NASA Astrophysics Data System (ADS)

    Prüss, Jan; Simonett, Gieri; Wilke, Mathias

    2018-02-01

    We present a comprehensive theory of critical spaces for the broad class of quasilinear parabolic evolution equations. The approach is based on maximal Lp-regularity in time-weighted function spaces. It is shown that our notion of critical spaces coincides with the concept of scaling invariant spaces in the case that the underlying partial differential equation enjoys a scaling invariance. Applications to the vorticity equations for the Navier-Stokes problem, convection-diffusion equations, the Nernst-Planck-Poisson equations in electro-chemistry, chemotaxis equations, the MHD equations, and some other well-known parabolic equations are given.

  16. Review and synthesis of problems and directions for large scale geographic information system development

    NASA Technical Reports Server (NTRS)

    Boyle, A. R.; Dangermond, J.; Marble, D.; Simonett, D. S.; Tomlinson, R. F.

    1983-01-01

    Problems and directions for large scale geographic information system development were reviewed and the general problems associated with automated geographic information systems and spatial data handling were addressed.

  17. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    PubMed

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.

  18. Human Cognitive Limitations. Broad, Consistent, Clinical Application of Physiological Principles Will Require Decision Support.

    PubMed

    Morris, Alan H

    2018-02-01

    Our education system seems to fail to enable clinicians to broadly understand core physiological principles. The emphasis on reductionist science, including "omics" branches of research, has likely contributed to this decrease in understanding. Consequently, clinicians cannot be expected to consistently make clinical decisions linked to best physiological evidence. This is a large-scale problem with multiple determinants, within an even larger clinical decision problem: the failure of clinicians to consistently link their decisions to best evidence. Clinicians, like all human decision-makers, suffer from significant cognitive limitations. Detailed context-sensitive computer protocols can generate personalized medicine instructions that are well matched to individual patient needs over time and can partially resolve this problem.

  19. Dynamic resource allocation in conservation planning

    USGS Publications Warehouse

    Golovin, D.; Krause, A.; Gardner, B.; Converse, S.J.; Morey, S.

    2011-01-01

    Consider the problem of protecting endangered species by selecting patches of land to be used for conservation purposes. Typically, the availability of patches changes over time, and recommendations must be made dynamically. This is a challenging prototypical example of a sequential optimization problem under uncertainty in computational sustainability. Existing techniques do not scale to problems of realistic size. In this paper, we develop an efficient algorithm for adaptively making recommendations for dynamic conservation planning, and prove that it obtains near-optimal performance. We further evaluate our approach on a detailed reserve design case study of conservation planning for three rare species in the Pacific Northwest of the United States. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.

  20. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large-scale and background random aerospace fluctuations.

  1. Application of computational aero-acoustics to real world problems

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The application of computational aeroacoustics (CAA) to real-world problems is discussed, together with an analysis aimed at assessing the applicability of the various techniques. Applications are currently limited by the inability of computational resources to resolve the large range of scales involved in high-Reynolds-number flows; possible simplifications are discussed. Problems remain to be solved concerning the efficient use of the power of parallel computers and the development of turbulence modeling schemes. The goal of CAA is stated as the implementation of acoustic design studies on a computer terminal with reasonable run times.

  2. The role of marital quality and spousal support in behaviour problems of children with and without intellectual disability.

    PubMed

    Wieland, Natalie; Baker, B L

    2010-07-01

    Children with intellectual disability (ID) have been found to be at an increased risk for developing behavioural problems. The purpose of this study was to examine the relationship between the marital domain, including marital quality and spousal support, and behaviour problems in children with and without ID. The relationship between the marital domain and child behaviour problems was examined in 132 families of 6-year-olds with and without ID. Using hierarchical regression, these relationships were also studied over time from child ages 6-8 years. Child behaviour problems were assessed with mother-reported Child Behavior Checklist. The marital domain was measured using the Dyadic Adjustment Scale-7 and the Spousal Support and Agreement Scale. Mother-reported parenting stress and observed parenting practices were tested as potential mediators of the relationship between the marital domain and child behaviour problems. Mean levels of the marital domain were not significantly different between typically developing (TD) and ID groups, but there were significantly greater levels of variance in reported marital quality in the ID group at ages 6, 7 and 8. The marital domain score at child age 6 years predicted child behaviour problems at age 8 for the TD group only. This predictive relationship appeared to be a unidirectional effect, as child behaviour problems at age 6 were not found to predict levels of the marital domain at age 8. Parenting stress partially mediated this relationship for the TD group. The marital domain may have a greater impact on behavioural outcomes for TD children. Implications for future research and interventions are discussed.

  3. Problem Gambling Family Impacts: Development of the Problem Gambling Family Impact Scale.

    PubMed

    Dowling, N A; Suomi, A; Jackson, A C; Lavis, T

    2016-09-01

    Although family members of problem gamblers frequently present to treatment services, problem gambling family impacts are under-researched. The most commonly endorsed items on a new measure of gambling-related family impacts [Problem Gambling Family Impact Measure (PG-FIM: Problem Gambler version)] by 212 treatment-seeking problem gamblers included trust (62.5 %), anger (61.8 %), depression or sadness (58.7 %), anxiety (57.7 %), distress due to gambling-related absences (56.1 %), reduced quality time (52.4 %), and communication breakdowns (52.4 %). The PG-FIM (Problem Gambler version) was comprised of three factors: (1) financial impacts, (2) increased responsibility impacts, and (3) psychosocial impacts with good psychometric properties. Younger, more impulsive, non-electronic gaming machine (EGM) gamblers who had more severe gambling problems reported more financial impacts; non-EGM gamblers with poorer general health reported more increased responsibility impacts; and more impulsive non-EGM gamblers with more psychological distress and higher gambling severity reported more psychosocial impacts. The findings have implications for the development of interventions for the family members of problem gamblers.

  4. Intermediate inflation from a non-canonical scalar field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezazadeh, K.; Karami, K.; Karimi, P., E-mail: rezazadeh86@gmail.com, E-mail: KKarami@uok.ac.ir, E-mail: parvin.karimi67@yahoo.com

    2015-09-01

    We study intermediate inflation in a non-canonical scalar field framework with a power-like Lagrangian. We show that, in contrast with standard canonical intermediate inflation, our non-canonical model is compatible with the observational results of Planck 2015. We also estimate the equilateral non-Gaussianity parameter, which is in good agreement with the prediction of Planck 2015. We then obtain an approximation for the energy scale at the initial time of inflation and show that it can be of the order of the Planck energy scale, i.e. M_P ∼ 10^18 GeV. After a short period of time, inflation enters the slow-roll regime, whose energy scale is of order M_P/100 ∼ 10^16 GeV, and the horizon exit takes place at this energy scale. We also examine an idea in our non-canonical model to overcome the central drawback of intermediate inflation, namely that inflation never ends. We solve this problem without significantly disturbing the nature of the intermediate inflation until the time of horizon exit.

  5. A Pseudo-Vertical Equilibrium Model for Slow Gravity Drainage Dynamics

    NASA Astrophysics Data System (ADS)

    Becker, Beatrix; Guo, Bo; Bandilla, Karl; Celia, Michael A.; Flemisch, Bernd; Helmig, Rainer

    2017-12-01

    Vertical equilibrium (VE) models are computationally efficient and have been widely used for modeling fluid migration in the subsurface. However, they rely on the assumption of instant gravity segregation of the two fluid phases which may not be valid especially for systems that have very slow drainage at low wetting phase saturations. In these cases, the time scale for the wetting phase to reach vertical equilibrium can be several orders of magnitude larger than the time scale of interest, rendering conventional VE models unsuitable. Here we present a pseudo-VE model that relaxes the assumption of instant segregation of the two fluid phases by applying a pseudo-residual saturation inside the plume of the injected fluid that declines over time due to slow vertical drainage. This pseudo-VE model is cast in a multiscale framework for vertically integrated models with the vertical drainage solved as a fine-scale problem. Two types of fine-scale models are developed for the vertical drainage, which lead to two pseudo-VE models. Comparisons with a conventional VE model and a full multidimensional model show that the pseudo-VE models have much wider applicability than the conventional VE model while maintaining the computational benefit of the conventional VE model.

  6. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal adjustment scale

    NASA Astrophysics Data System (ADS)

    Lee, H.; Seo, D.-J.; Liu, Y.; Koren, V.; McKee, P.; Corby, R.

    2012-01-01

    State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. In comparison to outlet flow assimilation, interior flow assimilation at any adjustment scale produces streamflow predictions with a spatial correlation structure more consistent with that of streamflow observations. We also describe diagnosing the complexity of the assimilation problem using the spatial correlation information associated with the streamflow process, and discuss the effect of timing errors in a simulated hydrograph on the performance of the data assimilation procedure.

  7. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, including real-life problems where conventional techniques cannot be applied. The Grey Wolf Optimizer is one such technique, and it has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on 5 common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions, with dimensions varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large scale problems, with the exception of Rosenbrock, which is a unimodal function.
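
    For reference, a minimal Grey Wolf Optimizer following the standard update equations (Mirjalili et al., 2014), applied here to the Sphere function; population size and iteration budget are arbitrary choices:

      import numpy as np

      def gwo(f, dim, n_wolves=30, iters=500, lb=-100.0, ub=100.0, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(lb, ub, (n_wolves, dim))
          for t in range(iters):
              fit = np.apply_along_axis(f, 1, X)
              alpha, beta, delta = X[np.argsort(fit)[:3]]  # three best wolves lead
              a = 2.0 * (1.0 - t / iters)                  # a decreases linearly 2 -> 0
              Xnew = np.zeros_like(X)
              for leader in (alpha, beta, delta):
                  A = a * (2.0 * rng.random(X.shape) - 1.0)
                  C = 2.0 * rng.random(X.shape)
                  D = np.abs(C * leader - X)
                  Xnew += leader - A * D                   # encircling step per leader
              X = np.clip(Xnew / 3.0, lb, ub)              # average of the three pulls
          return X[np.argmin(np.apply_along_axis(f, 1, X))]

      best = gwo(lambda x: np.sum(x**2), dim=50)           # Sphere in 50 dimensions
      print(np.sum(best**2))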

  8. Coupling transfer function and GIS for assessing non-point-source groundwater vulnerability at regional scale

    NASA Astrophysics Data System (ADS)

    Coppola, A.; Comegna, V.; de Simone, L.

    2009-04-01

    Non-point source (NPS) pollution in the vadose zone is a global environmental problem. The knowledge and information required to address NPS pollutants in the vadose zone cross several technological and subdisciplinary lines: spatial statistics, geographic information systems (GIS), hydrology, soil science, and remote sensing. The main issues encountered in NPS groundwater vulnerability assessment, as discussed by Stewart [2001], are the large spatial scales; the complex processes that govern fluid flow and solute transport in the unsaturated zone; the absence of unsaturated-zone measurements of diffuse pesticide concentrations in 3-D regional-scale space, as these are difficult, time consuming, and prohibitively costly; and the computational effort required to solve the nonlinear equations of physically-based modeling for regional-scale, heterogeneous applications. As an alternative, an approach is presented here based on coupling transfer function and GIS modeling that: a) is capable of estimating solute concentration at a depth of interest within a known error confidence class; b) uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application; and c) can dynamically support decision making through thematic mapping and 3D scenarios. This result was pursued through 1) the design and building of a spatial database containing environmental and physical information regarding the study area, 2) the development of the transfer function procedure for layered soils, and 3) the final representation of results through digital mapping and 3D visualization. On one side, GIS modeled environmental data in order to characterize, at regional scale, soil profile texture and depth, land use, climatic data, water table depth, and potential evapotranspiration; on the other side, this information was implemented in the up-scaling procedure of Jury's TFM, resulting in a set of texture-based travel time probability density functions for layered soils, each describing a characteristic leaching behavior for soil profiles with similar hydraulic properties. This behavior, in terms of solute travel time to the water table, was then imported back into GIS, and finally the estimated groundwater vulnerability for each soil unit was represented in a map as well as visualized in 3D.

  9. Addressing the computational cost of large EIT solutions.

    PubMed

    Boyle, Alistair; Borsic, Andrea; Adler, Andy

    2012-05-01

    Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.

  10. The Contribution of Game Genre and other Use Patterns to Problem Video Game Play among Adult Video Gamers.

    PubMed

    Elliott, Luther; Ream, Geoffrey; McGinsky, Elizabeth; Dunlap, Eloise

    2012-12-01

    AIMS: To assess the contribution of patterns of video game play, including game genre, involvement, and time spent gaming, to problem use symptomatology. DESIGN: Nationally representative survey. SETTING: Online. PARTICIPANTS: Large sample (n=3,380) of adult video gamers in the US. MEASUREMENTS: Problem video game play (PVGP) scale, video game genre typology, use patterns (gaming days in the past month and hours on days used), enjoyment, consumer involvement, and background variables. FINDINGS: Study confirms game genre's contribution to problem use as well as demographic variation in play patterns that underlie problem video game play vulnerability. CONCLUSIONS: Identification of a small group of game types positively correlated with problem use suggests new directions for research into the specific design elements and reward mechanics of "addictive" video games. Unique vulnerabilities to problem use among certain groups demonstrate the need for ongoing investigation of health disparities related to contextual dimensions of video game play.

  11. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  12. The Contribution of Game Genre and other Use Patterns to Problem Video Game Play among Adult Video Gamers

    PubMed Central

    Ream, Geoffrey; McGinsky, Elizabeth; Dunlap, Eloise

    2012-01-01

    Aims To assess the contribution of patterns of video game play, including game genre, involvement, and time spent gaming, to problem use symptomatology. Design Nationally representative survey. Setting Online. Participants Large sample (n=3,380) of adult video gamers in the US. Measurements Problem video game play (PVGP) scale, video game genre typology, use patterns (gaming days in the past month and hours on days used), enjoyment, consumer involvement, and background variables. Findings Study confirms game genre's contribution to problem use as well as demographic variation in play patterns that underlie problem video game play vulnerability. Conclusions Identification of a small group of game types positively correlated with problem use suggests new directions for research into the specific design elements and reward mechanics of “addictive” video games. Unique vulnerabilities to problem use among certain groups demonstrate the need for ongoing investigation of health disparities related to contextual dimensions of video game play. PMID:23284310

  13. Parents and teachers reporting on a child's emotional and behavioural problems following severe traumatic brain injury (TBI): the moderating effect of time.

    PubMed

    Silberg, Tamar; Tal-Jacobi, Dana; Levav, Miriam; Brezner, Amichai; Rassovsky, Yuri

    2015-01-01

    Gathering information from parents and teachers following paediatric traumatic brain injury (TBI) has substantial clinical value for diagnostic decisions. Yet, a multi-informant approach has rarely been addressed when evaluating children at the chronic stage post-injury. In the current study, the goals were to examine (1) differences between parents' and teachers' reports on a child's emotional and behavioural problems and (2) the effect of time elapsed since injury on each rater's report. A sample of 42 parents and 42 teachers of children following severe TBI completed two standard rating scales. Receiver Operating Characteristic (ROC) curves were used to determine whether time elapsed since injury reliably distinguished children falling above and below clinical levels. Emotional-behavioural scores of children following severe TBI fell within normal range, according to both teachers and parents. Significant differences were found between parents' reports relatively close to the time of injury and 2 years post-injury. However, no such differences were observed in teachers' ratings. Parents and teachers of children following severe TBI differ in their reports on a child's emotional and behavioural problems. The present study not only underscores the importance of multiple informants, but also highlights, for the first time, the possibility that informants' perceptions may vary across time.

  14. [Habits and problems of sleep in adolescent students].

    PubMed

    Lazaratou, E; Dikeos, D; Anagnostopoulos, D; Soldatos, C

    2008-07-01

    Sleep habits and sleep-related problems of high school adolescent students in the Athens area, and the relation of these problems to demographic and other variables, were investigated with the Athens Insomnia Scale - 5-item version (AIS-5), which was administered to 713 adolescent senior high school students in the Greater Athens Area. Data such as age, sex, school records, and time spent per week on school-related and extracurricular activities were collected. The sample's mean sleep duration was 7.5 hours, mean bedtime 12:20 am and wake-up time 7:15 am. Total sleep time was not affected by gender, but was influenced by time spent on various activities. Sleep complaints were related to delayed sleep onset and insufficient total sleep duration. Girls complained more than boys, while correlations showed that students with lower academic performance and those in the second grade were more likely to have higher AIS-5 scores. The results show that the sleep time of high school students depends on practical matters such as school schedule and other activities, while sleep complaints are related to female gender, poor school performance, and the second grade. The difference between actual sleep time and sleep complaints should be considered when studying the sleep of adolescents.

  15. A Multiscale Software Tool for Field/Circuit Co-Simulation

    DTIC Science & Technology

    2011-12-15

    ...times more efficient than FDTD for such a problem in 3D. The techniques in class (c) above include the discontinuous Galerkin method and multidomain methods. The tool implements a finite-difference time-domain method for single-field propagation in 3D space; we consider a cavity model which includes two electric...

  16. Double-slit interferometry with a Bose-Einstein condensate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, L.A.; Berman, G.P.; Bishop, A.R.

    2005-03-01

    A Bose-Einstein 'double-slit' interferometer has been recently realized experimentally by Y. Shin et al., Phys. Rev. Lett. 92 050405 (2004). We analyze the interferometric steps by solving numerically the time-dependent Gross-Pitaevskii equation in three-dimensional space. We focus on the adiabaticity time scales of the problem and on the creation of spurious collective excitations as a possible source of the strong degradation of the interference pattern observed experimentally. The role of quantum fluctuations is discussed.

  17. Higher order moments of the matter distribution in scale-free cosmological simulations with large dynamic range

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1994-01-01

    We calculate reduced moments ξ̄_q of the matter density fluctuations, up to order q = 5, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra P(k) ∝ k^n with indices n = -3, -2, -1, 0, 1. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance ξ̄_2 alone. We look for agreement with the hierarchical scaling ansatz, according to which ξ̄_q ∝ ξ̄_2^(q-1). For n ≤ -2 models, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small residual signal of deviation from the hierarchical scaling is, however, also found in n ≥ -1 models. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allow us to reliably detect evolution away from the perturbation theory result.

  18. A fast, parallel algorithm to solve the basic fluvial erosion/transport equations

    NASA Astrophysics Data System (ADS)

    Braun, J.

    2012-04-01

    Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport and deposition, which leads to predictions of the geometry of a river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hill slopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation which limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation. The algorithm is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape) and fully parallelizable (the computational cost decreases in direct inverse proportion to the number of processors used to solve the problem), making it ideally suited for the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 x 10,000 nodes) while keeping the computational cost reasonable (of order 1 s per time step). Furthermore, the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. Such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of landscapes on geological time scales. It can also be used to model surface processes at the continental or planetary scale and be linked to lithospheric or mantle flow models to predict the potential interactions between tectonics driving surface uplift in orogenic areas, mantle flow producing dynamic topography on continental scales, and surface processes.
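
    A sketch in the spirit of such an O(n) pass (our illustration, not the paper's code): given a steepest-descent receiver array, drainage area can be accumulated in a single sweep over a stack ordering built from base level upstream:

      import numpy as np

      # 'rcv[i]' is the node that node i drains to; base-level nodes have rcv[i] == i.
      def drainage_area(rcv, cell_area=1.0):
          n = len(rcv)
          donors = [[] for _ in range(n)]
          for i in range(n):
              if rcv[i] != i:
                  donors[rcv[i]].append(i)
          # build an ordering from base level upstream (iterative DFS), O(n)
          stack = []
          for i in range(n):
              if rcv[i] == i:
                  work = [i]
                  while work:
                      node = work.pop()
                      stack.append(node)
                      work.extend(donors[node])
          # accumulate area from the upstream end of the ordering downstream
          area = np.full(n, cell_area)
          for node in reversed(stack):
              if rcv[node] != node:
                  area[rcv[node]] += area[node]
          return area

      # tiny 1-D example: nodes 0..4 all draining leftward to base level at node 0
      print(drainage_area(np.array([0, 0, 1, 2, 3])))   # -> [5. 4. 3. 2. 1.]

    Because each node is pushed and popped exactly once and each edge is visited once, both passes are linear in the number of nodes, which is the property the abstract emphasizes.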

  19. Origin of the asteroid belt

    NASA Technical Reports Server (NTRS)

    Wetherill, George W.

    1989-01-01

    Earlier and current concepts relevant to the origin of the asteroid belt are discussed and are considered in the framework of the solar system origin. Numerical and analytical solutions of the dynamical theory of planetesimal accumulation are characterized by bifurcations into runaway and nonrunaway solutions, and it is emphasized that the differences in time scales resulting from runaway and nonrunaway growth can be more important than conventional time scale differences determined by heliocentric distances. It is concluded that, in principle, it is possible to combine new calculations with previous work to formulate a theory of the asteroidal accumulation consistent with the meteoritic record and with work on the formation of terrestrial planets. Problems remaining to be addressed before a mature theory can be formulated are discussed.

  20. Late stages of accumulation and early evolution of the planets

    NASA Technical Reports Server (NTRS)

    Vityazev, Andrey V.; Perchernikova, G. V.

    1991-01-01

    Recently developed solutions of problems are discussed that were traditionally considered fundamental in classical solar system cosmogony: determination of planetary orbit distribution patterns, values for mean eccentricity and orbital inclinations of the planets, and rotation periods and rotation axis inclinations of the planets. Two important cosmochemical aspects of accumulation are examined: the time scale for gas loss from the terrestrial planet zone, and the composition of the planets in terms of isotope data. It was concluded that the early beginning of planet differentiation is a function of the heating of protoplanets during collisions with large (thousands of kilometers) bodies. Energetics, heat mass transfer processes, and characteristic time scales of these processes at the early stages of planet evolution are considered.

  1. Large-scale semidefinite programming for many-electron quantum mechanics.

    PubMed

    Mazziotti, David A

    2011-02-25

    The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop a SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004)]. We illustrate with (i) the dissociation of N2 and (ii) the metal-to-insulator transition of H50. For H50 the SDP problem has 9.4×10^6 variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics. © 2011 American Physical Society
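
    A toy SDP of the same mathematical form (minimize a linear functional of a positive semidefinite matrix under affine constraints) can be posed with a generic modeling tool; this is not the paper's solver, and the instance below is a made-up 4x4 problem:

      import cvxpy as cp
      import numpy as np

      # minimize tr(C X) subject to tr(X) = 1 and X positive semidefinite;
      # the optimum equals the smallest eigenvalue of C.
      n = 4
      rng = np.random.default_rng(1)
      C = rng.standard_normal((n, n)); C = (C + C.T) / 2

      X = cp.Variable((n, n), symmetric=True)
      prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                        [X >> 0, cp.trace(X) == 1])
      prob.solve()
      print(prob.value, np.linalg.eigvalsh(C)[0])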

  2. Large-Scale Semidefinite Programming for Many-Electron Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Mazziotti, David A.

    2011-02-01

    The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop a SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004), 10.1103/PhysRevLett.93.213001]. We illustrate with (i) the dissociation of N2 and (ii) the metal-to-insulator transition of H50. For H50 the SDP problem has 9.4×10^6 variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics.

  3. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
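
    The model class can be simulated directly. The following Euler-Maruyama sketch of a leaky integrate-and-fire neuron with spike-triggered adaptation (a generic model in the spirit of the paper, with arbitrary parameter values) produces the stationary membrane-potential histogram whose shape the theory predicts:

      import numpy as np

      rng = np.random.default_rng(0)
      dt, T = 1e-4, 50.0                  # time step and total time [s]
      mu, D = 1.2, 0.05                   # suprathreshold drive and noise intensity
      tau_a, delta_a = 0.5, 0.3           # adaptation time scale and kick per spike
      v, a = 0.0, 0.0
      vs = []
      for _ in range(int(T / dt)):
          v += dt * (mu - v - a) + np.sqrt(2 * D * dt) * rng.standard_normal()
          a += dt * (-a / tau_a)          # adaptation decays between spikes
          if v >= 1.0:                    # threshold crossing: spike and reset
              v = 0.0
              a += delta_a                # spike-triggered adaptation increment
          vs.append(v)

      # stationary membrane-potential density (histogram estimate)
      hist, edges = np.histogram(vs, bins=50, range=(-0.5, 1.0), density=True)
      print(edges[np.argmax(hist)])       # location of the density peak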

  4. Position space analysis of the AdS (in)stability problem

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Fotios V.; Freivogel, Ben; Lippert, Matthew; Yang, I.-Sheng

    2015-08-01

    We investigate whether arbitrarily small perturbations in global AdS space are generically unstable and collapse into black holes on the time scale set by gravitational interactions. We argue that current evidence, combined with our analysis, strongly suggests that a set of nonzero measure in the space of initial conditions does not collapse on this time scale. We perform an analysis in position space to study this puzzle, and our formalism allows us to directly study the vanishing-amplitude limit. We show that gravitational self-interaction leads to tidal deformations which are equally likely to focus or defocus energy, and we sketch the phase diagram accordingly. We also clarify the connection between gravitational evolution in global AdS and holographic thermalization.

  5. Analysis of composite ablators using massively parallel computation

    NASA Technical Reports Server (NTRS)

    Shia, David

    1995-01-01

    In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations, developed for three sample problems: (1) transpiration cooling, (2) an ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions, and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: it incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method does not require very small time steps to maintain stability, but it has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems. The gas storage term is included in the explicit pressure calculation of both problems. Results for the ablative composite plate problem are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term; however, the through-thickness pressure and stress distributions, and the extent of chemical reactions, differ from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius-type rate equations and (2) pressure-dependent Arrhenius-type rate equations. The numerical results are compared to experimental results, and the pressure-dependent model is able to capture the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. There is a good speedup of performance on the CM-5: for 32 CPUs, the speedup is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and of the effective parallelization of the algorithm. There also seems to be an optimum number of CPUs to use for a given problem.
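
    The critical-time-scale selection at the heart of the hybrid scheme can be sketched on the 1-D heat equation (an illustration of the idea, not the ablation code): explicit stepping is chosen only when the time step respects the diffusive stability limit dt <= dx^2 / (2*alpha):

      import numpy as np

      def step_explicit(u, alpha, dt, dx):
          un = u.copy()
          un[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
          return un

      def step_implicit(u, alpha, dt, dx):        # backward Euler, dense for clarity
          n = len(u); r = alpha * dt / dx**2
          A = (1 + 2 * r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
          A[0, :], A[-1, :] = 0, 0; A[0, 0] = A[-1, -1] = 1   # fixed boundaries
          return np.linalg.solve(A, u)

      alpha, dx, dt = 1e-5, 1e-3, 0.5
      dt_crit = dx**2 / (2 * alpha)       # critical explicit time scale (0.05 s here)
      u = np.zeros(101); u[0] = 1.0       # unit temperature applied at one face
      step = step_explicit if dt <= dt_crit else step_implicit
      for _ in range(100):
          u = step(u, alpha, dt, dx)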

  6. Rayleigh convective instability in a cloud medium

    NASA Astrophysics Data System (ADS)

    Shmerlin, B. Ya.; Shmerlin, M. B.

    2017-09-01

    The problem of convective instability of an atmospheric layer containing a horizontally finite region filled with a cloud medium is considered. Solutions exponentially growing with time, i.e., solitary cloud rolls or spatially localized systems of cloud rolls, have been constructed. In the case of axial symmetry, their analogs are convective vortices with both ascending and descending motions on the axis and cloud clusters with ring-shaped convective structures. Depending on the anisotropy of turbulent exchange, the scale of vortices changes from the tornado scale to the scale of tropical cyclones. The solutions with descending motions on the axis can correspond to the formation of a tornado funnel or a hurricane eye in tropical cyclones.

  7. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (e.g., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
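
    The three tasks named above (matrix assembly, solution of systems of equations, and eigenvalue extraction) can be sketched serially with standard sparse tools; the paper's contribution is parallelizing them, which this hedged fragment does not attempt. A 1D chain of spring elements stands in for a real structural model.

      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import eigsh, spsolve

      # Illustrative stiffness/mass matrices; real codes assemble per-element
      # contributions into the same global sparse structures.
      n = 1000                                   # free degrees of freedom
      K = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
      M = sparse.identity(n, format="csc")       # lumped unit masses

      # Linear static analysis: solve K u = f.
      f = np.zeros(n); f[-1] = 1.0
      u = spsolve(K, f)

      # Vibration analysis: lowest modes of K v = w^2 M v (shift-invert at 0).
      w2, modes = eigsh(K, k=5, M=M, sigma=0.0)
      print(np.sqrt(w2))                         # natural frequencies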

  8. ICASE/LaRC Symposium on Visualizing Time-Varying Data

    NASA Technical Reports Server (NTRS)

    Banks, D. C. (Editor); Crockett, T. W. (Editor); Stacy, K. (Editor)

    1996-01-01

    Time-varying datasets present difficult problems for both analysis and visualization. For example, the data may be terabytes in size, distributed across mass storage systems at several sites, with time scales ranging from femtoseconds to eons. In response to these challenges, ICASE and NASA Langley Research Center, in cooperation with ACM SIGGRAPH, organized the first symposium on visualizing time-varying data. The purpose was to bring the producers of time-varying data together with visualization specialists to assess open issues in the field, present new solutions, and encourage collaborative problem-solving. These proceedings contain the peer-reviewed papers which were presented at the symposium. They cover a broad range of topics, from methods for modeling and compressing data to systems for visualizing CFD simulations and World Wide Web traffic. Because the subject matter is inherently dynamic, a paper proceedings cannot adequately convey all aspects of the work. The accompanying video proceedings provide additional context for several of the papers.

  9. Precipitation data in a mountainous catchment in Honduras: quality assessment and spatiotemporal characteristics

    NASA Astrophysics Data System (ADS)

    Westerberg, I.; Walther, A.; Guerrero, J.-L.; Coello, Z.; Halldin, S.; Xu, C.-Y.; Chen, D.; Lundin, L.-C.

    2010-08-01

    An accurate description of temporal and spatial precipitation variability in Central America is important for local farming, water supply and flood management. Data quality problems and lack of consistent precipitation data impede hydrometeorological analysis in the 7,500 km2 Choluteca River basin in central Honduras, encompassing the capital Tegucigalpa. We used precipitation data from 60 daily and 13 monthly stations in 1913-2006 from five local authorities and NOAA's Global Historical Climatology Network. Quality control routines were developed to tackle the specific data quality problems. The quality-controlled data were characterised spatially and temporally, and compared with regional and larger-scale studies. Two gap-filling methods for daily data and three interpolation methods for monthly and mean annual precipitation were compared. The coefficient-of-correlation-weighting method provided the best results for gap-filling and the universal kriging method for spatial interpolation. Inhomogeneity in the time series was the main quality problem, and 22% of the daily precipitation data were too poor to be used. Spatial autocorrelation for monthly precipitation was low during the dry season, and correlation increased markedly when data were temporally aggregated from a daily time scale to 4-5 days. The analysis highlighted the high spatial and temporal variability caused by the diverse precipitation-generating mechanisms and the need for an improved monitoring network.
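
    A hedged sketch of the coefficient-of-correlation-weighting idea for gap-filling: each missing daily value is estimated as a weighted mean of neighbouring stations, with weights given by each neighbour's correlation with the target station (assumed precomputed from overlapping records). Details of the published method may differ.

      import numpy as np

      def fill_gaps(target, neighbors, corrs):
          # target: 1D array with NaN gaps; neighbors: list of same-length
          # arrays; corrs: correlation of each neighbor with the target.
          filled = target.copy()
          w = np.asarray(corrs, dtype=float)
          for t in np.flatnonzero(np.isnan(target)):
              vals = np.array([s[t] for s in neighbors])
              ok = ~np.isnan(vals) & (w > 0)
              if ok.any():
                  filled[t] = np.sum(w[ok] * vals[ok]) / np.sum(w[ok])
          return filled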

  10. Scaled Runge-Kutta algorithms for handling dense output

    NASA Technical Reports Server (NTRS)

    Horn, M. K.

    1981-01-01

    Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
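
    The principle of dense output, reusing a step's existing derivative evaluations to interpolate anywhere inside the step, can be illustrated with a cubic Hermite interpolant. Horn's scaled Runge-Kutta formulas are constructed differently and to higher order, so this is only a stand-in sketch.

      def dense_output(y0, f0, y1, f1, h, sigma):
          # Cubic Hermite interpolant across one step of size h: reuses the
          # derivative evaluations f0 = f(t0, y0) and f1 = f(t1, y1) that the
          # defining Runge-Kutta formula already computed; sigma in [0, 1].
          s, s2, s3 = sigma, sigma**2, sigma**3
          h00 = 2*s3 - 3*s2 + 1
          h10 = s3 - 2*s2 + s
          h01 = -2*s3 + 3*s2
          h11 = s3 - s2
          return h00*y0 + h*h10*f0 + h01*y1 + h*h11*f1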

  11. Simulations of Dissipative Circular Restricted Three-body Problems Using the Velocity-scaling Correction Method

    NASA Astrophysics Data System (ADS)

    Wang, Shoucheng; Huang, Guoqing; Wu, Xin

    2018-02-01

    In this paper, we survey the effect of dissipative forces including radiation pressure, Poynting–Robertson drag, and solar wind drag on the motion of dust grains with negligible mass, which are subjected to the gravities of the Sun and Jupiter moving in circular orbits. The effect of the dissipative parameter on the locations of the five Lagrangian equilibrium points is estimated analytically. The instability of the triangular equilibrium point L4 caused by the drag forces is also shown analytically. In this case, the Jacobi constant varies with time, whereas its integral invariant relation still allows the conventional fourth-order Runge–Kutta algorithm to be combined with the velocity-scaling manifold correction scheme. Consequently, the velocity-only correction method significantly suppresses the artificial dissipation and the rapid growth of trajectory errors that occur in the uncorrected case. The stability time of an orbit, regardless of whether it is chaotic or not in the conservative problem, is apparently longer in the corrected case than in the uncorrected case when the dissipative forces are included. Although artificial dissipation is ruled out, the drag dissipation leads to an escape of grains. Numerical evidence also demonstrates that more orbits near the triangular equilibrium point L4 escape as the integration time increases.
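
    In the conservative circular restricted three-body problem, the velocity-scaling correction rescales the velocity after each integration step so that the Jacobi constant is preserved exactly. A minimal planar sketch follows, with an illustrative Sun-Jupiter mass ratio; the dissipative case in the paper tracks the integral invariant relation rather than a fixed constant.

      import numpy as np

      MU = 9.537e-4   # illustrative Sun-Jupiter mass ratio (rotating frame units)

      def omega_eff(x, y):
          # Effective potential of the planar circular restricted three-body problem.
          r1 = np.hypot(x + MU, y)
          r2 = np.hypot(x - 1.0 + MU, y)
          return 0.5 * (x**2 + y**2) + (1.0 - MU) / r1 + MU / r2

      def jacobi(x, y, vx, vy):
          # Jacobi constant C = 2*Omega - v^2; evaluate once at t=0 for c_ref.
          return 2.0 * omega_eff(x, y) - (vx**2 + vy**2)

      def velocity_scaling(x, y, vx, vy, c_ref):
          # Rescale the velocity so the Jacobi constant equals c_ref
          # (applied after each Runge-Kutta step in the conservative case).
          v2 = vx**2 + vy**2
          s = np.sqrt(max(2.0 * omega_eff(x, y) - c_ref, 0.0) / v2)
          return s * vx, s * vy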

  12. Implications of gambling problems for family and interpersonal adjustment: results from the Quinte Longitudinal Study.

    PubMed

    Cowlishaw, Sean; Suomi, Aino; Rodgers, Bryan

    2016-09-01

    To evaluate (1) whether gambling problems predict overall trajectories of change in family or interpersonal adjustment and (2) whether annual measures of gambling problems predict time-specific decreases in family or interpersonal adjustment, concurrently and prospectively. The Quinte Longitudinal Study (QLS) involved random-digit dialling of telephone numbers around the city of Belleville, Canada to recruit 'general population' and 'at-risk' groups (the latter oversampling people likely to develop problems). Five waves of assessment were conducted (2006-10). Latent Trajectory Modelling (LTM) estimated overall trajectories of family and interpersonal adjustment, which were predicted by gambling problems, and also estimated how time-specific problems predicted deviations from these trajectories. Southeast Ontario, Canada. Community sample of Canadian adults (n = 4121). The Problem Gambling Severity Index (PGSI) defined at-risk gambling (ARG: PGSI 1-2) and moderate-risk/problem gambling (MR/PG: PGSI 3+). Outcomes included: (1) family functioning, assessed using a seven-point rating of overall functioning; (2) social support, assessed using items from the Non-support subscale of the Personality Assessment Inventory; and (3) relationship satisfaction, measured by the Kansas Marital Satisfaction Scale. Baseline measures of ARG and MR/PG did not predict rates of change in trajectories of family or interpersonal adjustment. Rather, the annual measures of MR/PG predicted time-specific decreases in family functioning (estimate: -0.11, P < 0.01), social support (estimate: -0.28, P < 0.01) and relationship satisfaction (estimate: -0.53, P < 0.01). ARG predicted concurrent levels of family functioning (estimate: -0.07, P < 0.01). There were time-lagged effects of MR/PG on subsequent levels of family functioning (estimate: -0.12, P < 0.01) and social support (estimate: -0.24, P < 0.01). In a longitudinal study of Canadian adults, moderate-risk/problem gambling did not predict overall trajectories of family or interpersonal adjustment. Rather, the annual measures of moderate-risk/problem gambling predicted time-specific and concurrent decreases in all outcomes, and lower family functioning and social support across adjacent waves. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  13. Distributed resource allocation under communication constraints

    NASA Astrophysics Data System (ADS)

    Dodin, Pierre; Nimier, Vincent

    2001-03-01

    This paper deals with a study of the multi-sensor management problem for multi-target tracking. The collaboration between many sensors observing the same target means that they are able to fuse their data during the information process. One must therefore take this possibility into account when computing the optimal sensor-target association at each time step. In order to solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. To unify these problems, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is that of Grime, which relaxes the usual fully-connected hypothesis. By fully connected, one means that the information in a fully-connected system is totally distributed everywhere at the same moment, which is unrealistic for a real large-scale system. We model the distributed assignment decision with the help of a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that it is still possible to manage with decentralized assignment control even though the system is not fully connected.
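
    A hedged sketch of a greedy sensor-target assignment of the kind described: candidate pairs are ranked by a locally estimated information gain and accepted greedily. The gain matrix and the one-sensor-per-target restriction are simplifying assumptions for illustration, not details taken from the paper.

      def greedy_assignment(gain):
          # gain[s][t]: sensor s's locally predicted information gain for
          # target t (each sensor's estimate may differ under asymmetry).
          pairs = sorted(((g, s, t) for s, row in enumerate(gain)
                          for t, g in enumerate(row)), reverse=True)
          used_s, used_t, out = set(), set(), {}
          for g, s, t in pairs:
              if s not in used_s and t not in used_t:
                  out[s] = t
                  used_s.add(s); used_t.add(t)
          return out

      print(greedy_assignment([[0.9, 0.2], [0.8, 0.7]]))   # {0: 0, 1: 1}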

  14. The Reliability and Construct Validity of Scores on the Attitudes toward Problem Solving Scale

    ERIC Educational Resources Information Center

    Zakaria, Effandi; Haron, Zolkepeli; Daud, Md Yusoff

    2004-01-01

    The Attitudes Toward Problem Solving Scale (ATPSS) has received limited attention concerning its reliability and validity with a Malaysian secondary education population. Developed by Charles, Lester & O'Daffer (1987), the instrument assesses attitudes toward problem solving in areas of Willingness to Engage in Problem Solving Activities,…

  15. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
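
    An illustrative dispatching-rule heuristic for assigning activities to timeline segments, in the general spirit described above: activities are taken in priority order and placed into the earliest segment with remaining capacity. The data layout and the specific rule are assumptions for illustration; the paper's rules, and its second heuristic that spreads activity performances evenly, differ in detail.

      def assign_to_segments(activities, segments):
          # activities: (name, duration, priority); segments: (name, capacity).
          remaining = {seg: cap for seg, cap in segments}
          plan = {seg: [] for seg, _ in segments}
          for name, duration, priority in sorted(activities, key=lambda a: -a[2]):
              for seg, _ in segments:                 # earliest-first scan
                  if remaining[seg] >= duration:
                      plan[seg].append(name)
                      remaining[seg] -= duration
                      break
          return plan

      demo = assign_to_segments(
          [("obs-1", 3, 5), ("exp-2", 2, 9), ("maint", 4, 1)],
          [("seg-A", 5), ("seg-B", 5)])
      print(demo)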

  16. Development of small scale cluster computer for numerical analysis

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight cores in total. The cluster runs the Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test verified that the computers could pass the required information without any problem, using a simple MPI "Hello" program written in C. The performance test was conducted to show that the cluster's calculation performance is much better than that of a single-CPU computer. In this performance test, four runs were made of the same code using a single processor, 2 processors, 4 processors, and 8 processors. The results show that with additional processors the time required to solve the problem decreases, and the calculation time roughly halves each time the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer from common hardware that is capable of higher computing power than a single-CPU machine; this can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
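
    Both tests can be reproduced on any such cluster. The sketch below uses Python's mpi4py rather than the C program described, so it is an analogue of the study's tests, not its actual code; the workload and sizes are illustrative.

      # Run across the cluster with, e.g.: mpiexec -n 8 python hello_mpi.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      # Communication test: every rank reports in, rank 0 gathers the messages.
      msgs = comm.gather(f"hello from rank {rank} of {size}", root=0)
      if rank == 0:
          print("\n".join(msgs))

      # Performance test: time a fixed-size workload at each process count and
      # compare wall-clock times (speedup = t_serial / t_parallel).
      comm.Barrier()
      t0 = MPI.Wtime()
      local = sum(i * i for i in range(rank * 10**6, (rank + 1) * 10**6))
      total = comm.reduce(local, op=MPI.SUM, root=0)
      comm.Barrier()
      if rank == 0:
          print(f"total={total}, elapsed={MPI.Wtime() - t0:.3f} s")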

  17. Effectiveness of the Treatment Readiness and Induction Program for increasing adolescent motivation for change.

    PubMed

    Becan, Jennifer E; Knight, Danica K; Crawley, Rachel D; Joe, George W; Flynn, Patrick M

    2015-03-01

    Success in substance abuse treatment is improved by problem recognition, desire to seek help, and readiness to engage in treatment, all of which are important aspects of motivation. Interventions that facilitate these at treatment induction for adolescents are especially needed. The purpose of this study is to assess the effectiveness of TRIP (Treatment Readiness and Induction Program) in promoting treatment motivation. Data represent 519 adolescents from 6 residential programs who completed assessments at treatment intake (time 1) and 35 days after admission (time 2). The design consisted of a comparison sample (n=281) that had enrolled in treatment prior to implementation of TRIP (standard operating practice) and a sample of clients that had entered treatment after TRIP began and received standard operating practice enhanced by TRIP (n=238). Repeated measures ANCOVAs were conducted using each time 2 motivation scale as a dependent measure. Motivation scales were conceptualized as representing sequential stages of change. LISREL was used to test a structural model involving TRIP participation, gender, drug use severity, juvenile justice involvement, age, race-ethnicity, prior treatment, and urgency as predictors of the stages of treatment motivation. Compared to standard practice, adolescents receiving TRIP demonstrated greater gains in problem recognition, even after controlling for the other variables in the model. The model fit was adequate, with TRIP directly affecting problem recognition and indirectly affecting later stages of change (desire for help and treatment readiness). Future studies should examine which specific components of TRIP affect change in motivation. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large radiation problems on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to a large number of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of Kelvin-cell radiation.

  19. Down to the roughness scale assessment of piston-ring/liner contacts

    NASA Astrophysics Data System (ADS)

    Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.

    2017-02-01

    The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated “as it is” but by means of an assumed probability distribution for the roughness. So-called direct, deterministic, or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale, which leads to expensive computational problems. Most researchers have tackled this problem by considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully-deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully-deterministic simulations is illustrated in two cases: simulation of liner surfaces with diverse finishings (honed and coated bores) under constant piston velocity and ring load, and also under real engine conditions.

  20. Pore-scale modeling of moving contact line problems in immiscible two-phase flow

    NASA Astrophysics Data System (ADS)

    Kucala, Alec; Noble, David; Martinez, Mario

    2016-11-01

    Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). Here, we present a model for the moving contact line using pore-scale computational fluid dynamics (CFD) which solves the full, time-dependent Navier-Stokes equations using the Galerkin finite-element method. The MCL is modeled as a surface traction force proportional to the surface tension, dependent on the static properties of the immiscible fluid/solid system. We present a variety of verification test cases for simple two- and three-dimensional geometries to validate the current model, including threshold pressure predictions in flows through pore-throats for a variety of wetting angles. Simulations involving more complex geometries are also presented to be used in future simulations for GCS and EOR problems. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. Carer Reports of the Efficacy of Cognitive Behavioral Interventions for Anger

    ERIC Educational Resources Information Center

    Rose, John

    2010-01-01

    Anger resulting in aggression can be a significant problem for some people with intellectual disabilities. Carers were asked to complete a provocation inventory and an attribution scale before and after a group cognitive behavioral intervention aimed at anger, and at similar points in time for a waiting-list control. When compared using an…

  2. The NO{sub x} Budget trading program: a collaborative, innovative approach to solving a regional air pollution problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napolitano, Sam; Stevens, Gabrielle; Schreifels, Jeremy

    2007-11-15

    The NO{sub x} Budget Trading Program showed that regional cap-and-trade programs are adaptable to more than one pollutant, time period, and geographic scale, and can achieve compliance results similar to the Acid Rain Program. Here are 11 specific lessons that have emerged from the experience. (author)

  3. Assimilating a synthetic Kalman filter leaf area index series into the WOFOST model to improve regional winter wheat yield estimation

    USDA-ARS?s Scientific Manuscript database

    The scale mismatch between remotely sensed observations and the state variables simulated by crop growth models decreases the reliability of crop yield estimates. To overcome this problem, we used a two-step data assimilation approach: first we generated a complete leaf area index (LAI) time series by combin...
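
    The record is truncated, but the assimilation step it describes reduces, in its simplest scalar form, to a Kalman update that blends the model's simulated LAI with an observed LAI. A hedged sketch of that generic update; the variances and values are illustrative, not from the manuscript.

      def kalman_update(lai_model, p_model, lai_obs, r_obs):
          # Scalar Kalman filter update: blend the crop model's LAI forecast
          # (variance p_model) with a remotely sensed LAI observation
          # (error variance r_obs).
          k = p_model / (p_model + r_obs)          # Kalman gain
          lai_post = lai_model + k * (lai_obs - lai_model)
          p_post = (1.0 - k) * p_model
          return lai_post, p_post

      # e.g. model says LAI=3.2 (var 0.4), observation says 2.8 (var 0.2)
      print(kalman_update(3.2, 0.4, 2.8, 0.2))     # -> (2.93..., 0.13...)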

  4. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. The parameters are useful starting values, because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at the Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% better than AGA, within the same computation time (12.5 days). AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
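
    One AGCT ingredient, rejecting offspring whose coarsened signature is already on the tabu list, can be sketched as follows. This is an illustrative reconstruction from the description above, not the authors' implementation; the GC variant would additionally adapt the step size, the rounding coarseness, and the list size at run time.

      import random

      def tabu_mutate(strategy, tabu, step=0.1, tries=20):
          # Mutate a pumping strategy, but reject offspring whose rounded
          # signature is on the tabu list -- the mechanism that keeps the
          # search from reproducing similar or inferior strategies.
          for _ in range(tries):
              child = [g + random.uniform(-step, step) for g in strategy]
              sig = tuple(round(g, 2) for g in child)   # search coarseness
              if sig not in tabu:
                  tabu.add(sig)
                  return child
          return strategy                                # give up; keep parent

      tabu_list = set()
      parent = [0.5, 1.2, 0.8]                           # pumping rates
      child = tabu_mutate(parent, tabu_list)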

  5. Development and validation of brief scales to measure emotional and behavioural problems among Chinese adolescents

    PubMed Central

    Shen, Minxue; Hu, Ming; Sun, Zhenqiu

    2017-01-01

    Objectives To develop and validate brief scales to measure common emotional and behavioural problems among adolescents in the examination-oriented education system and collectivistic culture of China. Setting Middle schools in Hunan province. Participants 5442 middle school students aged 11–19 years were sampled. 4727 valid questionnaires were collected and used for validation of the scales. The final sample included 2408 boys and 2319 girls. Primary and secondary outcome measures The tools were assessed by item response theory, classical test theory (reliability and construct validity) and differential item functioning analysis. Results Four scales to measure anxiety, depression, study problems and sociality problems were established. Exploratory factor analysis showed a two-factor solution for each scale. Confirmatory factor analysis showed acceptable to good model fit for each scale. Internal consistency and test–retest reliability of all scales were above 0.7. Item response theory showed that all items had acceptable discrimination parameters and most items had appropriate difficulty parameters. 10 items demonstrated differential item functioning with respect to gender. Conclusions Four brief scales were developed and validated among adolescents in middle schools of China. The scales have good psychometric properties with minor differential item functioning. They can be used in middle school settings, and will help school officials to assess students' emotional/behavioural problems.
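
    The reliability figures quoted (above 0.7) correspond to standard statistics such as Cronbach's alpha for internal consistency. A minimal sketch of that computation, assuming a (respondents x items) score matrix; this is a generic formula, not code from the study.

      import numpy as np

      def cronbach_alpha(items):
          # items: 2D array, rows = respondents, columns = scale items.
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1.0) * (1.0 - item_var / total_var)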

  6. Assessing fit, interplay, and scale: Aligning governance and information for improved water management in a changing climate

    NASA Astrophysics Data System (ADS)

    Kirchhoff, C.; Dilling, L.

    2011-12-01

    Water managers have long experienced the challenges of managing water resources in a variable climate. However, climate change has the potential to reshape the experiential landscape by, for example, increasing the intensity and duration of droughts, shifting precipitation timing and amounts, and changing sea levels. Given the uncertainty in evaluating potential climate risks as well as future water availability and water demands, scholars suggest water managers employ more flexible and adaptive science-based management to manage uncertainty (NRC 2009). While such an approach is appropriate, for adaptive science-based management to be effective both governance and information must be concordant across three measures: fit, interplay and scale (Young 2002)(Note 1). Our research relies on interviews of state water managers and related experts (n=50) and documentary analysis in five U.S. states to understand the drivers and constraints to improving water resource planning and decision-making in a changing climate using an assessment of fit, interplay and scale as an evaluative framework. We apply this framework to assess and compare how water managers plan and respond to current or anticipated water resource challenges within each state. We hypothesize that better alignment between the data and management framework and the water resource problem improves water managers' facility to understand (via available, relevant, timely information) and respond appropriately (through institutional response mechanisms). In addition, better alignment between governance mechanisms (between the scope of the problem and identified appropriate responses) improves water management. Moreover, because many of the management challenges analyzed in this study concern present day issues with scarcity brought on by a combination of growth and drought, better alignment of fit, interplay, and scale today will enable and prepare water managers to be more successful in adapting to climate change impacts in the long-term. Note 1: For the purposes of this research, the problem of fit deals with the level of concordance between the natural and human systems while interplay involves how institutional arrangements interact both horizontally and vertically. Lastly, scale considers both spatial and temporal alignment of the physical systems and management structure. For example, to manage water resources effectively in a changing climate suggests having information that informs short-term and long-term changes and having institutional arrangements that seek understanding across temporal scales and facilitate responses based on information available (Young 2002).

  7. Diffusion in random networks

    DOE PAGES

    Zhang, Duan Z.; Padrino, Juan C.

    2017-06-01

    The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is x t^{-1/4} rather than x t^{-1/2} as in the traditional theory. We found this early-time similarity can be explained by random walk theory through the network.

  8. Phase of Illness in palliative care: Cross-sectional analysis of clinical data from community, hospital and hospice patients

    PubMed Central

    Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss EM

    2017-01-01

    Background: Phase of Illness describes stages of advanced illness according to care needs of the individual, family and suitability of care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. Aims: The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider strength of associations between these measures and Phase of Illness. Design and setting: Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function measured using Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs measured using items on Palliative Care Problem Severity Scale. Results: Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in stable phase (65.9, 95% confidence interval = 63.4–68.3) and lowest in dying phase (16.6, 95% confidence interval = 15.3–17.8). Mean pain was highest in unstable phase (1.43, 95% confidence interval = 1.36–1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness (χ2 = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in deteriorating phase than unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01–1.49). Forty-nine percent of the variance in Phase of Illness is explained by Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Conclusion: Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation. PMID:28812945

  9. Resilience-promoting factors in war-exposed adolescents: an epidemiologic study.

    PubMed

    Fayyad, John; Cordahi-Tabet, C; Yeretzian, J; Salamoun, M; Najm, C; Karam, E G

    2017-02-01

    Studies of war-exposed children have not investigated a comprehensive array of resilience-promoting factors, nor representative samples of children and adolescents. A representative sample of N = 710 adolescents was randomly selected from communities recently exposed to war. All those who had experienced war trauma were administered questionnaires measuring war exposure, family violence, availability of leisure activities, school-related problems, interpersonal and peer problems, socialization, daily routine problems, displacement, availability of parental supervision and contact and medical needs as well as coping skills related to religious coping, denial, self-control, avoidance and problem solving. Mental health was measured by the Strengths and Difficulties Questionnaire (SDQ) and the Child-Revised Impact of Events Scale (CRIES). Resilient adolescents were defined as those who experienced war trauma, but did not manifest any symptoms on the SDQ or CRIES. Resilience was related to being male, using problem-solving techniques, having leisure activities, and having parents who spent time with their adolescents and who supported them with school work. Interventions designed for war-traumatized youth must build individual coping skills of children and adolescents, yet at the same time target parents and teachers in an integrated manner.

  10. On the problem of boundaries and scaling for urban street networks

    PubMed Central

    Masucci, A. Paolo; Arcaute, Elsa; Hatna, Erez; Stanilov, Kiril; Batty, Michael

    2015-01-01

    Urban morphology has presented significant intellectual challenges to mathematicians and physicists ever since the eighteenth century, when Euler first explored the famous Königsberg bridges problem. Many important regularities and scaling laws have been observed in urban studies, including Zipf's law and Gibrat's law, rendering cities attractive systems for analysis within statistical physics. Nevertheless, a broad consensus on how cities and their boundaries are defined is still lacking. Applying an elementary clustering technique to the street intersection space, we show that growth curves for the maximum cluster size of the largest cities in the UK and in California collapse to a single curve, namely the logistic. Subsequently, by introducing the concept of the condensation threshold, we show that natural boundaries of cities can be well defined in a universal way. This allows us to study and discuss systematically some of the regularities that are present in cities. We show that some scaling laws present consistent behaviour in space and time, thus suggesting the presence of common principles at the basis of the evolution of urban systems. PMID:26468071

  11. On the problem of boundaries and scaling for urban street networks.

    PubMed

    Masucci, A Paolo; Arcaute, Elsa; Hatna, Erez; Stanilov, Kiril; Batty, Michael

    2015-10-06

    Urban morphology has presented significant intellectual challenges to mathematicians and physicists ever since the eighteenth century, when Euler first explored the famous Königsberg bridges problem. Many important regularities and scaling laws have been observed in urban studies, including Zipf's law and Gibrat's law, rendering cities attractive systems for analysis within statistical physics. Nevertheless, a broad consensus on how cities and their boundaries are defined is still lacking. Applying an elementary clustering technique to the street intersection space, we show that growth curves for the maximum cluster size of the largest cities in the UK and in California collapse to a single curve, namely the logistic. Subsequently, by introducing the concept of the condensation threshold, we show that natural boundaries of cities can be well defined in a universal way. This allows us to study and discuss systematically some of the regularities that are present in cities. We show that some scaling laws present consistent behaviour in space and time, thus suggesting the presence of common principles at the basis of the evolution of urban systems. © 2015 The Authors.
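
    The reported collapse of maximum-cluster-size growth curves onto a logistic can be checked by direct curve fitting. A sketch on synthetic data; the parameters and data here are illustrative, not the UK or California measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, k, r, t0):
          # k: saturation level, r: growth rate, t0: midpoint of the curve.
          return k / (1.0 + np.exp(-r * (t - t0)))

      # Hypothetical growth of the maximum cluster size with clustering distance.
      t = np.linspace(0, 10, 50)
      rng = np.random.default_rng(0)
      y = logistic(t, 1.0, 1.3, 5.0) + 0.02 * rng.normal(size=t.size)

      params, _ = curve_fit(logistic, t, y, p0=[1.0, 1.0, 4.0])
      print(params)   # recovered (k, r, t0)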

  12. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.

  13. Neural networks for continuous online learning and control.

    PubMed

    Choy, Min Chee; Srinivasan, Dipti; Cheu, Ruey Long

    2006-11-01

    This paper proposes a new hybrid neural network (NN) model that employs a multistage online learning process to solve the distributed control problem with an infinite horizon. Various techniques such as reinforcement learning and evolutionary algorithms are used to design the multistage online learning process. For this paper, the infinite horizon distributed control problem is implemented in the form of real-time distributed traffic signal control for intersections in a large-scale traffic network. The hybrid neural network model is used to design each of the local traffic signal controllers at the respective intersections. As the state of the traffic network changes due to random fluctuation of traffic volumes, the NN-based local controllers need to adapt to the changing dynamics in order to provide effective traffic signal control and to prevent the traffic network from becoming overcongested. Such a problem is especially challenging if the local controllers are used for an infinite horizon problem where online learning has to take place continuously once the controllers are implemented in the traffic network. A comprehensive simulation model of a section of the Central Business District (CBD) of Singapore has been developed using the PARAMICS microscopic simulation program. As the complexity of the simulation increases, results show that the hybrid NN model provides significant improvement in traffic conditions when evaluated against an existing traffic signal control algorithm as well as a new, continuously updated simultaneous perturbation stochastic approximation-based neural network (SPSA-NN). Using the hybrid NN model, the total mean delay of each vehicle has been reduced by 78% and the total mean stoppage time of each vehicle has been reduced by 84% compared to the existing traffic signal control algorithm. This shows the efficacy of the hybrid NN model in solving the large-scale traffic signal control problem in a distributed manner. It also indicates the possibility of using the hybrid NN model for other applications similar in nature to the infinite horizon distributed control problem.
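
    The continuous online-learning principle behind such controllers can be conveyed by the simplest tabular reinforcement-learning update. The paper's hybrid NN model is far richer (function approximation, evolutionary tuning, SPSA comparisons), so the following is only a toy sketch with assumed state and action sets and an assumed reward.

      import numpy as np

      # Toy signal controller: states = congestion levels, actions = phase choices.
      n_states, n_actions = 4, 2
      q = np.zeros((n_states, n_actions))
      alpha, gamma = 0.1, 0.95

      def q_update(s, a, reward, s_next):
          # One online Q-learning update; the reward could be, for example,
          # the negative mean vehicle delay observed over the last interval.
          q[s, a] += alpha * (reward + gamma * q[s_next].max() - q[s, a])

      q_update(s=2, a=1, reward=-3.5, s_next=1)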

  14. Examples of data assimilation in mesoscale models

    NASA Technical Reports Server (NTRS)

    Carr, Fred; Zack, John; Schmidt, Jerry; Snook, John; Benjamin, Stan; Stauffer, David

    1993-01-01

    The keynote address concerned the problem of physical initialization of mesoscale models. The classic purpose of physical or diabatic initialization is to reduce or eliminate the spin-up error caused by the lack, at the initial time, of the fully developed vertical circulations required to support regions of large rainfall rates. However, even if a model has no spin-up problem, imposition of observed moisture and heating-rate information during assimilation can improve quantitative precipitation forecasts, especially early in the forecast. The two key issues in physical initialization are the choice of assimilation technique and the sources of hydrologic/hydrometeor data. Another example of data assimilation in mesoscale models was presented in a series of meso-beta scale model experiments with an 11 km version of the MASS model, designed to investigate the sensitivity of convective initiation forced by thermally direct circulations resulting from differential surface heating to four-dimensional assimilation of surface and radar data. The results of these simulations underscore the need to accurately initialize and simulate grid and sub-grid scale clouds in meso-beta scale models. The status of the application of the CSU-RAMS mesoscale model by the NOAA Forecast Systems Lab for producing real-time forecasts with 10-60 km mesh resolutions over (4000 km)^2 domains for use by the aviation community was reported. Either MAPS or LAPS model data are used to initialize the RAMS model on a 12-h cycle. The use of the MAPS (Mesoscale Analysis and Prediction System) model was discussed. Also discussed was meso-beta-scale data assimilation using a triply-nested nonhydrostatic version of the MM5 model.

  15. Dynamics of Pure Shape, Relativity, and the Problem of Time

    NASA Astrophysics Data System (ADS)

    Barbour, Julian

    A new approach to the dynamics of the universe based on work by Ó Murchadha, Foster, Anderson and the author is presented. The only kinematics presupposed is the spatial geometry needed to define configuration spaces in purely relational terms. A new formulation of the relativity principle based on Poincaré's analysis of the problem of absolute and relative motion (Mach's principle) is given. The entire dynamics is based on shape and nothing else. It leads to much stronger predictions than standard Newtonian theory. For the dynamics of Riemannian 3-geometries on which matter fields also evolve, implementation of the new relativity principle establishes unexpected links between special relativity, general relativity and the gauge principle. They all emerge together as a self-consistent complex from a unified and completely relational approach to dynamics. A connection between time and scale invariance is established. In particular, the representation of general relativity as evolution of the shape of space leads to a unique dynamical definition of simultaneity. This opens up the prospect of a solution of the problem of time in quantum gravity on the basis of a fundamental dynamical principle.

  16. Sensor selection cost optimisation for tracking structurally cyclic systems: a P-order solution

    NASA Astrophysics Data System (ADS)

    Doostmohammadian, M.; Zarrabi, H.; Rabiee, H. R.

    2017-08-01

    Measurements and sensing implementations impose certain costs in sensor networks. Sensor selection cost optimisation is the problem of minimising the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking states of a dynamical system for estimation purposes, and for each sensor assume different costs to measure different (realisable) states. The idea is to assign sensors to measure states such that the global cost is minimised. The number and selection of sensor measurements must ensure observability, so that the dynamic state of the system can be tracked with bounded estimation error. The main question we address is how to select the state measurements to minimise the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is to propose a graph-theoretic approach that solves the problem in polynomial time. Note that polynomial-time algorithms are suitable for large-scale systems, as their running time is upper-bounded by a polynomial expression in the size of the algorithm's input. We frame the problem as a linear sum assignment with solution complexity of ?.
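
    Since the paper reduces sensor selection to a linear sum assignment problem, a standard polynomial-time solver illustrates the final step. The cost matrix here is invented, and the paper's graph-theoretic construction of that matrix is not reproduced.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # cost[i, j]: cost for sensor i to measure (realisable) state j;
      # np.inf can mark states a sensor cannot realise.
      cost = np.array([[4.0, 1.0, 3.0],
                       [2.0, 0.0, 5.0],
                       [3.0, 2.0, 2.0]])

      rows, cols = linear_sum_assignment(cost)   # Hungarian-type, polynomial time
      print(list(zip(rows, cols)), cost[rows, cols].sum())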

  17. Investigating gender differences in alcohol problems: a latent trait modeling approach.

    PubMed

    Nichol, Penny E; Krueger, Robert F; Iacono, William G

    2007-05-01

    Inconsistent results have been found in research investigating gender differences in alcohol problems. Previous studies of gender differences used a wide range of methodological techniques, as well as limited assortments of alcohol problems. Parents (1,348 men and 1,402 women) of twins enrolled in the Minnesota Twin Family Study answered questions about a wide range of alcohol problems. A latent trait modeling technique was used to evaluate gender differences in the probability of endorsement at the problem level and for the overall 105-problem scale. Of the 34 problems that showed significant gender differences, 29 were more likely to be endorsed by men than women with equivalent overall alcohol problem levels. These male-oriented symptoms included measures of heavy drinking, duration of drinking, tolerance, and acting out behaviors. Nineteen symptoms were denoted for removal to create a scale that favored neither gender in assessment. Significant gender differences were found in approximately one-third of the symptoms assessed and in the overall scale. Further examination of the nature of gender differences in alcohol problem symptoms should be undertaken to investigate whether a gender-neutral scale should be created or if men and women should be assessed with separate criteria for alcohol dependence and abuse.
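
    Latent trait analyses of this kind compare the probability of endorsing a symptom at equal underlying trait levels. A two-parameter logistic item response sketch, with hypothetical item parameters, shows how a difficulty shift between groups produces the endorsement differences described; none of these numbers come from the study.

      import numpy as np

      def irt_2pl(theta, a, b):
          # Two-parameter logistic IRT: probability of endorsing an item given
          # latent alcohol-problem level theta, discrimination a, difficulty b.
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      theta = 0.5                              # equal overall problem level
      p_men = irt_2pl(theta, a=1.2, b=0.3)     # hypothetical male parameters
      p_women = irt_2pl(theta, a=1.2, b=0.9)   # higher difficulty for women
      print(p_men, p_women)                    # a gap at equal theta signals DIF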

  18. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro–macro decomposition based deterministic AP framework in order to handle efficiently the diffusive regime. For linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (inmore » the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of proposed scheme, especially in the diffusive regime.« less

  19. Relations between the Test of Variables of Attention (TOVA) and the Children's Memory Scale (CMS).

    PubMed

    Riccio, Cynthia A; Garland, Beth H; Cohen, Morris J

    2007-09-01

    There is considerable overlap in the constructs of attention and memory. The objective of this study was to examine the relationship between the Test of Variables of Attention (TOVA), a measure of attention, to components of memory and learning as measured by the Children's Memory Scale (CMS). Participants (N = 105) were consecutive referrals to an out-patient facility, generally for learning or behavior problems, who were administered both the TOVA and the CMS. Significant correlations were found between the omissions score on the TOVA and subscales of the CMS. TOVA variability and TOVA reaction time correlated significantly with subscales of the CMS as well. TOVA commission errors did not correlate significantly with any CMS Index. Although significant, the correlation coefficients indicate that the CMS and TOVA are measuring either different constructs or similar constructs but in different ways. As such, both measures may be useful in distinguishing memory from attention problems.

  20. Rainfall Climatology over Asir Region, Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Sharif, H.; Furl, C.; Al-Zahrani, M.

    2012-04-01

    Arid and semi-arid lands occupy about one-third of the land surface of the earth and support about one-fifth of the world population. The Asir region in Saudi Arabia is an example of such areas faced with the problem of maintaining sustainable water resources, a problem exacerbated by high population growth, land use changes, increasing water demand, and climate variability. In this study, the characteristics of decade-scale variations in precipitation are examined in detail for the Asir region, and the spatio-temporal distributions of rainfall over the region are analyzed. The objectives are to identify the sensitivity, magnitude, and range of changes in annual and seasonal evapotranspiration resulting from observed decade-scale precipitation variations; an additional objective is to characterize orographic controls on the space-time variability of rainfall. The rainfall data are obtained from more than 30 rain gauges spread over the region.
