Single-machine common/slack due window assignment problems with linear decreasing processing times
NASA Astrophysics Data System (ADS)
Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia
2017-08-01
This paper studies common/slack due window assignment problems on a single machine with linear non-increasing processing times, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location cost and due window size cost. Some optimality results are discussed for the common/slack due window assignment problems, and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
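Many O(n log n) results of this kind reduce the objective to a sum of products of positional weights and job processing times, which is then minimized by pairing the largest processing time with the smallest positional weight (a sort-and-match argument). The sketch below illustrates only that generic matching step; the processing times and positional weights are placeholders, not the formulas derived in the paper.

```python
def match_jobs_to_positions(processing_times, positional_weights):
    """Minimize sum over positions of weight * processing time by pairing the
    largest processing time with the smallest positional weight."""
    jobs = sorted(range(len(processing_times)),
                  key=lambda j: processing_times[j], reverse=True)
    positions = sorted(range(len(positional_weights)),
                       key=lambda r: positional_weights[r])
    sequence = [None] * len(jobs)
    for j, r in zip(jobs, positions):
        sequence[r] = j
    return sequence  # sequence[r] = index of the job placed in position r

# Placeholder data: processing times and positional weights are made up.
print(match_jobs_to_positions([5, 2, 7, 3], [1.0, 0.5, 2.0, 0.8]))
```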
Zhao, Chuan-Li; Hsu, Chou-Jung; Hsu, Hua-Feng
2014-01-01
This paper considers single machine scheduling and due date assignment with setup time. The setup time is proportional to the length of the already processed jobs; that is, the setup time is past-sequence-dependent (p-s-d). It is assumed that a job's processing time depends on its position in a sequence. The objective functions include total earliness, the weighted number of tardy jobs, and the cost of due date assignment. We analyze these problems with two different due date assignment methods. We first consider the model with job-dependent position effects. For each case, by converting the problem to a series of assignment problems, we prove that the problems can be solved in O(n^4) time. For the model with job-independent position effects, we prove that the problems can be solved in O(n^3) time by providing a dynamic programming algorithm. PMID: 25258727
Due-Window Assignment Scheduling with Variable Job Processing Times
Wu, Yu-Bin
2015-01-01
We consider a common due-window assignment scheduling problem with variable job processing times on a single machine, where the processing time of a job is a function of its position in a sequence (i.e., learning effect) or its starting time (i.e., deteriorating effect). The problem is to determine the optimal due-window and the processing sequence simultaneously so as to minimize a cost function that includes earliness, tardiness, the window location, the window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID: 25918745
Early retirement and the financial assets of individuals with back problems.
Schofield, Deborah J; Shrestha, Rupendra N; Percival, Richard; Callander, Emily J; Kelly, Simon J; Passey, Megan E
2011-05-01
This paper quantifies the relationship between early retirement due to back problems and wealth, and contributes to a more complete picture of the full costs associated with back problems. The output data set of the microsimulation model Health&WealthMOD was analysed. Health&WealthMOD was specifically designed to measure the economic impacts of ill health on Australian workers aged 45-64 years. People aged 45-64 years who are out of the labour force due to back problems are significantly less likely to have any accumulated wealth. While almost all individuals who are in full-time employment with no chronic health condition have some wealth accumulated, a significantly smaller proportion (89%) of those who have retired early due to back problems do. Of those who have retired early due to back problems and do have some wealth, the total value of this wealth is on average 87% less (95% CI: -90 to -84%) than the total value of wealth accumulated by those who have remained in full-time employment with no health condition, controlling for age, sex and education. The financial burden placed on those retiring early due to back problems is likely to cause financial stress in the future, as not only have retired individuals lost an income stream from paid employment, but they also have little or no wealth to draw upon. Preventing early retirement due to back problems will increase the time individuals have to amass savings to finance their retirement and to protect against financial shocks.
NASA Astrophysics Data System (ADS)
Birgin, Ernesto G.; Ronconi, Débora P.
2012-10-01
The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
Single machine scheduling with slack due dates assignment
NASA Astrophysics Data System (ADS)
Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin
2017-04-01
This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e. all jobs have slack due date). The goal is to find a sequence for jobs, together with a due date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness value and common flow allowance cost, where the weight is a position-dependent weight. In order to solve this problem, an ? time algorithm is proposed. Some extensions of the problem are also shown.
Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment
Wan, Long
2014-01-01
We investigate a common due-date assignment scheduling problem with a variable maintenance on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties on an optimal solution for our problem. For a special case with identical jobs we propose an optimal polynomial time algorithm followed by a numerical example. PMID:25147861
Time Dependent Heterogeneous Vehicle Routing Problem for Catering Service Delivery Problem
NASA Astrophysics Data System (ADS)
Azis, Zainal; Mawengkang, Herman
2017-09-01
The heterogeneous vehicle routing problem (HVRP) is a variant of the vehicle routing problem (VRP) which describes various types of vehicles with different capacities to serve a set of customers with known geographical locations. This paper considers the optimal service deliveries of meals of a catering company located in Medan City, Indonesia. Due to road conditions as well as traffic, it is necessary for the company to use different types of vehicles to fulfill customer demand in time. The HVRP incorporates time dependency of travel times on the particular time of the day. The objective is to minimize the sum of the costs of travelling and elapsed time over the planning horizon. The problem can be modeled as a linear mixed integer program and we address a feasible neighbourhood search approach to solve the problem.
The solution of large multi-dimensional Poisson problems
NASA Technical Reports Server (NTRS)
Stone, H. S.
1974-01-01
The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.
Use of a Behavioral Graphic Organizer to Reduce Disruptive Behavior
ERIC Educational Resources Information Center
McDaniel, Sara C.; Flower, Andrea
2015-01-01
Students with challenging behavior spend substantial amounts of time away from instruction due to behavioral problems. Time away from instruction reduces their opportunities for learning, which are critical as these students typically demonstrate academic performance below their same-age peers. After removal from instruction due to behavioral…
Effects of computing time delay on real-time control systems
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Cui, Xianzhong
1988-01-01
The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
GRADIENT: Graph Analytic Approach for Discovering Irregular Events, Nascent and Temporal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie
2015-03-31
Finding a time-ordered signature within large graphs is a computationally complex problem due to the combinatorial explosion of potential patterns. GRADIENT is designed to search and understand that problem space.
Sleep promotes analogical transfer in problem solving.
Monaghan, Padraic; Sio, Ut Na; Lau, Sum Wai; Woo, Hoi Kei; Linkenauger, Sally A; Ormerod, Thomas C
2015-10-01
Analogical problem solving requires using a known solution from one problem to apply to a related problem. Sleep is known to have profound effects on memory and information restructuring, and so we tested whether sleep promoted such analogical transfer, determining whether improvement was due to subjective memory for problems, subjective recognition of similarity across related problems, or abstract generalisation of structure. In Experiment 1, participants were exposed to a set of source problems. Then, after a 12-h period involving sleep or wake, they attempted target problems structurally related to the source problems but with different surface features. Experiment 2 controlled for time of day effects by testing participants either in the morning or the evening. Sleep improved analogical transfer, but the effects were not due to improvements in subjective memory or similarity recognition, but rather to structural generalisation across problems.
Labor force participation and the influence of having back problems on income poverty in Australia.
Schofield, Deborah J; Callander, Emily J; Shrestha, Rupendra N; Percival, Richard; Kelly, Simon J; Passey, Megan E
2012-06-01
Cross-sectional study of 45- to 64-year-old Australians. To assess the relationship between chronic back problems and being in income poverty among the older working-aged population. Older workers who leave the labor force due to chronic back problems have fragile economic situations and as such are likely to have poorer living standards. Poverty is one way of comparing the living standards of different individuals within society. The 2003 Survey of Disability, Ageing and Carers data were used, along with the 50% of the median equivalized income-unit income poverty line to identify those in poverty. Logistic regression models were used to look at the relationship between chronic back problems, labor force participation, and poverty. Regardless of labor force participation status (employed full-time, part-time, or not in the labor force at all), those with chronic back problems were significantly more likely to be in poverty. Those not in the labor force due to chronic back problems were significantly more likely to be in poverty than those in the labor force full-time with no chronic health condition (Odds ratio [OR]: 0.07, 95% CI: 0.07-0.07, P < 0.0001). Further, those employed part-time with no chronic health condition were 48% less likely to be in poverty (OR: 0.52, 95% CI: 0.51-0.53, P < 0.0001) than those also employed part-time but with chronic back problems. It was found that among those with back problems, those out of the labor force were significantly more likely to be in poverty than those employed part-time or full-time (OR: 0.44, 95% CI: 0.43-0.44, P < 0.0001; OR: 0.10, 95% CI: 0.10-0.10, P < 0.0001, respectively). This highlights the need to prevent and effectively treat chronic back problems, as these conditions are associated with reduced living standards.
NASA Astrophysics Data System (ADS)
Susilawati, Enny; Mawengkang, Herman; Efendi, Syahril
2018-01-01
Generally, a Vehicle Routing Problem with Time Windows (VRPTW) can be defined as the problem of determining the optimal set of routes used by a fleet of vehicles to serve a given set of customers with service time restrictions; the objective is to minimize the total travel cost (related to the travel times or distances) and operational cost (related to the number of vehicles used). In this paper we address a variant of the VRPTW in which the fleet of vehicles is heterogeneous due to the different sizes of customer demand. The problem, called the Heterogeneous VRP (HVRP), also includes service levels. We use an integer programming model to describe the problem. A feasible neighbourhood approach is proposed to solve the model.
In Swimming Branch, Identification of Family Burden of Families of Mentally Disabled Athletes
ERIC Educational Resources Information Center
Nacar, Eyyup; Karahuseyinoglu, M. Fatih; Karatas, Baykal; Altungul, Oguzhan
2017-01-01
The families of individuals with Down syndrome, autism, and mental problems who have special requirements experience physical problems, tiredness, and an antisocial life from time to time due to the difficulties of their children, which bring additional costs to the family budget. The aim of this study is to identify the family burdens imposed by kids with…
Fast and Efficient Discrimination of Traveling Salesperson Problem Stimulus Difficulty
ERIC Educational Resources Information Center
Dry, Matthew J.; Fontaine, Elizabeth L.
2014-01-01
The Traveling Salesperson Problem (TSP) is a computationally difficult combinatorial optimization problem. In spite of its relative difficulty, human solvers are able to generate close-to-optimal solutions in a close-to-linear time frame, and it has been suggested that this is due to the visual system's inherent sensitivity to certain geometric…
Inexact adaptive Newton methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertiger, W.I.; Kelsey, F.J.
1985-02-01
The Inexact Adaptive Newton method (IAN) is a modification of the Adaptive Implicit Method (AIM) with improved Newton convergence. Both methods simplify the Jacobian at each time step by zeroing coefficients in regions where saturations are changing slowly. The methods differ in how the diagonal block terms are treated. On test problems with up to 3,000 cells, IAN consistently saves approximately 30% of the CPU time when compared to the fully implicit method. AIM shows similar savings on some problems, but takes as much CPU time as fully implicit on other test problems due to poor Newton convergence.
Dual transponder time synchronization at C band using ATS-3.
NASA Technical Reports Server (NTRS)
Mazur, W. E., Jr.
1972-01-01
The use of artificial satellites for time synchronization of geographically distant clocks is hindered by problems due to satellite motion or equipment delay measurements. The ATS-3 satellite with its two C-band transponder channels helps solve these problems through techniques for synchronization to accuracies of tenths of microseconds. Portable cesium clocks were used to verify the accuracy of the described system.
A multilevel probabilistic beam search algorithm for the shortest common supersequence problem.
Gallardo, José E
2012-01-01
The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, Artificial Intelligence and especially Bioinformatics. Due to its NP-hardness, we cannot expect to efficiently solve this problem using conventional exact techniques. This paper presents a heuristic to tackle this problem based on the use, at different levels, of a probabilistic variant of a classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but the required execution times may increase considerably.
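As a rough, generic illustration of the underlying idea (a probabilistic variant of Beam Search applied to the shortest common supersequence), the sketch below keeps a small beam of partial supersequences, extends each one by a symbol currently required by some input string, and samples the surviving states with probability proportional to how much of the input they already cover. It is not the multilevel algorithm of the paper, and the coverage-based scoring is an assumption.

```python
import random

def probabilistic_beam_scs(strings, beam_width=5, rng=random.Random(0)):
    """Heuristic shortest common supersequence via probabilistic beam search.

    A state is (positions, sequence): one pointer per input string plus the
    partial supersequence built so far.  Each extension appends one symbol
    currently required by at least one string, so every step makes progress.
    """
    beam = [(tuple(0 for _ in strings), "")]
    while True:
        finished = [s for s in beam
                    if all(p == len(w) for p, w in zip(s[0], strings))]
        if finished:
            return min(finished, key=lambda s: len(s[1]))[1]
        candidates = []
        for positions, seq in beam:
            next_symbols = {w[p] for p, w in zip(positions, strings) if p < len(w)}
            for c in next_symbols:
                new_pos = tuple(p + 1 if p < len(w) and w[p] == c else p
                                for p, w in zip(positions, strings))
                candidates.append((new_pos, seq + c))
        # Sample the next beam with probability proportional to coverage.
        weights = [1 + sum(pos) for pos, _ in candidates]
        beam = rng.choices(candidates, weights=weights,
                           k=min(beam_width, len(candidates)))

print(probabilistic_beam_scs(["abcbdab", "bdcaba"]))
```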
ERIC Educational Resources Information Center
Lomas, Joanna E.; Fisher, Wayne W.; Kelley, Michael E.
2010-01-01
Prior research indicates that reinforcement of an appropriate response (e.g., compliance) can produce concomitant reductions in problem behavior reinforced by escape when problem behavior continues to produce negative reinforcement (e.g., Lalli et al., 1999). These effects may be due to a preference for positive over negative reinforcement or to…
Variational Methods For Sloshing Problems With Surface Tension
NASA Astrophysics Data System (ADS)
Tan, Chee Han; Carlson, Max; Hohenegger, Christel; Osting, Braxton
2016-11-01
We consider the sloshing problem for an incompressible, inviscid, irrotational fluid in a container, including effects due to surface tension on the free surface. We restrict ourselves to a constant contact angle and we seek time-harmonic solutions of the linearized problem, which describes the time-evolution of the fluid due to a small initial disturbance of the surface at rest. As opposed to the zero surface tension case, where the problem reduces to a partial differential equation for the velocity potential, we obtain a coupled system for the velocity potential and the free surface displacement. We derive a new variational formulation of the coupled problem and establish the existence of solutions using the direct method from the Calculus of Variations. In the limit of zero surface tension, we recover the variational formulation of the classical Steklov eigenvalue problem, as derived by B. A. Troesch. For the particular case of an axially symmetric container, we propose a finite element numerical method for computing the sloshing modes of the coupled system. The scheme is implemented in FEniCS and we obtain a qualitative description of the effect of surface tension on the sloshing modes.
Sleep and Mental Health in Undergraduate Students with Generally Healthy Sleep Habits
Milojevich, Helen M.; Lukowski, Angela F.
2016-01-01
Whereas previous research has indicated that sleep problems tend to co-occur with increased mental health issues in university students, relatively little is known about relations between sleep quality and mental health in university students with generally healthy sleep habits. Understanding relations between sleep and mental health in individuals with generally healthy sleep habits is important because (a) student sleep habits tend to worsen over time and (b) even time-limited experience of sleep problems may have significant implications for the onset of mental health problems. In the present research, 69 university students with generally healthy sleep habits completed questionnaires about sleep quality and mental health. Although participants did not report clinically concerning mental health issues as a group, global sleep quality was associated with mental health. Regression analyses revealed that nighttime sleep duration and the frequency of nighttime sleep disruptions were differentially related to total problems and clinically-relevant symptoms of psychological distress. These results indicate that understanding relations between sleep and mental health in university students with generally healthy sleep habits is important not only due to the large number of undergraduates who experience sleep problems and mental health issues over time but also due to the potential to intervene and improve mental health outcomes before they become clinically concerning. PMID:27280714
NASA Technical Reports Server (NTRS)
Smith, Stephen F.; Pathak, Dhiraj K.
1991-01-01
In this paper, we report work aimed at applying concepts of constraint-based problem structuring and multi-perspective scheduling to an over-subscribed scheduling problem. Previous research has demonstrated the utility of these concepts as a means for effectively balancing conflicting objectives in constraint-relaxable scheduling problems, and our goal here is to provide evidence of their similar potential in the context of HST observation scheduling. To this end, we define and experimentally assess the performance of two time-bounded heuristic scheduling strategies in balancing the tradeoff between resource setup time minimization and satisfaction of absolute time constraints. The first strategy considered is motivated by dispatch-based manufacturing scheduling research, and employs a problem decomposition that concentrates local search on minimizing resource idle time due to setup activities. The second is motivated by research in opportunistic scheduling and advocates a problem decomposition that focuses attention on the goal activities that have the tightest temporal constraints. Analysis of experimental results gives evidence of differential superiority on the part of each strategy in different problem solving circumstances. A composite strategy based on recognition of characteristics of the current problem solving state is then defined and tested to illustrate the potential benefits of constraint-based problem structuring and multi-perspective scheduling in over-subscribed scheduling problems.
NASA Astrophysics Data System (ADS)
Sharma, Pankaj; Jain, Ajai
2014-12-01
Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of the makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time performance measures. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for investigation purposes. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization percentage of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
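As a simple illustration of how a setup-oriented dispatching rule of this kind operates, the sketch below picks, from the queue, the job whose sequence-dependent setup time after the job just completed is smallest (the SIMSET idea described above); the setup-time matrix and job identifiers are hypothetical.

```python
def simset_next(queue, last_job, setup):
    """Shortest setup time rule: among queued jobs, choose the one whose
    setup time after `last_job` is smallest (ties broken arbitrarily)."""
    return min(queue, key=lambda j: setup[(last_job, j)])

# Hypothetical sequence-dependent setup times between jobs 0..3.
setup = {(i, j): abs(i - j) + 1 for i in range(4) for j in range(4) if i != j}
print(simset_next(queue=[1, 2, 3], last_job=0, setup=setup))  # -> 1
```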
Short Workweeks during Economic Downturns.
ERIC Educational Resources Information Center
Bednarzik, Robert W.
1983-01-01
The most common economic reasons for part-time employment during recessions are cutbacks in weekly hours due to slack work and failure to find full-time positions. Each is characteristically distinct and illustrates different underlying labor market problems. (JOW)
Resource-constrained scheduling with hard due windows and rejection penalties
NASA Astrophysics Data System (ADS)
Garcia, Christopher
2016-09-01
This work studies a scheduling problem where each job must be either accepted and scheduled to complete within its specified due window, or rejected altogether. Each job has a certain processing time and contributes a certain profit if accepted or penalty cost if rejected. There is a set of renewable resources, and no resource limit can be exceeded at any time. Each job requires a certain amount of each resource when processed, and the objective is to maximize total profit. A mixed-integer programming formulation and three approximation algorithms are presented: a priority rule heuristic, an algorithm based on the metaheuristic for randomized priority search and an evolutionary algorithm. Computational experiments comparing these four solution methods were performed on a set of generated benchmark problems covering a wide range of problem characteristics. The evolutionary algorithm outperformed the other methods in most cases, often significantly, and never significantly underperformed any method.
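A generic time-indexed mixed-integer formulation of this kind of accept/reject problem can be sketched as follows, using the open-source PuLP modeller; the job data, horizon, single resource and capacity are made up, and this is a simplified illustration rather than the paper's exact formulation. A binary variable marks each feasible start time inside a job's due window, rejected jobs pay their penalty, and the resource capacity is enforced in every period.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Hypothetical instance: processing time, profit, rejection penalty,
# earliest/latest completion of the due window, and resource usage per job.
jobs = {
    "A": {"p": 3, "profit": 10, "penalty": 4, "win": (3, 6), "use": 2},
    "B": {"p": 2, "profit": 7,  "penalty": 2, "win": (2, 5), "use": 1},
    "C": {"p": 4, "profit": 12, "penalty": 5, "win": (4, 8), "use": 2},
}
horizon, capacity = 8, 3  # single renewable resource for brevity

prob = LpProblem("accept_reject_scheduling", LpMaximize)
# x[j, t] = 1 if job j starts at time t and completes inside its due window.
x = {(j, t): LpVariable(f"x_{j}_{t}", cat=LpBinary)
     for j, d in jobs.items()
     for t in range(1, horizon - d["p"] + 2)
     if d["win"][0] <= t + d["p"] - 1 <= d["win"][1]}

accepted = {j: lpSum(x[jj, t] for (jj, t) in x if jj == j) for j in jobs}
prob += lpSum(jobs[j]["profit"] * accepted[j]
              - jobs[j]["penalty"] * (1 - accepted[j]) for j in jobs)

for j in jobs:                      # schedule each accepted job at most once
    prob += accepted[j] <= 1
for tau in range(1, horizon + 1):   # resource capacity in every period
    prob += lpSum(jobs[j]["use"] * x[j, t] for (j, t) in x
                  if t <= tau <= t + jobs[j]["p"] - 1) <= capacity

prob.solve()
print({j: int(accepted[j].value()) for j in jobs})
```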
Frilander, Heikki; Lallukka, Tea; Viikari-Juntura, Eira; Heliövaara, Markku; Solovieva, Svetlana
2016-01-01
Disability retirement causes a significant burden on society and affects the well-being of individuals. Early health problems as determinants of disability retirement have received little attention. The objective was to study whether interrupting compulsory military service is an early indicator of disability retirement among Finnish men and whether seeking medical advice during military service increases the risk of all-cause disability retirement and disability retirement due to mental disorders and musculoskeletal diseases. We also looked at secular trends in these associations. We examined a nationally representative sample of 2069 men who had entered military service during 1967-1996. We linked military service health records with cause-specific register data on disability retirement from 1968 to 2008. Secular trends were explored in three service time strata. We used the Cox regression model to estimate proportional hazard ratios and their 95% confidence intervals. During the follow-up time altogether 140 (6.8%) men retired due to disability, mental disorders being the most common cause. The men who interrupted service had a remarkably higher cumulative incidence of disability retirement (18.9%). The associations between seeking medical advice during military service and all-cause disability retirement were similar across the three service time cohorts (overall hazard ratio 1.40 per one standard deviation of the number of visits; 95% confidence interval 1.26-1.56). Visits due to mental problems predicted disability retirement due to mental disorders in the men who served between 1987 and 1996, and a tendency for a similar cause-specific association was seen for musculoskeletal diseases in the men who served in 1967-1976. In conclusion, health problems, in particular mental problems, during late adolescence are strong determinants of disability retirement. Call-up examinations and military service provide access to the entire age cohort of men, where persons at risk for work disability can be identified and early preventive measures initiated.
Nøvik, Torunn Stene; Jozefiak, Thomas
2014-04-01
Studies about changes in the prevalence of emotional and behaviour problems across time are lacking, especially among younger children. To determine if the level of parent-reported emotional and behaviour problems and competencies in young Norwegian school children had changed across a 16-year time interval. We compared parent reports obtained by the Child Behavior Checklist in two samples of children aged 7-9 years from the general population assessed in 1991 and 2007. The results demonstrated overall stability or slight decreases of emotional and behaviour problems and a significant increase in competencies, mainly due to increased activity and social competence scores in the 2007 sample. Boys obtained higher scores than girls in Total Problems, Externalizing and Attention problems at both time points and there was a high stability of the rank order of items. The findings suggest stability in child emotional and behaviour problems, and an increase of competencies across the period.
J.-L. Lions' problem concerning maximal regularity of equations governed by non-autonomous forms
NASA Astrophysics Data System (ADS)
Fackler, Stephan
2017-05-01
An old problem due to J.-L. Lions going back to the 1960s asks whether the abstract Cauchy problem associated to non-autonomous forms has maximal regularity if the time dependence is merely assumed to be continuous or even measurable. We give a negative answer to this question and discuss the minimal regularity needed for positive results.
A framework for simultaneous aerodynamic design optimization in the presence of chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi
Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.
Known TCP Implementation Problems
NASA Technical Reports Server (NTRS)
Paxson, Vern (Editor); Allman, Mark; Dawson, Scott; Fenner, William; Griner, Jim; Heavens, Ian; Lahey, K.; Semke, J.; Volz, B.
1999-01-01
This memo catalogs a number of known TCP implementation problems. The goal in doing so is to improve conditions in the existing Internet by enhancing the quality of current TCP/IP implementations. It is hoped that both performance and correctness issues can be resolved by making implementors aware of the problems and their solutions. In the long term, it is hoped that this will provide a reduction in unnecessary traffic on the network, the rate of connection failures due to protocol errors, and load on network servers due to time spent processing both unsuccessful connections and retransmitted data. This will help to ensure the stability of the global Internet. Each problem is defined by four fields: Name of Problem (the name associated with the problem; in this memo, the name is given as a subsection heading); Classification (one or more problem categories for which the problem is classified: "congestion control", "performance", "reliability", "resource management"); Description (a definition of the problem, succinct but including necessary background material); and Significance (a brief summary of the sorts of environments for which the problem is significant).
Amanatidou, Elisavet; Samiotis, Georgios; Trikoilidou, Eleni; Pekridis, George; Taousanidis, Nikolaos
2015-02-01
Zero net sludge growth can be achieved by complete retention of solids in activated sludge wastewater treatment, especially for high-strength and biodegradable wastewaters. When increasing the solids retention time, MLSS and MLVSS concentrations reach a plateau phase and observed growth yield values tend to zero (Yobs ≈ 0). In this work, in order to evaluate sedimentation problems arising due to high MLSS concentrations and complete sludge retention operational conditions, two identical innovative slaughterhouse wastewater treatment plants were studied. Measurements of wastewater quality characteristics, treatment plant operational conditions, sludge microscopic analysis and state point analysis were conducted. Results have shown that low COD/nitrogen ratios increase sludge bulking and flotation phenomena due to accidental denitrification in clarifiers. A high return activated sludge rate is essential in complete retention systems as it reduces sludge condensation and hydraulic retention time in the clarifiers. Under certain operational conditions sludge loading rates can greatly exceed literature limit values. The presented methodology is a useful tool for estimation of sedimentation problems encountered in activated sludge wastewater treatment plants with complete sludge retention.
Due Process Hearing Case Study
ERIC Educational Resources Information Center
Bateman, David F.
2008-01-01
Chris resides with his parents and attends school in an unnamed district ("the District"). During the second half of second grade, Chris started to have problems with reading and written expression. The District evaluated Chris and did not find problems significant enough to identify him as having a learning disability. At about the same time his…
Magic from Human Regenerative Technologies--Stem Cells
ERIC Educational Resources Information Center
Ritz, John M.
2012-01-01
Many people suffer from chronic diseases and problems due to injury from accidents or ailments. Some problems, such as measles and cancer, can be cured or put into remission with time, medicine, or treatments. Other ailments, such as high blood pressure, failing kidneys, and cystic fibrosis, cannot be cured and require continuous use of…
Problems of the Randomization Test for AB Designs
ERIC Educational Resources Information Center
Manolov, Rumen; Solanas, Antonio
2009-01-01
N = 1 designs imply repeated registrations of the behaviour of the same experimental unit and the measurements obtained are often few due to time limitations, while they are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different…
New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times
NASA Astrophysics Data System (ADS)
Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid
2017-09-01
In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. Although these simple rules are not efficient enough, due to their simplicity and lack of general insight, composite dispatching rules perform well because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. It is clear from the experimental results that the composite dispatching rules formed by genetic programming perform better in minimization of mean flow time and mean tardiness than the others.
1988-06-01
Figure 12. Prediction of Response due to Second Stage Vane. ... assessment methods, written by Armstrong. The problem of life-time prediction is reviewed by Labourdette, who also summarizes ONERA's research in ... applicable to single blades and bladed assemblies. The blade fatigue problem and its assessment methods, and life-time prediction, are considered. Aeroelastic…
Lomas, Joanna E; Fisher, Wayne W; Kelley, Michael E
2010-01-01
Prior research indicates that reinforcement of an appropriate response (e.g., compliance) can produce concomitant reductions in problem behavior reinforced by escape when problem behavior continues to produce negative reinforcement (e.g., Lalli et al., 1999). These effects may be due to a preference for positive over negative reinforcement or to positive reinforcement acting as an abolishing operation, rendering demands less aversive and escape from demands less effective as negative reinforcement. In the current investigation, we delivered a preferred food item and praise on a variable-time 15-s schedule while providing escape for problem behavior on a fixed-ratio 1 schedule in a demand condition for 3 participants with problem behavior maintained by negative reinforcement. Results for all 3 participants showed that variable-time delivery of preferred edible items reduced problem behavior even though escape continued to be available for these responses. These findings are discussed in the context of motivating operations.
[Multiresistant tuberculosis in Denmark 1993-1996].
Viskum, K; Kok-Jensen, A
1998-05-18
Infections with multiresistant tubercle bacilli have also become a problem in the rich part of the world. The reasons are lack of compliance in patients with lifestyle problems and ineffectiveness of the health system due to lack of funding. During a four-year period, 1993-1996, ten patients were seen in Denmark with tuberculosis due to multiresistant Mycobacterium tuberculosis. Nine were infected abroad; one developed MDR-TB during treatment in Denmark. It is possible to cure these patients, but it is expensive and takes a long time. In the future, more cases created within Denmark are likely to be seen due to lack of funding for the tuberculosis programme, and, depending on immigration, further cases created abroad are expected.
Solutions to time variant problems of real-time expert systems
NASA Technical Reports Server (NTRS)
Yeh, Show-Way; Wu, Chuan-Lin; Hung, Chaw-Kwei
1988-01-01
Real-time expert systems for monitoring and control are driven by input data which changes with time. One of the subtle problems of this field is the propagation of time variant problems from rule to rule. This propagation problem is even more complicated under a multiprogramming environment where the expert system may issue test commands to the system to get data and may access time-consuming devices to retrieve data for concurrent reasoning. Two approaches are used to handle the flood of input data. Snapshots can be taken to freeze the system from time to time. The expert system treats the system as a stationary one and traces changes by comparing consecutive snapshots. In the other approach, when an input is available, the rules associated with it are evaluated. For both approaches, if the premise condition of a fired rule is changed to being false, the downstream rules should be deactivated. If the status change is due to disappearance of a transient problem, actions taken by the fired downstream rules which are no longer true may need to be undone. If a downstream rule is being evaluated, it should not be fired. Three mechanisms for solving this problem are discussed: tracing, backward checking, and censor setting. In the forward tracing mechanism, when the premise conditions of a fired rule become false, the premise conditions of downstream rules which have been fired or are being evaluated due to the firing of that rule are reevaluated. A tree with its root at the rule being deactivated is traversed. In the backward checking mechanism, when a rule is being fired, the expert system checks back on the premise conditions of the upstream rules that result in evaluation of the rule to see whether it should be fired. The root of the tree being traversed is the rule being fired. In the censor setting mechanism, when a rule is to be evaluated, a censor is constructed based on the premise conditions of the upstream rules and the censor is evaluated just before the rule is fired. Unlike the backward checking mechanism, this one does not search the upstream rules. This paper explores the details of the implementation of the three mechanisms.
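To make the forward tracing mechanism more concrete, here is a minimal sketch (not taken from the paper) that walks a rule dependency graph when a fired rule's premise becomes false, deactivating downstream rules and undoing the actions of those that had already fired; the graph structure, rule records and undo hooks are hypothetical.

```python
from collections import deque

def forward_trace_deactivate(rule_id, downstream, rules):
    """Deactivate a rule whose premise turned false, then propagate to the
    rules that were fired (or are being evaluated) because of it.

    downstream: dict rule_id -> list of dependent rule ids.
    rules: dict rule_id -> {"state": "fired" | "evaluating" | "idle",
                            "undo": callable or None}.
    """
    queue = deque([rule_id])
    while queue:
        r = queue.popleft()
        info = rules[r]
        if info["state"] == "fired" and info.get("undo"):
            info["undo"]()            # roll back actions of a fired rule
        info["state"] = "idle"        # a rule being evaluated is not fired
        queue.extend(downstream.get(r, []))

# Toy rule network: R1 triggered R2, which triggered R3.
rules = {r: {"state": "fired", "undo": lambda r=r: print("undo", r)}
         for r in ("R1", "R2", "R3")}
forward_trace_deactivate("R1", {"R1": ["R2"], "R2": ["R3"]}, rules)
```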
Insights Into the Fractional Order Initial Value Problem via Semi-Infinite Systems
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Lorenzo, Carl F.
1998-01-01
This paper considers various aspects of the initial value problem for fractional order differential equations. The main contribution of this paper is to use the solutions to known spatially distributed systems to demonstrate that fractional differintegral operators require an initial condition term that is time-varying due to past distributed storage of information.
DOT National Transportation Integrated Search
2015-10-01
Visibility is one of the most important impacts weather can have on road systems; weather-related visibility reduction is most often due to fog. Florida is among the top-rated states in the United States with regards to traffic safety problems result...
Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem
NASA Technical Reports Server (NTRS)
Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith
2011-01-01
The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates. The total weighted tardiness is used to evaluate a schedule's quality. Therefore, ARSP can be modeled as parallel machine scheduling with release times and due dates to minimize the total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop approximate or heuristic algorithms to obtain solutions in reasonable computation times. In this paper, a Meta-RaPS-ATC algorithm is implemented to create high quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness to a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which is a good rule for scheduling problems with a tardiness objective, is used to construct initial solutions, which are improved by an exchanging operation. Results are presented for generated instances.
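For reference, the Apparent Tardiness Cost index that such construction heuristics typically use is I_j(t) = (w_j / p_j) * exp(-max(d_j - p_j - t, 0) / (k * p_bar)), where p_bar is the average processing time of the remaining jobs and k is a look-ahead parameter. The sketch below, with hypothetical job data and a hypothetical value of k, shows how one dispatch step might rank unscheduled jobs by this index; it is a generic illustration, not the Meta-RaPS-ATC implementation of the paper.

```python
import math

def atc_index(job, t, k, p_bar):
    """Apparent Tardiness Cost priority of a job at time t.

    job: dict with weight w, processing time p, due date d.
    k: look-ahead scaling parameter (hypothetical value below).
    p_bar: average processing time of the remaining jobs.
    """
    slack = max(job["d"] - job["p"] - t, 0.0)
    return (job["w"] / job["p"]) * math.exp(-slack / (k * p_bar))

def dispatch_next(jobs, t, k=2.0):
    """Pick the unscheduled job with the highest ATC index."""
    p_bar = sum(j["p"] for j in jobs) / len(jobs)
    return max(jobs, key=lambda j: atc_index(j, t, k, p_bar))

# Hypothetical instance: weights, processing times and due dates are made up.
jobs = [
    {"id": 1, "w": 2.0, "p": 4.0, "d": 10.0},
    {"id": 2, "w": 1.0, "p": 3.0, "d": 6.0},
    {"id": 3, "w": 3.0, "p": 5.0, "d": 20.0},
]
print(dispatch_next(jobs, t=0.0)["id"])
```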
Differential geometric treewidth estimation in adiabatic quantum computation
NASA Astrophysics Data System (ADS)
Wang, Chi; Jonckheere, Edmond; Brun, Todd
2016-10-01
The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. In particular, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; and (vi) presenting an initial result on developing an onboard table lookup method to obtain almost fuel-optimal solutions in real time.
ERIC Educational Resources Information Center
Keen-Rocha, Linda
2005-01-01
Science instructors sometimes avoid inquiry-based activities due to limited classroom time. Inquiry takes time, as students choose problems, design experiments, obtain materials, conduct investigations, gather data, communicate results, and discuss their experiments. While there are no quick solutions to time concerns, the 5E learning cycle seeks…
NASA Technical Reports Server (NTRS)
Reiskin, A. B.
1978-01-01
The application of the Lixiscope in dentistry was studied. Due to a problem with resolution, it is concluded that, at the present time, the Lixiscope can only be used as a previewing device for conventional radiography.
Systemic Engagement: Universities as Partners in Systemic Approaches to Community Change
ERIC Educational Resources Information Center
McNall, Miles A.; Barnes-Najor, Jessica V.; Brown, Robert E.; Doberneck, Diane M.; Fitzgerald, Hiram E.
2015-01-01
The most pressing social problems facing humanity in the 21st century are what systems theorist Russell Ackoff referred to as "messes"--complex dynamic systems of problems that interact and reinforce each other over time. In this article, the authors argue that the lack of progress in managing messes is in part due to the predominance of…
Improving Time Management for the Working Student.
ERIC Educational Resources Information Center
Anderson, Tim; Lott, Rod; Wieczorek, Linda
This action research project implemented and evaluated a program for increasing time spent on homework. The project was intended to improve academic achievement among five employed high school students taking geometry and physical science who were also employed more than 15 hours per week. The problem of lower academic achievement due to…
QUICR-learning for Multi-Agent Coordination
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2006-01-01
Coordinating multiple agents that need to perform a sequence of actions to maximize a system level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t' > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards-learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a ten fold increase in performance over existing methods.
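As a rough illustration of the idea behind immediate counterfactual (difference) rewards, the sketch below performs a tabular Q-update in which an agent is credited with G(z) minus G(z with its own action removed), i.e. the system reward minus the reward obtained when the agent's action is replaced by a null action. The global_reward function, the null action and the state encoding are all hypothetical; this is not the authors' QUICR-learning implementation.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95
Q = defaultdict(float)  # keyed by (agent, state, action)

def quicr_update(agent, state, action, next_state, joint_actions,
                 global_reward, actions, null_action=None):
    """One tabular Q-update driven by an immediate counterfactual reward.

    global_reward(joint_actions) -> float is a hypothetical system-level
    reward; the counterfactual replaces this agent's action with a null
    action before re-evaluating the reward.
    """
    counterfactual = dict(joint_actions)
    counterfactual[agent] = null_action        # "remove" the agent's contribution
    d_reward = global_reward(joint_actions) - global_reward(counterfactual)

    best_next = max(Q[(agent, next_state, a)] for a in actions)
    td_target = d_reward + GAMMA * best_next
    Q[(agent, state, action)] += ALPHA * (td_target - Q[(agent, state, action)])

# Toy usage with a made-up two-agent reward: G = sum of chosen action values.
def G(joint):
    return sum(v for v in joint.values() if v is not None)

quicr_update(agent="a1", state=0, action=1, next_state=1,
             joint_actions={"a1": 1, "a2": 2}, global_reward=G,
             actions=[0, 1, 2])
print(Q[("a1", 0, 1)])
```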
Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows
NASA Astrophysics Data System (ADS)
Jittamai, Phongchai
This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products into an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products into an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems and also providing a solution methodology to compute an input schedule that yields minimum total time violation from due delivery time-windows. The problem is proved to be NP-complete. The heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute an input schedule for the pipeline problem. This algorithm is implemented in no longer than O(T·E) time. This dissertation also extends the study to examine some operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no longer than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the problems tested were more than 30% greater than optimal values and approximately 40% of the tested problems were solved optimally by the algorithms.
Time-dependent wave splitting and source separation
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nataf, Frédéric; Assous, Franck
2017-02-01
Starting from classical absorbing boundary conditions, we propose a method for the separation of time-dependent scattered wave fields due to multiple sources or obstacles. In contrast to previous techniques, our method is local in space and time, deterministic, and avoids a priori assumptions on the frequency spectrum of the signal. Numerical examples in two space dimensions illustrate the usefulness of wave splitting for time-dependent scattering problems.
Development and Evaluation of a Casualty Evacuation Model for a European Conflict.
1985-12-01
EVAC, the computer code which implements our technique, has been used to solve a series of test problems in less time and requiring less memory than ... the order of 1/K the amount of main memory for a K-commodity problem, so it can solve significantly larger problems than MCNF. ... technique may require only half the memory of the general L.P. package [6]. These advances are due to the efficient data structures which have been
Technology and Education: A Comparative Survey
ERIC Educational Resources Information Center
Blum, Albert A.
1971-01-01
The article deals with the efforts of nine different countries to overcome the problems of job obsolescence due to automation and the education and training of youth to fit the needs of the times. (EK)
O(2) Hopf bifurcation of viscous shock waves in a channel
NASA Astrophysics Data System (ADS)
Pogan, Alin; Yao, Jinghua; Zumbrun, Kevin
2015-07-01
Extending work of Texier and Zumbrun in the semilinear non-reflection symmetric case, we study O(2) transverse Hopf bifurcation, or "cellular instability", of viscous shock waves in a channel, for a class of quasilinear hyperbolic-parabolic systems including the equations of thermoviscoelasticity. The main difficulties are to (i) obtain Fréchet differentiability of the time-T solution operator by appropriate hyperbolic-parabolic energy estimates, and (ii) handle O(2) symmetry in the absence of either center manifold reduction (due to lack of spectral gap) or (due to nonstandard quasilinear hyperbolic-parabolic form) the requisite framework for treatment by spatial dynamics on the space of time-periodic functions, the two standard treatments for this problem. The latter issue is resolved by Lyapunov-Schmidt reduction of the time-T map, yielding a four-dimensional problem with O(2) plus approximate S1 symmetry, which we treat "by hand" using direct Implicit Function Theorem arguments. The former is treated by balancing information obtained in Lagrangian coordinates with that from associated constraints. Interestingly, this argument does not apply to gas dynamics or magnetohydrodynamics (MHD), due to the infinite-dimensional family of Lagrangian symmetries corresponding to invariance under arbitrary volume-preserving diffeomorphisms.
Developments in boundary element methods - 2
NASA Astrophysics Data System (ADS)
Banerjee, P. K.; Shaw, R. P.
This book is a continuation of the effort to demonstrate the power and versatility of boundary element methods which began in Volume 1 of this series. While Volume 1 was designed to introduce the reader to a selected range of problems in engineering for which the method has been shown to be efficient, the present volume has been restricted to time-dependent problems in engineering. Boundary element formulation for melting and solidification problems is considered along with transient flow through porous elastic media, applications of boundary element methods to problems of water waves, and problems of general viscous flow. Attention is given to time-dependent inelastic deformation of metals by boundary element methods, the determination of eigenvalues by boundary element methods, transient stress analysis of tunnels and caverns of arbitrary shape due to traveling waves, an analysis of hydrodynamic loads by boundary element methods, and acoustic emissions from submerged structures.
Some estimation formulae for continuous time-invariant linear systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Sidhu, G. S.
1975-01-01
In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
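The abstract does not reproduce the algorithm itself; as a rough illustration of a super-linearly convergent iteration for the algebraic Riccati equation, the sketch below implements a generic Newton-Kleinman iteration (each step solves a Lyapunov equation). The system matrices, the initial stabilizing gain, and the function name are illustrative assumptions; this is not the square-root formulation of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman_are(A, B, Q, R, K0, iters=20, tol=1e-12):
    """Solve A'P + PA - P B R^-1 B' P + Q = 0 by Newton-Kleinman iteration.

    K0 must stabilize A - B @ K0; each step solves one Lyapunov equation.
    """
    K, P_prev = K0, None
    for _ in range(iters):
        Ac = A - B @ K
        # Lyapunov step: Ac' P + P Ac = -(Q + K' R K)
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)
        if P_prev is not None and np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K

# Illustrative 2-state system (values are arbitrary).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[0.0, 1.0]])          # stabilizing initial gain
P, K = newton_kleinman_are(A, B, Q, R, K0)
print(P)
```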
ERIC Educational Resources Information Center
FRIEDMAN, STANLEY R.
Many studies have indicated the presence of a slump or inversion in the problem-solving efficiency of children at the fourth grade level. It has been suggested that this may be due to the interfering effect of the formation of complex hypotheses by the children. Since a tendency to respond rapidly would presumably inhibit the formation of complex…
DOT National Transportation Integrated Search
2006-01-01
Problem: Work zones on heavily traveled divided highways present problems to motorists in the form of traffic delays and increased accident risks due to sometimes reduced motorist guidance, dense traffic, and other driving difficulties. To minimize t...
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for an efficient and cost-effective production system. However, setup times exist between groups, which should be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling problems with learning effects has rarely been considered in the literature. Therefore, the current research considers a single machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC), incorporating some steps of the genetic algorithm, is proposed for the current problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small medium, medium, large medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII), and the particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all the instances of the different problem sizes.
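As a small illustration of the bi-objective setting (makespan and total weighted tardiness), the sketch below filters a set of candidate schedules down to the Pareto (non-dominated) front. The objective values are invented, and this shows only the selection step, not the HPABC algorithm itself.

```python
def pareto_front(points):
    """Return the non-dominated points when both objectives are minimized."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# (makespan, total weighted tardiness) of candidate group schedules -- made-up values.
candidates = [(120, 340), (118, 360), (130, 300), (118, 355), (125, 310)]
print(pareto_front(candidates))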
Tillage system and time post-liquid dairy manure: Effects on runoff, sediment and nutrients losses
USDA-ARS?s Scientific Manuscript database
Liquid manure applied in agricultural lands improves soil quality. However, incorrect management of manure may cause environmental problems due to sediments and nutrients losses associated to runoff. The aims of this work were to: (i) evaluate the time effect of post-liquid dairy manure (LDM) applic...
NASA Astrophysics Data System (ADS)
Shahriari, Mohammadreza
2016-06-01
The time-cost tradeoff problem is one of the most important and applicable problems in the project scheduling area. There are many factors that force managers to crash the project time. These factors include early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the finishing time of a project, and project managers intend to spend the lowest possible amount of money while achieving the maximum crashing time, both direct and indirect costs are influenced in the project, and here we face the time value of money. It means that when we crash the starting activities in a project, the extra investment will be tied up until the end date of the project; however, when we crash the final activities, the extra investment will be tied up for a much shorter period. This study presents a two-objective mathematical model for balancing project time compression against activity delays, providing a suitable decision tool for decision makers constrained by available facilities and project due times. The scheduling problem is also brought closer to real-world conditions by considering a nonlinear objective function and the time value of money. The presented problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.
Sijtsema, J J; Oldehinkel, A J; Veenstra, R; Verhulst, F C; Ormel, J
2014-06-01
Both structural (i.e., SES, familial psychopathology, family composition) and dynamic (i.e., parental warmth and rejection) family characteristics have been associated with aggressive and depressive problem development. However, it is unclear to what extent (changes in) dynamic family characteristics have an independent effect on problem development while accounting for stable family characteristics and comorbid problem development. This issue was addressed by studying problem development in a large community sample (N = 2,230; age 10-20) of adolescents using Linear Mixed models. Paternal and maternal warmth and rejection were assessed via the Egna Minnen Beträffande Uppfostran for Children (EMBU-C). Aggressive and depressive problems were assessed via subscales of the Youth/Adult Self-Report. Results showed that dynamic family characteristics independently affected the development of aggressive problems. Moreover, maternal rejection in preadolescence and increases in paternal rejection were associated with aggressive problems, whereas decreases in maternal rejection were associated with decreases in depressive problems over time. Paternal and maternal warmth in preadolescence was associated with fewer depressive problems during adolescence. Moreover, increases in paternal warmth were associated with fewer depressive problems over time. Aggressive problems were a stable predictor of depressive problems over time. Finally, those who increased in depressive problems became more aggressive during adolescence, whereas those who decreased in depressive problems became also less aggressive. Besides the effect of comorbid problems, problem development is to a large extent due to dynamic family characteristics, and in particular to changes in parental rejection, which leaves much room for parenting-based interventions.
Xuan, Ziming; Shaffer, Howard
2009-06-01
To examine behavioral patterns of actual Internet gamblers who experienced gambling-related problems and voluntarily closed their accounts. A nested case-control design was used to compare gamblers who closed their accounts because of gambling problems to those who maintained open accounts. Actual play patterns of in vivo Internet gamblers who subscribed to an Internet gambling site. 226 gamblers who closed accounts due to gambling problems were selected from a cohort of 47,603 Internet gamblers who subscribed to an Internet gambling site during February 2005; 226 matched-case controls were selected from the group of gamblers who did not close their accounts. Daily aggregates of behavioral data were collected during an 18-month study period. Main outcomes of interest were daily aggregates of stake, odds, and net loss, which were standardized by the daily aggregate number of bets. We also examined the number of bets to measure trajectory of gambling frequency. Account closers due to gambling problems experienced increasing monetary loss as the time to closure approached; they also increased their stake per bet. Yet they did not chase longer odds; their choices of wagers were more probabilistically conservative (i.e., short odds) compared with the controls. The changes of monetary involvement and risk preference occurred concurrently during the last few days prior to voluntary closing. Our finding of an involvement-seeking yet risk-averse tendency among self-identified problem gamblers challenges the notion that problem gamblers seek "long odds" during "chasing."
NASA Astrophysics Data System (ADS)
Ram, Paras; Joshi, Vimal Kumar; Sharma, Kushal; Walia, Mittu; Yadav, Nisha
2016-01-01
An attempt has been made to describe the effects of geothermal viscosity with viscous dissipation on the three-dimensional time-dependent boundary layer flow of magnetic nanofluids due to a stretchable rotating plate in the presence of a porous medium. The modelled governing time-dependent equations are transformed from a boundary value problem to an initial value problem, and thereafter solved by a fourth-order Runge-Kutta method in MATLAB with a shooting technique for the initial guess. The influences of mixed temperature, depth-dependent viscosity, and the rotation strength parameter on the flow field and temperature field generated on the plate surface are investigated. The derived results show direct impact on the problems of heat transfer in high speed computer disks (Herrero et al. [1]) and turbine rotor systems (Owen and Rogers [2]).
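The same boundary-value-to-initial-value idea (integrate with a Runge-Kutta scheme, adjust the unknown initial slope until the far boundary condition is hit) is sketched below on a deliberately simple stand-in problem, y'' = -y with y(0) = 0 and y(1) = 1, rather than the paper's coupled nanofluid equations; SciPy supplies the RK integrator and the root finder.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, y):
    # y[0] = y, y[1] = y'
    return [y[1], -y[0]]

def shoot(slope):
    """Integrate the IVP with a guessed initial slope; return the miss at t = 1."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], method="RK45", rtol=1e-9)
    return sol.y[0, -1] - 1.0          # want y(1) = 1

# Find the initial slope that satisfies the far boundary condition.
slope = brentq(shoot, 0.1, 5.0)
print("y'(0) =", slope, "(exact:", 1.0 / np.sin(1.0), ")")
```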
Multi-period project portfolio selection under risk considerations and stochastic income
NASA Astrophysics Data System (ADS)
Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood
2018-02-01
This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented to solve the problem. The algorithm is based on the genetic algorithm (GA), which is a prominent method for solving this type of problem. The GA is enhanced by a new solution representation and well-selected operators. It is also hybridized with a local search mechanism to obtain better solutions in a shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of some prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness and computation time. Finally, the proposed algorithm is combined with PSO to improve the computing time considerably.
1991-05-01
the problem of the frequency drift is still open. In this context, the cavity pulling has drawn a lot of attention. Today, to our knowledge, ...term maser frequency drift associated with the cavity pulling is a well known subject due to the high level of precision obtainable in principle by... microprocessors. The frequency pulling due to microwave Δm = ±1 transitions (Ramsey pulling) has been analyzed and shown to be important. Status of
NASA Astrophysics Data System (ADS)
Zhao, Dang-Jun; Song, Zheng-Yu
2017-08-01
This study proposes a multiphase convex programming approach for rapid reentry trajectory generation that satisfies path, waypoint and no-fly zone (NFZ) constraints on Common Aerial Vehicles (CAVs). Because the time when the vehicle reaches the waypoint is unknown, the trajectory of the vehicle is divided into several phases according to the prescribed waypoints, rendering a multiphase optimization problem with free final time. Due to the requirement of rapidity, the minimum flight time of each phase index is preferred over other indices in this research. The sequential linearization is used to approximate the nonlinear dynamics of the vehicle as well as the nonlinear concave path constraints on the heat rate, dynamic pressure, and normal load; meanwhile, the convexification techniques are proposed to relax the concave constraints on control variables. Next, the original multiphase optimization problem is reformulated as a standard second-order convex programming problem. Theoretical analysis is conducted to show that the original problem and the converted problem have the same solution. Numerical results are presented to demonstrate that the proposed approach is efficient and effective.
Behavioral pattern identification for structural health monitoring in complex systems
NASA Astrophysics Data System (ADS)
Gupta, Shalabh
Estimation of structural damage and quantification of structural integrity are critical for safe and reliable operation of human-engineered complex systems, such as electromechanical, thermofluid, and petrochemical systems. Damage due to fatigue crack is one of the most commonly encountered sources of structural degradation in mechanical systems. Early detection of fatigue damage is essential because the resulting structural degradation could potentially cause catastrophic failures, leading to loss of expensive equipment and human life. Therefore, for reliable operation and enhanced availability, it is necessary to develop capabilities for prognosis and estimation of impending failures, such as the onset of wide-spread fatigue crack damage in mechanical structures. This dissertation presents information-based online sensing of fatigue damage using the analytical tools of symbolic time series analysis ( STSA). Anomaly detection using STSA is a pattern recognition method that has been recently developed based upon a fixed-structure, fixed-order Markov chain. The analysis procedure is built upon the principles of Symbolic Dynamics, Information Theory and Statistical Pattern Recognition. The dissertation demonstrates real-time fatigue damage monitoring based on time series data of ultrasonic signals. Statistical pattern changes are measured using STSA to monitor the evolution of fatigue damage. Real-time anomaly detection is presented as a solution to the forward (analysis) problem and the inverse (synthesis) problem. (1) the forward problem - The primary objective of the forward problem is identification of the statistical changes in the time series data of ultrasonic signals due to gradual evolution of fatigue damage. (2) the inverse problem - The objective of the inverse problem is to infer the anomalies from the observed time series data in real time based on the statistical information generated during the forward problem. A computer-controlled special-purpose fatigue test apparatus, equipped with multiple sensing devices (e.g., ultrasonics and optical microscope) for damage analysis, has been used to experimentally validate the STSA method for early detection of anomalous behavior. The sensor information is integrated with a software module consisting of the STSA algorithm for real-time monitoring of fatigue damage. Experiments have been conducted under different loading conditions on specimens constructed from the ductile aluminium alloy 7075 - T6. The dissertation has also investigated the application of the STSA method for early detection of anomalies in other engineering disciplines. Two primary applications include combustion instability in a generic thermal pulse combustor model and whirling phenomenon in a typical misaligned shaft.
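The dissertation abstract describes symbolic time series analysis: partition a signal into symbols, build a fixed-structure Markov model, and track an anomaly measure as the state probability vector drifts from a nominal reference. The sketch below is a generic, much-simplified version of that pipeline on synthetic data; the uniform partitioning, alphabet size, and Euclidean distance measure are illustrative choices, not the ones used in the dissertation.

```python
import numpy as np

def symbolize(signal, n_symbols=4):
    """Uniform partition of the signal range into n_symbols bins."""
    edges = np.linspace(signal.min(), signal.max(), n_symbols + 1)[1:-1]
    return np.digitize(signal, edges)

def state_probabilities(symbols, n_symbols=4):
    """Occupation probabilities of the symbol states (simplified state vector)."""
    counts = np.bincount(symbols, minlength=n_symbols).astype(float)
    return counts / counts.sum()

rng = np.random.default_rng(0)
nominal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
damaged = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.40 * rng.standard_normal(2000)

p_ref = state_probabilities(symbolize(nominal))
p_now = state_probabilities(symbolize(damaged))

# Anomaly measure: distance between current and reference probability vectors.
print("anomaly measure:", np.linalg.norm(p_now - p_ref))
```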
Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.
Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M
2013-01-01
Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness ensures that there are non-unique solutions to this problem, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials is sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed from this method are compared to those from existing methods, and validated with actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.
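As a rough illustration of "sparsity in the wavelet domain", the snippet below decomposes a signal with an orthogonal wavelet, keeps only the largest coefficients, and reconstructs. PyWavelets is assumed to be available, and this simple hard thresholding stands in for the paper's local search over reconstructed heart-surface potentials.

```python
import numpy as np
import pywt

def sparsify(signal, wavelet="db4", keep_fraction=0.1):
    """Keep only the largest wavelet coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet)
    flat = np.concatenate([np.abs(c) for c in coeffs])
    thresh = np.quantile(flat, 1.0 - keep_fraction)
    coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 1, 512)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(512)
sparse_version = sparsify(noisy)
print(sparse_version.shape)
```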
ScaffoldScaffolder: solving contig orientation via bidirected to directed graph reduction.
Bodily, Paul M; Fujimoto, M Stanley; Snell, Quinn; Ventura, Dan; Clement, Mark J
2016-01-01
The contig orientation problem, which we formally define as the MAX-DIR problem, has at times been addressed cursorily and at times using various heuristics. In setting forth a linear-time reduction from the MAX-CUT problem to the MAX-DIR problem, we prove the latter is NP-complete. We compare the relative performance of a novel greedy approach with several other heuristic solutions. Our results suggest that our greedy heuristic algorithm not only works well but also outperforms the other algorithms due to the nature of scaffold graphs. Our results also demonstrate a novel method for identifying inverted repeats and inversion variants, both of which contradict the basic single-orientation assumption. Such inversions have previously been noted as being difficult to detect and are directly involved in the genetic mechanisms of several diseases. http://bioresearch.byu.edu/scaffoldscaffolder. paulmbodily@gmail.com. Supplementary data are available at Bioinformatics online.
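The MAX-DIR problem asks for an orientation (flip / no flip) of each contig that satisfies as many scaffold-graph edges as possible. The greedy sketch below repeatedly flips whichever contig improves the number of satisfied edges on a toy edge list; it illustrates the flavour of a greedy heuristic for this problem, not the ScaffoldScaffolder algorithm itself, and the contig names and edges are invented.

```python
def satisfied(edges, orient):
    """Count edges whose required relative orientation is met."""
    return sum(1 for u, v, same in edges if (orient[u] == orient[v]) == same)

def greedy_orient(nodes, edges):
    orient = {n: True for n in nodes}           # True = forward strand
    improved = True
    while improved:
        improved = False
        for n in nodes:
            before = satisfied(edges, orient)
            orient[n] = not orient[n]
            if satisfied(edges, orient) > before:
                improved = True                  # keep the flip
            else:
                orient[n] = not orient[n]        # undo
    return orient

# Toy scaffold graph: (contig_a, contig_b, require_same_orientation)
edges = [("c1", "c2", True), ("c2", "c3", False), ("c1", "c3", False), ("c3", "c4", True)]
print(greedy_orient(["c1", "c2", "c3", "c4"], edges))
```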
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.
2016-09-01
The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding cost of a product mix. When multiple items at multiple levels are considered with capacity restrictions, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have inevitably failed due to problem size, computational complexity, and time. However, the authors were successful in developing a PSO-based technique, namely the iterative improvement binary particle swarm optimization technique, to address very large capacitated multi-item multi-level lot sizing (CMIMLLS) problems. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time, and an iterative improvement local search mechanism is then employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; thus the IIBPSO method is found to be best and shows excellent results.
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop. The constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and the middle interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the time of computations. The Wilcoxon paired-rank statistical test confirmed this conclusion.
Optimal perturbations for nonlinear systems using graph-based optimal transport
NASA Astrophysics Data System (ADS)
Grover, Piyush; Elamvazhuthi, Karthik
2018-06-01
We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
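A minimal version of the discrete Monge-Kantorovich step with quadratic cost can be posed as a linear program over a transport plan; the sketch below does this with SciPy's linprog on two tiny discrete measures. The measures and support points are made up, and the graph-based pseudo-time flow of the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Source and target measures on a few 1-D support points (illustrative values).
x_src = np.array([0.0, 1.0, 2.0]);  mu = np.array([0.5, 0.3, 0.2])
x_tgt = np.array([0.5, 1.5]);       nu = np.array([0.6, 0.4])

cost = (x_src[:, None] - x_tgt[None, :]) ** 2      # quadratic transport cost
m, n = cost.shape

# Equality constraints: row sums = mu, column sums = nu, over the flattened plan.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0               # mass leaving source point i
for j in range(n):
    A_eq[m + j, j::n] = 1.0                        # mass arriving at target point j
b_eq = np.concatenate([mu, nu])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x.reshape(m, n))                         # optimal transport plan
```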
Image restoration, uncertainty, and information.
Yu, F T
1969-01-01
Some of the physical interpretations of image restoration are discussed. From the theory of information, the unrealizability of an inverse filter can be explained by the degradation of information, which is due to distortion in the recorded image. Image restoration is a time and space problem, which can be recognized from the theory of relativity (the problem of image restoration is related to Heisenberg's uncertainty principle in quantum mechanics). A detailed discussion of the relationship between information and energy is given. Two general results may be stated: (1) the restoration of the image from the distorted signal is possible only if it satisfies the detectability condition; however, the restored image, at best, can only approach the maximum allowable time criterion. (2) The restoration of an image by superimposing the distorted signal (due to smearing) is a physically unrealizable method; however, this restoration procedure may be achieved by the expenditure of an infinite amount of energy.
Effect of a Starting Model on the Solution of a Travel Time Seismic Tomography Problem
NASA Astrophysics Data System (ADS)
Yanovskaya, T. B.; Medvedev, S. V.; Gobarenko, V. S.
2018-03-01
In the problems of three-dimensional (3D) travel time seismic tomography where the data are travel times of diving waves and the starting model is a system of plane layers where the velocity is a function of depth alone, the solution turns out to strongly depend on the selection of the starting model. This is due to the fact that in the different starting models, the rays between the same points can intersect different layers, which makes the tomography problem fundamentally nonlinear. This effect is demonstrated by the model example. Based on the same example, it is shown how the starting model should be selected to ensure a solution close to the true velocity distribution. The starting model (the average dependence of the seismic velocity on depth) should be determined by the method of successive iterations at each step of which the horizontal velocity variations in the layers are determined by solving the two-dimensional tomography problem. An example illustrating the application of this technique to the P-wave travel time data in the region of the Black Sea basin is presented.
Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet
2013-01-01
Positron emission tomography computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to problems due to respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation correction errors. To address these problems, we present a method for motion correction that relies on respiratory-gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are the BSpline and the symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory-gated CT images obtained from 7 patients. Our results show that overall the BSpline registration algorithm with the reference optimization approach gives the best results.
Computational problems and signal processing in SETI
NASA Technical Reports Server (NTRS)
Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard
1991-01-01
The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10^11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line, where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status of their solution.
Algorithms for elasto-plastic-creep postbuckling
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1984-01-01
This paper considers the development of an improved constrained time stepping scheme which can efficiently and stably handle the pre-post-buckling behavior of general structure subject to high temperature environments. Due to the generality of the scheme, the combined influence of elastic-plastic behavior can be handled in addition to time dependent creep effects. This includes structural problems exhibiting indefinite tangent properties. To illustrate the capability of the procedure, several benchmark problems employing finite element analyses are presented. These demonstrate the numerical efficiency and stability of the scheme. Additionally, the potential influence of complex creep histories on the buckling characteristics is considered.
Teaching as Regulation and Dealing with Complexity
ERIC Educational Resources Information Center
Boshuizen, H. P. A.
2016-01-01
At an abstract level, teaching a class can be perceived as one big regulation problem. For an optimal result, teachers must continuously (re)align their goals and sub-goals, and need to get timely and valid information on how they are doing in reaching these goals. This discussion describes the specific difficulties due to the time characteristics…
Water pollution and habitat degradation in the Gulf of Thailand.
Cheevaporn, Voravit; Menasveta, Piamsak
2003-01-01
The Gulf of Thailand has been a major marine resource for Thai people for a long time. However, recent industrialization and community development have exerted considerable stress on the marine environments and provoked habitat degradation. The following pollution problems in the Gulf have been prioritized and are discussed in detail: (1) Untreated municipal and industrial wastewater is considered to be the most serious problem of the country due to limited wastewater treatment facilities in the area. (2) Eutrophication is an emerging problem in the Gulf of Thailand. Fortunately, the major species of phytoplankton that have been reported as the cause of red tide phenomena were non-toxic species such as Noctiluca sp. and Trichodesmium sp. (3) Few problems have been documented from trace metal contamination in the Gulf of Thailand, and the public health threat from seafood contamination does not appear to be significant yet. (4) Petroleum hydrocarbon residue contamination is not a problem, although a few spills from small oil tankers have been recorded. A rapid decrease in mangrove forests, coral reefs, and fisheries resources due to mismanagement is also discussed.
NASA Astrophysics Data System (ADS)
Martirosyan, A. N.; Davtyan, A. V.; Dinunts, A. S.; Martirosyan, H. A.
2018-04-01
The purpose of this article is to investigate a problem of closing cracks by building up a layer of sediments on surfaces of a crack in an infinite thermoelastic medium in the presence of a flow of fluids with impurities. The statement of the problem of closing geophysical cracks in the presence of a fluid flow is presented with regard to the thermoelastic stress and the influence of the impurity deposition in the liquid on the crack surfaces due to thermal diffusion at the fracture closure. The Wiener–Hopf method yields an analytical solution in the special case without friction. Numerical calculations are performed in this case and the dependence of the crack closure time on the coordinate is plotted. A similar spatial problem is also solved. These results generalize the results of previous studies of geophysical cracks and debris in rocks, where the closure of a crack due to temperature effects is studied without taking the elastic stresses into account.
Mixed Integer Programming and Heuristic Scheduling for Space Communication
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2013-01-01
An approach for optimal planning and scheduling of a communication network was created in which the nodes within the network communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, and the constrained optimization problem was solved using heuristic optimization. The communication network consists of space and ground assets, with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, and at other times communication is impossible, as the asset could be inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method over the proposed formulation is approximately an order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem with a non-differentiable cost function and nonlinear constraints can be converted into a continuous-variable problem, whose solution is possible.
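The abstract's key trick is a penalty function that turns integrality requirements into a smooth term so the whole problem can be handled by continuous or heuristic optimization. The exact penalty used in the paper is not given here; one common choice, sketched below, penalizes the distance of each variable from the nearest integer with a sin-squared term and minimizes the augmented objective with SciPy. The stand-in objective and target values are invented.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Stand-in convex cost (a real cost would model link rates and constraints).
    return np.sum((x - np.array([1.3, 2.7, 0.2])) ** 2)

def integrality_penalty(x):
    # Zero exactly at integer values, positive elsewhere.
    return np.sum(np.sin(np.pi * x) ** 2)

def augmented(x, weight=50.0):
    return objective(x) + weight * integrality_penalty(x)

res = minimize(augmented, x0=np.array([1.0, 3.0, 0.0]), method="Nelder-Mead")
print(np.round(res.x, 3))      # should land near the integers (1, 3, 0)
```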
Absenteeism due to voice disorders in female teachers: a public health problem.
de Medeiros, Adriane Mesquita; Assunção, Ada Ávila; Barreto, Sandhi Maria
2012-11-01
This study estimates the prevalence of absenteeism due to voice disorders among teachers and investigates individual and contextual factors associated with it. The study involved 1,980 teachers from 76 municipal schools. The response rate was 85%. The survey was carried out between May 2004 and July 2005 using a self-administered structured questionnaire containing sociodemographic, lifestyle, health, and work-related questions. The dependent variable was obtained from answers to the following question: In the last 2 weeks, have you missed work because of voice problems? Logistic regression analysis was used to determine the associated factors. Voice-related absenteeism in the prior 2 weeks was reported by 66 teachers (3.35%). During their entire careers, approximately one-third of teachers missed work at least once due to voice problems. In the final model, factors associated with recent absenteeism were as follows: witnessing violence by students or parents one or more times (OR = 2.10; 95% CI = 1.14-3.90), presence of depression or anxiety (OR = 2.03; 95% CI = 1.09-3.78), upper respiratory problems in the prior 2 weeks (OR = 2.85; 95% CI = 1.53-5.29), and absenteeism because of voice problems during the preceding 6 months (OR = 15.79; 95% CI = 8.18-30.45). The results encourage new approaches to the problems of absenteeism in the educational sector and contribute to addressing the weaknesses of economic and administrative approaches to the phenomenon.
Time-optimal aircraft pursuit-evasion with a weapon envelope constraint
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Duke, E. L.
1990-01-01
The optimal pursuit-evasion problem between two aircraft, including nonlinear point-mass vehicle models and a realistic weapon envelope, is analyzed. Using a linear combination of flight time and the square of the vehicle acceleration as the performance index, a closed-form solution is obtained in nonlinear feedback form. Due to its modest computational requirements, this guidance law can be used for onboard real-time implementation.
A Survey of the Isentropic Euler Vortex Problem Using High-Order Methods
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
The flux reconstruction (FR) method offers a simple, efficient, and easy to implement method, and it has been shown to equate to a differential approach to discontinuous Galerkin (DG) methods. The FR method is also accurate to an arbitrary order and the isentropic Euler vortex problem is used here to empirically verify this claim. This problem is widely used in computational fluid dynamics (CFD) to verify the accuracy of a given numerical method due to its simplicity and known exact solution at any given time. While verifying our FR solver, multiple obstacles emerged that prevented us from achieving the expected order of accuracy over short and long amounts of simulation time. It was found that these complications stemmed from a few overlooked details in the original problem definition combined with the FR and DG methods achieving high-accuracy with minimal dissipation. This paper is intended to consolidate the many versions of the vortex problem found in literature and to highlight some of the consequences if these overlooked details remain neglected.
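For reference, one commonly used form of the isentropic vortex initial condition is sketched below; as the abstract notes, several variants circulate in the literature, so the free-stream values, vortex strength, and centering chosen here are illustrative and may not match the exact definition used in the paper.

```python
import numpy as np

def isentropic_vortex(x, y, x0=5.0, y0=5.0, beta=5.0, gamma=1.4):
    """Primitive variables (rho, u, v, p) for a common isentropic vortex variant."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    factor = beta / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2))
    u = 1.0 - factor * (y - y0)                # free-stream (1, 1) plus perturbation
    v = 1.0 + factor * (x - x0)
    T = 1.0 - (gamma - 1.0) * beta ** 2 / (8.0 * gamma * np.pi ** 2) * np.exp(1.0 - r2)
    rho = T ** (1.0 / (gamma - 1.0))
    p = rho ** gamma
    return rho, u, v, p

xv, yv = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
rho, u, v, p = isentropic_vortex(xv, yv)
print(rho.min(), rho.max())
```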
A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study
NASA Astrophysics Data System (ADS)
Onut, S.; Kamber, M. R.; Altay, G.
2014-03-01
Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. All points should be visited only once and by one vehicle on one route. The total demand on one route should not exceed the capacity of the vehicle assigned to that route. VRPs vary due to real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs. There are two types of vehicles in our problem. In this study, real-world data obtained from a company operating in the LPG sector in Turkey are used. An optimization model is established for planning daily routes and vehicle assignments. The model is solved by GAMS and an optimal solution is found in a reasonable time.
The Ozone Problem
2017-04-10
Many factors impact ground-level ozone development, including temperature, wind speed and direction, time of day, and driving patterns. Due to its dependence on weather conditions, ozone is typically a summertime pollutant and a chief component of summertime smog.
Evaluation of corrosion inhibitor : final report.
DOT National Transportation Integrated Search
1980-05-01
Solution to the problem of deterioration of bridge decks due to the corrosion of embedded steel has been sought by engineers for a long time. The purpose of the study was to evaluate, under laboratory conditions, the properties of concrete using a co...
Micalizzi, Lauren; Ronald, Angelica; Saudino, Kimberly J.
2015-01-01
A genetically informed cross-lagged model was applied to twin data to explore etiological links between autistic-like traits and affective problems in early childhood. The sample comprised 310 same-sex twin pairs (143 monozygotic and 167 dizygotic; 53% male). Autistic-like traits and affective problems were assessed at ages 2 and 3 using parent ratings. Both constructs were related within and across age (r = .30−.53) and showed moderate stability (r = .45−.54). Autistic-like traits and affective problems showed genetic and environmental influences at both ages. Whereas at age 2, the covariance between autistic-like traits and affective problems was entirely due to environmental influences (shared and nonshared), at age 3, genetic factors also contributed to the covariance between constructs. The stability paths, but not the cross-lagged paths, were significant, indicating that there is stability in both autistic-like traits and affective problems but they do not mutually influence each other across age. Stability effects were due to genetic, shared, and nonshared environmental influences. Substantial novel genetic and nonshared environmental influences emerge at age 3 and suggest change in the etiology of these constructs over time. During early childhood, autistic-like traits tend to occur alongside affective problems and partly overlapping genetic and environmental influences explain this association. PMID:26456961
Stochastic scheduling on a repairable manufacturing system
NASA Astrophysics Data System (ADS)
Li, Wei; Cao, Jinhua
1995-08-01
In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later when the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having random due dates that are exponentially distributed, which generalize some previous results.
Analysis of the geophysical data using a posteriori algorithms
NASA Astrophysics Data System (ADS)
Voskoboynikova, Gyulnara; Khairetdinov, Marat
2016-04-01
The problems of monitoring, prediction, and prevention of extraordinary natural and technogenic events are among the priority problems of today. These events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of utilized ammunition stockpiles, and numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Therefore, improving the accuracy of parameter estimation from the original records under high noise is an important problem. As is known, the main measurement errors arise due to the influence of external noise, the difference between the real and model structures of the medium, imprecision in determining the time at the event epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for the joint detection and estimation of arrival times in quasi-periodic waveform sequences in problems of geophysical monitoring, with improved accuracy. Existing alternative approaches to these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the location of a borehole seismic source in production drilling.
Time Crystal Platform: From Quasicrystal Structures in Time to Systems with Exotic Interactions.
Giergiel, Krzysztof; Miroszewski, Artur; Sacha, Krzysztof
2018-04-06
Time crystals are quantum many-body systems that, due to interactions between particles, are able to spontaneously self-organize their motion in a periodic way in time by analogy with the formation of crystalline structures in space in condensed matter physics. In solid state physics properties of space crystals are often investigated with the help of external potentials that are spatially periodic and reflect various crystalline structures. A similar approach can be applied for time crystals, as periodically driven systems constitute counterparts of spatially periodic systems, but in the time domain. Here we show that condensed matter problems ranging from single particles in potentials of quasicrystal structure to many-body systems with exotic long-range interactions can be realized in the time domain with an appropriate periodic driving. Moreover, it is possible to create molecules where atoms are bound together due to destructive interference if the atomic scattering length is modulated in time.
Time Crystal Platform: From Quasicrystal Structures in Time to Systems with Exotic Interactions
NASA Astrophysics Data System (ADS)
Giergiel, Krzysztof; Miroszewski, Artur; Sacha, Krzysztof
2018-04-01
Time crystals are quantum many-body systems that, due to interactions between particles, are able to spontaneously self-organize their motion in a periodic way in time by analogy with the formation of crystalline structures in space in condensed matter physics. In solid state physics properties of space crystals are often investigated with the help of external potentials that are spatially periodic and reflect various crystalline structures. A similar approach can be applied for time crystals, as periodically driven systems constitute counterparts of spatially periodic systems, but in the time domain. Here we show that condensed matter problems ranging from single particles in potentials of quasicrystal structure to many-body systems with exotic long-range interactions can be realized in the time domain with an appropriate periodic driving. Moreover, it is possible to create molecules where atoms are bound together due to destructive interference if the atomic scattering length is modulated in time.
Application of higher-order cepstral techniques in problems of fetal heart signal extraction
NASA Astrophysics Data System (ADS)
Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.
1996-10-01
Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated maternal and fetal ECG complexes obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the maternal complex after transformation to the cepstral domains, are first obtained, followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation to the time domain result in fetal complex recovery. However, three problems have been identified with second-order based cepstral techniques that needed rectification in this paper. These are (1) errors resulting from the phase unwrapping algorithms and leading to fetal complex perturbation, (2) the unavoidable conversion of noise statistics from Gaussianity to non-Gaussianity due to the highly non-linear nature of the homomorphic transform, which warrants stringent noise cancellation routines, and (3) due to the aforementioned problems in (1) and (2), the difficulty of adaptively optimizing windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e. the task of 'zooming' in on weak fetal complexes requires more processing time). The use of the third-order based high resolution differential cepstrum technique results in recovery of the delay of the order of 120 milliseconds.
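A bare-bones real cepstrum (a second-order statistics tool), which underlies the homomorphic techniques described above, can be computed with a couple of FFTs. The sketch below builds a synthetic composite of a wavelet plus a weaker delayed copy, standing in for overlapping maternal and fetal complexes, and skips the phase unwrapping needed for the complex cepstrum and the third-order extensions discussed in the abstract; the signal and delay values are invented.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    return np.real(np.fft.ifft(np.log(np.abs(np.fft.fft(x)))))

# Synthetic composite: a "maternal" wavelet plus a weaker copy delayed by 125 samples.
wavelet = 0.7 ** np.arange(30)
signal = np.zeros(500)
signal[100:130] += wavelet
signal[225:255] += 0.4 * wavelet

ceps = real_cepstrum(signal)
# The 125-sample delay shows up as the dominant cepstral peak in this range.
print(np.argmax(np.abs(ceps[50:250])) + 50)   # expected: 125
```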
Frilander, Heikki; Lallukka, Tea; Viikari-Juntura, Eira; Heliövaara, Markku; Solovieva, Svetlana
2016-01-01
Disability retirement causes a significant burden on the society and affects the well-being of individuals. Early health problems as determinants of disability retirement have received little attention. The objective was to study, whether interrupting compulsory military service is an early indicator of disability retirement among Finnish men and whether seeking medical advice during military service increases the risk of all-cause disability retirement and disability retirement due to mental disorders and musculoskeletal diseases. We also looked at secular trends in these associations. We examined a nationally representative sample of 2069 men, who had entered military service during 1967–1996. We linked military service health records with cause-specific register data on disability retirement from 1968 to 2008. Secular trends were explored in three service time strata. We used the Cox regression model to estimate proportional hazard ratios and their 95% confidence intervals. During the follow-up time altogether 140 (6.8%) men retired due to disability, mental disorders being the most common cause. The men who interrupted service had a remarkably higher cumulative incidence of disability retirement (18.9%). The associations between seeking medical advice during military service and all-cause disability retirement were similar across the three service time cohorts (overall hazard ratio 1.40 per one standard deviation of the number of visits; 95% confidence interval 1.26–1.56). Visits due to mental problems predicted disability retirement due to mental disorders in the men who served between 1987 and 1996 and a tendency for a similar cause-specific association was seen for musculoskeletal diseases in the men who served in 1967–1976. In conclusion, health problems—in particular mental problems—during late adolescence are strong determinants of disability retirement. Call-up examinations and military service provide access to the entire age cohort of men, where persons at risk for work disability can be identified and early preventive measures initiated. PMID:27533052
NASA Astrophysics Data System (ADS)
Bernede, Adrien; Poëtte, Gaël
2018-02-01
In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte Carlo (MC) scheme for the transport phase. A common resolution strategy consists of a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver that improves the accuracy of the solutions, avoids numerical instabilities, and is less sensitive to the time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.
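One standard building block for Monte Carlo transport with time-dependent cross sections is the null-collision (thinning) technique, sketched below for a single particle in a purely absorbing medium. It is given only to illustrate how an MC particle can see a cross section that changes along its flight path; it is not the UMC scheme of the paper, and the cross-section profile and parameters are invented.

```python
import math
import random

def sample_interaction_time(sigma_of_t, sigma_max, t0, speed=1.0, t_end=10.0):
    """Sample the next real interaction time when sigma depends on time (thinning)."""
    t = t0
    while t < t_end:
        # Candidate collision drawn from the constant majorant cross section.
        t += random.expovariate(sigma_max * speed)
        # Accept it as a real collision with probability sigma(t) / sigma_max.
        if random.random() < sigma_of_t(t) / sigma_max:
            return t
    return None                                # survived to the census time

# Illustrative cross section that decays as the medium depletes.
sigma = lambda t: 0.8 * math.exp(-0.3 * t)
random.seed(0)
times = [sample_interaction_time(sigma, sigma_max=0.8, t0=0.0) for _ in range(5)]
print(times)
```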
A Simple Algorithm for the Metric Traveling Salesman Problem
NASA Technical Reports Server (NTRS)
Grimm, M. J.
1984-01-01
An algorithm was designed for a wire list net sort problem. For this, a branch and bound algorithm for the metric traveling salesman problem is presented. The algorithm is a best-bound-first recursive descent where the bound is based on the triangle inequality. The bounded subsets are defined by the relative order of the first K of the N cities (i.e., a K-city subtour). When K equals N, the bound is the length of the tour. The algorithm is implemented as a one-page subroutine written in the C programming language for the VAX 11/750. Average execution times for randomly selected planar points using the Euclidean metric are 0.01, 0.05, 0.42, and 3.13 seconds for ten, fifteen, twenty, and twenty-five cities, respectively. Maximum execution times for a hundred cases are less than eleven times the averages. The speed of the algorithm is due to an initial ordering algorithm that is an N-squared operation. The algorithm also solves the related problem where the tour does not return to the starting city and the starting and/or ending cities may be specified. It is possible to extend the algorithm to solve a nonsymmetric problem satisfying the triangle inequality.
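Reading the description above (best-bound-first search where a node fixes the relative order of the first K cities and the bound is the length of that closed K-city subtour, valid by the triangle inequality), a compact sketch of that scheme might look like the following. It is a plausible reconstruction for illustration, not the original C subroutine, and the test instance is made up.

```python
import heapq
import math

def tour_length(order, dist):
    return sum(dist[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))

def metric_tsp(dist):
    """Best-bound-first branch and bound; a node fixes the order of the first K cities."""
    n = len(dist)
    heap = [(0.0, [0])]                        # (closed-subtour bound, partial order)
    best_len, best_tour = math.inf, None
    while heap:
        bound, order = heapq.heappop(heap)
        if bound >= best_len:
            break                              # best-first: nothing better remains
        if len(order) == n:
            best_len, best_tour = bound, order
            continue
        nxt = len(order)                       # branch: insert city K+1 into every edge
        for pos in range(1, len(order) + 1):
            child = order[:pos] + [nxt] + order[pos:]
            heapq.heappush(heap, (tour_length(child, dist), child))
    return best_len, best_tour

# Small planar instance; Euclidean distances satisfy the triangle inequality.
pts = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
print(metric_tsp(dist))
```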
Solving the transient water age distribution problem in environmental flow systems
NASA Astrophysics Data System (ADS)
Cornaton, F. J.
2011-12-01
The temporal evolution of groundwater age and its frequency distributions can display important changes as flow regimes vary due to the natural change in climate and hydrologic conditions and/or to human induced pressures on the resource to satisfy the water demand. Groundwater age being nowadays frequently used to investigate reservoir properties and recharge conditions, special attention needs to be put on the way this property is characterized, would it be using isotopic methods, multiple tracer techniques, or mathematical modelling. Steady-state age frequency distributions can be modelled using standard numerical techniques, since the general balance equation describing age transport under steady-state flow conditions is exactly equivalent to a standard advection-dispersion equation. The time-dependent problem is however described by an extended transport operator that incorporates an additional coordinate for water age. The consequence is that numerical solutions can hardly be achieved, especially for real 3-D applications over large time periods of interest. The absence of any robust method has thus left us in the quantitative hydrogeology community dodging the issue of transience. Novel algorithms for solving the age distribution problem under time-varying flow regimes are presented and, for some specific configurations, extended to the problem of generalized component exposure time. The solution strategy is based on the combination of the Laplace Transform technique applied to the age (or exposure time) coordinate with standard time-marching schemes. The method is well-suited for groundwater problems with possible density-dependency of fluid flow (e.g. coupled flow and heat/salt concentration problems), but also presents significance to the homogeneous flow (compressible case) problem. The approach is validated using 1-D analytical solutions and exercised on some demonstration problems that are relevant to topical issues in groundwater age, including analysis of transfer times in the vadose zone, aquifer-aquitard interactions and the induction of transient age distributions when a well pump is started.
CENTRALIZED MANAGEMENT OF SMALL TREATMENT PLANTS USING INSTRUMENTS AND REMOTE ALARMS
The operation and maintenance of small treatment plants and associated lift stations pose unique and difficult problems to the authority responsible for their performance. Due to financial and manpower limitations, they must operate unattended the majority of the time. Undetected...
Exact and Heuristic Algorithms for Runway Scheduling
NASA Technical Reports Server (NTRS)
Malik, Waqar A.; Jung, Yoon C.
2016-01-01
This paper explores the Single Runway Scheduling (SRS) problem with arrivals, departures, and crossing aircraft on the airport surface. Constraints for wake vortex separations, departure area navigation separations and departure time window restrictions are explicitly considered. The main objective of this research is to develop exact and heuristic based algorithms that can be used in real-time decision support tools for Air Traffic Control Tower (ATCT) controllers. The paper provides a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the SRS problem, but may prove unusable for application in real-time environment due to large computation times for moderate sized problems. We next propose a second algorithm that uses heuristics to restrict the search space for the DP based algorithm. A third algorithm based on a combination of insertion and local search (ILS) heuristics is then presented. Simulation conducted for the east side of Dallas/Fort Worth International Airport allows comparison of the three proposed algorithms and indicates that the ILS algorithm performs favorably in its ability to find efficient solutions and its computation times.
Multitarget-multisensor management for decentralized sensor networks
NASA Astrophysics Data System (ADS)
Tharmarasa, R.; Kirubarajan, T.; Sinha, A.; Hernandez, M. L.
2006-05-01
In this paper, we consider the problem of sensor resource management in decentralized tracking systems. Due to the availability of cheap sensors, it is possible to use a large number of sensors and a few fusion centers (FCs) to monitor a large surveillance region. Even though a large number of sensors are available, due to frequency, power and other physical limitations, only a few of them can be active at any one time. The problem is then to select sensor subsets that should be used by each FC at each sampling time in order to optimize the tracking performance subject to their operational constraints. In a recent paper, we proposed an algorithm to handle the above issues for joint detection and tracking, without using simplistic clustering techniques that are standard in the literature. However, in that paper, a hierarchical architecture with feedback at every sampling time was considered, and the sensor management was performed only at a central fusion center (CFC). However, in general, it is not possible to communicate with the CFC at every sampling time, and in many cases there may not even be a CFC. Sometimes, communication between CFC and local fusion centers might fail as well. Therefore performing sensor management only at the CFC is not viable in most networks. In this paper, we consider an architecture in which there is no CFC, each FC communicates only with the neighboring FCs, and communications are restricted. In this case, each FC has to decide which sensors are to be used by itself at each measurement time step. We propose an efficient algorithm to handle the above problem in real time. Simulation results illustrating the performance of the proposed algorithm are also presented.
Quenching rate for a nonlocal problem arising in the micro-electro mechanical system
NASA Astrophysics Data System (ADS)
Guo, Jong-Shenq; Hu, Bei
2018-03-01
In this paper, we study the quenching rate of the solution for a nonlocal parabolic problem which arises in the study of the micro-electro mechanical system. This question is equivalent to the stabilization of the solution to the transformed problem in self-similar variables. First, some a priori estimates are provided. In order to construct a Lyapunov function, due to the lack of time monotonicity property, we then derive some very useful and challenging estimates by a delicate analysis. Finally, with this Lyapunov function, we prove that the quenching rate is self-similar which is the same as the problem without the nonlocal term, except the constant limit depends on the solution itself.
Improving the image discontinuous problem by using color temperature mapping method
NASA Astrophysics Data System (ADS)
Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming
2011-09-01
This article mainly focuses on image processing for the radial imaging capsule endoscope (RICE). First, RICE was used to capture the images; in the experiment, the intestines of a piglet were imaged. However, the images captured by RICE were blurred because RICE suffers from aberration problems in the image center, and low light uniformity further degrades the image quality. Image processing can be used to mitigate these problems. Therefore, images captured at different times are connected using the Pearson correlation coefficient algorithm, and the color temperature mapping method is used to improve the discontinuity problem in the connection region.
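A minimal sketch of the image-connection step, assuming two grayscale strips stored as NumPy arrays: the column overlap is chosen by maximizing the Pearson correlation coefficient, and a simple linear cross-fade stands in for the color temperature mapping used to hide the seam (the mapping described in the paper is not reproduced here).

    import numpy as np

    def pearson(a, b):
        a, b = a.ravel().astype(float), b.ravel().astype(float)
        a -= a.mean(); b -= b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def best_overlap(img1, img2, min_ov=10, max_ov=100):
        # Find the column overlap between the right edge of img1 and the left edge
        # of img2 that maximizes the Pearson correlation coefficient.
        best_r, best_ov = -2.0, min_ov
        for ov in range(min_ov, max_ov + 1):
            r = pearson(img1[:, -ov:], img2[:, :ov])
            if r > best_r:
                best_r, best_ov = r, ov
        return best_ov, best_r

    def blend(img1, img2, ov):
        # Linearly cross-fade the overlap region -- a simplified stand-in for the
        # color temperature mapping that smooths the seam.
        w = np.linspace(1.0, 0.0, ov)[None, :]
        seam = img1[:, -ov:] * w + img2[:, :ov] * (1.0 - w)
        return np.hstack([img1[:, :-ov], seam, img2[:, ov:]])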
Fast solver for large scale eddy current non-destructive evaluation problems
NASA Astrophysics Data System (ADS)
Lei, Naiguang
Eddy current testing plays a very important role in non-destructive evaluations of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material property or defects in the test specimen, the induced eddy current paths are perturbed and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of the test specimen and the inspection environments, the availability of theoretical simulation models is extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems. Theoretical models generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed form solution, it is generally not possible to obtain largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, due to the huge time consumption in the case of large scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. Validation of the accuracy of this model is demonstrated via comparison with experimental measurements of steam generator tube wall defects. These simulations generating two-dimension raster scan data typically takes one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and GPU-based implementation is also investigated in this research to reduce the computational time.
Calibration of decadal ensemble predictions
NASA Astrophysics Data System (ADS)
Pasternack, Alexander; Rust, Henning W.; Bhend, Jonas; Liniger, Mark; Grieger, Jens; Müller, Wolfgang; Ulbrich, Uwe
2017-04-01
Decadal climate predictions are of great socio-economic interest due to the corresponding planning horizons of several political and economic decisions. Due to the uncertainties of weather and climate forecasts (e.g. initial-condition uncertainty), they are issued in a probabilistic way. One issue frequently observed for probabilistic forecasts is that they tend not to be reliable, i.e. the forecasted probabilities are not consistent with the relative frequency of the associated observed events. Thus, these kinds of forecasts need to be re-calibrated. While re-calibration methods for seasonal time scales are available and frequently applied, these methods still have to be adapted for decadal time scales and their characteristic problems, such as the climate trend and lead-time dependent bias. Regarding this, we propose a method to re-calibrate decadal ensemble predictions that takes the above mentioned characteristics into account. Finally, this method is applied to and validated on decadal forecasts from the MiKlip system (Germany's initiative for decadal prediction).
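A minimal sketch of a lead-time-dependent recalibration, assuming hindcasts arranged as an array of shape (start dates x lead years x ensemble members) and matching observations: a per-lead linear fit of the ensemble mean against observations corrects lead-dependent bias and drift. The MiKlip recalibration additionally adjusts ensemble spread and accounts for the climate trend; those steps are not shown here.

    import numpy as np

    def lead_dependent_calibration(fcst, obs):
        # fcst: (n_starts, n_leads, n_members); obs: (n_starts, n_leads).
        # Fit, per lead year, a linear recalibration of the ensemble mean.
        ens_mean = fcst.mean(axis=2)
        coefs = []
        for l in range(fcst.shape[1]):
            a, b = np.polyfit(ens_mean[:, l], obs[:, l], 1)   # slope, intercept
            coefs.append((a, b))
        return coefs

    def apply_calibration(fcst, coefs):
        ens_mean = fcst.mean(axis=2)
        return np.stack([a * ens_mean[:, l] + b
                         for l, (a, b) in enumerate(coefs)], axis=1)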
Solution of magnetic field and eddy current problem induced by rotating magnetic poles (abstract)
NASA Astrophysics Data System (ADS)
Liu, Z. J.; Low, T. S.
1996-04-01
The magnetic field and eddy current problems induced by rotating permanent magnet poles occur in electromagnetic dampers, magnetic couplings, and many other devices. Whereas numerical techniques, for example finite element methods, can be exploited to study various features of these problems, such as heat generation and drag torque development, the analytical solution is always of interest to designers since it helps them gain insight into the interdependence of the parameters involved and provides an efficient design tool. Some of the previous work showed that the solution of the eddy current problem due to linearly moving magnet poles can give a satisfactory approximation for the eddy current problem due to rotating fields. However, in many practical cases, especially when the number of magnet poles is small, there is a significant effect of flux focusing due to the geometry. The above approximation can therefore lead to marked errors in the theoretical predictions of the device performance. Bernot et al. recently described an analytical solution in a polar coordinate system where the radial field is excited by a time-varying source. A discussion of an analytical solution of the magnetic field and eddy current problems induced by moving magnet poles in radial field machines will be given in this article. The theoretical predictions obtained from this method are compared with the results obtained from finite element calculations. The validity of the method is also checked by comparison of the theoretical predictions with measurements from a test machine. It is shown that the introduced solution leads to a significant improvement in the air gap field prediction as compared with the results obtained from the analytical solution that models the eddy current problems induced by linearly moving magnet poles.
USDA-ARS?s Scientific Manuscript database
Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive....
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
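For context, the following sketch shows the standard Krylov (Arnoldi) projection used to approximate a matrix function acting on a vector, the building block whose cost the paper addresses; the KSS componentwise approximation itself is not reproduced here. The matrix A, time step and subspace dimension are placeholders.

    import numpy as np
    from scipy.linalg import expm

    def arnoldi(A, v, m):
        # Arnoldi process: returns V (n x k) and H (k x k) with A V ~= V H.
        n = v.size
        V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:                  # happy breakdown
                return V[:, :j + 1], H[:j + 1, :j + 1]
            V[:, j + 1] = w / H[j + 1, j]
        return V[:, :m], H[:m, :m]

    def expmv_krylov(A, v, dt, m=30):
        # Approximate exp(dt*A) v with an m-step Krylov projection, the kind of
        # product of a matrix function and a vector used inside EPI methods.
        beta = np.linalg.norm(v)
        V, H = arnoldi(A, v, m)
        e1 = np.zeros(H.shape[0]); e1[0] = 1.0
        return beta * V @ (expm(dt * H) @ e1)

The point made in the abstract is that the required m grows with grid resolution for this standard projection, whereas the proposed KSS-based variant keeps the number of projection steps bounded.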
Dynamic minimum set problem for reserve design: Heuristic solutions for large problems
Sabbadin, Régis; Johnson, Fred A.; Stith, Bradley
2018-01-01
Conversion of wild habitats to human-dominated landscape is a major cause of biodiversity loss. An approach to mitigate the impact of habitat loss consists of designating reserves where habitat is preserved and managed. Determining the most valuable areas to preserve in a landscape is called the reserve design problem. There exist several possible formulations of the reserve design problem, depending on the objectives and the constraints. In this article, we considered the dynamic problem of designing a reserve that contains a desired area of several key habitats. The dynamic case implies that the reserve cannot be designed in one time step, due to budget constraints, and that habitats can be lost before they are reserved, due for example to climate change or human development. We proposed two heuristic strategies that can be used to select sites to reserve each year for large reserve design problems. The first heuristic is a combination of the Marxan and site-ordering algorithms and the second heuristic is an augmented version of the common naive myopic heuristic. We evaluated the strategies on several simulated examples and showed that the augmented greedy heuristic is particularly interesting when some of the habitats to protect are particularly threatened and/or the compactness of the network is accounted for. PMID:29543830
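An illustrative sketch of the augmented-greedy idea under stated assumptions: each planning year, candidate sites are ranked by their contribution to unmet habitat targets per unit cost, up-weighted by an assumed loss probability for threatened sites. The compactness term and the Marxan-based heuristic from the article are not modelled.

    def greedy_reserve_step(candidates, targets, reserved_area, budget, threat):
        # candidates:    {site: (cost, {habitat: area})}
        # targets:       {habitat: desired total protected area}
        # reserved_area: {habitat: area already secured} (updated in place)
        # threat:        {site: assumed probability the site is lost before next year}
        chosen = []
        remaining = dict(candidates)
        while remaining and budget > 0:
            def gain(site):
                cost, areas = remaining[site]
                g = sum(min(a, max(0.0, targets[h] - reserved_area.get(h, 0.0)))
                        for h, a in areas.items())
                return (1.0 + threat.get(site, 0.0)) * g / max(cost, 1e-9)
            best = max(remaining, key=gain)
            if gain(best) <= 0.0:
                break                              # no remaining site helps the targets
            cost, areas = remaining.pop(best)
            if cost > budget:
                continue                           # too expensive this year; skip it
            budget -= cost
            chosen.append(best)
            for h, a in areas.items():
                reserved_area[h] = reserved_area.get(h, 0.0) + a
        return chosen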
AI techniques for a space application scheduling problem
NASA Technical Reports Server (NTRS)
Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.
1991-01-01
Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors that make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on-board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (EOS).
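A generic tabu-search skeleton, for illustration only: the neighbourhood function (e.g. swapping or moving observation slots) and the scoring function (observation time minus conflict penalties) are assumptions left as callables, and the SOLSTICE-specific constraints are not encoded here.

    from collections import deque

    def tabu_search(initial, score, neighbours, n_iter=500, tenure=20):
        # Keep a short-term memory of recent moves and always take the best
        # non-tabu neighbour, even if it is worse than the current solution,
        # to escape local optima.  An aspiration rule overrides the tabu list
        # when a move yields a new best solution.
        current = best = initial
        best_score = score(best)
        tabu = deque(maxlen=tenure)
        for _ in range(n_iter):
            candidates = [(move, sol) for move, sol in neighbours(current)
                          if move not in tabu or score(sol) > best_score]
            if not candidates:
                break
            move, current = max(candidates, key=lambda ms: score(ms[1]))
            tabu.append(move)
            if score(current) > best_score:
                best, best_score = current, score(current)
        return best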
Global solutions to the electrodynamic two-body problem on a straight line
NASA Astrophysics Data System (ADS)
Bauer, G.; Deckert, D.-A.; Dürr, D.; Hinrichs, G.
2017-06-01
The classical electrodynamic two-body problem has been a long standing open problem in mathematics. For motion constrained to the straight line, the interaction is similar to that of the two-body problem of classical gravitation. The additional complication is the presence of unbounded state-dependent delays in the Coulomb forces due to the finiteness of the speed of light. This circumstance renders the notion of local solutions meaningless, and therefore, straightforward ODE techniques cannot be applied. Here, we study the time-symmetric case, i.e., the Fokker-Schwarzschild-Tetrode (FST) equations, comprising both advanced and retarded delays. We extend the technique developed in Deckert and Hinrichs (J Differ Equ 260:6900-6929, 2016), where existence of FST solutions was proven on the half line, to ensure global existence—a result that had been obtained by Bauer (Ein Existenzsatz für die Wheeler-Feynman-Elektrodynamik, Herbert Utz Verlag, München, 1997). Due to the novel technique, the presented proof is shorter and more transparent but also relies on the idea to employ asymptotic data to characterize solutions.
Flatulence on airplanes: just let it go.
Pommergaard, Hans C; Burcharth, Jakob; Fischer, Anders; Thomas, William E G; Rosenberg, Jacob
2013-02-15
Flatus is natural and an invariable consequence of digestion, however at times it creates problems of social character due to sound and odour. This problem may be more significant on commercial airplanes where many people are seated in limited space and where changes in volume of intestinal gases, due to altered cabin pressure, increase the amount of potential flatus. Holding back flatus on an airplane may cause significant discomfort and physical symptoms, whereas releasing flatus potentially presents social complications. To avoid this problem we humbly propose that active charcoal should be embedded in the seat cushion, since this material is able to neutralise the odour. Moreover active charcoal may be used in trousers and blankets to emphasise this effect. Other less practical or politically correct solutions to overcome this problem may be to restrict access of flatus-prone persons from airplanes, by using a methane breath test or to alter the fibre content of airline meals in order to reduce its flatulent potential. We conclude that the use of active charcoal on airlines may improve flight comfort for all passengers.
NASA Astrophysics Data System (ADS)
Yamagata, H.; Sedlak, D. L.
2008-12-01
To improve public health, the United Nations' Johannesburg Summit on Sustainable Development in 2002 set Millennium Development Goals (MDGs) of reducing by half the proportion of people without sustainable access to safe drinking water and sanitation by 2015. The Mezquital Valley of Mexico is one of the places suffering serious human health problems such as ascariasis due to agricultural irrigation with untreated wastewater discharged by Mexico City. Despite the existence of serious health problems, wastewater treatment has not been installed due to economic barriers: the agricultural benefit of nutrients in the wastewater and cost of building and operating wastewater treatment plants. To develop solutions to this problem, the human health damage and the benefits of nutrient input were evaluated. The health impact caused by untreated wastewater reuse in the Mezquital Valley was estimated to be about 14 DALYs (disability-adjusted life year) per 100,000, which was 2.8 times higher than the DALYs lost by ascariasis in Mexico in 2002 estimated by WHO. The economic damage of the health impact was evaluated at 77,000 /year using willingness-to-pay (WTP) for reducing DALYs. The value of nutrient inputs (nitrogen and phosphorus) due to reuse of untreated wastewater was evaluated at 33 million /year using fertilizer prices. Therefore, attempts to decrease public health problems associated with reuse in the Mezquital Valley need to address losses of economic benefits associated with nutrients in sewage. In 2007, the Mexican Government announced plans to install wastewater treatment plants in this area. Although nutrient inputs in irrigated water is expected to decrease by 33% due to the wastewater treatment, farmers in the Mezquital Valley would still benefit from improved public health in the community and increases of crop values due to the ability to grow raw-eaten vegetables.
On the timing problem in optical PPM communications.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1971-01-01
Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on specification of timing accuracy, and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of significant importance is shown to be the presence of a residual, or irreducible error probability, due entirely to the timing system, that cannot be overcome by the data channel.
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
Kitamura, Shingo; Enomoto, Minori; Kamei, Yuichi; Inada, Naoko; Moriwaki, Aiko; Kamio, Yoko; Mishima, Kazuo
2015-03-13
Although delayed sleep timing causes many socio-psycho-biological problems such as sleep loss, excessive daytime sleepiness, obesity, and impaired daytime neurocognitive performance in adults, there are insufficient data showing the clinical significance of a 'night owl lifestyle' in early life. This study examined the association between habitual delayed bedtime and sleep-related problems among community-dwelling 2-year-old children in Japan. Parents/caregivers of 708 community-dwelling 2-year-old children in Nishitokyo City, Tokyo, participated in the study. The participants answered a questionnaire to evaluate their child's sleep habits and sleep-related problems for the past 1 month. Of the 425 children for whom complete data were collected, 90 (21.2%) went to bed at 22:00 or later. Children with delayed bedtime showed significantly more irregular bedtime, delayed wake time, shorter total sleep time, and difficulty in initiating and terminating sleep. Although this relationship indicated the presence of sleep debt in children with delayed bedtime, sleep onset latency did not differ between children with earlier bedtime and those with delayed bedtime. Rather, delayed bedtime was significantly associated with bedtime resistance and problems in the morning even when adjusting for nighttime and daytime sleep time. Even in 2-year-old children, delayed bedtime was associated with various sleep-related problems. The causal factors may include diminished homeostatic sleep drive due to prolonged daytime nap as well as diurnal preference (morning or night type) regulated by the biological clock.
Mapping of uncertainty relations between continuous and discrete time
NASA Astrophysics Data System (ADS)
Chiuchiú, Davide; Pigolotti, Simone
2018-03-01
Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.
Mapping of uncertainty relations between continuous and discrete time.
Chiuchiù, Davide; Pigolotti, Simone
2018-03-01
Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.
Parent Influences on Early Childhood Internalizing Difficulties
ERIC Educational Resources Information Center
Bayer, Jordana, K.; Sanson, Ann, V.; Hemphill, Sheryl A.
2006-01-01
Children's internalizing problems are a concerning mental health issue, due to significant prevalence and continuity over time. This study tested a multivariate model predicting young children's internalizing behaviors from parenting practices, parents' anxiety-depression and family stressors. A community sample of 2 year old children (N=112) was…
FINDING A COMMON DATA REPRESENTATION AND INTERCHANGE APPROACH FOR MULTIMEDIA MODELS
Within many disciplines, multiple approaches are used to represent and access very similar data (e.g., a time series of values), often due to the lack of commonly accepted standards. When projects must use data from multiple disciplines, the problems quickly compound. Often sig...
METEOROLOGICAL FACTORS RESPONSIBLE FOR HIGH CO (CARBON MONOXIDE) LEVELS IN ALASKAN CITIES
High winter carbon monoxide levels in Anchorage, as in Fairbanks, are due to intense nocturnal (ground-based) inversions persisting through the periods of maximum emissions and at times throughout the day. The problem is exacerbated by the large amounts of carbon monoxide emitted...
Identification of time-varying structural dynamic systems - An artificial intelligence approach
NASA Technical Reports Server (NTRS)
Glass, B. J.; Hanagud, S.
1992-01-01
An application of the artificial intelligence-derived methodologies of heuristic search and object-oriented programming to the problem of identifying the form of the model and the associated parameters of a time-varying structural dynamic system is presented in this paper. Possible model variations due to changes in boundary conditions or configurations of a structure are organized into a taxonomy of models, and a variant of best-first search is used to identify the model whose simulated response best matches that of the current physical structure. Simulated model responses are verified experimentally. An output-error approach is used in a discontinuous model space, and an equation-error approach is used in the parameter space. The advantages of the AI methods used, compared with conventional programming techniques for implementing knowledge structuring and inheritance, are discussed. Convergence conditions and example problems have been discussed. In the example problem, both the time-varying model and its new parameters have been identified when changes occur.
NASA Astrophysics Data System (ADS)
Goma, Sergio R.
2015-03-01
In current times, mobile technologies are ubiquitous and the complexity of problems is continuously increasing. In the context of advancement of engineering, we explore in this paper possible reasons that could cause a saturation in technology evolution - namely the ability of problem solving based on previous results and the ability of expressing solutions in a more efficient way, concluding that `thinking outside of brain' - as in solving engineering problems that are expressed in a virtual media due to their complexity - would benefit from mobile technology augmentation. This could be the necessary evolutionary step that would provide the efficiency required to solve new complex problems (addressing the `running out of time' issue) and remove the communication of results barrier (addressing the human `perception/expression imbalance' issue). Some consequences are discussed, as in this context the artificial intelligence becomes an automation tool aid instead of a necessary next evolutionary step. The paper concludes that research in modeling as problem solving aid and data visualization as perception aid augmented with mobile technologies could be the path to an evolutionary step in advancing engineering.
Recovery of time-dependent volatility in option pricing model
NASA Astrophysics Data System (ADS)
Deng, Zui-Cha; Hon, Y. C.; Isakov, V.
2016-11-01
In this paper we investigate an inverse problem of determining the time-dependent volatility from observed market prices of options with different strikes. Due to the nonlinearity and sparsity of observations, an analytical solution to the problem is generally not available. Numerical approximation is also difficult to obtain using most of the existing numerical algorithms. Based on our recent theoretical results, we apply the linearisation technique to convert the problem into an inverse source problem from which recovery of the unknown volatility function can be achieved. Two kinds of strategies, namely, the integral equation method and the Landweber iterations, are adopted to obtain the stable numerical solution to the inverse problem. Both theoretical analysis and numerical examples confirm that the proposed approaches are effective. The work described in this paper was partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region (Project No. CityU 101112) and grants from the NNSF of China (Nos. 11261029, 11461039), and NSF grants DMS 10-08902 and 15-14886 and by Emylou Keith and Betty Dutcher Distinguished Professorship at the Wichita State University (USA).
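A minimal sketch of the Landweber iteration for a linearised inverse problem, with a generic matrix A standing in for the discretised, linearised forward map from volatility to option prices (an assumption for illustration); early stopping of the iteration acts as regularisation for noisy data.

    import numpy as np

    def landweber(A, b, omega=None, n_iter=200, x0=None):
        # Landweber iteration for A x ~= b:  x_{k+1} = x_k + omega * A^T (b - A x_k).
        A = np.asarray(A, float); b = np.asarray(b, float)
        if omega is None:
            omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
        x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, float)
        for _ in range(n_iter):
            x = x + omega * A.T @ (b - A @ x)
        return x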
Zeytinogla, Isik Uurla; Seaton, M Bianca; Lillevik, Waheeda; Moruz, Josefina
2005-01-01
Women workers dominate the labor market of part-time and casual jobs in Canada and other industrialized countries, particularly in the retail trade and consumer services sector. However, research into the occupational health consequences of part-time and casual jobs for this large group of women workers is still in its early stages. Emerging evidence suggests that part-time and casual jobs contribute to stress and result in health problems for women. To learn about the impact of part-time and casual jobs on women's experiences of stress and their resulting physical and emotional health, we conducted interviews and focus groups with occupational health and safety union representatives and female workers in retail and consumer services. Results show that stress is a major occupational health problem for these women, due to the working conditions in part-time and casual jobs, the psychosocial work environment, and the gendered work environment in the retail trade and consumer services. Stress from part-time and casual jobs results in repetitive strain injuries, migraine headaches, and feelings of low self-esteem, low motivation, and job dissatisfaction for women. The disconcerting implication of our research is that part-time and casual employment comes at a cost for some women.
Resonances and vibrations in an elevator cable system due to boundary sway
NASA Astrophysics Data System (ADS)
Gaiko, Nick V.; van Horssen, Wim T.
2018-06-01
In this paper, an analytical method is presented to study an initial-boundary value problem describing the transverse displacements of a vertically moving beam under boundary excitation. The length of the beam is linearly varying in time, i.e., the axial, vertical velocity of the beam is assumed to be constant. The bending stiffness of the beam is assumed to be small. This problem may be regarded as a model describing the lateral vibrations of an elevator cable excited at its boundaries by the wind-induced building sway. Slow variation of the cable length leads to a singular perturbation problem which is expressed in slowly changing, time-dependent coefficients in the governing differential equation. By providing an interior layer analysis, infinitely many resonance manifolds are detected. Further, the initial-boundary value problem is studied in detail using a three-timescales perturbation method. The constructed formal approximations of the solutions are in agreement with the numerical results.
Vieira, J; Cunha, M C
2011-01-01
This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and having better starting points to improve solution efficiency. The set of nonlinear constraints (named complicating constraints) which makes the solution of the model rather complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by directly solving the complete model in one single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This potential efficiency gain of the two-step solution approach can be extremely important for work in progress, and it can be particularly useful for cases where the computation time would be a critical factor for having an optimized solution in due time.
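A toy sketch of the two-step strategy using scipy.optimize as an assumed stand-in for the solver used in the article: the model is first solved without the complicating nonlinear constraint, and the full model is then re-solved warm-started from that solution. The objective, constraints and coefficients below are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def two_step_solve(objective, simple_cons, complicating_cons, x0, bounds=None):
        # Step 1: solve the model without the complicating (nonlinear) constraints.
        step1 = minimize(objective, x0, method="SLSQP",
                         constraints=simple_cons, bounds=bounds)
        # Step 2: re-solve the complete model, warm-started from the step-1 solution.
        return minimize(objective, step1.x, method="SLSQP",
                        constraints=simple_cons + complicating_cons, bounds=bounds)

    # Hypothetical toy model: minimise the cost of surface-water (x[0]) and
    # groundwater (x[1]) abstraction subject to a linear demand constraint
    # (step 1) and a nonlinear drawdown limit added only in step 2.
    cost = lambda x: 3.0 * x[0] + 2.0 * x[1]
    demand = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 10.0}]
    drawdown = [{"type": "ineq", "fun": lambda x: 1.0 - 0.05 * x[1] ** 1.5}]
    res = two_step_solve(cost, demand, drawdown, x0=np.array([5.0, 5.0]),
                         bounds=[(0.0, None), (0.0, None)])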
P1 Nonconforming Finite Element Method for the Solution of Radiation Transport Problems
NASA Technical Reports Server (NTRS)
Kang, Kab S.
2002-01-01
The simulation of radiation transport in the optically thick flux-limited diffusion regime has been identified as one of the most time-consuming tasks within large simulation codes. Due to multimaterial complex geometry, the radiation transport system must often be solved on unstructured grids. In this paper, we investigate the behavior and the benefits of the unstructured P1 nonconforming finite element method, which has proven to be flexible and effective on related transport problems, in solving unsteady implicit nonlinear radiation diffusion problems using Newton and Picard linearization methods. Key words: nonconforming finite elements, radiation transport, inexact Newton linearization, multigrid preconditioning.
Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.
Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon
2017-09-01
A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Demons registration for in vivo and deformable laser scanning confocal endomicroscopy
NASA Astrophysics Data System (ADS)
Chiew, Wei Ming; Lin, Feng; Seah, Hock Soon
2017-09-01
A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2005-01-01
In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box", and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems, different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.
Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan
2013-01-01
Location-allocation is a combinatorial optimization problem that is NP-hard (non-deterministic polynomial-time hard). Therefore, due to the complexity of the problem, the solution approach should shift from exact to heuristic or metaheuristic methods. Locating medical centers and allocating the injured of an earthquake to them is highly important in earthquake disaster management, so a proper method will reduce the time of the relief operation and consequently decrease the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the capabilities of a Geographic Information System (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm is used to optimize the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results of the proposed method showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation, when injured people have to be taken to medical centers in a reasonable time.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty to accurately model complex flows, and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With successful reduction of computation time, the issue of accuracy, and appropriateness of turbulence models will become more tractable.
Mineral resources: Reserves, peak production and the future
Meinert, Lawrence D.; Robinson, Gilpin; Nassar, Nedal
2016-01-01
The adequacy of mineral resources in light of population growth and rising standards of living has been a concern since the time of Malthus (1798), but many studies erroneously forecast impending peak production or exhaustion because they confuse reserves with “all there is”. Reserves are formally defined as a subset of resources, and even current and potential resources are only a small subset of “all there is”. Peak production or exhaustion cannot be modeled accurately from reserves. Using copper as an example, identified resources are twice as large as the amount projected to be needed through 2050. Estimates of yet-to-be discovered copper resources are up to 40-times more than currently-identified resources, amounts that could last for many centuries. Thus, forecasts of imminent peak production due to resource exhaustion in the next 20–30 years are not valid. Short-term supply problems may arise, however, and supply-chain disruptions are possible at any time due to natural disasters (earthquakes, tsunamis, hurricanes) or political complications. Needed to resolve these problems are education and exploration technology development, access to prospective terrain, better recycling and better accounting of externalities associated with production (pollution, loss of ecosystem services and water and energy use).
The Interpersonal Theory of Suicide
ERIC Educational Resources Information Center
Van Orden, Kimberly A.; Witte, Tracy K.; Cukrowicz, Kelly C.; Braithwaite, Scott R.; Selby, Edward A.; Joiner, Thomas E., Jr.
2010-01-01
Suicidal behavior is a major problem worldwide and, at the same time, has received relatively little empirical attention. This relative lack of empirical attention may be due in part to a relative absence of theory development regarding suicidal behavior. The current article presents the interpersonal theory of suicidal behavior. We propose that…
Promoting Homework Independence for Students with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Hampshire, Patricia Korzekwa; Butera, Gretchen D.; Dustin, Timothy J.
2014-01-01
For students with autism, homework time may be especially challenging due to problems in self-organization and difficulties generalizing skills from one setting to another. Although often problematic, homework can provide a valuable context for teaching organizational skills that become essential as students become more independent. By learning to…
Health and productivity among U.S. workers.
Davis, Karen; Collins, Sara R; Doty, Michelle M; Ho, Alice; Holmgren, Alyssa
2005-08-01
This analysis of Commonwealth Fund survey data estimates the economic impact of health problems on worker productivity. In 2003, an estimated 18 million adults ages 19 to 64 were not working and had a disability or chronic disease, or were not working because of health reasons. Sixty-nine million workers reported missing days due to illness, for a total of 407 million days of lost time at work. Fifty-five million workers reported a time when they were unable to concentrate at work because of their own illness or that of a family member, accounting for another 478 million days. Together, labor time lost due to health reasons represents lost economic output totaling $260 billion per year. Workers without paid time off to see a physician are more likely to report missing work or being unable to concentrate at their job.
Pathgroups, a dynamic data structure for genome reconstruction problems.
Zheng, Chunfang
2010-07-01
Ancestral gene order reconstruction problems, including the median problem, quartet construction, small phylogeny, guided genome halving and genome aliquoting, are NP-hard. Available heuristics dedicated to each of these problems are computationally costly for even small instances. We present a data structure enabling rapid heuristic solution to all these ancestral genome reconstruction problems. A generic greedy algorithm with look-ahead based on an automatically generated priority system suffices for all the problems using this data structure. The efficiency of the algorithm is due to fast updating of the structure during run time and to the simplicity of the priority scheme. We illustrate with the first rapid algorithm for quartet construction and apply this to a set of yeast genomes to corroborate a recent gene sequence-based phylogeny. Availability: http://albuquerque.bioinformatics.uottawa.ca/pathgroup/Quartet.html. Contact: chunfang313@gmail.com. Supplementary data are available at Bioinformatics online.
Internet use and video gaming predict problem behavior in early adolescence.
Holtz, Peter; Appel, Markus
2011-02-01
In early adolescence, the time spent using the Internet and video games is higher than in any other present-day age group. Due to age-inappropriate web and gaming content, the impact of new media use on teenagers is a matter of public and scientific concern. Based on current theories on inappropriate media use, a study was conducted that comprised 205 adolescents aged 10-14 years (Md = 13). Individuals were identified who showed clinically relevant problem behavior according to the problem scales of the Youth Self Report (YSR). Online gaming, communicational Internet use, and playing first-person shooters were predictive of externalizing behavior problems (aggression, delinquency). Playing online role-playing games was predictive of internalizing problem behavior (including withdrawal and anxiety). Parent-child communication about Internet activities was negatively related to problem behavior. Copyright © 2010 The Association for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Digitalizing historical high resolution water level data: Challenges and opportunities
NASA Astrophysics Data System (ADS)
Holinde, Lars; Hein, Hartmut; Barjenbruch, Ulrich
2017-04-01
Historical tide-gauge data offer the opportunity to determine variations in key characteristics of water level data and to analyse past extreme events (storm surges). This information is important for calculating future trends and scenarios. However, there are challenges involved due to the extensive effort needed to digitalize gauge sheets and to quality-control the resulting historical data. Based on these conditions, two main sources of inaccuracies in historical time series can be identified. First are several challenges due to the digitalization of the historical data, e.g. the general quality of the sheets, multiple crossing lines of the observed water levels and additional comments on the sheet describing problems or additional information during the measurements. Second are problems during the measurements themselves. These can include the incorrect positioning of the sheets, trouble with the tide gauge and maintenance. Errors resulting from these problems can be, e.g., flat lines, discontinuities and outliers. In particular, the characterization of outliers has to be conducted carefully to distinguish between real outliers and the occurrence of extreme events. Methods for the quality control process involve the use of statistics, machine learning and neural networks. These will be described and applied to three different time series from tide gauge stations at the coast of Lower Saxony, Germany. Resulting difficulties and outcomes of the quality control process will be presented and explained. Furthermore, we will present a first glance at analyses for these time series.
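A minimal sketch of one statistical quality-control step, under stated assumptions: points are flagged when they deviate from a rolling median by more than a chosen multiple of the rolling median absolute deviation (MAD); the window length and threshold are illustrative. Flagged values are kept for review rather than deleted, so genuine storm-surge peaks are not discarded; flat lines and discontinuities would need additional checks.

    import numpy as np

    def despike(levels, window=12, k=4.0):
        # Flag suspect water-level samples with a robust rolling statistic.
        levels = np.asarray(levels, dtype=float)
        n = len(levels)
        flags = np.zeros(n, dtype=bool)
        half = window // 2
        for i in range(n):
            seg = levels[max(0, i - half): i + half + 1]
            med = np.median(seg)
            mad = np.median(np.abs(seg - med)) + 1e-9
            flags[i] = abs(levels[i] - med) > k * 1.4826 * mad   # 1.4826: MAD-to-sigma
        return flags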
Unsteady Thermocapillary Migration of Isolated Drops in Creeping Flow
NASA Technical Reports Server (NTRS)
Dill, Loren H.; Balasubramaniam, R.
1992-01-01
The problem of an isolated immiscible drop that slowly migrates due to unsteady thermocapillary stresses is considered. All physical properties except for interfacial tension are assumed constant for the two Newtonian fluids. Explicit expressions are found for the migration rate and stream functions in the Laplace domain. The resulting microgravity theory is useful, e.g., in predicting the distance a drop will migrate due to an impulsive interfacial temperature gradient as well as the time required to attain steady flow conditions from an initially resting state.
Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time
NASA Astrophysics Data System (ADS)
Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.
2018-03-01
A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of Machining, Assembly and Differentiation Stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembly product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solutions show that the algorithm can solve the problem effectively.
Strengthened MILP formulation for certain gas turbine unit commitment problems
Pan, Kai; Guan, Yongpei; Watson, Jean-Paul; ...
2015-05-22
In this study, we derive a strengthened MILP formulation for certain gas turbine unit commitment problems, in which the ramping rates are no smaller than the minimum generation amounts. This type of gas turbine can usually start up faster and has a larger ramping rate than traditional coal-fired power plants. Recently, the number of such gas turbines has increased significantly due to affordable gas prices and their scheduling flexibility to accommodate intermittent renewable energy generation. Several new families of strong valid inequalities are developed to help reduce the computational time needed to solve these types of problems, and validity and facet-defining proofs are provided for certain inequalities. Finally, numerical experiments on a modified IEEE 118-bus system and on power system data from recent studies verify the effectiveness of applying our formulation to model and solve this type of gas turbine unit commitment problem, including reducing the computational time to obtain an optimal solution or obtaining a much smaller optimality gap than the default CPLEX when the time limit is reached with no optimal solution obtained.
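For orientation, a toy mixed-integer sketch of a single fast-ramping unit, built with the PuLP modelling library on assumed data: it contains only the basic commitment, capacity and ramping constraints for the case where the ramp rate is no smaller than the minimum output. The paper's strengthened valid inequalities, minimum up/down times and the 118-bus setting are not reproduced.

    from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

    T = range(6)
    demand = [0, 30, 90, 120, 60, 0]          # MW to be served by this unit (toy data)
    Pmin, Pmax, R = 30, 150, 60               # MW, MW, MW per period (R >= Pmin)
    c_run, c_start, c_mwh = 50, 200, 20       # hypothetical cost coefficients

    prob = LpProblem("gas_turbine_uc", LpMinimize)
    u = LpVariable.dicts("on", T, cat=LpBinary)      # unit on/off
    v = LpVariable.dicts("start", T, cat=LpBinary)   # start-up indicator
    p = LpVariable.dicts("p", T, lowBound=0)         # power output

    prob += lpSum(c_run * u[t] + c_start * v[t] + c_mwh * p[t] for t in T)
    for t in T:
        prob += p[t] >= Pmin * u[t]
        prob += p[t] <= Pmax * u[t]
        prob += p[t] >= demand[t]                    # serve the load
        if t > 0:
            prob += u[t] - u[t - 1] <= v[t]                                  # start-up logic
            prob += p[t] - p[t - 1] <= R * u[t - 1] + Pmin * v[t]            # ramp-up / start-up
            prob += p[t - 1] - p[t] <= R * u[t] + Pmax * (u[t - 1] - u[t])   # ramp-down / shut-down
    prob.solve()
    print([(u[t].value(), round(p[t].value(), 1)) for t in T])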
Soltis, Kathryn E.; McDevitt-Murphy, Meghan; Murphy, James G.
2017-01-01
Background Elevated depression and stress have been linked to greater levels of alcohol problems among young adults even after taking into account drinking level. The current study attempts to elucidate variables that might mediate the relation between symptoms of depression and stress and alcohol problems, including alcohol demand, future time orientation, and craving. Methods Participants were 393 undergraduates (60.8% female, 78.9% White/Caucasian) who reported at least 2 binge drinking episodes (4/5+ drinks for women/men, respectively) in the previous month. Participants completed self-report measures of stress and depression, alcohol demand, future time orientation, craving, and alcohol problems. Results In separate mediation models that accounted for gender, race, and weekly alcohol consumption, future orientation and craving significantly mediated the relation between depressive symptoms and alcohol problems. Alcohol demand, future orientation, and craving significantly mediated the relation between stress symptoms and alcohol problems. Conclusions Heavy drinking young adults who experience stress or depression are likely to experience alcohol problems and this is due in part to elevations in craving and alcohol demand, and less sensitivity to future outcomes. Interventions targeting alcohol misuse in young adults with elevated levels of depression and stress should attempt to increase future orientation and decrease craving and alcohol reward value. PMID:28401985
Soltis, Kathryn E; McDevitt-Murphy, Meghan E; Murphy, James G
2017-06-01
Elevated depression and stress have been linked to greater levels of alcohol problems among young adults even after taking into account drinking level. This study attempts to elucidate variables that might mediate the relation between symptoms of depression and stress and alcohol problems, including alcohol demand, future time orientation, and craving. Participants were 393 undergraduates (60.8% female, 78.9% White/Caucasian) who reported at least 2 binge-drinking episodes (4/5+ drinks for women/men, respectively) in the previous month. Participants completed self-report measures of stress and depression, alcohol demand, future time orientation, craving, and alcohol problems. In separate mediation models that accounted for gender, race, and weekly alcohol consumption, future orientation and craving significantly mediated the relation between depressive symptoms and alcohol problems. Alcohol demand, future orientation, and craving significantly mediated the relation between stress symptoms and alcohol problems. Heavy-drinking young adults who experience stress or depression are likely to experience alcohol problems, and this is due in part to elevations in craving and alcohol demand, and less sensitivity to future outcomes. Interventions targeting alcohol misuse in young adults with elevated levels of depression and stress should attempt to increase future orientation and decrease craving and alcohol reward value. Copyright © 2017 by the Research Society on Alcoholism.
Finding minimum-quotient cuts in planar graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, J.K.; Phillips, C.A.
Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and of Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem -- also a pseudopolynomial-time algorithm -- guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
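For readers unfamiliar with the sequential baseline that Generalized Speculative Computation reproduces in parallel, the sketch below runs plain simulated annealing on a randomly generated k-SAT instance. It is a minimal illustration only: the instance generator, cooling schedule, and cost function are assumptions, and nothing here reflects the AP1000 parallelization.

```python
import math
import random

def random_ksat(n_vars, n_clauses, k=3, seed=0):
    """Random k-SAT instance: each clause is k signed variable indices."""
    rng = random.Random(seed)
    return [[rng.choice([-1, 1]) * v for v in rng.sample(range(1, n_vars + 1), k)]
            for _ in range(n_clauses)]

def unsatisfied(clauses, assign):
    """Count clauses in which every literal evaluates to False."""
    return sum(all((lit > 0) != assign[abs(lit)] for lit in c) for c in clauses)

def anneal(clauses, n_vars, t0=2.0, cooling=0.9999, steps=100_000, seed=0):
    """Sequential simulated annealing: flip one randomly chosen variable per step."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    cost, temp = unsatisfied(clauses, assign), t0
    for _ in range(steps):
        v = rng.randint(1, n_vars)
        assign[v] = not assign[v]
        new_cost = unsatisfied(clauses, assign)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                 # accept the flip
        else:
            assign[v] = not assign[v]       # reject: undo the flip
        temp *= cooling
    return cost

clauses = random_ksat(100, 425)             # same size as the smallest instance above
print("unsatisfied clauses after annealing:", anneal(clauses, 100))
```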
System for analysing sickness absenteeism in Poland.
Indulski, J A; Szubert, Z
1997-01-01
The National System of Sickness Absenteeism Statistics has been functioning in Poland since 1977 as part of the national health statistics. The system is based on a 15-percent random sample of copies of certificates of temporary incapacity for work issued by all health care units and authorised private medical practitioners. A certificate of temporary incapacity for work is received by every insured employee who is compelled to stop working due to sickness or accident, or due to the necessity to care for a sick member of his/her family. The certificate is required on the first day of sickness. Analyses of disease- and accident-related sickness absenteeism carried out each year in Poland within the statistical system lead to the main conclusions: 1. Diseases of the musculoskeletal and peripheral nervous systems, accounting when combined for one-third of total sickness absenteeism, are a major health problem of the working population in Poland. During the past five years, incapacity for work caused by these diseases in males increased 2.5 times. 2. Circulatory diseases, and arterial hypertension and ischaemic heart disease in particular (41% and 27% of sickness days, respectively), create an essential health problem among males of productive age, especially in the 40 and older age group. Absenteeism due to these diseases has increased in males more than two times.
Silander, E; Jacobsson, I; Bertéus-Forslund, H; Hammerlid, E
2013-01-01
Malnutrition decreases the cancer patient's ability to manage treatment, affects quality of life and survival, and is common among head and neck (HN) cancer patients due to the tumour location and the treatment received. In this study, advanced HN cancer patients were included and followed for 2 years in order to measure their energy intake and choice of energy sources and to assess problems with dysphagia. The main purpose was to explore when and for how long the patients had dysphagia and lost weight due to insufficient intake, and whether having a PEG (percutaneous endoscopic gastrostomy) in place for enteral nutrition made a difference. One hundred thirty-four patients were included and randomised to either a prophylactic PEG for early enteral feeding or nutritional care according to clinical practice. At seven time points weight, dysphagia and energy intake (assessed as oral, nutritional supplements, enteral and parenteral) were measured. Both groups lost weight during the first six months due to insufficient energy intake and used enteral nutrition as their main intake source; no significant differences between groups were found. Problems with dysphagia were substantial during these 6 months. At the 6-, 12- and 24-month follow-ups both groups reached estimated energy requirements and weight loss ceased. Oral intake was the major energy source after 1 year. HN cancer patients need nutritional support and enteral feeding for a long time period during and after treatment due to insufficient energy intake. A prophylactic PEG did not significantly improve the enteral intake, probably due to treatment side effects.
NASA Astrophysics Data System (ADS)
Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong
2016-09-01
Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha regulon with unknown time-delays in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time delays are decision variables and the cost function is to minimize the biological robustness. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements and a cost sensitivity constraint for ensuring that an acceptable level of the system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high. Thus, a parallel algorithm is proposed to solve these nonlinear programming sub-problems based on the filled function method. Finally, it is observed that the obtained optimal estimates for the time-delays are highly satisfactory via numerical simulations.
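The constraint transcription and local smoothing step mentioned above can be illustrated in isolation. The sketch below is a simplification, not the authors' formulation: it replaces a continuous state inequality constraint g(x(t)) ≥ 0 with a smoothed integral penalty that a nonlinear programming solver can handle, and the trajectory and tolerance values are made up.

```python
import numpy as np

def smoothed_violation(g_values, dt, eps=1e-2):
    """Constraint-transcription-style penalty for a continuous inequality g(x(t)) >= 0.

    The exact penalty integral of max(-g, 0) is non-smooth at g = 0, so a quadratic
    smoothing is applied in a band of width 2*eps around zero (the classical local
    smoothing idea; parameters here are illustrative).
    """
    g = np.asarray(g_values)
    phi = np.where(g < -eps, -g,
          np.where(g > eps, 0.0, (g - eps) ** 2 / (4 * eps)))
    return np.trapz(phi, dx=dt)

# Example: penalize a hypothetical trajectory that dips below the bound for a while.
t = np.linspace(0, 10, 1001)
g = np.sin(t)                      # stand-in constraint function along the trajectory
print("smoothed violation measure:", smoothed_violation(g, dt=t[1] - t[0]))
```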
NASA Astrophysics Data System (ADS)
Lovell, Amy Elizabeth
Computational electromagnetics (CEM) provides numerical methods to simulate electromagnetic waves interacting with their environment. Boundary integral equation (BIE) based methods, which solve Maxwell's equations in homogeneous or piecewise-homogeneous media, are both efficient and accurate, especially for scattering and radiation problems. Development and analysis of electromagnetic BIEs have been a very active topic in CEM research. Indeed, there are still many open problems that need to be addressed or further studied. A short and important list includes (1) closed-form or quasi-analytical solutions to time-domain integral equations, (2) catastrophic cancellations at low frequencies, (3) ill-conditioning due to high mesh density, multi-scale discretization, and growing electrical size, and (4) lack of flexibility due to re-meshing when an increasing number of forward numerical simulations are involved in the electromagnetic design process. This dissertation addresses several of these aspects of boundary integral equations in computational electromagnetics. The first contribution of the dissertation is to construct quasi-analytical solutions to time-dependent boundary integral equations using a direct approach. Direct inverse Fourier transform of the time-harmonic solutions is not stable because the inverse Fourier transform of spherical Hankel functions does not exist. Using new addition theorems for the time-domain Green's function and dyadic Green's functions, time-domain integral equations governing transient scattering from spherical objects are solved directly and stably for the first time. Additionally, the direct time-dependent solutions, together with the newly proposed time-domain dyadic Green's functions, enrich the time-domain spherical multipole theory. The second contribution is a novel method of moments (MoM) framework for solving electromagnetic boundary integral equations on subdivision surfaces. The aim is to avoid the meshing and re-meshing stages and thereby accelerate the design process when the geometry needs to be updated. Two schemes for constructing basis functions on the subdivision surface have been explored: one uses div-conforming basis functions, and the other creates a rigorous iso-geometric approach based on subdivision basis functions with better smoothness properties. This new framework provides better accuracy, more stability, and higher flexibility. The third contribution is a new stable integral equation formulation that avoids catastrophic cancellations due to low-frequency breakdown or dense-mesh breakdown. Many conventional integral equations and their associated post-processing operations suffer from numerical catastrophic cancellations, which can lead to ill-conditioning of the linear systems or serious accuracy problems; examples include low-frequency breakdown and dense-mesh breakdown. Another instability can come from nontrivial null spaces of the integral operators involved, which may be related to spurious resonance or topology breakdown. This dissertation presents several sets of new boundary integral equations and studies their analytical properties. The first proposed formulation leads to scalar boundary integral equations in which only scalar unknowns are involved. Besides the requirement of gaining more stability and better conditioning in the resulting linear systems, multi-physics simulation is another driving force for new formulations.
Formulations based on scalar and vector potentials (rather than the electromagnetic fields) have been studied for this purpose. These new contributions address different stages of boundary integral equations in an almost independent manner; for example, the isogeometric analysis framework can be used to solve different boundary integral equations, and the time-dependent solutions to integral equations from different formulations can be obtained through the same methodology proposed here.
Clarsen, Benjamin; Myklebust, Grethe; Bahr, Roald
2013-05-01
Current methods for injury registration in sports injury epidemiology studies may substantially underestimate the true burden of overuse injuries due to a reliance on time-loss injury definitions. To develop and validate a new method for the registration of overuse injuries in sports. A new method, including a new overuse injury questionnaire, was developed and validated in a 13-week prospective study of injuries among 313 athletes from five different sports: cross-country skiing, floorball, handball, road cycling and volleyball. All athletes completed a questionnaire by email each week to register problems in the knee, lower back and shoulder. Standard injury registration methods were also used to record all time-loss injuries that occurred during the study period. The new method recorded 419 overuse problems in the knee, lower back and shoulder during the 3-month study period. Of these, 142 were classified as substantial overuse problems, defined as those leading to moderate or severe reductions in sports performance or participation, or time loss. Each week, an average of 39% of athletes reported having overuse problems and 13% reported having substantial problems. In contrast, standard methods of injury registration registered only 40 overuse injuries located in the same anatomical areas, the majority of which were of minimal or mild severity. Standard injury surveillance methods capture only a small percentage of the overuse problems affecting athletes, largely because few problems lead to time loss from training or competition. The new method captured a more complete and nuanced picture of the burden of overuse injuries in this cohort.
NASA Technical Reports Server (NTRS)
Rivera, J. M.; Simpson, R. W.
1980-01-01
The aerial relay system network design problem is discussed. A generalized branch-and-bound algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is practical mainly for small networks, since its computation time grows exponentially with the number of variables.
Problem Gambling Family Impacts: Development of the Problem Gambling Family Impact Scale.
Dowling, N A; Suomi, A; Jackson, A C; Lavis, T
2016-09-01
Although family members of problem gamblers frequently present to treatment services, problem gambling family impacts are under-researched. The most commonly endorsed items on a new measure of gambling-related family impacts [Problem Gambling Family Impact Measure (PG-FIM: Problem Gambler version)] by 212 treatment-seeking problem gamblers included trust (62.5 %), anger (61.8 %), depression or sadness (58.7 %), anxiety (57.7 %), distress due to gambling-related absences (56.1 %), reduced quality time (52.4 %), and communication breakdowns (52.4 %). The PG-FIM (Problem Gambler version) was comprised of three factors: (1) financial impacts, (2) increased responsibility impacts, and (3) psychosocial impacts with good psychometric properties. Younger, more impulsive, non-electronic gaming machine (EGM) gamblers who had more severe gambling problems reported more financial impacts; non-EGM gamblers with poorer general health reported more increased responsibility impacts; and more impulsive non-EGM gamblers with more psychological distress and higher gambling severity reported more psychosocial impacts. The findings have implications for the development of interventions for the family members of problem gamblers.
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Sankararaman, Shankar
2013-01-01
Prognostics is centered on predicting the time of and time until adverse events in components, subsystems, and systems. It typically involves both a state estimation phase, in which the current health state of a system is identified, and a prediction phase, in which the state is projected forward in time. Since prognostics is mainly a prediction problem, prognostic approaches cannot avoid uncertainty, which arises due to several sources. Prognostics algorithms must both characterize this uncertainty and incorporate it into the predictions so that informed decisions can be made about the system. In this paper, we describe three methods to solve these problems, including Monte Carlo-, unscented transform-, and first-order reliability-based methods. Using a planetary rover as a case study, we demonstrate and compare the different methods in simulation for battery end-of-discharge prediction.
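As a toy illustration of the first of the three approaches, the sketch below propagates sampling-based uncertainty through a deliberately simplistic end-of-discharge model. The battery model, parameter values, and noise levels are all assumptions made for illustration; they are not the rover's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def end_of_discharge(capacity_ah, load_a, eod_fraction=0.1):
    """Hypothetical battery model: hours until only `eod_fraction` of capacity remains."""
    return capacity_ah * (1.0 - eod_fraction) / load_a

# Monte Carlo uncertainty propagation: sample the uncertain state and future usage,
# predict end of discharge for every sample, and summarize the resulting distribution.
capacity = rng.normal(2.1, 0.05, size=10_000)   # estimated remaining capacity (Ah)
load = rng.normal(0.7, 0.10, size=10_000)       # uncertain future load current (A)
eod = end_of_discharge(capacity, load)
print(f"EOD median {np.median(eod):.2f} h, 90% interval "
      f"[{np.percentile(eod, 5):.2f}, {np.percentile(eod, 95):.2f}] h")
```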
A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Hezarkhani, Ardeshir
2012-05-01
The grade estimation is a quite important and money/time-consuming stage in a mine project, which is considered a challenge for geologists and mining engineers due to the structural complexities in mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. As an example, one major problem in FL is the difficulty of constructing the membership functions (MFs). Other problems, such as the choice of architecture and local minima, also arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, which is based on ANN and FL, is called "Coactive Neuro-Fuzzy Inference System" (CANFIS), and it combines the two approaches. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA) - a well-known technique for solving complex optimization problems - is also employed to optimize the network parameters, including the learning rate, the momentum of the network and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with this new method (CANFIS-GA) is also carried out through a case study in the Sungun copper deposit, located in East-Azerbaijan, Iran. The results show that CANFIS-GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation and is therefore suggested for grade estimation in similar problems.
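The role of the GA in tuning the network parameters can be pictured with a minimal sketch. In the code below the fitness function is only a stand-in for the validation error that training a CANFIS model with a given learning rate, momentum, and number of MFs per input would produce; the parameter ranges and GA settings are assumptions, not those of the paper.

```python
import random

rng = random.Random(0)

def fitness(params):
    """Stand-in for network validation error; a real run would train the model here."""
    lr, momentum, n_mfs = params
    return (lr - 0.05) ** 2 + (momentum - 0.8) ** 2 + 0.01 * abs(n_mfs - 3)

def random_individual():
    return (rng.uniform(0.001, 0.5), rng.uniform(0.0, 0.99), rng.randint(2, 7))

def mutate(p):
    lr, mom, n = p
    return (min(0.5, max(1e-3, lr + rng.gauss(0, 0.02))),
            min(0.99, max(0.0, mom + rng.gauss(0, 0.05))),
            min(7, max(2, n + rng.choice([-1, 0, 1]))))

pop = [random_individual() for _ in range(30)]
for _ in range(50):                       # generations
    pop.sort(key=fitness)
    parents = pop[:10]                    # truncation selection
    pop = parents + [mutate(rng.choice(parents)) for _ in range(20)]
print("best (learning rate, momentum, MFs per input):", min(pop, key=fitness))
```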
Docking, S I; Rio, E; Cook, J; Orchard, J W; Fortington, L V
2018-03-23
Little is known about the prevalence and associated morbidity of tendon problems. Because only severe tendon problems lead to missed games, players whose training and performance are affected are not captured by traditional injury surveillance. The aim of this study was to report the prevalence of Achilles and patellar tendon problems in elite male Australian football players using the Oslo Sports Trauma Research Centre (OSTRC) overuse questionnaire, compared to a time-loss definition. Male athletes from 12 professional Australian football teams were invited to complete a monthly questionnaire over a 9-month period covering the 2016 pre-season and competitive season. The OSTRC overuse injury questionnaire was used to measure the prevalence and severity of Achilles and patellar tendon symptoms and was compared to traditional match-loss statistics. A total of 441 participants were included. Of all participants, 21.5% (95% CI: 17.9-25.6) and 25.2% (95% CI: 21.3-29.4) reported Achilles or patellar tendon problems during the season, respectively. Based on the traditional match-loss definition, a combined 4.1% of participants missed games due to either Achilles or patellar tendon injury. A greater average monthly prevalence was observed during the pre-season compared to the competitive season. Achilles and patellar tendon problems are prevalent in elite male Australian football players. These injuries are not adequately captured using a traditional match-loss definition. Prevention of these injuries may be best targeted during the off- and pre-season due to the higher prevalence of symptoms during the pre-season compared to the competitive season. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Vannoni, Francesca; Mamo, C; Demaria, M; Ceccarelli, C; Costa, G
2005-01-01
Knowledge on the occupational and social factors that influence the relationship between illness, absence from work and occupational mobility is at present insufficient. To map out, by social class and occupational group, the impact of health problems on work and the distribution of accidents and morbidity associated with occupation. Using data from the National Survey of the Italian Labour Force (ISTAT, 1999), covering a sample of 200,384 subjects, prevalence odds ratios of morbidity, work injuries and change of occupation due to health problems were calculated by social class and occupation, adjusting for age and residence. The working class showed a higher risk, due to health problems, of a reduction in time worked (OR = 3.70 in men and OR = 4.10 in women), of choosing to work part-time (OR = 2.04 in men and OR = 2.27 in women), or of withdrawing from the workforce (for artisans, skilled manual workers, farmers and agricultural labourers OR = 1.63 in men and OR = 1.47 in women). This class was also at a greater disadvantage not only with respect to accident rates (OR = 1.85 in men and OR = 1.88 in women), but also with respect to the time needed for post-trauma rehabilitation and return to work (for absences of one week to one month: OR = 1.67 and 1.83 for men and women, respectively; for absences of more than one month: OR = 1.29 and OR = 1.69). Moreover, the working class, when compared to other social classes, had a higher rate of suffering from illness, physical impairment or other physical and psychological problems caused or aggravated by working activity (25% in men and 32% in women). The ISTAT National Survey provides an estimate of minor accidents with prognoses of less than three days, including those not reported to the National Institute for Insurance against Occupational Accidents and Diseases (INAIL). This allows a preliminary exploration of the relationship between health problems and occupational mobility; however, it seems necessary to collect more detailed information in order to more exhaustively explore the mechanisms which generate the inequalities observed.
Accuracy of Time Integration Approaches for Stiff Magnetohydrodynamics Problems
NASA Astrophysics Data System (ADS)
Knoll, D. A.; Chacon, L.
2003-10-01
The simulation of complex physical processes with multiple time scales presents a continuing challenge to the computational plasma physicist due to the co-existence of fast and slow time scales. Within computational plasma physics, practitioners have developed and used linearized methods, semi-implicit methods, and time splitting in an attempt to tackle such problems. All of these methods are understood to generate numerical error. We are currently developing algorithms which remove such error for MHD problems [1,2]. These methods do not rely on linearization or time splitting. We are also attempting to analyze the errors introduced by existing ``implicit'' methods using modified equation analysis (MEA) [3]. In this presentation we will briefly cover the major findings in [3]. We will then extend this work further into MHD. This analysis will be augmented with numerical experiments with the hope of gaining insight, particularly into how these errors accumulate over many time steps. [1] L. Chacon, D.A. Knoll, J.M. Finn, J. Comput. Phys., vol. 178, pp. 15-36 (2002) [2] L. Chacon and D.A. Knoll, J. Comput. Phys., vol. 188, pp. 573-592 (2003) [3] D.A. Knoll, L. Chacon, L.G. Margolin, V.A. Mousseau, J. Comput. Phys., vol. 185, pp. 583-611 (2003)
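The stiffness issue can be seen on a scalar test equation: with a fast rate k and a slowly varying forcing, explicit Euler is unstable at step sizes that an implicit method handles easily. The sketch below is a generic textbook illustration (not an MHD system, and not the modified-equation analysis of [3]); the parameter values are arbitrary.

```python
import numpy as np

# Stiff test problem y' = -k (y - cos t): compare explicit Euler and backward Euler
# to see how time-integration error behaves when dt * k >> 1.
k, dt, T = 1000.0, 0.01, 2.0
t = np.arange(0.0, T + dt, dt)

def rhs(y, ti):
    return -k * (y - np.cos(ti))

y_exp = np.empty_like(t); y_imp = np.empty_like(t)
y_exp[0] = y_imp[0] = 1.0
for n in range(len(t) - 1):
    y_exp[n + 1] = y_exp[n] + dt * rhs(y_exp[n], t[n])                    # explicit Euler
    y_imp[n + 1] = (y_imp[n] + dt * k * np.cos(t[n + 1])) / (1 + dt * k)  # backward Euler

print("explicit Euler final value:", y_exp[-1])   # blows up because dt * k >> 1
print("backward Euler final value:", y_imp[-1])   # stable, tracks the forcing
print("quasi-steady reference cos(T):", np.cos(T))
```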
Near Field Trailing Edge Tone Noise Computation
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2002-01-01
Blunt trailing edges in a flow often generate tone noise due to wall-jet shear layer and vortex shedding. In this paper, the space-time conservation element (CE/SE) method is employed to numerically study the near-field noise of blunt trailing edges. Two typical cases, namely, flow past a circular cylinder (aeolian noise problem) and flow past a flat plate of finite thickness are considered. The computed frequencies compare well with experimental data. For the aeolian noise problem, comparisons with the results of other numerical approaches are also presented.
Reachability Analysis in Probabilistic Biological Networks.
Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2015-01-01
Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.
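To make the problem concrete, the sketch below computes the source-to-target reachability probability for a toy probabilistic network by brute-force enumeration of all edge subsets. This exponential enumeration is exactly what PReach's polynomial-collapsing operators avoid; the network, probabilities, and node names are invented for illustration.

```python
from itertools import product

# Tiny probabilistic network: each edge exists independently with the given probability.
edges = {("R1", "A"): 0.9, ("A", "B"): 0.6, ("R1", "B"): 0.3,
         ("B", "T1"): 0.8, ("A", "T1"): 0.5}
sources, targets = {"R1"}, {"T1"}

def reachable(present_edges):
    """BFS over the edges that are 'on' in one realization of the network."""
    frontier, seen = set(sources), set(sources)
    while frontier:
        nxt = {v for (u, v) in present_edges if u in frontier and v not in seen}
        seen |= nxt
        frontier = nxt
    return bool(seen & targets)

# Exact enumeration over all 2^|E| edge subsets (feasible only for tiny networks).
prob = 0.0
for mask in product([0, 1], repeat=len(edges)):
    p, present = 1.0, []
    for (edge, pe), bit in zip(edges.items(), mask):
        p *= pe if bit else (1 - pe)
        if bit:
            present.append(edge)
    if reachable(present):
        prob += p
print("P(signal reaches targets) =", round(prob, 4))
```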
USDA-ARS?s Scientific Manuscript database
Proliferation of woody plants in grasslands and savannas (hereafter, “rangelands”) is a persistent problem globally. This widely-observed shift from grass to shrub dominance in rangelands worldwide has been heterogeneous in space and time largely due to cross-scale interactions between soils, climat...
ERIC Educational Resources Information Center
Engman, Leila
Research has indicated that teachers are willing to be involved and are capable of being involved in instructional development. According to Kingham and Benham, team teaching has failed in the past due to three causes: a) no planning time, b) personality clashes, and c) inability to integrate the material. To solve these three problems, one can…
Solving wood chip transport problems with computer simulation.
Dennis P. Bradley; Sharon A. Winsauer
1976-01-01
Efficient chip transport operations are difficult to achieve due to frequent and often unpredictable changes in distance to market, chipping rate, time spent at the mill, and equipment costs. This paper describes a computer simulation model that allows a logger to design an efficient transport system in response to these changing factors.
Making Mathematics Relevant for Students in Bali
ERIC Educational Resources Information Center
Sema, Pryde Nubea
2008-01-01
The reactions of students towards mathematics in Bali (in the NW Province of Cameroon) are appalling. This is due to a misconception regarding its uses. The author thinks that these problems derive partly from the influence that the Western curriculum has had in Bali--mathematical contexts are based around train times in Liverpool instead of from…
The Effect of Cooperative Learning: University Example
ERIC Educational Resources Information Center
Tombak, Busra; Altun, Sertel
2016-01-01
Problem Statement: Motivation is a significant component of success in education, and it is best achieved by constructivist learning methods, especially Cooperative Learning (CL). CL is a popular method among primary and secondary schools, but it is rarely used in higher education due to the large numbers of students and time restrictions. The…
Updating the Potential of Culture in the Prevention of Corruption
ERIC Educational Resources Information Center
Dorozhkin, Evgenij M.; Kislov, Alexander G.; Syuzeva, Natalya V.; Ozhegova, Anna P.; Kuznetsov, Andrey V.
2016-01-01
The urgency of the problem under investigation is due to both the danger and the prevalence of corruption; special attention is therefore given to the need to supplement repressive state measures and awareness-raising with the formation, especially in educational institutions, of a special subculture that instils a categorical rejection of corruption. The…
Bratucu, R; Gheorghe, I R; Purcarea, R M; Gheorghe, C M; Popa Velea, O; Purcarea, V L
2014-09-15
Today, health care consumers are taking more control over their health care problems, investing more time in finding and getting information as well as looking for proper methods in order to investigate more closely the health care information received from their physicians. Unfortunately, in health care consumers' views, the trustworthiness of health authorities and institutions has declined in recent years. So, consumers have found a new solution to their health problems, that is, the Internet. Recent studies revealed that consumers seeking health information have more options to look for data in comparison to the methods used a few years ago. Therefore, due to the available technology, consumers have more outlets to search for information. For instance, the Internet is a source that has revolutionized the way consumers seek data due to its customized methods of assessing both quantitative and qualitative information, which may be achieved with minimal effort and low cost, offering at the same time several advantages such as making the decision process more efficient.
Robonaut 2 and You: Specifying and Executing Complex Operations
NASA Technical Reports Server (NTRS)
Baker, William; Kingston, Zachary; Moll, Mark; Badger, Julia; Kavraki, Lydia
2017-01-01
Crew time is a precious resource due to the expense of trained human operators in space. Efficient caretaker robots could lessen the manual labor load required by frequent vehicular and life support maintenance tasks, freeing astronaut time for scientific mission objectives. Humanoid robots can fluidly exist alongside human counterparts due to their form, but they are complex and high-dimensional platforms. This paper describes a system that human operators can use to maneuver Robonaut 2 (R2), a dexterous humanoid robot developed by NASA to research co-robotic applications. The system includes a specification of constraints used to describe operations, and the supporting planning framework that solves constrained problems on R2 at interactive speeds. The paper is developed in reference to an illustrative, typical example of an operation R2 performs in order to highlight the challenges inherent in the problems R2 must face. Finally, the interface and planner are validated through a case study using the guiding example on the physical robot in a simulated microgravity environment. This work reveals the complexity of employing humanoid caretaker robots and suggests solutions that are broadly applicable.
Faccioli, Michela; Hanley, Nick; Torres, Cati; Font, Antoni Riera
2016-07-15
Environmental cost-benefit analysis has traditionally assumed that the value of benefits is sensitive to their timing and that outcomes are valued more highly the sooner they occur after implementation of a project or policy. This assumption, however, may have important implications for the social desirability of interventions aimed at counteracting time-persistent environmental problems, whose impacts occur in the long and very long term and thus involve both present and future generations. This study analyzes the time sensitivity of social preferences for preservation policies of adaptation to climate change stresses. Results show that stated preferences are time insensitive, due to sustainability concerns: individuals show insignificant differences between benefits they can experience within their own lifetimes and those which occur in the longer term and will instead be enjoyed by future generations. Whilst these results may be specific to the experimental design employed here, they do raise interesting questions regarding choices over time-persistent environmental problems, particularly in terms of the desirability of interventions which produce longer-term benefits. Copyright © 2016 Elsevier Ltd. All rights reserved.
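The traditional time sensitivity referred to here is ordinary exponential discounting, under which identical benefits shrink sharply in present-value terms as they move into the future. The short illustration below (with an arbitrarily chosen discount rate) shows the contrast with the time-insensitive stated preferences reported in the study.

```python
def present_value(benefit, years, rate=0.03):
    """Standard exponential discounting: value today of a benefit received `years` ahead."""
    return benefit / (1 + rate) ** years

for horizon in (5, 25, 100):
    print(f"benefit of 100 arriving in {horizon:>3} years -> "
          f"present value {present_value(100, horizon):6.1f}")
```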
Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.
Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee
2014-10-01
Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near optimal solution for problems in many applied disciplines. The algorithm makes no assumption of the function to be optimized and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature.
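A minimal global-best PSO, sketched below in Python, conveys why the algorithm needs no assumptions about the objective: particles only ever evaluate it pointwise. The inertia and acceleration coefficients, swarm size, and the quadratic test objective are illustrative defaults, not the settings used for the optimal-design problems in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    """Textbook global-best particle swarm optimization (minimization)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize a simple quadratic, a stand-in for a design criterion.
best_x, best_val = pso(lambda z: float(np.sum((z - 1.0) ** 2)), dim=4)
print(best_x.round(3), round(best_val, 6))
```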
A centralized audio presentation manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papp, A.L. III; Blattner, M.M.
1994-05-16
The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
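As a point of reference for the schemes discussed, the sketch below solves a one-dimensional constant-coefficient Helmholtz problem with the standard second-order stencil and a simple one-sided radiation condition, then measures the error against the exact plane-wave solution. It is only a baseline: the paper's weighted-average and Padé-generalized stencils, variable coefficients, and nonuniform grids are not reproduced here, and all numerical values are assumptions.

```python
import numpy as np

# u'' + k^2 u = 0 on [0, 1] with u(0) = 1 and radiating end u'(1) = i k u(1);
# the exact solution is u = exp(i k x).
k = 20.0
n = 400                       # grid intervals (well resolved for k = 20)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

A = np.zeros((n + 1, n + 1), dtype=complex)
b = np.zeros(n + 1, dtype=complex)
A[0, 0] = 1.0;  b[0] = 1.0                        # Dirichlet condition at x = 0
for j in range(1, n):
    A[j, j - 1] = A[j, j + 1] = 1.0 / h**2        # standard second-order stencil
    A[j, j] = -2.0 / h**2 + k**2
A[n, n - 1] = -1.0 / h                            # one-sided radiation condition
A[n, n] = 1.0 / h - 1j * k

u = np.linalg.solve(A, b)
print("max |u - exp(i k x)| =", np.abs(u - np.exp(1j * k * x)).max())
```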
Low Latency MAC Protocol in Wireless Sensor Networks Using Timing Offset
NASA Astrophysics Data System (ADS)
Choi, Seung Sik
This paper proposes a low latency MAC protocol that can be used in sensor networks. To extend the lifetime of sensor nodes, the conventional solution is to synchronize the active/sleep periods of all sensor nodes. However, with these synchronized sensor nodes, a packet at an intermediate node must wait until the next node wakes up before it can be forwarded. This induces a large delay in sensor nodes. To solve this latency problem, a clustered sensor network which uses two types of sensor nodes and a layered architecture is considered. Cluster heads in each cluster are synchronized with different timing offsets to reduce the sleep delay. Using this concept, the latency problem can be solved and more efficient power usage can be obtained.
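The latency argument can be seen with simple arithmetic: if all nodes wake together, a packet waits up to a full duty cycle at every hop, whereas staggered wake-up offsets let each hop forward shortly after the previous one. The cycle length, hop count, and offset below are illustrative numbers, not values from the paper.

```python
# Back-of-the-envelope latency comparison for a linear chain of cluster heads.
cycle = 1.0          # duty-cycle period (s), assumed
hops = 10            # number of forwarding hops, assumed
offset = cycle / 5   # hypothetical stagger between consecutive cluster heads

delay_synchronized = hops * cycle      # worst case: wait a full cycle at every hop
delay_staggered = hops * offset        # each hop wakes `offset` after the previous one
print(f"synchronized: {delay_synchronized:.1f} s, staggered offsets: {delay_staggered:.1f} s")
```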
GRACE RL03-v2 monthly time series of solutions from CNES/GRGS
NASA Astrophysics Data System (ADS)
Lemoine, Jean-Michel; Bourgogne, Stéphane; Bruinsma, Sean; Gégout, Pascal; Reinquin, Franck; Biancale, Richard
2015-04-01
Based on GRACE GPS and KBR Level-1B.v2 data, as well as on LAGEOS-1/2 SLR data, CNES/GRGS published in 2014 the third full re-iteration of its GRACE gravity field solutions. This monthly time series of solutions, named RL03-v1, complete to spherical harmonic degree/order 80, has displayed interesting performance in terms of spatial resolution and signal amplitude compared to JPL/GFZ/CSR RL05. This is due to a careful selection of the background models (FES2014 ocean tides, ECMWF ERA-interim (atmosphere) and TUGO (non-IB ocean) "dealiasing" models every 3 hours) and to the choice of an original method for gravity field inversion: truncated SVD. As with the previous CNES/GRGS releases, no additional filtering of the solutions is necessary before using them. Some problems have, however, been identified in CNES/GRGS RL03-v1: (i) an erroneous mass signal located in two small circular rings close to the Earth's poles, leading to the recommendation not to use RL03-v1 above 82° latitude North and South; and (ii) a weakness in the sectorials due to an excessive downweighting of the GRACE GPS observations. These two problems have been understood and addressed, leading to the computation of a corrected time series of solutions, RL03-v2. The corrective steps have been (i) to strengthen the determination of the very low degrees by adding Starlette and Stella SLR data to the normal equations; (ii) to increase the weight of the GRACE GPS observations; and (iii) to adopt a two-step approach for the computation of the solutions: first a Cholesky inversion for the low degrees, followed by a truncated SVD solution. The identification of these problems will be discussed and the performance of the new time series evaluated.
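The truncated-SVD inversion used in the two-step solution can be sketched on a small synthetic least-squares system: discarding the smallest singular values regularizes an ill-conditioned problem at the cost of some resolution. The matrix sizes, conditioning, and truncation levels below are toy values, unrelated to the actual degree-80 spherical-harmonic normal equations.

```python
import numpy as np

def truncated_svd_solve(A, b, k):
    """Solve A x ≈ b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)   # zero out small singular values
    return Vt.T @ (s_inv * (U.T @ b))

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 80)) @ np.diag(np.logspace(0, -6, 80))  # ill-conditioned design
x_true = rng.normal(size=80)
b = A @ x_true + 1e-4 * rng.normal(size=200)
for k in (20, 50, 80):
    x = truncated_svd_solve(A, b, k)
    print(k, "retained singular values, residual norm:", round(np.linalg.norm(A @ x - b), 4))
```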
CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel
2015-12-20
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
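A toy version of the forward problem and a naive recovery can be written in a few lines: convolve a known pulse template with a one-sided exponential impulse response, add noise, and estimate the IRF (and hence the mean scattering delay) by regularized frequency-domain deconvolution. This is not cyclic spectroscopy, and every number below is invented; it only illustrates what recovering the IRF buys for the arrival-time delay.

```python
import numpy as np

rng = np.random.default_rng(0)

n, tau = 1024, 30.0
t = np.arange(n)
template = np.exp(-0.5 * ((t - 200) / 8.0) ** 2)            # intrinsic pulse shape (assumed)
irf = np.exp(-t / tau); irf /= irf.sum()                    # scattering IRF, mean delay ~ tau
observed = np.fft.irfft(np.fft.rfft(template) * np.fft.rfft(irf), n)
observed += 1e-3 * rng.normal(size=n)                       # radiometer noise

T = np.fft.rfft(template)
irf_est = np.fft.irfft(np.fft.rfft(observed) * np.conj(T) / (np.abs(T) ** 2 + 1e-3), n)
irf_est = np.clip(irf_est, 0, None); irf_est /= irf_est.sum()
print("true mean scattering delay     :", round((t * irf).sum(), 1), "bins")
print("recovered mean scattering delay:", round((t * irf_est).sum(), 1), "bins")
```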
Phenological features for winter rapeseed identification in Ukraine using satellite data
NASA Astrophysics Data System (ADS)
Kravchenko, Oleksiy
2014-05-01
Winter rapeseed is one of the major oilseed crops in Ukraine; it is characterized by high profitability and is often grown in violation of crop rotation requirements, leading to soil degradation. Therefore, rapeseed identification using satellite data is a promising direction for operational estimation of the crop acreage and rotation control. The crop acreage of rapeseed is about 0.5-3% of the total area of Ukraine, which poses a major problem for identification using satellite data [1]. While winter rapeseed could be classified using biomass features observed during autumn vegetation, these features are quite unstable due to field-to-field differences in planting dates as well as spatial and temporal heterogeneity in soil moisture availability. Due to this, autumn biomass level features can be used only locally (at NUTS-3 level) and are not suitable for large-scale, country-wide crop identification. We propose to use crop parameters at the flowering phenological stage for crop identification and present a method for parameter estimation using time series of moderate-resolution data. Rapeseed flowering can be observed as a bell-shaped peak in the red reflectance time series. However, the duration of the flowering period observable by satellite data is only about two weeks, which is quite a short period given inevitable cloud coverage issues. Thus we need daily time series to resolve the flowering peak, and for this reason we are limited to moderate-resolution data. We used daily atmospherically corrected MODIS data from the Terra and Aqua satellites within the 90-160 DOY period to perform feature calculations. An empirical BRDF correction is used to minimize angular effects. We used Gaussian Processes Regression (GPR) for temporal interpolation to minimize errors due to residual cloud coverage, atmospheric correction and mixed-pixel problems. We estimate 12 parameters for each time series: the red and near-infrared (NIR) reflectance and the timing at four stages, namely before and after the flowering, at the peak flowering and at the maximum NIR level. We used a Support Vector Machine for data classification. The most relevant feature for classification is the flowering peak timing, followed by the flowering peak magnitude. The dependency of the peak time on latitude, used as a sole feature, can reject 90% of non-rapeseed pixels, which greatly reduces the imbalance of the classification problem. To assess the accuracy of our approach we performed a stratified area frame sampling survey in Odessa region (NUTS-2 level) in 2013. The omission error is about 12.6%, while the commission error is higher, at the level of 22%. This is explained by the high viewing-angle compositing criterion used in our approach to mitigate the high cloud coverage problem. However, the errors are quite stable spatially and can easily be corrected by a regression technique. To do this we performed area estimation for Odessa region using a regression estimator and obtained good area estimation accuracy with 4.6% error (1σ). [1] Gallego, F.J., et al., Efficiency assessment of using satellite data for crop area estimation in Ukraine. Int. J. Appl. Earth Observ. Geoinf. (2014), http://dx.doi.org/10.1016/j.jag.2013.12.013
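The temporal-interpolation step can be illustrated with scikit-learn's Gaussian process regressor: fit the sparse cloud-free red-reflectance samples, predict on a daily grid, and read off the flowering-peak day of year. The sample dates, reflectance values, and kernel settings are synthetic stand-ins, not MODIS data or the authors' GPR configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic sparse red-reflectance samples with a flowering bump near DOY 128.
doy = np.array([92, 97, 103, 110, 118, 124, 131, 138, 146, 155])   # cloud-free dates
red = 0.05 + 0.06 * np.exp(-0.5 * ((doy - 128) / 5.0) ** 2)
red += np.random.default_rng(0).normal(0, 0.004, doy.size)          # residual noise

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=8.0) + WhiteKernel(1e-5),
                               normalize_y=True)
gpr.fit(doy.reshape(-1, 1), red)
grid = np.arange(90, 161).reshape(-1, 1)                             # daily grid
mean = gpr.predict(grid)
print("estimated flowering-peak DOY:", int(grid[mean.argmax(), 0]))
```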
Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression
Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.
2016-01-01
The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
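The quantity that makes irregular sampling tractable in a CT-HMM is the interval transition matrix expm(Q·Δt) of the underlying continuous-time Markov chain. The sketch below evaluates it for a made-up three-state progression model at several visit gaps; the generator matrix is illustrative only, and the full EM machinery of the paper (posterior state probabilities, end-state conditioned statistics) is not shown.

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-state progression model: rows sum to zero, state 2 is absorbing ("severe").
Q = np.array([[-0.20,  0.18, 0.02],
              [ 0.00, -0.10, 0.10],
              [ 0.00,  0.00, 0.00]])

for dt in (0.5, 2.0, 10.0):            # years between visits, irregularly spaced
    P = expm(Q * dt)                   # transition probabilities over the gap
    print(f"dt = {dt:>4}: P(stay in state 0) = {P[0, 0]:.3f}, "
          f"P(0 -> 2) = {P[0, 2]:.3f}")
```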
[Adverse reactions and other drug-related problems in an emergency service department].
Güemes Artiles, M; Sanz Alvarez, E; Garcia Sánchez-Colomer, M
1999-01-01
Adverse Drug Reactions (ADRs) and Drug-Related Problems (DRPs) are a frequent cause of hospital emergency room visits and require better assessment. An analysis was made of 1097 consecutive admissions to the emergency room at the Nuestra Señora de los Volcanes Hospital (currently the General Hospital of Lanzarote) in Arrecife de Lanzarote (Canary Islands) over a three-month period in order to detect any possible ADR or other drug-related problems. Nineteen (19) of the 1097 admissions were due to ADRs (1.73%; 95% CI: 0.96%-2.5%). Among the other drug-related problems, medication overdose was diagnosed in 5 (0.45%) of the patients, worsening of symptoms due to stopping medication was involved in 8 (0.72%), and incorrect treatments requiring medical care at the emergency room totaled 11 (1.0%). The number of drug-related problems in the sample totaled 43 (3.9%). DRPs led to hospitalization in 1.9% of the cases seen in the emergency room, accounting for 9.6% of all hospital admissions through the emergency room during the study period. ADRs led to 4.1% of the hospital admissions. Drug-related problems are a frequent, major problem that has not been well analyzed in emergency rooms. Additionally, emergency rooms can function as the first point of detection of ADRs in an outpatient population.
Effect of conductor geometry on source localization: Implications for epilepsy studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlitt, H.; Heller, L.; Best, E.
1994-07-01
We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy due to replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to the fit using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.
Choi, Ha Ney; Chung, Hye Won; Hwang, Ji-Yun; Chang, Namsoo
2011-10-01
Our previous studies have demonstrated the inadequate nutritional status of Vietnamese female marriage immigrants in Korea. Major possible reasons include food insecurity due to economic problems as well as a lack of adjustment to unfamiliar Korean foods and limited access to Vietnamese foods; however, no study has investigated food insecurity among such intermarried couples. This study was performed to investigate the prevalence of food insecurity in Korean-husband-Vietnamese-wife couples and to determine whether they exhibit an intrahousehold discrepancy regarding food insecurity. A cross-sectional analysis of the Cohort of Intermarried Women in Korea study was performed with 84 intermarried couples. Among the 84 Vietnamese immigrants, 48.8% and 41.7% had food insecurity due to economic problems and a lack of foods appealing to their appetite, respectively. There was a marked discrepancy in reporting food insecurity between Vietnamese wives (22.6-38.1%) and their Korean husbands (6.0-15.5%). Vietnamese wives were five and two times more food-insecure due to economic problems and no foods appealing to their appetite, respectively, than their Korean spouses. A follow-up study is needed to investigate the causes of this discrepancy and ways of reducing food insecurity among female marriage immigrants living in low-income, rural communities.
The Problems Encountered in a CTEV Clinic: Can Better Casting and Bracing Be Accomplished?
Agarwal, Anil; Kumar, Anubrat; Shaharyar, Abbas; Mishra, Madhusudan
2016-09-07
The aim of the study is to create awareness among practicing health care workers of the problems encountered during casting and bracing of clubfoot following the Ponseti method, and in turn to help avoid them. A retrospective audit of 6 years of clubfoot clinic records was performed to analyze problems associated with the Ponseti method. Problems were encountered in 26 casted and in 6 braced patients. Just 4 of 71 syndromic patients (5.6%) experienced problems during casting, compared with a 3% overall incidence. The common problems encountered in casted patients were moisture lesions, hematoma, dermatitis due to occlusion, pressure sores, and fractures. There was excessive bleeding in 1 patient at the time of tenotomy. In braced patients, pressure sores and tenderness at the tenotomy site were the major problems. None of the syndromic patients experienced difficulties during bracing. Problems were encountered with the Ponseti method during casting, tenotomy, or bracing. Syndromic children had a lower complication rate than those with idiopathic clubfeet. It is important to be aware of these problems so that appropriate intervention can be done early. Level IV: Retrospective. © 2016 The Author(s).
NASA Astrophysics Data System (ADS)
Smirnovsky, Alexander A.; Eliseeva, Viktoria O.
2018-05-01
The study of film flow occurring under the influence of a gas slug flow is of definite interest for heat and mass transfer during the motion of a coolant in the second circuit of a nuclear water-water reactor. Thermohydraulic codes are usually used for the analysis of such problems, in which the motion of the liquid film and the vapor is modeled on the basis of one-dimensional balance equations. Due to the greater inertia of the liquid film, the film flow parameters change with a relaxation lag compared with the gas flow. We consider a model problem of film flow under the influence of friction from a gas slug flow, neglecting such effects as wave formation, droplet breakage and deposition on the film surface, and evaporation and condensation. Such a problem is analogous to the well-known Couette and Stokes flow problems. An analytical solution has been obtained for laminar flow. Numerical RANS-based simulation of turbulent flow was performed using OpenFOAM. It is established that the relaxation process is almost self-similar. This fact opens the possibility of obtaining valuable correlations for the relaxation time.
Knibbe, Ronald Arnold; Joosten, Jan; Choquet, Marie; Derickx, Mieke; Morin, Delphine; Monshouwer, Karin
2007-02-01
Our main goal was to establish whether French and Dutch adolescents differ in rates of substance-related adverse events (e.g. fights, robbery), problems with peers or socializing agents even when controlling for pattern of substance use. For problems with peers and socializing agents due to alcohol we hypothesized that, because of stronger informal control of drinking in France, French adolescents are more likely to report problems with peers and socializing agents. For adverse events due to alcohol no difference was expected after controlling for consumption patterns. For drug-related problems, the hypothesis was that, due to the more restrictive drug policy in France, French adolescents are more likely to report problems with peers, socializing agents and adverse events. Comparable surveys based on samples of adolescent schoolchildren in France (n=9646) and the Netherlands (n=4291) were used. Data were analysed using multilevel logistic regression in which school, age and gender, indicators of substance use and country were used as predictors of substance-related problems. The outcomes show that French adolescents are more likely to report problems with peers and socializing agents due to alcohol even when consumption pattern is controlled for. For adverse events due to alcohol no difference was found between French and Dutch adolescents. For drug-related problems the expected differences were found; i.e. French adolescents are more likely to report problems with peers, socializing agents and adverse events even when controlling for pattern of drug use. It is concluded that there are culturally embedded differences in the rates of some types of problems due to alcohol or drug use. With respect to alcohol use, these differences are most likely due to culturally embedded differences in the informal social control of alcohol use. The differences in rates of drug-related problems are interpreted in the context of national differences in drug policy.
Rank-k modification methods for recursive least squares problems
NASA Astrophysics Data System (ADS)
Olszanskyj, Serge; Lebak, James; Bojanczyk, Adam
1994-09-01
In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting preferable points for improvement and optimization. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly for serial execution for large problems.
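To make the row-addition (updating) case concrete, the sketch below folds a single new observation row into an existing triangular factor R using Givens rotations. It is a generic, dense rank-1 update written for illustration under those assumptions; it is not the LINPACK, CSNE, or Gram-Schmidt variants analyzed in the paper.

```python
import numpy as np

def qr_add_row(R, new_row):
    """Rank-1 updating: fold one new data row into the upper-triangular
    factor R of a least-squares problem using Givens rotations."""
    n = R.shape[1]
    M = np.vstack([R, new_row.reshape(1, -1)])   # stack the new row under R
    for j in range(n):
        a, b = M[j, j], M[n, j]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        # Rotate rows j and n so that the entry below the diagonal vanishes.
        M[[j, n], j:] = np.array([[c, s], [-s, c]]) @ M[[j, n], j:]
    return M[:n, :]   # updated upper-triangular factor

# Usage: start from a QR factorization and append one observation row.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
_, R = np.linalg.qr(A)
R_new = qr_add_row(R, rng.standard_normal(3))
```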
Varieties of centralized intake: the Portland Target Cities Project experience.
Barron, Nancy; McFarland, Bentson H; McCamant, Lynn
2002-01-01
To assess the possible influence of centralized intake on client outcomes, initial, six- and twelve-month Addiction Severity Index composite scores (in the alcohol, drug, legal and psychiatric areas) for clients who experienced provider intake were compared with scores for those going through two different models of centralized intake. Centralized intake clients were more likely than provider intake clients to have legal problems, and those legal problems became fewer over time. Clients from in-jail intake, including pretreatment services and accompanied placement, showed a greater initial and lower subsequent prevalence of drug, psychiatric and legal problems than the clients of the freestanding centralized intake. For all clients, psychiatric composite scores were powerful predictors of problems in the alcohol, drug, medical and legal areas, and psychiatric symptoms decreased over time. Since baseline differences in demographics and service assignment existed among the three groups, it was difficult to identify whether the outcome differences were due to the nature of the participants, the nature of the intake intervention, or both. However, the Portland Target Cities Project's emphasis on in-jail centralized intake was associated with enhanced client outcomes.
Test and Evaluation of the Time/Frequency Collision Avoidance System Concept.
1973-09-01
cumulative distributions were then plotted on “normal” graph paper, i.e., graph paper on which a normal distribution will plot as a straight line... apparent problems. CHAPTER SEVEN: CONCLUSIONS AND RECOMMENDATIONS. 7.1 CONCLUSIONS. The time/frequency technique for... instrumentation due to waiting for an event that will not occur, there are time-outs that cause the process to step past the event in question. In this
Christov, Ivan C.
2015-09-11
We correct certain errors and ambiguities in the recent pedagogical article by Hopkins and de Bruyn. The early-time asymptotics of the solution to the transient version of Stokes’ second problem for an Oldroyd-B fluid in a half-space is presented, as Appendix A, to complement the late-time asymptotics given by Hopkins and de Bruyn.
Healthcare service problems reported in a national survey of South Africans.
Hasumi, Takahiro; Jacobsen, Kathryn H
2014-08-01
To identify common types of health service problems reported by South African adults during their most recent visit to a healthcare provider. Secondary analysis of South Africa's cross-sectional General Household Survey (GHS). Nationally representative weighted sample of households in South Africa. 23,562 household representatives interviewed during the 2010 GHS. Problems experienced during the most recent visit to the usual healthcare provider. In total, 43.8% of participants reported experiencing at least one problem during their last visit; 19.1% reported multiple problems. The most common problems experienced were a long waiting time (34.8% of household representatives), needed drugs not being available (14.1%) and staff who were rude or uncaring or turned patients away (10.1%). Of the 73.6% of participants using public providers, 54.9% reported at least one problem; of the 26.4% of participants using private providers, only 18.0% reported a problem, usually cost. Similar differences in reported problems at public and private providers were reported for all racial/ethnic groups and income groups. Black Africans reported more problems than other population groups due in large part to being significantly more likely to use public providers. Addressing commonly reported problem areas, in particular long waiting times, unavailable medications and staff who are perceived as being unfriendly, might help prevent delayed care seeking, increase the acceptability of healthcare services and reduce remaining health disparities in South Africa. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.
Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel
2015-03-01
Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
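As a rough illustration of how a solution memory can be attached to simulated annealing, the sketch below runs a plain annealing loop over 2-opt moves on a single route and keeps a short list of recently accepted solutions so they are not revisited. The move type, cost function, and all parameters are illustrative assumptions, not the MSA-SA algorithm or the fuel and CO2 model of the paper.

```python
import math
import random

def route_cost(route, dist):
    """Total tour length, standing in for the fuel/CO2 objective."""
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def sa_with_memory(dist, n_iter=20000, t0=100.0, alpha=0.999, mem_size=50):
    """Simulated annealing with a short-term memory of recent solutions."""
    n = len(dist)
    current = list(range(n))
    random.shuffle(current)
    best, best_cost = current[:], route_cost(current, dist)
    memory = []                          # recently accepted solutions
    temp = t0
    for _ in range(n_iter):
        i, j = sorted(random.sample(range(n), 2))
        cand = current[:i] + current[i:j + 1][::-1] + current[j + 1:]  # 2-opt move
        if tuple(cand) in memory:        # skip solutions seen recently
            continue
        delta = route_cost(cand, dist) - route_cost(current, dist)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
            memory.append(tuple(current))
            if len(memory) > mem_size:
                memory.pop(0)
            cost = route_cost(current, dist)
            if cost < best_cost:
                best, best_cost = current[:], cost
        temp *= alpha
    return best, best_cost

# Usage with a small symmetric distance matrix:
# dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
# best_route, best_length = sa_with_memory(dist)
```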
NASA Astrophysics Data System (ADS)
Zhu, Xiaoyuan; Zhang, Hui; Cao, Dongpu; Fang, Zongde
2015-06-01
Integrated motor-transmission (IMT) powertrain system with directly coupled motor and gearbox is a good choice for electric commercial vehicles (e.g., pure electric buses) due to its potential in motor size reduction and energy efficiency improvement. However, the controller design for powertrain oscillation damping becomes challenging due to the elimination of damping components. On the other hand, as controller area network (CAN) is commonly adopted in modern vehicle system, the network-induced time-varying delays that caused by bandwidth limitation will further lead to powertrain vibration or even destabilize the powertrain control system. Therefore, in this paper, a robust energy-to-peak controller is proposed for the IMT powertrain system to address the oscillation damping problem and also attenuate the external disturbance. The control law adopted here is based on a multivariable PI control, which ensures the applicability and performance of the proposed controller in engineering practice. With the linearized delay uncertainties characterized by polytopic inclusions, a delay-free closed-loop augmented system is established for the IMT powertrain system under discrete-time framework. The proposed controller design problem is then converted to a static output feedback (SOF) controller design problem where the feedback control gains are obtained by solving a set of linear matrix inequalities (LMIs). The effectiveness as well as robustness of the proposed controller is demonstrated by comparing its performance against that of a conventional PI controller.
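As a minimal illustration of what "solving a set of LMIs" looks like in practice, the sketch below checks Schur stability of an assumed closed-loop matrix through a discrete-time Lyapunov LMI in cvxpy. It is a toy feasibility problem on made-up data, not the energy-to-peak static output feedback synthesis developed in the paper.

```python
import numpy as np
import cvxpy as cp

# A_cl is an illustrative closed-loop matrix, not one from the paper.
A_cl = np.array([[0.9, 0.1],
                 [0.0, 0.8]])

# Discrete-time Lyapunov LMI: A_cl is Schur stable iff there exists P > 0
# with A_cl' P A_cl - P < 0.
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A_cl.T @ P @ A_cl - P << -eps * np.eye(2)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status)   # 'optimal' means a certificate P exists
```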
The EMIR experience in the use of software control simulators to speed up the time to telescope
NASA Astrophysics Data System (ADS)
Lopez Ramos, Pablo; López-Ruiz, J. C.; Moreno Arce, Heidy; Rosich, Josefina; Perez Menor, José Maria
2012-09-01
One of the main problems facing development teams working on instrument control systems is the need to access mechanisms which are not available until well into the integration phase. The need to work with real hardware creates additional problems: among others, certain faults cannot be tested due to the possibility of hardware damage, taking the system to the limit may shorten its operational lifespan, and the full system may not be available during some periods due to maintenance and/or testing of individual components. These problems can be addressed with the use of simulators and by applying software/hardware standards. Since information on the construction and performance of electro-mechanical systems is available at relatively early stages of the project, simulators are developed in advance (before the existence of the mechanism) or, if conventions and standards have been correctly followed, a previously developed simulator might be used. This article describes our experience in building software simulators and the main advantages we have identified: the control software can be developed even in the absence of real hardware, critical tests can be prepared using the simulated systems, system behavior can be tested for hardware failure situations that would pose a risk to the real system, and in-house integration of the entire instrument is sped up. The use of simulators allows us to reduce development, testing and integration time.
Martin, Marie H T; Nielsen, Maj Britt D; Madsen, Ida E H; Petersen, Signe M A; Lange, Theis; Rugulies, Reiner
2013-12-01
Sickness absence and exclusion from the labour market due to mental health problems (MHPs) is a growing concern in many countries. Knowledge about effective return-to-work (RTW) intervention models is still limited, but a multidisciplinary, coordinated and tailored approach has shown promising results in the context of musculoskeletal disorders. The purpose of this study was to assess the effectiveness of this approach as implemented among sickness absence beneficiaries with MHPs. In a quasi-randomised, controlled trial, we assessed the intervention's effect in terms of time to RTW and labour market status after 1 year. We used two different analytical strategies to compare time to RTW between participants receiving the intervention (n = 88) and those receiving conventional case management (n = 80): (1) a traditional multivariable regression analysis controlling for measured confounding, and (2) an instrumental variable (IV) analysis controlling for unmeasured confounding. The two analytical approaches provided similar results in terms of a longer time to RTW among recipients of the intervention (HR = 0.50; 95 % CI 0.34-0.75), although the estimate provided by the IV-analysis was non-significant (HR = 0.70; 95 % CI 0.23-2.12). After 1 year, more recipients of the intervention than of conventional case management were receiving sickness absence benefits (p = 0.031). The intervention delayed RTW compared to conventional case management, after accounting for measured confounding. The delayed RTW may be due to either implementation or program failure, or both. It may also reflect the complexity of retaining employees with mental health problems in the workplace.
Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua
2015-01-15
Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics.
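For readers unfamiliar with OS-ELM, the sketch below shows the standard recursive update that gives the method its fast online adaptation: randomly drawn hidden-layer weights stay fixed, while the output weights are refreshed chunk by chunk. The network size, activation, and regularization constant are assumptions made for illustration, not the configuration used in the paper.

```python
import numpy as np

class OSELM:
    """Minimal online sequential extreme learning machine for
    RSS-fingerprint -> location regression."""
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))   # fixed random input weights
        self.b = rng.standard_normal(n_hidden)                # fixed random biases
        self.beta = np.zeros((n_hidden, n_outputs))           # output weights (learned)
        self.P = None

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid hidden layer

    def init_fit(self, X0, T0):
        """Offline initialization with the first small batch of fingerprints."""
        H0 = self._hidden(X0)
        self.P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(H0.shape[1]))
        self.beta = self.P @ H0.T @ T0

    def partial_fit(self, X, T):
        """Recursive least-squares style update with a new chunk of data."""
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```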
Addressing Data Veracity in Big Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aman, Saima; Chelmis, Charalampos; Prasanna, Viktor
Big data applications such as in smart electric grids, transportation, and remote environment monitoring involve geographically dispersed sensors that periodically send back information to central nodes. In many cases, data from sensors is not available at central nodes at a frequency that is required for real-time modeling and decision-making. This may be due to physical limitations of the transmission networks, or due to consumers limiting frequent transmission of data from sensors located at their premises for security and privacy concerns. Such scenarios lead to partial data problem and raise the issue of data veracity in big data applications. We describe a novel solution to the problem of making short term predictions (up to a few hours ahead) in absence of real-time data from sensors in Smart Grid. A key implication of our work is that by using real-time data from only a small subset of influential sensors, we are able to make predictions for all sensors. We thus reduce the communication complexity involved in transmitting sensory data in Smart Grids. We use real-world electricity consumption data from smart meters to empirically demonstrate the usefulness of our method. Our dataset consists of data collected at 15-min intervals from 170 smart meters in the USC Microgrid for 7 years, totaling 41,697,600 data points.
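A hedged illustration of the partial-data idea: fit a linear map from a chosen subset of meters to all meters on historical readings, then apply it when only that subset reports in real time. Both the subset choice and the linear model are assumptions made for illustration; the paper's actual prediction model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
hist = rng.random((1000, 170))           # historical readings: time steps x meters
subset = [3, 17, 42, 88, 120]            # indices of "influential" meters (illustrative)

# Least-squares fit from subset readings (plus intercept) to all meters.
X = np.hstack([hist[:, subset], np.ones((hist.shape[0], 1))])
W, *_ = np.linalg.lstsq(X, hist, rcond=None)

# Real-time step: only the subset reports; predict all 170 meters.
live = hist[-1, subset]
pred_all = np.concatenate([live, [1.0]]) @ W
```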
Iwata, Masanari; Tang, Suhua; Obana, Sadao
2018-01-01
In large-scale wireless sensor networks (WSNs), nodes close to sink nodes consume energy more quickly than other nodes due to packet forwarding. A mobile sink is a good solution to this issue, although it causes two new problems to nodes: (i) overhead of updating routing information; and (ii) increased operating time due to aperiodic query. To solve these problems, this paper proposes an energy-efficient data collection method, Sink-based Centralized transmission Scheduling (SC-Sched), by integrating asymmetric communication and wake-up radio. Specifically, each node is equipped with a low-power wake-up receiver. The sink node determines transmission scheduling, and transmits a wake-up message using a large transmission power, directly activating a pair of nodes simultaneously which will communicate with a normal transmission power. This paper further investigates how to deal with frame loss caused by fading and how to mitigate the impact of the wake-up latency of communication modules. Simulation evaluations confirm that using multiple channels effectively reduces data collection time and SC-Sched works well with a mobile sink. Compared with the conventional duty-cycling method, SC-Sched greatly reduces total energy consumption and improves the network lifetime by 7.47 times in a WSN with 4 data collection points and 300 sensor nodes. PMID:29642397
The school bus routing and scheduling problem with transfers
Doerner, Karl F.; Parragh, Sophie N.
2015-01-01
In this article, we study the school bus routing and scheduling problem with transfers arising in the field of nonperiodic public transportation systems. It deals with the transportation of pupils from home to their school in the morning taking the possibility that pupils may change buses into account. Allowing transfers has several consequences. On the one hand, it allows more flexibility in the bus network structure and can, therefore, help to reduce operating costs. On the other hand, transfers have an impact on the service level: the perceived service quality is lower due to the existence of transfers; however, at the same time, user ride times may be reduced and, thus, transfers may also have a positive impact on service quality. The main objective is the minimization of the total operating costs. We develop a heuristic solution framework to solve this problem and compare it with two solution concepts that do not consider transfers. The impact of transfers on the service level in terms of time loss (or user ride time) and the number of transfers is analyzed. Our results show that allowing transfers reduces total operating costs significantly while average and maximum user ride times are comparable to solutions without transfers. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 65(2), 180–203 2015 PMID:28163329
NASA Astrophysics Data System (ADS)
Hasanah, N.; Hayashi, Y.; Hirashima, T.
2017-02-01
Arithmetic word problems remain one of the most difficult areas of teaching mathematics. Learning by problem posing has been suggested as an effective way to improve students' understanding. However, the practice in the usual classroom is difficult due to the extra time needed for assessing and giving feedback on students' posed problems. To address this issue, we have developed tablet PC software named Monsakun for learning by posing arithmetic word problems based on the Triplet Structure Model. It uses the mechanism of sentence integration, an efficient implementation of problem posing that enables agent-assessment of posed problems. The learning environment has been used in actual Japanese elementary school classrooms and its effectiveness has been confirmed in previous research. In this study, ten Indonesian elementary school students living in Japan participated in a learning session of problem posing using Monsakun in the Indonesian language. We analyzed their learning activities and show that students were able to interact with the structure of simple word problems using this learning environment. The results of the data analysis and questionnaire suggest that the use of Monsakun provides a way of creating an interactive and fun environment for learning by problem posing for Indonesian elementary school students.
Fontes, Cristiano Hora; Budman, Hector
2017-11-01
A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operation conditions. A novel method for clustering MTS based on a combination between SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
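For reference, the PCA similarity factor mentioned above can be computed as the average squared cosine of the angles between the leading principal subspaces of two series; the sketch below shows this together with a simple average-based Euclidean distance. It is a generic implementation of the two metrics, written as an illustration rather than the authors' fuzzy-clustering code.

```python
import numpy as np

def pca_similarity(X, Y, k=3):
    """PCA similarity factor S_PCA between two multivariate time series
    X and Y (rows = samples, columns = variables), using the first k
    principal directions of each."""
    def top_components(Z, k):
        Zc = Z - Z.mean(axis=0)
        _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
        return Vt[:k].T                       # variables x k loading matrix
    L, M = top_components(X, k), top_components(Y, k)
    return np.trace(L.T @ M @ M.T @ L) / k    # mean squared cosine of subspace angles

def avg_euclidean(X, Y):
    """Average-based Euclidean distance between the per-variable means."""
    return np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0))
```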
A Radiation Transfer Solver for Athena Using Short Characteristics
NASA Astrophysics Data System (ADS)
Davis, Shane W.; Stone, James M.; Jiang, Yan-Fei
2012-03-01
We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well known and well tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only few times more expensive than that for more general (non-LTE) problems.
Hamilton-Jacobi-Bellman equations and approximate dynamic programming on time scales.
Seiffertt, John; Sanyal, Suman; Wunsch, Donald C
2008-08-01
The time scales calculus is a key emerging area of mathematics due to its potential use in a wide variety of multidisciplinary applications. We extend this calculus to approximate dynamic programming (ADP). The core backward induction algorithm of dynamic programming is extended from its traditional discrete case to all isolated time scales. Hamilton-Jacobi-Bellman equations, the solution of which is the fundamental problem in the field of dynamic programming, are motivated and proven on time scales. By drawing together the calculus of time scales and the applied area of stochastic control via ADP, we have connected two major fields of research.
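To show what backward induction on an isolated time scale looks like, the sketch below runs value iteration backwards over a set of isolated, unevenly spaced time points, with the graininess mu_k entering both the dynamics and the stage cost. The scalar model and every parameter are illustrative assumptions, not an example from the paper.

```python
import numpy as np

# Isolated time scale (unevenly spaced points) and its graininess mu_k.
ts = np.array([0.0, 0.1, 0.25, 0.3, 0.6, 1.0])
mu = np.diff(ts)

# Illustrative scalar dynamics x_{k+1} = x_k + mu_k*(a*x_k + b*u_k) and
# quadratic stage cost mu_k*(q*x_k**2 + r*u_k**2) with terminal cost q*x_N**2.
a, b, q, r = -1.0, 1.0, 1.0, 0.1
xs = np.linspace(-2.0, 2.0, 81)     # state grid
us = np.linspace(-4.0, 4.0, 161)    # control grid

V = q * xs**2                       # terminal value function
policy = []
for k in range(len(mu) - 1, -1, -1):            # backward induction
    x_next = xs[:, None] + mu[k] * (a * xs[:, None] + b * us[None, :])
    stage = mu[k] * (q * xs[:, None]**2 + r * us[None, :]**2)
    Q = stage + np.interp(x_next, xs, V)        # cost-to-go of each (x, u) pair
    best = Q.argmin(axis=1)
    policy.append(us[best])                     # greedy control at step k
    V = Q[np.arange(len(xs)), best]
policy.reverse()
```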
ERIC Educational Resources Information Center
Flory, S. Luke; Ingram, Ella L.; Heidinger, Britt J.; Tintjer, Tammy
2005-01-01
Laboratory components of introductory biology college-level courses are becoming increasingly rare. Due to the absence of laboratory funding and time, instructors at all levels are faced with the problem of implementing inquiry-based projects. In this article, the authors present an activity that they developed for the 50-minute discussion period…
Using Networks to Visualize and Analyze Process Data for Educational Assessment
ERIC Educational Resources Information Center
Zhu, Mengxiao; Shu, Zhan; von Davier, Alina A.
2016-01-01
New technology enables interactive and adaptive scenario-based tasks (SBTs) to be adopted in educational measurement. At the same time, it is a challenging problem to build appropriate psychometric models to analyze data collected from these tasks, due to the complexity of the data. This study focuses on process data collected from SBTs. We…
The Causes of Late Coming among High School Students in Soshanguve, Pretoria, South Africa
ERIC Educational Resources Information Center
Maile, Simeon; Olowoyo, Mary Motolani
2017-01-01
Late coming to school has become a major problem in many schools, particularly township schools with serious consequences. Current research has demonstrated that many schools in South Africa are performing badly due to inefficient use of the teaching and learning time. In this article, we argue that while major administrative interventions are…
Tribalism as a Foiled Factor of Africa Nation-Building
ERIC Educational Resources Information Center
Okogu, J. O.; Umudjere, S. O.
2016-01-01
This paper tends to examine tribalism as a foiled factor on Africa nation-building and proffers useful tips to salvaging the Africa land from this deadly social problem. Africans in times past had suffered enormous attacks, injuries, losses, deaths, destruction of properties and human skills and ideas due to the presence of tribalistic views in…
Barth, A D; Wood, M R
1998-01-01
To determine whether declining semen quality associated with health problems may be due to certain antibiotic or anti-inflammatory treatments, semen was collected 3 times per week for up to 42 d from 6 normal bulls after treatment with oxytetracycline, tilmicosin, dihydrostreptomycin, or phenylbutazone. No adverse effects on semen quality were observed. PMID:10051958
A Case Study of the Perceptions of teamUSA Athletes Enrolled in an Education Provider University
ERIC Educational Resources Information Center
Nugen, Robert, Sr.
2017-01-01
The problem addressed in this study is the uniquely demanding experience of contemporaneously being a successful Olympic level athlete and a successful university level student. Due to the time and energy required for successful Olympic competition, a dual-career student-athlete may complete his/her sports career without completing her/his…
School Leadership in Times of Urban Reform. Topics in Educational Leadership.
ERIC Educational Resources Information Center
Bizar, Marilyn, Ed.; Barr, Rebecca, Ed.
Many urban schools are undergoing restructuring due to the problems they face and their resistance to traditional solutions. This volume deals with the various ways in which eight case-study schools in Chicago implemented strategies that called for whole-school change and system reform. The book is comprised of 10 chapters: (1)…
ERIC Educational Resources Information Center
Sikolia, David Wafula
2013-01-01
User non-compliance with information security policies in organizations due to negligence or ignorance is reported as a key data security problem for organizations. The violation of the confidentiality, integrity and availability of organizational data has led to losses in millions of dollars for organizations in terms of money and time spent…
Slow Mapping: Color Word Learning as a Gradual Inductive Process
ERIC Educational Resources Information Center
Wagner, Katie; Dobkins, Karen; Barner, David
2013-01-01
Most current accounts of color word acquisition propose that the delay between children's first production of color words and adult-like understanding is due to problems abstracting color as a domain of meaning. Here we present evidence against this hypothesis, and show that, from the time children produce color words in a labeling task they use…
Non-Markovian dynamics of a qubit due to single-photon scattering in a waveguide
NASA Astrophysics Data System (ADS)
Fang, Yao-Lung L.; Ciccarello, Francesco; Baranger, Harold U.
2018-04-01
We investigate the open dynamics of a qubit due to scattering of a single photon in an infinite or semi-infinite waveguide. Through an exact solution of the time-dependent multi-photon scattering problem, we find the qubit's dynamical map. Tools of open quantum systems theory allow us then to show the general features of this map, find the corresponding non-Lindbladian master equation, and assess in a rigorous way its non-Markovian nature. The qubit dynamics has distinctive features that, in particular, do not occur in emission processes. Two fundamental sources of non-Markovianity are present: the finite width of the photon wavepacket and the time delay for propagation between the qubit and the end of the semi-infinite waveguide.
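For orientation, a time-local master equation of the kind referred to can be written in the canonical form below, where rates gamma_k(t) that become negative over some interval are a standard signature of non-Markovian dynamics; this generic form is stated only for context and is not the specific equation derived in the paper.

```latex
\frac{d\rho(t)}{dt} = -\,i\,[H(t),\rho(t)]
  + \sum_k \gamma_k(t)\left( L_k(t)\,\rho(t)\,L_k^{\dagger}(t)
  - \tfrac{1}{2}\left\{ L_k^{\dagger}(t) L_k(t),\, \rho(t) \right\} \right)
```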
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate such joint movements as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be efficiently generated. Similarly, robots can use multiple actuators to efficiently generate a one degree of freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to cope with this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed a practical approach due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and we applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while addressing the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.
2011-08-01
This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.
Production loss among employees perceiving work environment problems.
Lohela-Karlsson, Malin; Hagberg, Jan; Bergström, Gunnar
2015-08-01
The overall aim of this explorative study was to investigate the relationship between factors in the psychosocial work environment and work environment-related production loss. Employees at a Swedish university were invited to answer a workplace questionnaire and were selected for this study if they reported having experienced work environment-related problems in the past 7 days (n = 302). A stepwise logistic regression and a modified Poisson regression were used to identify psychosocial work factors associated with work environment-related production loss as well as to identify at what level those factors are associated with production loss. Employees who reported having experienced work environment problems but also fair leadership, good social climate, role clarity and control of decision had significantly lower levels of production loss, whereas employees who reported inequality and high decision demands reported significantly higher levels of production loss. Never or seldom experiencing fair leadership, role clarity, equality, decision demands and good social climate increase the risk of production loss due to work environment problems, compared to those who experience these circumstances frequently, always or most of the time. Several psychosocial work factors are identified as factors associated with a reduced risk of production losses among employees despite the nature of the work environment problem. Knowledge of these factors may be important not only to reduce employee ill-health and the corresponding health-related production loss, but also reduce immediate production loss due to work environment-related problems.
NASA Astrophysics Data System (ADS)
Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.
2016-02-01
In the manufacturing industry, several identical parts can be processed in batches, and setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, the integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on Lagrange relaxation. The effectiveness of the proposed algorithm is assessed by comparing its solutions to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference between the solutions is 0.05%.
A 3/2-Approximation Algorithm for Multiple Depot Multiple Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Xu, Zhou; Rodrigues, Brian
As an important extension of the classical traveling salesman problem (TSP), the multiple depot multiple traveling salesman problem (MDMTSP) is to minimize the total length of a collection of tours for multiple vehicles to serve all the customers, where each vehicle must start or stay at its distinct depot. Due to the gap between the existing best approximation ratios for the TSP and for the MDMTSP in literature, which are 3/2 and 2, respectively, it is an open question whether or not a 3/2-approximation algorithm exists for the MDMTSP. We have partially addressed this question by developing a 3/2-approximation algorithm, which runs in polynomial time when the number of depots is a constant.
A novel multi-target regression framework for time-series prediction of drug efficacy.
Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin
2017-01-18
Excavating from small samples is a challenging pharmacokinetic problem, where statistical methods can be applied. Pharmacokinetic data is special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlation in TCM prescription. Here, a novel method named Multi-target Regression Framework to deal with the problem of efficacy prediction is proposed. We employ the correlation between the values of different time sequences and add predictive targets of previous time as features to predict the value of current time. Several experiments are conducted to test the validity of our method and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task.
A novel multi-target regression framework for time-series prediction of drug efficacy
Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin
2017-01-01
Excavating from small samples is a challenging pharmacokinetic problem, where statistical methods can be applied. Pharmacokinetic data is special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlation in TCM prescription. Here, a novel method named Multi-target Regression Framework to deal with the problem of efficacy prediction is proposed. We employ the correlation between the values of different time sequences and add predictive targets of previous time as features to predict the value of current time. Several experiments are conducted to test the validity of our method and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task. PMID:28098186
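A minimal sketch of the chained idea, appending earlier-time targets to the feature vector used at later time steps, is given below. SVR is used because the abstract reports it as the best performer, but the data layout, function names, and default hyperparameters are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVR

def fit_time_chain(X, Y):
    """X: (n_samples, n_features) prescriptions; Y: (n_samples, n_timesteps)
    efficacy values. Targets of earlier time steps become extra features."""
    models = []
    for t in range(Y.shape[1]):
        feats = np.hstack([X, Y[:, :t]])
        models.append(SVR().fit(feats, Y[:, t]))
    return models

def predict_time_chain(models, X):
    """At test time, predicted earlier targets feed the later models."""
    preds = np.zeros((X.shape[0], len(models)))
    for t, model in enumerate(models):
        feats = np.hstack([X, preds[:, :t]])
        preds[:, t] = model.predict(feats)
    return preds
```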
Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele
2016-12-07
Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
NASA Astrophysics Data System (ADS)
Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele
2016-12-01
Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zisman, M.S.
An investigation of collective effects has been undertaken to assess the possibilities for using the low emittance operating mode of the PEP storage ring as a dedicated source of synchrotron radiation. Beam current limitations associated with longitudinal and transverse instabilities, and the expected emittance growth due to intrabeam scattering have been studied as a function of beam energy. Calculations of the beam lifetime due to Touschek and gas scattering are presented, and the growth times of coupled-bunch instabilities are estimated. In general, the results are encouraging, and no fundamental problems have been uncovered. It appears that beam currents up to about 10 mA per bunch should be achievable, and that the emittance growth is not a severe problem at an energy of about 8 GeV. A feedback system to deal with coupled-bunch instabilities is likely to be required. 7 refs., 13 figs.
Rapid Frequency Chirps of TAE mode due to Finite Orbit Energetic Particles
NASA Astrophysics Data System (ADS)
Berk, Herb; Wang, Ge
2013-10-01
The tip model for the TAE mode in the large aspect ratio limit, conceived by Rosenbluth et al. in the frequency domain, together with an interaction term in the frequency domain based on a map model, has been extended into the time domain. We present the formal basis for the model, starting with the Lagrangian for the particle-wave interaction. We shall discuss the formal nonlinear time domain problem and the procedure needed to obtain solutions in the adiabatic limit.
Binary Trees and Parallel Scheduling Algorithms.
1980-09-01
been processed for p. time units. If a job does not complete by its due time, it is tardy. In a nonpreemptive schedule, job i is scheduled to process...the preemptive schedule obtained by the algorithm of section 2.1.2 also minimizes ΣTi, this problem is easily solved in parallel. When lci is to e...August 1978, pp. 657-661. 14. Horn, W. A., "Some simple scheduling algorithms," Naval Res. Logist. Quart., Vol. 21, pp. 177-185, 1974. 15. Horowitz, E
Correction for spatial averaging in laser speckle contrast analysis
Thompson, Oliver; Andrews, Michael; Hirst, Evan
2011-01-01
Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
Neri, Enrico I
2012-01-01
This article examines the socio-cultural significance of betel nut use among Micronesians, in light of the recent migration of Micronesians to Hawai‘i. The different ways of chewing betel nut are the result of historical changes within Micronesia over time due to Spanish and US colonialism as well as the introduction of tobacco. These divergent ways of chewing may have different risks or impacts on health and it remains to be seen whether or not betel nut will become a significant public health problem in Hawai‘i. PMID:22413101
Multitasking the Davidson algorithm for the large, sparse eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umar, V.M.; Fischer, C.F.
1989-01-01
The authors report how the Davidson algorithm, developed for handling the eigenvalue problem for large and sparse matrices arising in quantum chemistry, was modified for use in atomic structure calculations. To date these calculations have used traditional eigenvalue methods, which limit the range of feasible calculations because of their excessive memory requirements and unsatisfactory performance attributed to time-consuming and costly processing of zero-valued elements. The replacement of a traditional matrix eigenvalue method by the Davidson algorithm reduced these limitations. Significant speedup was found, which varied with the size of the underlying problem and its sparsity. Furthermore, the range of matrix sizes that can be manipulated efficiently was expanded by more than one order of magnitude. On the CRAY X-MP the code was vectorized and the importance of gather/scatter was analyzed. A parallelized version of the algorithm obtained an additional 35% reduction in execution time. Speedup due to vectorization and concurrency was also measured on the Alliant FX/8.
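For orientation, a minimal dense-matrix version of the Davidson iteration for the lowest eigenpair is sketched below, using the classic diagonal (Jacobi-like) preconditioner. It only illustrates the algorithm the paper adapts; it says nothing about the vectorized and parallelized CRAY X-MP or Alliant FX/8 implementations discussed there.

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=50):
    """Davidson iteration for the lowest eigenpair of a real symmetric
    matrix A (dense here for clarity; the method targets large sparse
    matrices where only A @ v products are needed)."""
    n = A.shape[0]
    diag = np.diag(A)
    V = np.zeros((n, max_iter + 1))
    V[np.argmin(diag), 0] = 1.0          # start from the smallest diagonal element
    m = 1
    theta, x = diag.min(), V[:, 0]
    for _ in range(max_iter):
        Vm = V[:, :m]
        H = Vm.T @ A @ Vm                # Rayleigh-Ritz in the current subspace
        evals, evecs = np.linalg.eigh(H)
        theta = evals[0]
        x = Vm @ evecs[:, 0]
        r = A @ x - theta * x            # residual
        if np.linalg.norm(r) < tol:
            break
        denom = diag - theta             # diagonal preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12
        t = r / denom
        t -= Vm @ (Vm.T @ t)             # orthogonalize against the subspace
        norm_t = np.linalg.norm(t)
        if norm_t < 1e-14:
            break
        V[:, m] = t / norm_t             # expand the subspace
        m += 1
    return theta, x
```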
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis such as the neutron eigenvalue calculations, the time consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on a NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence, thus enhancing the warp execution efficiency, the overall simulation is still roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
Space-time adaptive solution of inverse problems with the discrete adjoint method
NASA Astrophysics Data System (ADS)
Alexe, Mihai; Sandu, Adrian
2014-08-01
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics
NASA Astrophysics Data System (ADS)
Guo, Qiang
Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for the aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations on time, particle size and space exhibits serious difficulties because the size dimension ranges from a few nanometer to several micrometer while the spatial dimension is usually described with kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to the spatial aerosol dynamic systems. Wavelet Galerkin method is proposed to solve the aerosol dynamic equations on time and particle size due to the fact that aerosol distribution changes strongly along size direction and the wavelet technique can solve it very efficiently. Daubechies' wavelets are considered in the study due to the fact that they possess useful properties like orthogonality, compact support, exact representation of polynomials to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of adaptive multiresolution technique and the characteristics method. On the aspect of theoretical analysis, the global existence and uniqueness of solutions of continuous time wavelet numerical methods for the nonlinear aerosol dynamics are proved by using Schauder's fixed point theorem and the variational technique. Optimal error estimates are derived for both continuous and discrete time wavelet Galerkin schemes. We further derive reliable and efficient a posteriori error estimate which is based on stable multiresolution wavelet bases and an adaptive space-time algorithm for efficient solution of linear parabolic differential equations. The adaptive space refinement strategies based on the locality of corresponding multiresolution processes are proved to converge. At last, we develop efficient numerical methods by combining the wavelet methods proposed in previous parts and the splitting technique to solve the spatial aerosol dynamic equations. Wavelet methods along the particle size direction and the upstream finite difference method along the spatial direction are alternately used in each time interval. Numerical experiments are taken to show the effectiveness of our developed methods.
Optimal reentry prediction of space objects from LEO using RSM and GA
NASA Astrophysics Data System (ADS)
Mutyalarao, M.; Raj, M. Xavier James
2012-07-01
Accurate estimation of the orbital lifetime (OLT) of decaying near-Earth objects is of considerable importance for predicting the re-entry time of risk objects, for hazard assessment, and for mitigation strategies. Recently, the re-entries of a large number of risk objects, which pose a threat to human life and property, have raised great concern in the space science community worldwide. The evolution of objects in Low Earth Orbit (LEO) is determined by a complex interplay of perturbing forces, mainly atmospheric drag and Earth gravity. These orbits are mostly of low eccentricity (eccentricity < 0.2) and show variations in perigee and apogee altitudes during a revolution due to perturbations. The changes in the perigee and apogee altitudes of these orbits are mainly due to the gravitational perturbations of the Earth and the atmospheric density. Extremely complex force models have become necessary to match present operational requirements and observational techniques. Furthermore, the re-entry time of objects in such orbits is sensitive to the initial conditions. In this paper the problem of predicting re-entry time is treated as an optimal estimation problem. It is known that observations based on two-line elements (TLEs) carry larger errors in eccentricity. Therefore, two parameters, the initial eccentricity and the ballistic coefficient, are chosen for optimal estimation. These two parameters are computed with the response surface method (RSM) using a genetic algorithm (GA) for the selected time zones, based on the roughly linear variation of the response parameter, the mean semi-major axis, during orbit evolution. The error between the observed and predicted mean semi-major axis is minimized by applying an optimization algorithm, namely a genetic algorithm (GA). The basic feature of the present approach is that model and measurement errors are accounted for by adjusting the ballistic coefficient and eccentricity. The methodology is tested with the recently re-entered ROSAT and PHOBOS-GRUNT satellites. The study reveals good agreement with the actual re-entry times of these objects, and the absolute percentage error in the predicted re-entry time is very small for both objects. Keywords: low eccentricity, response surface method, genetic algorithm, apogee altitude, ballistic coefficient
Bal'de, M S; Konstantinov, O K; Kamara, S K
2006-01-01
Intoxication of the population due to venomous snake bites (100-150 intoxications per 100,000, with an 18% mortality rate) is a serious public health problem in the Republic of Guinea. Guinea's fauna of venomous snakes is diverse and numbers 20 species that are dangerous to human beings. The representatives of the family Elapidae (cobras and mambas), whose venom is highly toxic (LD50 5-12 mg), are responsible for the bulk (59.6%) of the bites. There has recently been an increase in the number of deaths from venomous snake bites, with figures as high as 60% of the patients consulting a doctor reported in one of the prefectures. At the same time, the availability of antisnake serum in the country is critical due to the minute quantities available and to its prohibitively high price. Taking into account the great demand for the serum in Guinea, as everywhere in West Africa (thousands of doses every year), its manufacture may be profitable for potential investors and partners of the Pasteur Institute of Guinea.
NASA Astrophysics Data System (ADS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-05-01
Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique based on exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
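The code's Lie-algebraic machinery is not reproduced here; the sketch below only illustrates the exponential-operator idea on a toy linear system, comparing a Strang splitting of two non-commuting exponential operators with the exact propagator over many steps. The matrices and the step size are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import expm

# Toy generator split into two non-commuting parts A and B, standing in for,
# e.g., linear transport and a wake-field kick.  Values are arbitrary.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 0.0], [-0.3, 0.0]])
dt = 0.05

exact_step = expm((A + B) * dt)                                   # exact propagator
strang_step = expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2)  # split-operator

x_exact = np.array([1.0, 0.0])
x_split = x_exact.copy()
for _ in range(200):
    x_exact = exact_step @ x_exact
    x_split = strang_step @ x_split

print("difference after 200 steps:", np.linalg.norm(x_exact - x_split))
```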
Gravitation and cosmology with York time
NASA Astrophysics Data System (ADS)
Roser, Philipp
Despite decades of inquiry an adequate theory of 'quantum gravity' has remained elusive, in part due to the absence of data that would guide the search and in part due to technical difficulties, prominent among them the 'problem of time'. The problem is a result of the attempt to quantise a classical theory with temporal reparameterisation and refoliation invariance such as general relativity. One way forward is therefore the breaking of this invariance via the identification of a preferred foliation of spacetime into parameterised spatial slices. In this thesis we argue that a foliation into slices of constant extrinsic curvature, parameterised by 'York time', is a viable contender. We argue that the role of York time in the initial-value problem of general relativity, as well as a number of the parameter's other properties, make it the most promising candidate for a physically preferred notion of time. A Hamiltonian theory describing gravity in the York-time picture may be derived from general relativity by 'Hamiltonian reduction', a procedure that eliminates certain degrees of freedom -- specifically the local scale and its rate of change -- in favour of an explicit time parameter and a functional expression for the associated Hamiltonian. In full generality this procedure is impossible to carry out, since the equation that determines the Hamiltonian cannot be solved using known methods. However, it is possible to derive explicit Hamiltonian functions for cosmological scenarios (where matter and geometry are treated as spatially homogeneous). Using a perturbative expansion of the unsolvable equation enables us to derive a quantisable Hamiltonian for cosmological perturbations on such a homogeneous background. We analyse the (classical) theories derived in this manner and look at the York-time description of a number of cosmological processes. We then proceed to apply the canonical quantisation procedure to these systems and analyse the resulting quantum theories. We discuss a number of conceptual and technical points, such as the notion of volume eigenfunctions and the absence of a momentum representation as a result of the non-canonical commutator structure. While not problematic in a technical sense, the conceptual problems with canonical quantisation are particularly apparent when the procedure is applied in cosmological contexts. In the final part of this thesis we develop a new quantisation method based on configuration-space trajectories and a dynamical configuration-space Weyl geometry. There is no wave function in this type of quantum theory and so many of the conceptual issues do not arise. We outline the application of this quantisation procedure to gravity and discuss some technical points; the actual technical developments are however left for future work. We conclude by reviewing how the York-time Hamiltonian-reduced theory deals with the problem of time. We place it in the wider context of the search for a theory of quantum gravity and briefly discuss the future of physics if and when such a theory is found.
Indoor Air Problems and Hoarseness in Children.
Kallvik, Emma; Putus, Tuula; Simberg, Susanna
2016-01-01
A well-functioning voice is becoming increasingly important because voice-demanding professions are increasing. The largest proportion of voice disorders is caused by factors in the environment. Moisture damage is common and can initiate microbial growth and/or diffusion of chemicals from building materials. Indoor air problems due to moisture damage are associated with a number of health symptoms, for example, rhinitis, cough, and asthma symptoms. The purpose of this study was to investigate if children attending a day care center, preschool, or school with indoor air problems due to moisture damage were hoarse more often than the children in a control group. Information was collected through electronic and paper questionnaires from the parents of 6- to 9-year-old children (n = 1857) attending 57 different day care centers, preschools, or schools with or without indoor air problems due to moisture damage. The results showed a significant correlation between the degree of indoor air problem due to moisture damage and the frequency of hoarseness. Significant predictors for the child being hoarse every week or more often were dry cough, phlegm cough, and nasal congestion. The results indicate that these symptoms and exposure to indoor air problems due to moisture damage should be included in voice anamnesis. Furthermore, efforts should be made to remediate indoor air problems due to moisture damage and to treat health symptoms. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Takayama, T.; Iwasaki, A.
2016-06-01
Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance for continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by the small-sample-size problem, which commonly manifests as overfitting when high-dimensional data are used and the number of training samples is smaller than the dimensionality of the samples, due to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity addresses the small-sample-size problem through dimensionality reduction, while the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, the fusion of spectral and spatial information derived from a texture index increased the prediction accuracy to an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
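The paper's exact regression setup and data are not reproduced; the sketch below only shows what a fused lasso objective looks like in practice, using CVXPY: an L1 penalty on the coefficients encourages band selection (sparsity), and an L1 penalty on successive coefficient differences encourages neighbouring bands to share coefficients (grouping). The synthetic data, band count and penalty weights are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for hyperspectral predictors (n plots x p bands) and
# above-ground biomass; a contiguous group of bands carries the signal.
n, p = 40, 150
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[60:70] = 0.8
y = X @ true_beta + rng.normal(0.0, 0.5, n)

beta = cp.Variable(p)
lam_sparse, lam_fuse = 0.5, 2.0            # illustrative penalty weights
objective = cp.Minimize(
    cp.sum_squares(X @ beta - y)
    + lam_sparse * cp.norm1(beta)          # sparsity: band selection
    + lam_fuse * cp.norm1(cp.diff(beta))   # fusion: neighbouring bands grouped
)
cp.Problem(objective).solve()

selected = np.flatnonzero(np.abs(beta.value) > 1e-3)
if selected.size:
    print(f"{selected.size} bands selected, spanning indices "
          f"{selected.min()}-{selected.max()}")
else:
    print("no bands selected at these penalty weights")
```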
Overlooked Transport Participants - Mentally Impaired but Still Mobile
NASA Astrophysics Data System (ADS)
Vlk, Tamara; Wanjek, Monika; Berkowitsch, Claudia; Hauger, Georg
2017-10-01
Providing an inclusive transport system is a global ambition. Whereas the mobility needs and mobility barriers of people with physical impairments have already been studied frequently, people with mental impairments (due to e.g. anxiety disorders, obsessive-compulsive disorders, dementia or other degenerative diseases) are often overlooked. Numerous studies suggest that the number of people with mental impairments will increase significantly due to demographic change, as is also shown by the prevalence of mental illness, and even the data collected do not necessarily give the full picture of the actual situation. Thus, the mobility needs and mobility problems of people with mental impairments will gain dramatically in importance. Participating in the transport system is a basic need that furthermore requires the ability to adopt different roles (e.g. driver, pedestrian). Exploratory studies by the authors have shown what kinds of problems people with mental impairments face while participating in the transport system or interacting in public space. These studies thus represent the first step needed to consider the specific needs of people with mental impairments in future planning. The identified problems of people suffering from mental impairments are varied. A distinction can be made between problems triggered by structural conditions (e.g. absence of emergency buttons, spacious stations), organisational conditions (e.g. absence of security staff, lacking information on transit timetables) or social conditions (e.g. crowded places or vehicles, stigmatisation). This paper presents an overall view of the specific requirements of people with mental impairments and suggests possible solutions for planning and designing an inclusive transport system.
Stoolmiller, M; Eddy, J M; Reid, J B
2000-04-01
This study examined theoretical, methodological, and statistical problems involved in evaluating the outcome of aggression on the playground for a universal preventive intervention for conduct disorder. Moderately aggressive children were hypothesized to be the most likely to benefit. Aggression was measured on the playground using observers blind to the group status of the children. Behavior was micro-coded in real time to minimize potential expectancy biases. The effectiveness of the intervention was strongly related to initial levels of aggressiveness: the most aggressive children improved the most. Models that incorporated corrections for low reliability (the ratio of variance due to true, time-stable individual differences to total variance) and censoring (a floor effect in the rate data due to short periods of observation) obtained effect sizes 5 times larger than models without such corrections for children who were initially 2 SDs above the mean on aggressiveness.
Time-optimal Aircraft Pursuit-evasion with a Weapon Envelope Constraint
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1990-01-01
The optimal pursuit-evasion problem between two aircraft, including a realistic weapon envelope, is analyzed using differential game theory. Sixth-order nonlinear point-mass vehicle models are employed, and an arbitrary weapon envelope geometry is allowed. The performance index is a linear combination of the flight time and the square of the vehicle acceleration. A closed-form solution to this high-order differential game is then obtained using feedback linearization. The solution is in the form of a feedback guidance law together with a quartic polynomial for the time-to-go. Due to its modest computational requirements, this nonlinear guidance law is useful for on-board real-time implementation.
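The guidance law and the actual time-to-go coefficients are not reproduced here; the sketch below only shows how a quartic time-to-go polynomial of the general form mentioned in the abstract could be solved numerically at each guidance update, keeping the smallest positive real root. The coefficients are placeholders.

```python
import numpy as np

def time_to_go(coeffs):
    """Smallest positive real root of the quartic c4*t**4 + ... + c0 = 0.

    The actual coefficient expressions follow from the feedback-linearized
    pursuit-evasion geometry and are not reproduced here.
    """
    roots = np.roots(coeffs)
    real_pos = roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0)].real
    if real_pos.size == 0:
        raise ValueError("no positive real time-to-go root")
    return real_pos.min()

# Placeholder coefficients (c4, c3, c2, c1, c0) for one guidance update.
print("t_go =", time_to_go([1.0, -3.0, 2.5, -4.0, 1.2]))
```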
Fast and reliable symplectic integration for planetary system N-body problems
NASA Astrophysics Data System (ADS)
Hernandez, David M.
2016-06-01
We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
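HB15, the Wisdom-Hernández Kepler solver and the MERCURY integrators are not reproduced here; the sketch below shows only a generic second-order symplectic kick-drift-kick (leapfrog) step for a gravitational N-body problem, illustrating the kind of map whose long-term error behaviour motivates comparisons like the one in the paper. Units, masses and the step size are illustrative.

```python
import numpy as np

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass):
    """Pairwise Newtonian accelerations (no softening)."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]
        r3 = np.sum(d * d, axis=1) ** 1.5
        r3[i] = np.inf                        # skip self-interaction
        acc[i] = G * np.sum(mass[:, None] * d / r3[:, None], axis=0)
    return acc

def kick_drift_kick(pos, vel, mass, dt):
    """One second-order symplectic (leapfrog) step."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    return pos, vel

# Star of unit mass with a small planet on a circular orbit (code units).
mass = np.array([1.0, 1.0e-3])
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]])
for _ in range(10000):
    pos, vel = kick_drift_kick(pos, vel, mass, 0.01)
print("star-planet separation after 10000 steps:", np.linalg.norm(pos[1] - pos[0]))
```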
ERIC Educational Resources Information Center
Huang, Enmou
2017-01-01
This article reports the findings of a sociolinguistic ethnographic inquiry into the constructions of internal rural-to-urban migrant students by one urban public school in China, and how these students positioned themselves to these constructions against the background of the school's neoliberal transformation. This inquiry finds that, due to the…
Internet Use and Video Gaming Predict Problem Behavior in Early Adolescence
ERIC Educational Resources Information Center
Holtz, Peter; Appel, Markus
2011-01-01
In early adolescence, the time spent using the Internet and video games is higher than in any other present-day age group. Due to age-inappropriate web and gaming content, the impact of new media use on teenagers is a matter of public and scientific concern. Based on current theories on inappropriate media use, a study was conducted that comprised…
Proceedings of the Naval Training Device Center and Industry Conference (2nd, november 28-30, 1967).
ERIC Educational Resources Information Center
Naval Training Device Center, Orlando, FL.
This report consists of 40 conference papers actually presented, and four others submitted but not presented due to lack of time. It concentrates on the technical problems confronting organizations having a prime interest in simulation for training, and stresses the cooperation of the military educator and the technical community to achieve a…
2011-01-01
Background The association between chronic respiratory diseases and work disability has been demonstrated a number of times over the past 20 years, but still little is known about work disability in occupational cohorts of workers exposed to respiratory irritants. This study investigated job or task changes due to respiratory problems as an indicator of work disability in pulp mill workers occupationally exposed to irritants. Methods Data about respiratory symptoms and disease diagnoses, socio-demographic variables, occupational exposures, gassing episodes, and reported work changes due to respiratory problems were collected using a questionnaire answered by 3226 pulp mill workers. Information about work history and departments was obtained from personnel files. Incidence and hazard ratios for respiratory work disability were calculated with 95% confidence intervals (CI). Results The incidence of respiratory work disability among these pulp mill workers was 1.6/1000 person-years. The hazard ratios for respiratory work disability were increased for workers reporting gassings (HR 5.3, 95% CI 2.7-10.5) and for those reporting physician-diagnosed asthma, chronic bronchitis, and chronic rhinitis, when analyzed in the same model. Conclusions This cohort study of pulp mill workers found that irritant peak exposure during gassing episodes was a strong predictor of changing work due to respiratory problems, even after adjustment for asthma, chronic bronchitis, and chronic rhinitis. PMID:21896193
NASA Astrophysics Data System (ADS)
Vincent, Lionel; Kanso, Eva
2017-11-01
Diving induces large pressures during water entry, accompanied by the creation of a cavity behind the diver and a water splash ejected from the free water surface. To minimize impact forces, divers streamline their shape at impact. Here, we investigate the impact forces and splash evolution of diving wedges as a function of the wedge opening angle. A gradual transition from impactful to smooth entry is observed as the wedge angle decreases. After submersion, diving wedges experience significantly smaller drag forces (two-fold smaller) than immersed wedges. We characterize the shapes of the cavity and splash created by the wedge and find that they are independent of the entry velocity at short times, but that the splash exhibits distinct variations in shape at later times. Combining an experimental approach with a discrete fluid particle model, we show that the splash shape is governed by a destabilizing Venturi-suction force due to air rushing between the splash and the water surface and a stabilizing force due to surface tension. These findings may have implications for a wide range of water entry problems in engineering and biology, including naval engineering, disease spreading and platform diving. This work was funded by the National Science Foundation.
Zhang, Shuo; Uecker, Martin; Voit, Dirk; Merboldt, Klaus-Dietmar; Frahm, Jens
2010-07-08
Functional assessments of the heart by dynamic cardiovascular magnetic resonance (CMR) commonly rely on (i) electrocardiographic (ECG) gating yielding pseudo real-time cine representations, (ii) balanced gradient-echo sequences referred to as steady-state free precession (SSFP), and (iii) breath holding or respiratory gating. Problems may therefore be due to the need for a robust ECG signal, the occurrence of arrhythmia and beat to beat variations, technical instabilities (e.g., SSFP "banding" artefacts), and limited patient compliance and comfort. Here we describe a new approach providing true real-time CMR with image acquisition times as short as 20 to 30 ms or rates of 30 to 50 frames per second. The approach relies on a previously developed real-time MR method, which combines a strongly undersampled radial FLASH CMR sequence with image reconstruction by regularized nonlinear inversion. While iterative reconstructions are currently performed offline due to limited computer speed, online monitoring during scanning is accomplished using gridding reconstructions with a sliding window at the same frame rate but with lower image quality. Scans of healthy young subjects were performed at 3 T without ECG gating and during free breathing. The resulting images yield T1 contrast (depending on flip angle) with an opposed-phase or in-phase condition for water and fat signals (depending on echo time). They completely avoid (i) susceptibility-induced artefacts due to the very short echo times, (ii) radiofrequency power limitations due to excitations with flip angles of 10 degrees or less, and (iii) the risk of peripheral nerve stimulation due to the use of normal gradient switching modes. For a section thickness of 8 mm, real-time images offer a spatial resolution and total acquisition time of 1.5 mm at 30 ms and 2.0 mm at 22 ms, respectively. Though awaiting thorough clinical evaluation, this work describes a robust and flexible acquisition and reconstruction technique for real-time CMR at the ultimate limit of this technology.
A Computer-Assisted Personalized Approach in an Undergraduate Plant Physiology Class
Artus, Nancy N.; Nadler, Kenneth D.
1999-01-01
We used Computer-Assisted Personalized Approach (CAPA), a networked teaching and learning tool that generates computer individualized homework problem sets, in our large-enrollment introductory plant physiology course. We saw significant improvement in student examination performance with regular homework assignments, with CAPA being an effective and efficient substitute for hand-graded homework. Using CAPA, each student received a printed set of similar but individualized problems of a conceptual (qualitative) and/or quantitative nature with quality graphics. Because each set of problems is unique, students were encouraged to work together to clarify concepts but were required to do their own work for credit. Students could enter answers multiple times without penalty, and they were able to obtain immediate feedback and hints until the due date. These features increased student time on task, allowing higher course standards and student achievement in a diverse student population. CAPA handles routine tasks such as grading, recording, summarizing, and posting grades. In anonymous surveys, students indicated an overwhelming preference for homework in CAPA format, citing several features such as immediate feedback, multiple tries, and on-line accessibility as reasons for their preference. We wrote and used more than 170 problems on 17 topics in introductory plant physiology, cataloging them in a computer library for general access. Representative problems are compared and discussed. PMID:10198076
Identification of the structure parameters using short-time non-stationary stochastic excitation
NASA Astrophysics Data System (ADS)
Jarczewska, Kamila; Koszela, Piotr; Śniady, PaweŁ; Korzec, Aleksandra
2011-07-01
In this paper, we propose an approach to identifying the flexural stiffness or eigenfrequencies of a linear structure using a non-stationary stochastic excitation process. The proposed approach belongs to the class of time-domain input-output methods. It is based on transforming the dynamical problem into a static one by integrating the input and the output signals. The output signal is the structural response, i.e. the structural displacements due to a short-time, irregular load of random type. Systems with single and multiple degrees of freedom, as well as continuous systems, are considered.
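The paper's formulation for multi-degree-of-freedom and continuous systems is not reproduced; the sketch below only illustrates the core idea on a single-degree-of-freedom oscillator: integrating the measured excitation and response twice in time (with zero initial conditions) turns the equation of motion into an algebraic relation, m*x + c*int(x) + k*int(int(x)) = int(int(f)), whose coefficients can then be estimated by ordinary least squares. The system parameters, load and integration scheme are invented for illustration.

```python
import numpy as np

# True single-degree-of-freedom parameters to be recovered (illustrative).
m_true, c_true, k_true = 2.0, 0.3, 50.0

dt = 1.0e-3
t = np.arange(0.0, 2.0, dt)
rng = np.random.default_rng(2)
f = np.where(t < 0.2, rng.normal(0.0, 1.0, t.size), 0.0)   # short irregular load

# Simulate the displacement response with a semi-implicit Euler scheme.
x = np.zeros_like(t)
v = 0.0
for i in range(1, t.size):
    a = (f[i - 1] - c_true * v - k_true * x[i - 1]) / m_true
    v += a * dt
    x[i] = x[i - 1] + v * dt

def cumint(y):
    """Cumulative trapezoidal integral of y over the time grid."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))

# Double integration of the equation of motion (zero initial conditions):
#   m*x + c*int(x) + k*int(int(x)) = int(int(f))   -- a static relation.
A = np.column_stack([x, cumint(x), cumint(cumint(x))])
b = cumint(cumint(f))
m_est, c_est, k_est = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"m = {m_est:.3f}, c = {c_est:.3f}, k = {k_est:.3f}")
```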
Shiri, Rahman; Kausto, Johanna; Martimo, Kari-Pekka; Kaila-Kangas, Leena; Takala, Esa-Pekka; Viikari-Juntura, Eira
2013-01-01
Previously we reported that early part-time sick leave enhances return to work (RTW) among employees with musculoskeletal disorders (MSD). This paper assesses the health-related effects of this intervention. Patients aged 18-60 years who were unable to perform their regular work due to MSD were randomized to part- or full-time sick leave groups. In the former, workload was reduced by halving working time. Using validated questionnaires, we assessed pain intensity and interference with work and sleep, region-specific disability due to MSD, self-rated general health, health-related quality of life (measured via EuroQol), productivity loss, depression, and sleep disturbance at baseline, 1, 3, 8, 12, and 52 weeks. We analyzed the repeated measures data (171-356 observations) with the generalized estimating equation approach. The intervention (part-time sick leave) and control (full-time sick leave) groups did not differ with regard to pain intensity, pain interference with work and sleep, region-specific disability, productivity loss, depression, or sleep disturbance. The intervention group reported better self-rated general health (adjusted P=0.07) and health-related quality of life (adjusted P=0.02) than the control group. In subgroup analyses, the intervention was more effective among the patients whose current problem began occurring <6 weeks before baseline and those with ≤30% productivity loss at baseline. Our findings showed that part-time sick leave did not exacerbate pain-related symptoms and functional disability, but improved self-rated general health and health-related quality of life in the early stage of work disability due to MSD.
The determination of the pulse pile-up reject (PUR) counting for X and gamma ray spectrometry
NASA Astrophysics Data System (ADS)
Karabıdak, S. M.; Kaya, S.
2017-02-01
The collection of the charged particles produced by the incident radiation in a detector requires a finite time interval. If this time interval is not sufficiently short compared with the peaking time of the amplifier, a loss in the recovered signal amplitude occurs. Another major constraint on the throughput of modern x- or gamma-ray spectrometers is the time required for the subsequent pulse processing by the electronics. These two limitations cause counting losses resulting from dead time and pile-up. Pulse pile-up is a common problem in x- and gamma-ray radiation detection systems, and piled-up pulses can cause significant errors in spectroscopic analysis. Therefore, inhibition of these pulses is a vital step. One way to reduce errors due to pulse pile-up is a pile-up rejection (PUR) circuit. Such a circuit rejects some of the piled-up pulses and therefore leads to counting losses. Determining these counting losses is an important problem. In this work, a new method is suggested for the determination of the pulse pile-up reject counting.
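The paper's new method is not described in the abstract and is not reproduced here; the sketch below only shows the textbook starting point for quantifying such losses: for a Poisson event stream, the probability that a pulse is free of another event within an inspection interval tau is exp(-n*tau), so the counts removed by an ideal PUR circuit can be estimated from the rate and the inspection time. The rate, inspection time and live time are illustrative values.

```python
import numpy as np

def pur_loss_fraction(rate, tau):
    """Fraction of events removed by an ideal pile-up rejector.

    For a Poisson stream of rate n, a pulse is accepted only if no other event
    falls within the inspection interval tau, i.e. with probability
    exp(-n * tau) (classic paralyzable-model estimate).
    """
    return 1.0 - np.exp(-rate * tau)

rate = 5.0e4          # incoming counts per second (illustrative)
tau = 2.0e-6          # pile-up inspection time in seconds (illustrative)
live_time = 100.0     # seconds

recorded = rate * live_time * np.exp(-rate * tau)
rejected = rate * live_time - recorded
print(f"loss fraction: {pur_loss_fraction(rate, tau):.3%}, "
      f"counts removed by PUR: {rejected:.0f}")
```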
Jang, Hae-Won; Ih, Jeong-Guon
2013-03-01
The time domain boundary element method (TBEM) for calculating the exterior sound field using the Kirchhoff integral suffers from non-uniqueness and exponential divergence. In this work, a method to stabilize the TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low-order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers from exponential divergence due to the remaining unstable high-order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example for a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.
Gaussian-input Gaussian mixture model for representing density maps and atomic models.
Kawabata, Takeshi
2018-07-01
A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters and accepts a set of weighted 3D points corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus it could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem, as it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of the GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not stop suddenly due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
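The published algorithm and its 3D anisotropic implementation are not reproduced here; the sketch below is a schematic 1D illustration, under assumed data and initialization, of the key modification: each input carries its own variance, which enters both the responsibilities (through the convolved density) and the M-step variance update, so the fitted mixture can never become narrower than the input and the zero-variance singularity is avoided.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inputs: centres (e.g. atom positions, 1D here) and their own variances
# (e.g. from atomic radii or voxel widths).  Values are illustrative.
centres = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])
input_var = np.full_like(centres, 0.3 ** 2)

K = 2
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: responsibilities from the convolved density N(x | mu, var + s_i^2).
    tot_var = var[None, :] + input_var[:, None]                      # (N, K)
    logp = (-0.5 * ((centres[:, None] - mu[None, :]) ** 2 / tot_var
                    + np.log(2.0 * np.pi * tot_var)) + np.log(w[None, :]))
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)

    # M-step: the input variance is added into the component variance update,
    # so the mixture never becomes narrower than the input (no singularities).
    Nk = r.sum(axis=0)
    mu = (r * centres[:, None]).sum(axis=0) / Nk
    var = (r * ((centres[:, None] - mu[None, :]) ** 2
                + input_var[:, None])).sum(axis=0) / Nk
    w = Nk / centres.size

print("means:", np.round(mu, 3), " std devs:", np.round(np.sqrt(var), 3))
```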
An accurate, fast, and scalable solver for high-frequency wave propagation
NASA Astrophysics Data System (ADS)
Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.
2017-12-01
In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems arise as the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low-order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method has relied on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which limits straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand sides.
Yussuf, A D; Issa, B A; Ajiboye, P O; Buhari, O I
2013-05-01
This study was prompted by heightened concerns about the stress inherent in medical education, evident from incessant requests for suspension of studies due to psychological problems. The objectives of the study were to: (i) survey the students for possible psychological problems at admission, and follow them up until exit for possible changes in morbidity, and (ii) ascertain possible risk factors and coping strategies. This is a preliminary two-stage cross-sectional report, part of a longitudinal survey. It involves first-year medical students of the College of Health Sciences of the University of Ilorin between March and April 2011. The questionnaires used included a sociodemographic questionnaire, sources of stress, the General Health Questionnaire-12 (GHQ-12), the Maslach Burnout Inventory (MBI), and the Brief COPE. Data were analysed using SPSS version 15 at the 5% significance level. Chi-square tests, frequency distributions, Pearson's correlations, odds ratios, and confidence intervals were calculated to determine the levels of risk. Seventy-nine students returned completed questionnaires; 12 (15.2%) were GHQ-12 cases (i.e., scored ≥ 3). Students who had morbidity were at 9 times the risk of being stressed as a consequence of 'competing with their peers' and at 4 times the risk due to 'inadequate learning materials'. Morbidity was significantly more likely to engender the use of 'religion' as a coping strategy, was 4 times less likely to engender the use of 'positive reframing', and showed a trend towards the use of 'self-blame'. Aside from psychosocial/personal issues in this cohort, academic demand was an additional source of psychological problems, causing those who had morbidity to utilize 'religion' and 'positive reframing' to cope. There is therefore an apparent need to incorporate the principle of mental health promotion in medical education.
Abnormal posturing - decorticate posture; Traumatic brain injury - decorticate posture ... Brain problem due to drugs, poisoning, or infection; traumatic brain injury; brain problem due to liver failure; increased pressure ...
NASA Technical Reports Server (NTRS)
Yildiz, Yidiray; Kolmanovsky, Ilya V.; Acosta, Diana
2011-01-01
This paper proposes a control allocation system that can detect and compensate for the phase shift between the desired and the actual total control effort due to rate limiting of the actuators. Phase shifting is an important problem in control system applications, since it effectively introduces a time delay which may destabilize the closed-loop dynamics. A relevant example comes from flight control, where aggressive pilot commands, a high gain of the flight control system, or some anomaly in the system may cause actuator rate limiting and introduce an effective time delay. This time delay can instigate Pilot-Induced Oscillations (PIO), an abnormal coupling between the pilot and the aircraft resulting in unintentional and undesired oscillations. The proposed control allocation system reduces the effective time delay by first detecting the phase shift and then minimizing it using constrained optimization techniques. Flight control simulation results for an unstable aircraft with inertial cross coupling are reported, which demonstrate phase shift minimization and recovery from a PIO event.
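The paper's detection and constrained-optimization scheme is not reproduced; the sketch below only illustrates the underlying phenomenon and a simple delay estimate: a sinusoidal command is passed through a software rate limiter, and the effective time delay between commanded and achieved signals is estimated from the peak of their cross-correlation. The command frequency, rate limit and sample time are illustrative.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
cmd = np.sin(2.0 * np.pi * 0.5 * t)               # aggressive 0.5 Hz command

def rate_limit(u, max_rate, dt):
    """Actuator model: the output slews towards the input at most max_rate/s."""
    y = np.zeros_like(u)
    for i in range(1, u.size):
        step = np.clip(u[i] - y[i - 1], -max_rate * dt, max_rate * dt)
        y[i] = y[i - 1] + step
    return y

act = rate_limit(cmd, max_rate=1.5, dt=dt)        # rate-limited actuator response

# Effective time delay = lag maximizing the cross-correlation of the signals.
lags = np.arange(-t.size + 1, t.size)
xcorr = np.correlate(act - act.mean(), cmd - cmd.mean(), mode="full")
delay = lags[np.argmax(xcorr)] * dt
print(f"estimated effective time delay: {1000.0 * delay:.0f} ms")
```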
Spatiotemporal Evolution of Hanle and Zeeman Synthetic Polarization in a Chromospheric Spectral Line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlin, E. S.; Bianda, M., E-mail: escarlin@irsol.es
Due to the quick evolution of the solar chromosphere, its magnetic field cannot be inferred reliably without accounting for the temporal variations of its polarized light. This has been broadly overlooked in the modeling and interpretation of the polarization, due to technical problems (e.g., lack of temporal resolution or of time-dependent MHD solar models) and/or because many polarization measurements can apparently be explained without dynamics. Here, we show that the temporal evolution is critical for explaining the spectral-line scattering polarization because of its sensitivity to rapidly varying physical quantities and the possibility of signal cancellations and attenuation during extended time integration. For studying the combined effect of time-varying magnetic fields and kinematics, we solved the 1.5D non-LTE problem of the second kind in time-dependent 3D R-MHD solar models and synthesized the Hanle and Zeeman polarization in forward scattering for the chromospheric λ 4227 line. We find that the quiet-Sun polarization amplitudes depend on the periodicity and spectral coherence of the signal enhancements produced by kinematics, but that substantially larger linear polarization signals should exist all over the solar disk for short integration times. The spectral morphology of the polarization is discussed as a combination of Hanle, Zeeman, partial redistribution and dynamic effects. We give physical references for observations by degrading and characterizing our slit time series in different spatiotemporal resolutions. The implications of our results for the interpretation of the second solar spectrum and for the investigation of the solar atmospheric heating are discussed.
NASA Astrophysics Data System (ADS)
Shahani, Amir Reza; Sharifi Torki, Hamid
2018-01-01
The thermoelasticity problem in a thick-walled orthotropic hollow cylinder is solved analytically using the finite Hankel transform and the Laplace transform. Time-dependent thermal and mechanical boundary conditions are applied on the inner and the outer surfaces of the cylinder. For solving the energy equation, the temperature itself is prescribed as the boundary condition on both the inner and the outer surfaces of the orthotropic cylinder. Two different cases are considered for solving the equation of motion: a traction-traction problem (tractions are prescribed on both the inner and the outer surfaces) and a traction-displacement problem (traction is prescribed on the inner surface and displacement is prescribed on the outer surface of the hollow orthotropic cylinder). Because the uncoupled theory is considered, the dynamical structural problem is solved after the temperature distribution has been obtained, and closed-form relations are derived for the radial displacement and the radial and hoop stresses. As a case study, an exponentially decaying temperature with respect to time is prescribed on the inner surface of the cylinder and the temperature of the outer surface is taken to be zero. Since the dynamical problem is solved, stress wave propagation and its reflections are observed when the results are plotted for both cases.
Dalen, Monica; Theie, Steinar
2012-01-01
Internationally adopted children are often delayed in their development and demonstrate more behaviour problems than nonadopted children due to adverse preadoption circumstances. This is especially true for children adopted from Eastern European countries. Few studies have focused on children adopted from non-European countries. This paper presents results from an ongoing longitudinal study of 119 internationally adopted children from non-European countries during their first two years in Norway. Several scales measuring different aspects of the children's development are included in the study: communication and gross motor development, temperamental characteristics, and behaviour problems. The results show that internationally adopted children are delayed in their general development when they first arrive in their adoptive families. After two years the children have made significant progress in development. However, they still lag behind in communication and motor skills compared to non-adopted children. The temperamental characteristics seem very stable from time of adoption until two years after adoption. The children demonstrate a low frequency of behaviour problems. However, the behaviour problems have changed during the two years. At time of adoption they show more nonphysically challenging behaviour while after two years their physically challenging behaviour has increased.
Fu, Zhongtao; Yang, Wenyu; Yang, Zhen
2013-08-01
In this paper, we present an efficient method based on geometric algebra for computing the solutions to the inverse kinematics problem (IKP) of 6R robot manipulators with an offset wrist. Because the inverse kinematics problem is difficult to solve when the kinematics equations of these manipulators are complex, highly nonlinear, coupled, and admit multiple solutions, we apply the theory of geometric algebra to the kinematic modeling of 6R robot manipulators in a simple way and generate closed-form kinematics equations, reformulate the problem as a generalized eigenvalue problem using a symbolic elimination technique, and then obtain 16 solutions. Finally, a spray painting robot, which belongs to this class of manipulators, is used as an implementation example to demonstrate the effectiveness and real-time performance of the method. The experimental results show that this method has a large advantage over classical methods in terms of geometric intuition, computation and real-time performance, can be directly extended to all serial robot manipulators, and can be completely automated, which provides a new tool for the analysis and application of general robot manipulators.
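The geometric-algebra elimination itself is not shown; the sketch below only illustrates the final numerical step the abstract describes: once the inverse kinematics has been reduced to a generalized eigenvalue problem A x = λ B x, the (up to 16) candidate solutions correspond to its eigenvalues, which can be extracted with SciPy. The matrices here are random placeholders, not an actual manipulator pencil.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(4)

# Placeholder 16x16 matrix pencil standing in for the pencil that the symbolic
# elimination of the 6R inverse kinematics equations would produce.
A = rng.normal(size=(16, 16))
B = rng.normal(size=(16, 16))

# Generalized eigenvalue problem A x = lambda B x.
eigvals, eigvecs = eig(A, B)

# In the real formulation each finite, physically admissible eigenvalue encodes
# one joint-variable solution; here we simply count the finite ones.
finite = eigvals[np.isfinite(eigvals)]
print(f"{finite.size} finite eigenvalues (up to 16 IK candidates in the real problem)")
```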
NASA Astrophysics Data System (ADS)
Norazam Yasin, Mohd; Mohamad Zin, Rosli; Halid Abdullah, Abd; Shafiq Mahmad, Muhammad; Fikri Hasmori, Muhammad
2017-11-01
From time to time, maintenance work becomes more challenging due to the construction of new buildings and the aging of existing buildings. University buildings, without exception, require proper maintenance services to support their functional requirements, and this is a major responsibility to be fulfilled by the maintenance departments in the universities. Maintenance departments face various kinds of problems in their operational work, and these problems might influence the maintenance operations themselves. This study aims to identify the common problems faced by the maintenance department and to examine the current status of the maintenance department. In addition, this study proposes suitable approaches that could be implemented to overcome the problems faced by the maintenance department. To achieve the objectives of this study, a combination of an in-depth literature study and a survey was necessary. The literature study aimed to obtain deeper background information, while the survey aimed to identify the common problems faced by the maintenance department and to provide information about the maintenance department's organization. Several methods were used to analyse the data obtained through the survey, including Microsoft Office Excel and a mean index formula. This study identified three categories of problems in the maintenance department: management problems, human resource problems, and technical problems. Following the findings, several solutions are proposed which can be implemented to address the problems faced. These suggestions have the potential to improve the maintenance department's work efficiency and thus could help to increase the department's productivity.
Dynamic node immunization for restraint of harmful information diffusion in social networks
NASA Astrophysics Data System (ADS)
Yang, Dingda; Liao, Xiangwen; Shen, Huawei; Cheng, Xueqi; Chen, Guolong
2018-08-01
To restrain the spread of harmful information is crucial for the healthy and sustainable development of social networks. We address the problem of restraining the spread of harmful information by immunizing nodes in the networks. Previous works have developed methods based on the network topology or studied how to immunize nodes in the presence of initially infected nodes. These static methods, in which all nodes are immunized at once, may perform poorly in certain situations due to the dynamics of diffusion. To tackle this problem, we introduce a new dynamic immunization problem in which nodes are immunized during the process of diffusion. We formulate the problem and propose a novel heuristic algorithm by dealing with two sub-problems: (1) how to select a node to achieve the best immunization effect at the present time, and (2) whether the selected node should be immunized right now. Finally, we demonstrate the effectiveness of our algorithm through extensive experiments on various real datasets.
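The authors' heuristic and diffusion model are not reproduced; the sketch below only illustrates the dynamic setting on a synthetic graph: at every step of a simple independent-cascade-style spread, one currently healthy node is picked (here by an ad hoc greedy score combining exposure to infected neighbours and degree) and immunized before the next diffusion step. The graph, propagation probability and scoring rule are illustrative assumptions.

```python
import random
import networkx as nx

random.seed(5)
G = nx.barabasi_albert_graph(500, 3, seed=5)

infected = {0, 1, 2}                     # initial spreaders of harmful information
immune = set()

def exposure_score(v):
    # Ad hoc greedy score: current exposure to infected neighbours times degree.
    return sum(u in infected for u in G[v]) * (1 + G.degree(v))

for step in range(20):
    # Dynamic immunization: protect one healthy node per diffusion step.
    candidates = set(G) - infected - immune
    if candidates:
        immune.add(max(candidates, key=exposure_score))

    # One diffusion step (independent-cascade style, propagation prob. 0.1).
    newly_infected = set()
    for u in infected:
        for v in G[u]:
            if v not in infected and v not in immune and random.random() < 0.1:
                newly_infected.add(v)
    infected |= newly_infected

print(f"infected nodes: {len(infected)}, immunized nodes: {len(immune)}")
```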
Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool
NASA Astrophysics Data System (ADS)
Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin
2016-02-01
The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems involve dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques become ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST) based on promising artificial intelligence techniques that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as part of the development of the DST. A discrete event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the job order requirements (inter-arrival time, due date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.
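The simulation model and the look-up table format are not reproduced; the sketch below only illustrates, on a randomly generated single-machine job stream, how dispatching rules such as SPT, EDD and FCFS can be compared to obtain the kind of performance measures (mean flow time, mean tardiness) that such look-up tables store. The job data and rule set are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n_jobs = 200
arrivals = np.cumsum(rng.exponential(5.0, n_jobs))         # dynamic arrivals
proc = rng.uniform(2.0, 8.0, n_jobs)                        # processing times
due = arrivals + proc * rng.uniform(1.5, 3.0, n_jobs)       # due-date tightness

def simulate(priority):
    """Single-machine dispatching simulation; returns mean flow time/tardiness."""
    t, done, released = 0.0, 0, 0
    queue, flow, tard = [], [], []
    while done < n_jobs:
        while released < n_jobs and arrivals[released] <= t:
            queue.append(released)
            released += 1
        if not queue:                                       # idle until next arrival
            t = arrivals[released]
            continue
        j = min(queue, key=priority)                        # apply dispatching rule
        queue.remove(j)
        t += proc[j]
        flow.append(t - arrivals[j])
        tard.append(max(0.0, t - due[j]))
        done += 1
    return np.mean(flow), np.mean(tard)

rules = {"SPT": lambda j: proc[j], "EDD": lambda j: due[j], "FCFS": lambda j: arrivals[j]}
for name, rule in rules.items():
    mft, mtd = simulate(rule)
    print(f"{name}: mean flow time {mft:.1f}, mean tardiness {mtd:.1f}")
```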
Kühn, Simone; Witt, Charlotte; Banaschewski, Tobias; Barbot, Alexis; Barker, Gareth J; Büchel, Christian; Conrod, Patricia J; Flor, Herta; Garavan, Hugh; Ittermann, Bernd; Mann, Karl; Martinot, Jean-Luc; Paus, Tomas; Rietschel, Marcella; Smolka, Michael N; Ströhle, Andreas; Brühl, Rüdiger; Schumann, Gunter; Heinz, Andreas; Gallinat, Jürgen
2016-05-01
Adolescence is a common time for the initiation of alcohol use and alcohol use disorders. Importantly, the neuro-anatomical foundation for later alcohol-related problems may already manifest pre-natally, particularly due to smoking and alcohol consumption during pregnancy. In this context, cortical gyrification is an interesting marker of neuronal development but has not been investigated as a risk factor for adolescent alcohol use. On magnetic resonance imaging scans of 595 14-year-old adolescents from the IMAGEN sample, we computed whole-brain mean curvature indices to predict change in alcohol-related problems over the following 2 years. Change in alcohol use-related problems was significantly predicted from mean curvature in the left orbitofrontal cortex (OFC). Less gyrification of the OFC was associated with an increase in alcohol use-related problems over the next 2 years. Moreover, lower gyrification in the left OFC was related to pre-natal alcohol exposure, whereas maternal smoking during pregnancy had no effect. Current alcohol use-related problems of the biological mother had no effect on the offspring's OFC gyrification or drinking behaviour. The data support the idea that alcohol consumption during pregnancy mediates the development of neuro-anatomical phenotypes, which in turn constitute a risk factor for increasing problems due to alcohol consumption in a vulnerable stage of life. Maternal smoking during pregnancy or current maternal alcohol/nicotine consumption had no significant effect. The OFC mediates behaviours known to be disturbed in addiction, namely impulse control and reward processing. The results stress the importance of pre-natal alcohol exposure for later increases in alcohol use-related problems, mediated by structural brain characteristics. © 2015 Society for the Study of Addiction.
Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua
2015-01-01
Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427
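The WiFi fingerprinting pipeline is not reproduced; the sketch below shows only the core OS-ELM recursion on toy regression data: a random, fixed hidden layer, an initial least-squares solution on a first chunk, and then recursive updates of the output weights as new chunks arrive, so the model adapts without retraining from scratch. The network size, data and ridge term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def hidden(X, W, b):
    """Random, fixed sigmoid hidden layer of the ELM."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Toy regression stand-in for RSSI fingerprints -> 2D position.
d_in, d_hid, d_out = 6, 50, 2
W = rng.normal(size=(d_in, d_hid))
b = rng.normal(size=d_hid)
X0, T0 = rng.normal(size=(100, d_in)), rng.normal(size=(100, d_out))

# Initial (boosting) phase: regularized least squares on the first chunk.
H0 = hidden(X0, W, b)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(d_hid))   # small ridge for stability
beta = P @ H0.T @ T0

# Sequential phase: recursive update per new chunk, no retraining from scratch.
for _ in range(20):
    Xk, Tk = rng.normal(size=(10, d_in)), rng.normal(size=(10, d_out))
    H = hidden(Xk, W, b)
    S = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
    P = P - P @ H.T @ S @ H @ P
    beta = beta + P @ H.T @ (Tk - H @ beta)

print("output-weight matrix shape:", beta.shape)
```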
Early stage response problem for post-disaster incidents
NASA Astrophysics Data System (ADS)
Kim, Sungwoo; Shin, Youngchul; Lee, Gyu M.; Moon, Ilkyeong
2018-07-01
Research on evacuation plans for reducing damages and casualties has been conducted to advise defenders against threats. However, despite the attention given to the research in the past, emergency response management, designed to neutralize hazards, has been undermined since planners frequently fail to apprehend the complexities and contexts of the emergency situation. Therefore, this study considers a response problem with unique characteristics for the duration of the emergency. An early stage response problem is identified to find the optimal routing and scheduling plan for responders to prevent further hazards. Due to the complexity of the proposed mathematical model, two algorithms are developed. Data from a high-rise building, called Central City in Seoul, Korea, are used to evaluate the algorithms. Results show that the proposed algorithms can procure near-optimal solutions within a reasonable time.
The boundary element method applied to 3D magneto-electro-elastic dynamic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.
2017-11-01
Due to their coupling properties, magneto-electro-elastic materials have a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. The spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.
Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li
2018-05-01
This paper is concerned with asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. The event-based communication, signal quantization, and random packet dropout problems are also studied owing to the limited communication resources. With the help of switched system theory and by resorting to stochastic system analysis methods, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.
French Meteor Network for High Precision Orbits of Meteoroids
NASA Technical Reports Server (NTRS)
Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.
2011-01-01
There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with a precise semi-major axis are available using the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity, and to estimate the ejection time of the meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits of these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and the temporal resolution at the same time and computational problems in finding the meteor position, are illustrated.
Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem
Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh
2014-01-01
This paper deals with the problem of minimizing the total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling problem in the presence of fuzzy due dates. First, a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics, called GA-VNS and VNS-SA, combining the advantages of the genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. In addition, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied to the parameters. Finally, computational results on test problems of different scales are presented to compare the proposed algorithms. PMID:24883359
Dynamically Reconfigurable Approach to Multidisciplinary Problems
NASA Technical Reports Server (NTRS)
Alexandrov, Natalie M.; Lewis, Robert Michael
2003-01-01
The complexity and autonomy of the constituent disciplines and the diversity of the disciplinary data formats make the task of integrating simulations into a multidisciplinary design optimization problem extremely time-consuming and difficult. We propose a dynamically reconfigurable approach to MDO problem formulation wherein an appropriate implementation of the disciplinary information results in basic computational components that can be combined into different MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. We believe that this structure can and should be used to formulate and solve optimization problems in the multidisciplinary context. The present work identifies the basic computational components in several MDO problem formulations and examines the dynamically reconfigurable approach in the context of a popular class of optimization methods. We show that if the disciplinary sensitivity information is implemented in a modular fashion, the transfer of sensitivity information among the formulations under study is straightforward. This enables not only experimentation with a variety of problem formations in a research environment, but also the flexible use of formulations in a production design environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundt, Michael; Kuemmel, Stephan
2006-08-15
The integral equation for the time-dependent optimized effective potential (TDOEP) in time-dependent density-functional theory is transformed into a set of partial-differential equations. These equations only involve occupied Kohn-Sham orbitals and orbital shifts resulting from the difference between the exchange-correlation potential and the orbital-dependent potential. Due to the success of an analog scheme in the static case, a scheme that propagates orbitals and orbital shifts in real time is a natural candidate for an exact solution of the TDOEP equation. We investigate the numerical stability of such a scheme. An approximation beyond the Krieger-Li-Iafrate approximation for the time-dependent exchange-correlation potential is analyzed.
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps could be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
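As a concrete illustration of pseudo-time stepping with a cyclic step sequence (the geometric step ladder and the model problem below are assumptions, not the paper's scheme), an implicit-Euler iteration can be driven to the steady state of du/dt = -Au + b while cycling through steps of widely different sizes:

import numpy as np

def cyclic_steps(dt_min, dt_max, n_per_cycle):
    # One cycle of geometrically spaced pseudo-time steps.
    return np.geomspace(dt_min, dt_max, n_per_cycle)

# Model problem: drive du/dt = -A u + b to its steady state A u = b with
# implicit-Euler pseudo-time stepping, cycling through the step sequence.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
u = np.zeros(2)
I = np.eye(2)
for _ in range(20):                        # pseudo-time cycles
    for dt in cyclic_steps(0.1, 10.0, 4):
        u = np.linalg.solve(I + dt * A, u + dt * b)
print(u, np.linalg.solve(A, b))            # converged iterate vs. exact steady state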
Traffic crash accidents in Tehran, Iran: Its relation with circadian rhythm of sleepiness.
Sadeghniiat-Haghighi, Khosro; Yazdi, Zohreh; Moradinia, Mohsen; Aminian, Omid; Esmaili, Alireza
2015-01-01
Road traffic accidents are one of the main problems in Iran. Multiple factors cause traffic accidents, and the most important one is sleepiness. This factor, however, is given less attention in our country. Road traffic accidents related to sleepiness are studied here. In this cross-sectional study, all road traffic accidents related to sleepiness that were reported by police in Tehran province in 2009 were studied. The risk of road traffic accidents due to sleepiness was increased by more than sevenfold (odds ratio = 7.33) during low-alertness hours (0:00-6:00) compared to other times of day. The risk of road traffic accidents due to sleepiness was decreased (odds ratio = 0.15) during the hours of maximum alertness (18:00-22:00) of the circadian rhythm compared to other times of day. The occurrence of road traffic accidents due to sleepiness has a statistically significant relation with driving during the lowest point of alertness of the circadian rhythm.
Conduct problems and attention deficit behaviour in middle childhood and cannabis use by age 15.
Fergusson, D M; Lynskey, M T; Horwood, L J
1993-12-01
The relationship between conduct problems and attention deficit behaviours at ages 6, 8, 10 and 12 years and the early onset of cannabis use by the age of 15 years was studied in a birth cohort of New Zealand children. The analysis showed that while conduct problems during middle childhood were significantly associated with later cannabis use (p < 0.05), there was no association between early attention deficit behaviours and cannabis use (p > 0.40) when the associations between conduct problems and attention deficit behaviours were taken into account. It was estimated that children who showed tendencies to conduct disorder behaviour in middle childhood were between 2.1 and 2.7 times more likely to engage in early cannabis use than children not prone to conduct problems, even when a range of factors including family social background, parental separation and parental conflict were taken into account. It is concluded that early conduct disorder behaviours are a risk factor for later cannabis use when due allowance is made for social and contextual factors associated with both early conduct problems and later cannabis use.
Application of symbolic/numeric matrix solution techniques to the NASTRAN program
NASA Technical Reports Server (NTRS)
Buturla, E. M.; Burroughs, S. H.
1977-01-01
The matrix-solving algorithm of any finite element code is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.
Elephantiasis nostras verrucosa on the legs and abdomen with morbid obesity in an Indian lady.
Sarma, Podila S; Ghorpade, Ashok
2008-12-15
Elephantiasis nostras verrucosa (ENV) of the legs and abdomen in a morbidly obese woman with multiple medical problems is reported. The diagnosis was suggested by the classical clinical features and confirmed by histopathology. The patient succumbed due to her multisystem diseases. Elephantiasis nostras verrucosa involving the abdomen is uncommon and has been reported only five times in the past.
Towards Uncovering the Mysterious World of Math Homework
ERIC Educational Resources Information Center
Feng, Mingyu
2014-01-01
Homework has been a mysterious world to educators due to the fact that it is hard to collect data with regard to homework behaviors. Little is known about when a student works on homework, how long it takes him to complete the homework, how much time he spends on a problem, and whether and where he has struggled, etc. Such information not only has…
Army Hearing Program Talking Points Calendar Year 2016
2017-09-12
Hearing health in the Army has improved over time, largely due to the dedicated work of hearing health experts. However, noise-induced hearing loss and associated problems have not been eliminated. The Army Hearing Program continually evolves to address hearing health challenges, and maintains the momentum to build iteratively upon…
ERIC Educational Resources Information Center
Sedere, Upali M.
2008-01-01
School-based general education ought to be a future-oriented subject. However, over the years, due to parental and grand-parental generations setting policies of education for the younger generation, education is always more past-oriented than future-oriented. This trend did not cause much of a problem when the change over time was moderate. As…
NASA Astrophysics Data System (ADS)
Refice, Alberto; Tijani, Khalid; Lovergine, Francesco P.; D'Addabbo, Annarita; Nutricato, Raffaele; Morea, Alberto
2017-04-01
Satellite monitoring of flood events at high spatial and temporal resolution is considered a difficult problem, mainly due to the lack of data with sufficient acquisition frequency and timeliness. The problem is worsened by the typically cloudy weather conditions associated with floods, which hinder the propagation of e.m. waves in the optical spectral range and prevent acquisitions by optical sensors. This problem is not present at longer wavelengths, so radar imaging sensors are recognized as viable solutions for long-term flood monitoring. In selected cases, however, weather conditions may remain clear for sufficient amounts of time, enabling the evolution of flood events to be monitored through long time series of satellite images, both optical and radar. In this contribution, we present a case study of long-term integrated monitoring of a flood event which affected part of the basin of the Strymonas, a transboundary river that rises in Bulgaria and then flows through Greece to the Aegean Sea. The event, which affected the floodplain close to the river mouth, started at the beginning of April 2015, due to heavy rain, and lasted for several months, with some water pools still present at the beginning of September. Due to the arid climate characterizing the area, weather conditions were cloud-free for most of the period covering the event. We collected one high-resolution X-band COSMO-SkyMed SAR image, 5 C-band Sentinel-1 SAR images, and 11 optical Landsat-8 images of the area. SAR images were calibrated, speckle-filtered and precisely geocoded; optical images were radiometrically corrected to obtain ground reflectance values from which NDVI maps were derived. The images were then thresholded to obtain binary flood maps for each day. Threshold values for microwave and optical data were calibrated by comparing one SAR and one optical image acquired on the same date. The results allow us to draw a multi-temporal map of the flood evolution with high temporal resolution. The extent of the flooded area can also be tracked in time, which opens the way to testing evapotranspiration/absorption models.
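A minimal sketch of the kind of thresholding described above (the NDVI formula is standard; the threshold values, band handling, and combination rule are illustrative assumptions, not the calibrated values of the study):

import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index from NIR and red reflectance bands.
    return (nir - red) / (nir + red + 1e-9)

def flood_mask_optical(nir, red, ndvi_threshold=0.1):
    # Pixels with low NDVI are flagged as water/flooded (assumed threshold).
    return ndvi(nir, red) < ndvi_threshold

def flood_mask_sar(sigma0_db, backscatter_threshold=-17.0):
    # Calibrated SAR backscatter in dB: smooth open water appears dark (assumed threshold).
    return sigma0_db < backscatter_threshold

if __name__ == "__main__":
    nir = np.random.rand(4, 4)
    red = np.random.rand(4, 4)
    sigma0 = -25.0 + 15.0 * np.random.rand(4, 4)
    # Combined daily flood map: flagged by either sensor on that date.
    print(flood_mask_optical(nir, red) | flood_mask_sar(sigma0))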
The Arbitrary Body of Revolution Code (ABORC) for SGEMP/IEMP
1976-07-01
… time-dependent spectra … and current-injection simulation tests of satellites. … For example, in the case where the emission is due to photon interaction with materials, the photon energy and time spectrum determines the … usually performed by separating the response of the internal portion of the problem from that of the external portion. Thus, the details of the internal …
Thermal boundary layer due to sudden heating of fluid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurkal, K.R.; Munukutla, S.
This paper proposes to solve computationally the heat-transfer problems (introduced by Munukutla and Venkataraman, 1988) related to a closed-cycle pulsed high-power laser flow loop. The continuity and the momentum equations as well as the unsteady energy equation are solved using the Keller-Box method. The solutions were compared with the steady-state solutions at large times, and the comparison was found to be excellent. Empirical formulas are proposed for calculating the time-dependent boundary-layer thickness and mass-heat transfer, that can be used by laser flow loop designers.
Thermal boundary layer due to sudden heating of fluid
NASA Astrophysics Data System (ADS)
Kurkal, K. R.; Munukutla, S.
1989-10-01
This paper proposes to solve computationally the heat-transfer problems (introduced by Munukutla and Venkataraman, 1988) related to a closed-cycle pulsed high-power laser flow loop. The continuity and the momentum equations as well as the unsteady energy equation are solved using the Keller-Box method. The solutions were compared with the steady-state solutions at large times, and the comparison was found to be excellent. Empirical formulas are proposed for calculating the time-dependent boundary-layer thickness and mass-heat transfer, that can be used by laser flow loop designers.
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problem can be solved in a centralized or distributed manner. Compared with centralized methods based on single-agent system, distributed algorithms rely on multi-agent systems with information exchanging among connected neighbors, which leads to great improvement on the system fault tolerance. Thus, a task within multi-agent system can be completed with presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence/parallel. Hence, the computational complexity is greatly reduced by distributed algorithm in multi-agent system. Moreover, distributed algorithm allows data collected and stored in a distributed fashion, which successfully overcomes the drawbacks of using multicast due to the bandwidth limitation. Distributed algorithm has been applied in solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first one, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body. Estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves the congestion in corridor and the exit areas. At last, highway traffic flow is managed by adjusting speed limits to minimize the fuel consumption and travel time in the third project. Optimal control strategy is designed through both centralized and distributed algorithm based on convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with no controlled case or conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on test highway network.
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
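For readers unfamiliar with the approach, the following is a small sketch of the simplest exponential integrator (exponential Euler) applied to a stiff semilinear system u' = Au + g(u); it is not the EPIRK or BDF machinery used in the paper, and the test system is made up.

import numpy as np
from scipy.linalg import expm

def phi1(Z):
    # phi_1(Z) = Z^{-1} (e^Z - I), evaluated via an augmented matrix exponential.
    n = Z.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = Z
    M[:n, n:] = np.eye(n)
    return expm(M)[:n, n:]

def exponential_euler(A, g, u0, dt, steps):
    # u_{n+1} = e^{dt A} u_n + dt * phi_1(dt A) g(u_n)
    u = u0.copy()
    eA = expm(dt * A)
    p1 = phi1(dt * A)
    for _ in range(steps):
        u = eA @ u + dt * (p1 @ g(u))
    return u

if __name__ == "__main__":
    A = np.array([[-100.0, 1.0], [0.0, -0.5]])     # stiff linear part
    g = lambda u: np.array([0.0, np.sin(u[0])])    # mild nonlinearity
    print(exponential_euler(A, g, np.array([1.0, 1.0]), dt=0.1, steps=100))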
Time Domain Propagation of Quantum and Classical Systems using a Wavelet Basis Set Method
NASA Astrophysics Data System (ADS)
Lombardini, Richard; Nowara, Ewa; Johnson, Bruce
2015-03-01
The use of an orthogonal wavelet basis set (Optimized Maximum-N Generalized Coiflets) to effectively model physical systems in the time domain, in particular the electromagnetic (EM) pulse and the quantum mechanical (QM) wavefunction, is examined in this work. Although past research has demonstrated the benefits of wavelet basis sets for handling computationally expensive problems due to their multiresolution properties, the overlapping supports of neighboring wavelet basis functions pose problems when dealing with boundary conditions, especially with material interfaces in the EM case. Specifically, this talk addresses this issue using the idea of derivative matching with fictitious grid points (T.A. Driscoll and B. Fornberg), but replaces the latter element with fictitious wavelet projections in conjunction with wavelet reconstruction filters. Two-dimensional (2D) systems are analyzed, an EM pulse incident on silver cylinders and the QM electron wave packet circling the proton in a hydrogen atom system (reduced to 2D), and the new wavelet method is compared to the popular finite-difference time-domain technique.
Design of a tokamak fusion reactor first wall armor against neutral beam impingement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, R.A.
1977-12-01
The maximum temperatures and thermal stresses are calculated for various first wall design proposals, using both analytical solutions and the TRUMP and SAP IV Computer Codes. Beam parameters, such as pulse time, cycle time, and beam power, are varied. It is found that uncooled plates should be adequate for near-term devices, while cooled protection will be necessary for fusion power reactors. Graphite and tungsten are selected for analysis because of their desirable characteristics. Graphite allows for higher heat fluxes compared to tungsten for similar pulse times. Anticipated erosion (due to surface effects) and plasma impurity fraction are estimated. Neutron irradiation damage is also discussed. Neutron irradiation damage (rather than erosion, fatigue, or creep) is estimated to be the factor limiting the lifetime of the component in fusion power reactors. It is found that the use of tungsten in fusion power reactors, when directly exposed to the plasma, will cause serious plasma impurity problems; graphite should not present such an impurity problem.
Mechanical Clogging Processes in Unconsolidated Porous Media Near Pumping Wells
NASA Astrophysics Data System (ADS)
de Zwart, B.; Schotting, R.; Hassanizadeh, M.
2003-12-01
In the Netherlands, water supply companies produce more than one billion cubic meters of drinking water every year. About 2,500 water wells are used to pump groundwater from aquifers in the Dutch subsurface. More than 50% of these wells will encounter a number of technical problems during their lifetime. The main problem is the decrease in capacity due to well clogging. Clogging shows up after a number of years of operation and results in extra, expensive cleaning operations and in early replacement of the pumping wells. This problem has been acknowledged by other industries, for example the metal, petroleum, and brewing industries, and underground storage projects. Well clogging is the result of a number of interacting mechanisms, creating a complex problem in the subsurface. In most clogging cases mechanical mechanisms are involved. A large number of studies have been performed to comprehend these processes. Investigations of mechanical processes focus on the transport of small particles through pores and the deposition of particles due to physical or physical-chemical processes. After a period of deposition the particles plug the pores and decrease the permeability of the medium. Particle deposition in porous media is usually modelled using filtration theory. This theory is not sufficient, however, to capture the dynamics of clogging: the porous medium is continuously altered by deposition and mobilization, so the capture characteristics and the deposition rates also change continuously in time. A new formula is derived to describe the (re)mobilization of particles and to allow for changing deposition rates. This approach incorporates detachment and reattachment of deposited particles. This work also includes a derivation of the filtration theory in radial coordinates. A comparison between the radial filtration theory and the new formula will be shown.
NASA Astrophysics Data System (ADS)
Noer, Fadhly; Matondang, A. Rahim; Sirojuzilam, Saleh, Sofyan M.
2017-11-01
The shifting of urban development in Banda Aceh has moved the city's service centers, changing the city's spatial pattern and spatial structure and producing urban sprawl. This sprawl leads to congestion on the arterial roads of Banda Aceh, reflected in the 6% yearly increase in the number of vehicles. Another issue caused by urban sprawl is poorly organized settlement due to the uncontrolled use of space, which produces grouping, or differences in socioeconomic strata, and adds to the complexity of population mobility problems. Against this background, the problem is addressed with the Transit Oriented Development (TOD) concept, a concept of transportation development integrated with spatial planning. This research develops a model of transportation infrastructure development based on the TOD concept that can handle the transportation problems in Banda Aceh caused by the change of spatial structure, and examines whether the TOD concept can be used for an area whose population lies in the medium density range. The resulting equations are: Space Structure = 0.520 + 0.206X3 + 0.264X6 + 0.100X7 and Transportation Infrastructure Development = -1.457 + 0.652X1 + 0.388X5 + 0.235X6 + 0.222X7 + 0.327X8. The path analysis shows that the variables node ratio, network connectivity, travel frequency, travel destination, travel cost, and travel time have a smaller influence through their direct effect on transportation infrastructure development, but a greater influence through their indirect effect via the spatial structure, as seen from the spatial structure - transportation infrastructure development path scheme.
Caviola, Sara; Carey, Emma; Mammarella, Irene C; Szucs, Denes
2017-01-01
We review how stress induction, time pressure manipulations and math anxiety can interfere with or modulate selection of problem-solving strategies (henceforth "strategy selection") in arithmetical tasks. Nineteen relevant articles were identified, which contain references to strategy selection and time limit (or time manipulations), with some also discussing emotional aspects in mathematical outcomes. Few of these take cognitive processes such as working memory or executive functions into consideration. We conclude that due to the sparsity of available literature our questions can only be partially answered and currently there is not much evidence of clear associations. We identify major gaps in knowledge and raise a series of open questions to guide further research.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼ 1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. 
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
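As an illustration of the data layout described above, the following CPU-side sketch splits a sparse DDC-like matrix into column blocks (one per beam angle or device), converts each to CSR, and assembles a dose-style matrix-vector product from per-block partial products. The four-way split, the random test matrix, and the absence of any actual GPU code are assumptions for illustration only, not the authors' implementation.

import numpy as np
import scipy.sparse as sp

def split_by_columns(ddc_coo, n_blocks):
    # Split a COO matrix into n_blocks contiguous column blocks, each stored in CSR.
    n_cols = ddc_coo.shape[1]
    edges = np.linspace(0, n_cols, n_blocks + 1, dtype=int)
    csc = ddc_coo.tocsc()
    blocks = [csc[:, lo:hi].tocsr() for lo, hi in zip(edges[:-1], edges[1:])]
    return blocks, edges

def blockwise_matvec(blocks, edges, x):
    # y = DDC @ x computed as a sum of per-block products (one block per device).
    y = np.zeros(blocks[0].shape[0])
    for block, lo, hi in zip(blocks, edges[:-1], edges[1:]):
        y += block @ x[lo:hi]
    return y

if __name__ == "__main__":
    ddc = sp.random(200, 400, density=0.05, format="coo")  # toy sparse DDC matrix
    x = np.random.rand(400)                                # toy beamlet intensities
    blocks, edges = split_by_columns(ddc, n_blocks=4)
    print(np.allclose(blockwise_matvec(blocks, edges, x), ddc @ x))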
A RADIATION TRANSFER SOLVER FOR ATHENA USING SHORT CHARACTERISTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Shane W.; Stone, James M.; Jiang Yanfei
2012-03-01
We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well known and well tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.
Analysis of Station Quality Issues from EarthScope's Transportable Array
NASA Astrophysics Data System (ADS)
Pfeifer, C.; Barstow, N.; Busby, R.; Hafner, K.
2008-12-01
160 of the first 400 EarthScope USArray Transportable Array (TA) stations have completed their first two-year deployment and are being moved to their next locations. Over the past 4 years the majority of stations have run with few interruptions in the transfer of real-time data to the Array Network Facility (ANF) at the University of California, San Diego and of near-real-time data to the IRIS Data Management System (DMS). The combination of telemetered data and dedicated people reviewing the waveforms and state-of-health data has revealed several conditions that can affect data quality or cause loss of data. The data problems fall into three broad categories: station power, equipment malfunction, and communication failures. Station power issues have been implicated in several types of noise seen in the seismic data (as well as causing station failures and resultant data gaps). The most common type of equipment problem found to degrade data quality is caused by sensor problems, and has affected all 3 types of sensors used in the TA to varying degrees. While communication problems can cause real-time data loss, they do not cause a degradation of data quality, and any gaps in the real-time data due solely to communications problems are filled in later with the continuous data recorded to disk at each TA station. Over the past 4 years the TA team has recognized a number of noise sources and has made several design changes to minimize the effects on data quality. Design/procedural changes include stopping water incursion into the stations, power conditioning, and changing mass re-center voltage thresholds. Figures that demonstrate examples are provided. These changes have improved data quality and station performance. Vigilance and the deployment of service teams to reestablish communications, replace noisy sensors, and troubleshoot problems are also key to maintaining the high-quality TA network.
A Dynamic Programming Approach for Base Station Sleeping in Cellular Networks
NASA Astrophysics Data System (ADS)
Gong, Jie; Zhou, Sheng; Niu, Zhisheng
The energy consumption of the information and communication technology (ICT) industry, which has become a serious problem, is mostly due to the network infrastructure rather than the mobile terminals. In this paper, we focus on reducing the energy consumption of base stations (BSs) by adjusting their working modes (active or sleep). Specifically, the objective is to minimize the energy consumption while satisfying quality of service (QoS, e.g., blocking probability) requirement and, at the same time, avoiding frequent mode switching to reduce signaling and delay overhead. The problem is modeled as a dynamic programming (DP) problem, which is NP-hard in general. Based on cooperation among neighboring BSs, a low-complexity algorithm is proposed to reduce the size of state space as well as that of action space. Simulations demonstrate that, with the proposed algorithm, the active BS pattern well meets the time variation and the non-uniform spatial distribution of system traffic. Moreover, the tradeoff between the energy saving from BS sleeping and the cost of switching is well balanced by the proposed scheme.
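The formulation is only summarized above, so the following toy finite-horizon dynamic program is a hypothetical sketch in the same spirit: the number of active BSs per slot is chosen to cover a known demand profile while trading an energy cost against a switching penalty. All numbers and the simplified QoS rule (active count must cover demand) are assumptions, not the paper's model or its low-complexity algorithm.

import math

def bs_sleep_dp(demand, n_bs, energy_cost=1.0, switch_cost=0.5):
    # Return the minimal total cost and the active-BS schedule over the horizon.
    T = len(demand)
    INF = math.inf
    # cost[t][k]: minimal cost of slots t..T-1 given k BSs were active in slot t-1
    cost = [[INF] * (n_bs + 1) for _ in range(T)] + [[0.0] * (n_bs + 1)]
    choice = [[0] * (n_bs + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for prev in range(n_bs + 1):
            for k in range(demand[t], n_bs + 1):     # QoS: cover the demand
                c = energy_cost * k + switch_cost * abs(k - prev) + cost[t + 1][k]
                if c < cost[t][prev]:
                    cost[t][prev], choice[t][prev] = c, k
    # Reconstruct the schedule assuming all BSs were active before slot 0.
    schedule, prev = [], n_bs
    for t in range(T):
        prev = choice[t][prev]
        schedule.append(prev)
    return cost[0][n_bs], schedule

if __name__ == "__main__":
    demand = [5, 4, 2, 1, 1, 2, 4, 6]   # BSs needed per slot (hypothetical profile)
    print(bs_sleep_dp(demand, n_bs=6))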
Heterogeneous continuous-time random walks
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.; Tupikina, Liubov
2018-01-01
We introduce a heterogeneous continuous-time random walk (HCTRW) model as a versatile analytical formalism for studying and modeling diffusion processes in heterogeneous structures, such as porous or disordered media, multiscale or crowded environments, weighted graphs or networks. We derive the exact form of the propagator and investigate the effects of spatiotemporal heterogeneities onto the diffusive dynamics via the spectral properties of the generalized transition matrix. In particular, we show how the distribution of first-passage times changes due to local and global heterogeneities of the medium. The HCTRW formalism offers a unified mathematical language to address various diffusion-reaction problems, with numerous applications in material sciences, physics, chemistry, biology, and social sciences.
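A minimal simulation sketch of a heterogeneous CTRW, assuming a ring graph, exponential waiting times, and node-dependent mean waits (none of which are specified by the abstract), illustrates how first-passage times can be sampled:

import numpy as np

rng = np.random.default_rng(0)

def simulate_fpt(neighbors, mean_wait, start, target, n_walkers=2000):
    # Walkers wait an exponential time with a node-dependent mean, then hop to a
    # uniformly chosen neighbor; the first-passage time to `target` is recorded.
    fpts = np.empty(n_walkers)
    for w in range(n_walkers):
        node, t = start, 0.0
        while node != target:
            t += rng.exponential(mean_wait[node])
            node = rng.choice(neighbors[node])
        fpts[w] = t
    return fpts

if __name__ == "__main__":
    n = 20
    neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring graph
    mean_wait = np.where(np.arange(n) < n // 2, 0.1, 2.0)           # heterogeneous waits
    fpt = simulate_fpt(neighbors, mean_wait, start=0, target=n // 2)
    print(fpt.mean(), np.median(fpt))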
Factors Affecting Aerosol Radiative Forcing from Both Production-based and Consumption-based View
NASA Astrophysics Data System (ADS)
Wang, J.; Lin, J.; Ni, R.
2017-12-01
Aerosol radiative forcing (RF) is determined by emissions and various chemical-transport-radiative processes in the atmosphere, a multi-factor problem whose individual contributors have not been well quantified. The problem becomes more complicated when taking into account the role of international trade, which reallocates aerosol RF by separating the regions producing goods and emissions from the regions consuming those goods. Here we analyze the major factors affecting the RF of secondary inorganic aerosols (SIOAs, including sulfate, nitrate and ammonium), primary organic aerosol (POA) and black carbon (BC), extending the work of Lin et al. (2016, Nature Geoscience). We contrast five factors determining the production-based (RFp, due to a region's production of goods) and consumption-based (RFc, due to a region's consumption) forcing of 11 major regions: population size, per capita output, emission intensity (emission per output), chemical efficiency (mass per unit emission) and radiative efficiency (RF per unit mass). Comparing across the 11 regions, East Asia produces the strongest RFp and RFc of SIOA and BC and the second largest RFp and RFc of POA, primarily due to its high emission intensity. Although the Middle East and North Africa has low emissions, its RFp is strengthened by its having the largest chemical efficiency for POA and BC and the second largest chemical efficiency for SIOA. In contrast, the RFp of South-East Asia and Pacific is greatly weakened by its having the lowest chemical efficiency. Economic trade means that net importers (Western Europe, North America and Pacific OECD) have RFc higher than RFp by 50-100%. This forcing difference is mainly due to the high emission intensity of the exporters supplying these regions. For North America, SIOA's RFc is 50% stronger than its RFp, because the emission intensity of SIOA in East Asia is 5.2 times, and that in Latin America and the Caribbean 2.5 times, the North American value, and the chemical efficiency of the top four exporters is 1.4-2.1 times that of North America. For East Asia, the RFc of SIOA is 20% lower than its RFp due to the relatively low emission intensity and chemical efficiency of its top two exporters (Pacific OECD and Western Europe). Overall, economic, emission and atmospheric factors all play important roles in differentiating regions' RFp and RFc.
Working Memory and Aging: Separating the Effects of Content and Context
Bopp, Kara L.; Verhaeghen, Paul
2009-01-01
In three experiments, we investigated the hypothesis that age-related differences in working memory might be due to the inability to bind content with context. Participants were required to find a repeating stimulus within a single series (no context memory required) or within multiple series (necessitating memory for context). Response time and accuracy were examined in two task domains: verbal and visuospatial. Binding content with context led to longer processing time and poorer accuracy in both age groups, even when working memory load was held constant. Although older adults were overall slower and less accurate than younger adults, the need for context memory did not differentially affect their performance. It is therefore unlikely that age differences in working memory are due to specific age-related problems with content-with-context binding. PMID:20025410
The effect of high indoor temperatures on self-perceived health of elderly persons.
van Loenhout, J A F; le Grand, A; Duijm, F; Greven, F; Vink, N M; Hoek, G; Zuurbier, M
2016-04-01
Exposure to high ambient temperatures leads to an increase in mortality and morbidity, especially in the elderly. This relationship is usually assessed with outdoor temperature, even though the elderly spend most of their time indoors. Our study investigated the relationship between indoor temperature and heat-related health problems of elderly individuals. The study was conducted in the Netherlands between April and August 2012. Temperature and relative humidity were measured continuously in the living rooms and bedrooms of 113 elderly individuals. Respondents were asked to fill out an hourly diary during three weeks with high temperature and one cold reference week, and a questionnaire at the end of these weeks, on health problems that they experienced due to heat. During the warmest week of the study period (14-20 August), average living room and bedroom temperatures were approximately 5°C higher than during the reference week. More than half of the respondents perceived their indoor climate as too warm during this week. The most reported symptoms were thirst (42.7%), sleep disturbance (40.6%) and excessive sweating (39.6%). There was a significant relationship between both indoor and outdoor temperatures and the number of hours per day that heat-related health problems were reported. For an increase of 1°C in indoor temperature, annoyance due to heat and sleep disturbance increased by 33% and 24%, respectively. Outdoor temperature was associated with smaller increases: 13% and 11% for annoyance due to heat and sleep disturbance, respectively. The relationship between outdoor temperature and heat-related health problems disappeared when indoor and outdoor temperatures were included in one model. The relationship with heat-related health problems in the elderly is stronger for indoor (living room and bedroom) temperature than for outdoor temperature. This should be taken into account when looking for measures to reduce heat exposure in this vulnerable group.
Towards a dynamical scheduler for ALMA: a science - software collaboration
NASA Astrophysics Data System (ADS)
Avarias, Jorge; Toledo, Ignacio; Espada, Daniel; Hibbard, John; Nyman, Lars-Ake; Hiriart, Rafael
2016-07-01
State-of-the-art astronomical facilities are costly to build and operate, hence it is essential that these facilities be operated as efficiently as possible, maximizing the scientific output while minimizing overhead times. Over recent decades the scheduling problem has drawn research attention because new facilities have demonstrated that it is unfeasible to schedule observations manually, due to the complexity of satisfying the astronomical and instrumental constraints and the number of scientific proposals to be reviewed and evaluated in near real-time. In addition, the dynamic nature of some constraints makes this problem even more difficult. The Atacama Large Millimeter/submillimeter Array (ALMA) is a major collaborative effort between European (ESO), North American (NRAO) and East Asian (NAOJ) partners, under operations on the Chilean Chajnantor plateau, at 5,000 meters of altitude. During normal operations at least two independent arrays are available, aiming to achieve different types of science. Since ALMA does not observe in the visible spectrum, observations are not limited to night time only, so a 24/7 operation with as little downtime as possible is expected once the full operations state is reached. However, during preliminary operations (early science) ALMA has been operated on tight schedules, using around half of the whole day-time to conduct scientific observations. The purpose of this paper is to explain how observation scheduling and its optimization are done within ALMA, giving details about the problem complexity and its similarities to and differences from traditional scheduling problems found in the literature. The paper delves into the current recommendation system implementation and the difficulties found during the road to its deployment in production.
NASA Astrophysics Data System (ADS)
Jafari, Hamed; Salmasi, Nasser
2015-09-01
The nurse scheduling problem (NSP) has received a great amount of attention in recent years. In the NSP, the goal is to assign shifts to nurses in order to satisfy the hospital's demand during the planning horizon while considering different objective functions. In this research, we focus on maximizing the nurses' preferences for working shifts and weekends off, taking into account several important factors such as the hospital's policies, labor laws, governmental regulations, and the status of nurses at the end of the previous planning horizon, in one of the largest hospitals in Iran, i.e., Milad Hospital. Due to the shortage of available nurses, the minimum total number of required nurses is determined first. Then, a mathematical programming model is proposed to solve the problem optimally. Since the proposed research problem is NP-hard, a meta-heuristic algorithm based on simulated annealing (SA) is applied to heuristically solve the problem in a reasonable time. An initial feasible solution generator and several novel neighborhood structures are applied to enhance the performance of the SA algorithm. Inspired by our observations in Milad Hospital, random test problems are generated to evaluate the performance of the SA algorithm. The results of computational experiments indicate that the applied SA algorithm provides solutions with an average percentage gap of 5.49% compared to the upper bounds obtained from the mathematical model. Moreover, the applied SA algorithm provides significantly better solutions in a reasonable time than the schedules provided by the head nurses.
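The abstract does not give the algorithmic details, so the following is a generic simulated-annealing skeleton with a toy shift-assignment cost; the neighborhood move, cooling schedule, demand figures, and all parameter values are illustrative assumptions rather than the paper's calibrated setup.

import math
import random

def simulated_annealing(initial, cost, neighbor, t0=100.0, cooling=0.95,
                        iters_per_temp=50, t_min=1e-3):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = neighbor(current)
            cand_cost = cost(cand)
            delta = cand_cost - current_cost
            # Accept improving moves always, worsening moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = cand, cand_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost

if __name__ == "__main__":
    # Toy instance: assign one of 3 shifts to each of 10 nurse-days, penalizing
    # deviation from a per-shift demand of 3/3/4 and violated shift preferences.
    random.seed(1)
    demand = [3, 3, 4]
    prefer = [random.randrange(3) for _ in range(10)]
    def cost(x):
        load = [x.count(s) for s in range(3)]
        return sum(abs(l - d) for l, d in zip(load, demand)) + \
               sum(1 for xi, pi in zip(x, prefer) if xi != pi)
    def neighbor(x):
        y = list(x)
        y[random.randrange(len(y))] = random.randrange(3)
        return y
    print(simulated_annealing([0] * 10, cost, neighbor))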
Zhang, Rui
2017-01-01
The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in literature on large instances with up to 200 cars. PMID:29295603
Zhang, Rui
2017-12-25
The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in literature on large instances with up to 200 cars.
NASA Astrophysics Data System (ADS)
Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.
2017-11-01
This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery centers, and recycling centers. In this supply chain, centers exist at multiple levels, a price increase factor is considered for operational costs at the centers, inventory and shortages (including lost sales and backlogs) are allowed at production centers, and the arrival times of each plant's vehicles at its dedicated distribution centers, as well as their departure times, are considered, with the aim of minimizing both the sum of system costs and the sum of the maximum times at each level. The problem is formulated as a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely the non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large instances. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.
Long working hours and sleep problems among public junior high school teachers in Japan.
Bannai, Akira; Ukawa, Shigekazu; Tamakoshi, Akiko
2015-01-01
Long working hours may impact human health. In Japan, teachers tend to work long hours. From 2002 to 2012, among public school teachers the number of leaves of absence due to diseases other than mental disorders increased by a factor of 1.3 (from 2,616 to 3,381), and that due to mental disorders by a factor of 1.8 (from 2,687 to 4,960). The present study aimed to investigate the association between long working hours and sleep problems among public school teachers. This cross-sectional study was conducted from mid-July to September 2013 in Hokkaido Prefecture, Japan. Questionnaires were distributed to 1,245 teachers in public junior high schools. Information about basic characteristics including working hours, and responses to the Pittsburgh Sleep Quality Index, were collected anonymously. Multiple logistic regression analysis was used to calculate odds ratios (ORs) for the association between long working hours and sleep problems, separately by sex. The response rate was 44.8% (n=558). After excluding ineligible responses, the final sample comprised 515 teachers (335 males and 180 females). Sleep problems were identified in 41.5% of males and 44.4% of females. Our results showed a significantly increased risk of sleep problems in males working >60 hours per week (OR 2.05 [95% CI 1.01-4.30]) compared with those working ≤40 hours per week. No significant association was found in females. There is a significant association between long working hours and sleep problems in male teachers. Reducing working hours may contribute to a reduction in sleep problems.
Spillover, nonlinearity, and flexible structures
NASA Technical Reports Server (NTRS)
Bass, Robert W.; Zes, Dean
1991-01-01
Many systems whose evolution in time is governed by Partial Differential Equations (PDEs) are linearized around a known equilibrium before Computer Aided Control Engineering (CACE) is considered. In this case, there are infinitely many independent vibrational modes, and it is intuitively evident on physical grounds that infinitely many actuators would be needed in order to control all modes. A more precise, general formulation of this grave difficulty (spillover problem) is due to A.V. Balakrishnan. A possible route to circumvention of this difficulty lies in leaving the PDE in its original nonlinear form, and adding the essentially finite dimensional control action prior to linearization. One possibly applicable technique is the Liapunov Schmidt rigorous reduction of singular infinite dimensional implicit function problems to finite dimensional implicit function problems. Omitting details of Banach space rigor, the formalities of this approach are given.
Fund allocation using capacitated vehicle routing problem
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita; Darus, Maslina
2014-09-01
In investment fund allocation, it is unwise for an investor to distribute his fund across several assets simultaneously, for economic reasons. One solution is to allocate the fund to one asset at a time, in a sequence that will either maximize returns or minimize risks depending on the investor's objective. The vehicle routing problem (VRP) provides an avenue to this issue. The VRP answers the question of how to efficiently use an available fleet of vehicles to meet a given service demand, subject to a set of operational requirements. This paper proposes using the capacitated vehicle routing problem (CVRP) to optimize investment fund allocation, employing data on selected stocks in the FTSE Bursa Malaysia. Results suggest that the CVRP can be applied to solve the issue of investment fund allocation and increase the investor's profit.
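As a hypothetical sketch of such a mapping (the distance measure, demand figures, and greedy route construction below are assumptions, not the authors' formulation), assets can be treated as CVRP customers whose demands are fund amounts, with each investment round acting as a capacity-limited vehicle:

import math

def greedy_cvrp(depot, customers, demand, capacity):
    # customers: {name: (x, y)}; builds routes by nearest-neighbor insertion,
    # starting a new route whenever no remaining customer fits the capacity.
    # Assumes every single demand fits within the capacity.
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [c for c in unvisited if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, customers[c]))
            route.append(nxt)
            load += demand[nxt]
            pos = customers[nxt]
            unvisited.remove(nxt)
        routes.append(route)
    return routes

if __name__ == "__main__":
    assets = {"A": (1, 2), "B": (4, 1), "C": (3, 5), "D": (6, 2)}   # hypothetical assets
    demand = {"A": 30.0, "B": 50.0, "C": 40.0, "D": 20.0}           # fund required per asset
    print(greedy_cvrp((0, 0), assets, demand, capacity=70.0))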
On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness
NASA Astrophysics Data System (ADS)
Vashkov'yak, M. A.
2018-01-01
The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of satellite orbit elements. The methodical accuracy has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) how to extract high-level features of aircraft; 2) locating objects within such a large image is difficult and time consuming; 3) the common problem of multiple resolutions of satellite images still exists. In this paper, inspired by the biological visual mechanism, a fusion detection framework is proposed, which fuses the top-down visual mechanism (a deep CNN model) and the bottom-up visual mechanism (GBVS) to detect aircraft. In addition, we use a multi-scale training method for the deep CNN model to solve the problem of multiple resolutions. Experimental results demonstrate that our method can achieve a better detection result than the other methods.
Developing a new stochastic competitive model regarding inventory and price
NASA Astrophysics Data System (ADS)
Rashid, Reza; Bozorgi-Amiri, Ali; Seyedhoseini, S. M.
2015-09-01
With the competition in today's business environment, the design of supply chains has become more complex than before. This paper deals with the retailer's location problem when customers choose their vendors and inventory costs are considered for retailers. In a competitive location problem, price and the location of facilities affect customers' demands; consequently, simultaneous optimization of the location and the inventory system is needed. To obtain a realistic model, demand and lead time are assumed to be stochastic parameters, and queuing theory is used to develop a comprehensive mathematical model. Due to the complexity of the problem, a branch and bound algorithm has been developed, and its performance has been validated on several numerical examples, which indicated the effectiveness of the algorithm. Also, a real case has been prepared to demonstrate the performance of the model in the real world.
A Short-Term Population Model of the Suicide Risk: The Case of Spain.
De la Poza, Elena; Jódar, Lucas
2018-06-14
A relevant proportion of deaths by suicide are attributed to other causes, so that part of the true number of suicides remains hidden. The existence of this hidden number of cases is explained by the nature of the problem. Problems like this involve violence, and produce fear and social shame in victims' families. The violence, fear and social shame experienced by victims favour a considerable number of suicides being identified as accidents or natural deaths. This paper proposes a short-term discrete compartmental mathematical model to measure the suicide risk for the case of Spain. The compartmental model classifies and quantifies the Spanish population within the age interval (16, 78) by degree of suicide risk and tracks the changes over time. Intercompartmental transits are due to the combination of quantitative and qualitative factors. Results are computed and simulations are performed to analyze the sensitivity of the model under uncertain coefficients.
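A minimal sketch of a discrete compartmental update of this general kind, assuming three risk compartments and made-up transition proportions (the paper's compartments, coefficients, and factor-driven transits are not reproduced here):

import numpy as np

# transition[i, j] = fraction of compartment i moving to compartment j per time step
# (rows sum to 1); the three compartments and all rates are illustrative assumptions.
transition = np.array([
    [0.96, 0.03, 0.01],   # low risk
    [0.10, 0.85, 0.05],   # moderate risk
    [0.05, 0.15, 0.80],   # high risk
])

def step(population):
    # One discrete time step: each compartment is redistributed along its row.
    return population @ transition

if __name__ == "__main__":
    pop = np.array([30_000_000.0, 1_500_000.0, 100_000.0])  # hypothetical sizes
    for _ in range(12):                                      # e.g., 12 monthly steps
        pop = step(pop)
    print(pop)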
Treatment for preschool children with interpersonal sexual behavior problems: a pilot study.
Silovsky, Jane F; Niec, Larissa; Bard, David; Hecht, Debra B
2007-01-01
This pilot study evaluated a 12-week group treatment program for preschool children with interpersonal sexual behavior problems (SBP; N = 85; 53 completed at least 8 sessions). Many children presented with co-occurring trauma symptoms and disruptive behaviors. In intent-to-treat analysis, a significant linear reduction in SBP due to number of treatment sessions attended was found, an effect that was independent of linear reductions affiliated with elapsed time. Under the assumption that treatment can have an incremental impact, more than one third of the variance was accounted for by treatment effects, with female and older children most favorably impacted. Caregivers reported increase in knowledge, satisfaction, and usefulness of treatment. In addition to replication, future research is needed to examine (a) effects of environment change and time on SBP, (b) stability of treatment effects, and (c) best practices to integrate evidence-based treatments for comorbid conditions.
Dual Key Speech Encryption Algorithm Based Underdetermined BSS
Zhao, Huan; Chen, Zuo; Zhang, Xixiang
2014-01-01
When the number of mixed signals is less than the number of source signals, underdetermined blind source separation (BSS) is a notably difficult problem. Because speech communication involves large amounts of data and real-time requirements, we exploit the intractability of the underdetermined BSS problem to present a dual-key speech encryption method. The original speech is mixed with dual key signals, which consist of random key signals (a one-time pad) generated from a secret seed and chaotic signals generated by a chaotic system. In the decryption process, approximate calculation is used to recover the original speech signals. The proposed algorithm for speech signal encryption can resist traditional attacks against the encryption system, and owing to the approximate calculation, decryption becomes faster and more accurate. It is demonstrated that the proposed method has a high level of security and can recover the original signals quickly and efficiently while maintaining excellent audio quality. PMID:24955430
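An illustrative sketch of the encryption idea, under simplifying assumptions (a fixed 2x3 mixing matrix, a Gaussian one-time pad, a logistic-map chaotic signal, and exact rather than approximate recovery), is the following; it is not the paper's algorithm.

import numpy as np

def logistic_map(x0, n, r=3.99):
    # Chaotic key signal from the logistic map x_{k+1} = r * x_k * (1 - x_k).
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1.0 - x[k - 1])
    return x

def encrypt(speech, seed, x0, A):
    rng = np.random.default_rng(seed)           # one-time pad from the secret seed
    otp = rng.standard_normal(speech.size)
    chaos = logistic_map(x0, speech.size)
    sources = np.vstack([speech, otp, chaos])   # 3 sources
    return A @ sources                          # 2 mixtures: underdetermined

def decrypt(mixtures, seed, x0, A):
    # The receiver regenerates both key signals, subtracts their known
    # contribution, and solves for the speech column by least squares.
    n = mixtures.shape[1]
    rng = np.random.default_rng(seed)
    otp = rng.standard_normal(n)
    chaos = logistic_map(x0, n)
    residual = mixtures - A[:, 1:] @ np.vstack([otp, chaos])
    a = A[:, 0]
    return (a @ residual) / (a @ a)

if __name__ == "__main__":
    A = np.array([[0.7, 0.5, 0.3],
                  [0.2, 0.6, 0.8]])             # 2x3 mixing matrix (assumed)
    speech = np.sin(np.linspace(0.0, 20.0, 1000))
    enc = encrypt(speech, seed=42, x0=0.37, A=A)
    dec = decrypt(enc, seed=42, x0=0.37, A=A)
    print(np.max(np.abs(dec - speech)))         # should be near machine precision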
Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.
Junker, André; Brenner, Karl-Heinz
2018-03-01
The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, also allows large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling the simulation of structures, such as certain diffractive optical elements, with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
A bi-objective model for robust yard allocation scheduling for outbound containers
NASA Astrophysics Data System (ADS)
Liu, Changchun; Zhang, Canrong; Zheng, Li
2017-01-01
This article examines the yard allocation problem for outbound containers under uncertainty factors, mainly the arrival and operation times of calling vessels. Based on a time-buffer insertion method, a bi-objective model is constructed to minimize the total operational cost and to maximize robustness against the uncertainty. Due to the NP-hardness of the constructed model, a two-stage heuristic is developed to solve the problem. In the first stage, initial solutions are obtained by a greedy algorithm that looks n steps ahead with the uncertainty factors set to their respective expected values; in the second stage, starting from these solutions and taking the uncertainty factors into account, a neighbourhood search heuristic is employed to generate robust solutions that better withstand fluctuations in the uncertainty factors. Finally, extensive numerical experiments are conducted to test the performance of the proposed method.
NASA Astrophysics Data System (ADS)
Tashakkori, H.; Rajabifard, A.; Kalantari, M.
2016-10-01
Search and rescue procedures for indoor environments are quite complicated because much of the indoor information is unavailable to rescuers before they physically enter the incident scene. Thus, deciding how many crew members are required and how they should be dispatched in the building, given the various access points and building complexities, so as to cover the search area in minimum time depends on the prior knowledge and experience of the emergency commanders. Hence, this paper introduces the Search and Rescue Problem (SRP), which aims at finding the best search and rescue routes that minimize the overall search time in a building. A 3D BIM-oriented indoor GIS is integrated into the indoor route graph to find accurate routes based on the building's geometric and semantic information. An Ant Colony Based Algorithm is presented that finds the number of first responders required and their individual routes to search all rooms and points of interest inside the building, minimizing the overall time spent by all rescuers inside the disaster area. The evaluation of the proposed model for a case-study building shows a significant improvement in search and rescue time, which will lead to a higher chance of saving lives and less exposure of the emergency crew to danger.
Emergence of occupational medicine in Victorian times
Lee, W. R.
1973-01-01
Lee, W. R. (1973).British Journal of Industrial Medicine,30, 118-124. Emergence of occupational medicine in Victorian times. The events surrounding the establishment and development of legislation to protect the health of people at work in Victorian times are already well documented. This paper deals with some other aspects of the development of occupational medicine. Medical opinions at the time did not always see the misuse of child labour as due simply to avaricious mill owners, but in part due to the parents and in part to the workmen subcontractors. The establishment of the certifying surgeons is briefly reviewed and their coming together to form an association in 1868 may be related to questions about the need for medical certificates of age which were being requested by the many factory owners brought under factory legislation for the first time in 1864 and 1867. The plight of injured workmen and their dependents was early recognized, although it was late in the Victorian era before any statutory provision was made for them. The idea of linking compensation with preventive measures came to the fore in 1845 when some Manchester doctors, later supported by Edwin Chadwick, examined the workings at the Woodhead railway tunnel across the Pennines. When compensation legislation was passed some half a century later the idea was lost, and to this day compensation for and prevention of industrial injury and disease remain separated. The change of industrial diseases from a medical curiosity to a problem requiring State intervention is traced over the latter part of the Victorian era. The whole piecemeal pattern illustrating the precept that `social problems come first, social philosophy after' has persisted until the far-reaching changes in health and safety legislation of the present day. PMID:4267346
Applications of GIFTS III to Structural Engineering Problems.
The paper describes the latest version of the GIFTS SYSTEM (Graphics Oriented Interactive Finite Element Package for Time-Sharing), due for release...at the end of March 1976. The paper gives a description of the program modules available in the GIFTS library and the options available within its...framework. Examples are given to demonstrate the use of GIFTS in design-oriented applications. Some performance measurements are included. Amongst the
Onchocerciasis-related epilepsy? Prospects at a time of uncertainty.
Marin, Benoît; Boussinesq, Michel; Druet-Cabanac, Michel; Kamgno, Joseph; Bouteille, Bernard; Preux, Pierre-Marie
2006-01-01
Epilepsy and onchocerciasis (river blindness) constitute serious public health problems in several tropical countries. There are four main mechanisms that might explain a relationship between these two diseases: (i) the presence of Onchocerca volvulus in the central nervous system; (ii) the pathogenicity of various O. volvulus strains; (iii) immunological mechanisms involving cross-reactive immunization or cytokine production during infection; and (iv) the triggering role of insomnia due to itching.
Current Trends in Metric Conversion in the United States: Potential Trouble for National Defense.
1980-05-01
measures in the United States appears inevitable, but is being prolonged due to present legislation which allows each industrial sector to convert...centralized government planning and leadership toward metrication in the shortest possible time. II. Problem: Spurred by the automotive industry, voluntary...conversion among major U.S. industries is snowballing while public resistance to metrication is stiffening. This confrontation threatens to
Dezhurov holds a GTS electronics unit in Zvezda during Expedition Three
2001-08-01
ISS003-E-5477 (August 2001) --- Cosmonaut Vladimir Dezhurov of Rosaviakosmos, Expedition Three flight engineer, holds a Global Time System (GTS) electronics unit in the Zvezda Service Module. Please note: The date identifiers on some frames are not accurate due to a technical problem with one of the Expedition Three cameras. When a specific date is given in the text or description portion, it is correct.
Electrorheological (ER) Fluids: A Research Needs Assessment
1993-05-01
transmission, and to reduce energy loss and damage due to vibration and oscillation. A large number and variety of ER devices have been invented; they are...suspension must be stabilized to prevent settling. Separation could cause loss of fluid performance, plugging of flow paths, and other problems...performance of ER fluids, especially those containing water, can change with time as a result of component loss through evaporation at elevated use
Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco
2017-01-01
The use of Wireless Sensor Network (WSN) technologies is an attractive option to support wide-scale monitoring applications, such as the ones that can be found in precision agriculture, environmental monitoring and industrial automation. The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable topology to build wide-scale WSNs. Despite some of its known advantages, including timing synchronisation and duty-cycle operation, cluster-tree networks may suffer from severe network congestion problems due to the convergecast pattern of its communication traffic. Therefore, the careful adjustment of transmission opportunities (superframe durations) allocated to the cluster-heads is an important research issue. This paper proposes a set of proportional Superframe Duration Allocation (SDA) schemes, based on well-defined protocol and timing models, and on the message load imposed by child nodes (Load-SDA scheme), or by number of descendant nodes (Nodes-SDA scheme) of each cluster-head. The underlying reasoning is to adequately allocate transmission opportunities (superframe durations) and parametrize buffer sizes, in order to improve the network throughput and avoid typical problems, such as: network congestion, high end-to-end communication delays and discarded messages due to buffer overflows. Simulation assessments show how proposed allocation schemes may clearly improve the operation of wide-scale cluster-tree networks. PMID:28134822
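A minimal sketch of the proportional allocation idea behind the Load-SDA and Nodes-SDA schemes is given below; the cluster-head names, traffic figures and duration budget are hypothetical, and the real schemes also respect IEEE 802.15.4 superframe-order constraints and buffer sizing, which are omitted here.

```python
def proportional_sda(shares, total_duration, sd_min):
    """Split total_duration among cluster-heads proportionally to `shares`
    (message load for Load-SDA, descendant count for Nodes-SDA), keeping
    every allocation above a minimum superframe duration sd_min."""
    total = float(sum(shares.values()))
    alloc = {ch: max(sd_min, total_duration * w / total) for ch, w in shares.items()}
    scale = total_duration / sum(alloc.values())     # renormalise to fit the schedule
    return {ch: d * scale for ch, d in alloc.items()}

# Load-SDA: weight by the message load imposed by child nodes (e.g. messages/s)
print(proportional_sda({"CH0": 40, "CH1": 10, "CH2": 25}, total_duration=1000.0, sd_min=50.0))
# Nodes-SDA: weight by the number of descendant nodes instead
print(proportional_sda({"CH0": 12, "CH1": 3, "CH2": 8}, total_duration=1000.0, sd_min=50.0))
```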
Shooter position estimation with muzzle blast and shockwave measurements from separate locations
NASA Astrophysics Data System (ADS)
Grasing, David
2016-05-01
There are two acoustical events associated with small arms fire: the muzzle blast (created by bullets being expelled from the barrel of the weapon), and the shockwave (created by bullets which exceed the speed of sound). Assuming the ballistics of a round are known, the times and directions of arrival of the acoustic events furnish sufficient information to determine the origin of the shot. Existing methods tacitly assume that it is a single sensor which makes measurements of the times and directions of arrival. If the sensor is located past the point where the bullet goes transonic, or if the sensor is far off the axis of the shot line, single-sensor localization becomes highly inaccurate due to the ill-conditioning of the localization problem. In this paper, a more general approach is taken which allows for localizations from measurements made at separate locations. There are considerable advantages to this approach, the most noteworthy of which is the improvement in localization accuracy due to the improvement in the conditioning of the problem. Additional benefits include: the potential to locate in cases where a single sensor has insufficient information, furnishing high quality initialization to data fusion algorithms, and the potential to identify the round from a set of possible rounds.
NASA Astrophysics Data System (ADS)
Kifonidis, K.; Müller, E.
2012-08-01
Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations, are presented to evaluate the convergence behavior, and the stability properties of this solver. Algorithmic areas are determined where further work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers. These are applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage-smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes). Beyond a Courant number of nine thousand, even complete multigrid breakdown is observed. Local Fourier analysis indicates that the degradation of the convergence rate is associated with the coarse-grid correction algorithm. An implicit scheme for the Euler equations that makes use of the present method was, nevertheless, able to outperform a standard explicit scheme on a time-dependent problem with a Courant number of order 1000. Conclusions: For steady-state problems, the described approach enables the construction of parallelizable, efficient, and robust implicit hydrodynamics solvers. The applicability of the method to time-dependent problems is presently restricted to cases with moderately high Courant numbers. This is due to an insufficient coarse-grid correction of the employed multigrid algorithm for large time steps. Further research will be required to help us to understand and overcome the observed multigrid convergence difficulties for time-dependent problems.
Reproductive health problems loom in LDCs.
1995-01-01
According to reports from the Program for Appropriate Technology in Health (PATH) and the World Bank, women in less developed countries (LDCs) suffer the greatest risk due to reproductive health problems. At any given time, a woman in an LDC is more likely than not to have at least 1 reproductive health problem that could be treated by a primary care provider or counseling and referral ("Women's Reproductive Health: The Role of Family Planning Programs," a PATH report). Among diseases for which cost-effective interventions exist (treatments or preventive measures), reproductive health problems account for the majority of the disease burden (a measure of healthy years lost due to disability or premature death) among women aged 15-44. A study of 650 women in India found that more than 50% reported specific gynecological problems; clinical examination found more than 90% had 1 or more such problems. In a study of 509 nonpregnant women in rural Egypt, it was discovered that more than 52% had a reproductive tract infection, 56% had some form of uterine prolapse, 14% had a urinary tract infection, and 11% had an abnormal Pap smear. Major reproductive health problems continue into menopause; cervical cancer, which is linked to reproductive tract infections and early and frequent childbearing, strikes 400,000 women in LDCs each year. Sexually transmitted disease (STD) and human immunodeficiency virus (HIV) infections are also problems; women are twice as likely as men to contract gonorrhea from an infected sex partner, and 14 million women will have been infected with HIV by the year 2000 (WHO estimate). Treatment is often unsought by women because they do not understand the risk, are unaware of the symptoms, or fear the stigma of attending a clinic. If all the women who wanted to control their fertility had access to family planning services, maternal mortality would decrease by nearly 50%. Reproductive health services (routine gynecological care, perinatal care, family planning services, cancer screening, STD/HIV services, nutritional supplementation, and other services appropriate to age) are needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollingsworth, Jeff
2014-07-31
The purpose of this project was to develop tools and techniques to improve the ability of computational scientists to investigate and correct problems (bugs) in their programs. Specifically, the University of Maryland component of this project focused on the problems associated with the finite number of bits available in a computer to represent numeric values. In large scale scientific computation, numbers are frequently added to and multiplied with each other billions of times. Thus even small errors due to the representation of numbers can accumulate into big errors. However, using too many bits to represent a number results in additional computation, memory, and energy costs. Thus it is critical to find the right size for numbers. This project focused on several aspects of this general problem. First, we developed a tool to look for cancellations, the catastrophic loss of precision in numbers due to the addition of two numbers whose actual values are close to each other, but whose representation in a computer is identical or nearly so. Second, we developed a suite of tools to allow programmers to identify exactly how much precision is required for each operation in their program. This tool allows programmers to verify that enough precision is available and, more importantly, to find cases where extra precision could be eliminated to allow the program to use less memory, computer time, or energy. These tools use advanced binary modification techniques to allow the analysis of actual optimized code. The system, called Craft, has been applied to a number of benchmarks and real applications.
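The snippet below is not part of Craft; it is a small, self-contained illustration of the catastrophic cancellation the tool searches for, and of why precision choice matters.

```python
import math
import numpy as np

# subtracting nearly equal values: double precision keeps the small difference,
# single precision loses most of its significant digits
a, b = 1.0000001, 1.0
print(np.float64(a) - np.float64(b))    # ~1e-7 with only a tiny relative error
print(np.float32(a) - np.float32(b))    # ~1.19e-7: roughly 20% relative error after cancellation

# the classic quadratic-formula example: rearranging avoids the cancellation entirely
aq, bq, cq = 1.0, 1e8, 1.0
naive_root = (-bq + math.sqrt(bq * bq - 4 * aq * cq)) / (2 * aq)   # cancellation: ~ -7.5e-9
stable_root = (2 * cq) / (-bq - math.sqrt(bq * bq - 4 * aq * cq))  # accurate: ~ -1.0e-8
print(naive_root, stable_root)
```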
SORPTION OF ARSENATE AND ARSENITE ON RUO2·XH2O: ANALYSIS OF SORBED PHASE OXIDATION STATE BY XANES
Arsenic contamination in water, soil and sediment is a global problem. Awareness of the problems created by As contamination has increased in recent years due to reports from Asia describing immense health problems due to As in drinking water [1, 2]. Changes in the U.S. regulati...
Problematic use of social networking sites among urban school going teenagers.
Meena, Parth Singh; Mittal, Pankaj Kumar; Solanki, Ram Kumar
2012-07-01
Social networking sites like Facebook, Orkut and Twitter are virtual communities where users can create individual public profiles, interact with real-life friends and meet other people based on shared interests. An exponential rise in the usage of social networking sites has been seen within the last few years. Their ease of use and their immediate gratification effect on users have changed the way people in general, and students in particular, spend their time. Young adults, particularly teenagers, tended to be unaware of just how much time they really spent on social networking sites. Negative correlates of social networking site usage include decreased real-life social community participation and academic achievement, as well as relationship problems, each of which may be indicative of potential addiction. The aim of the study was to find out whether teenagers, especially those living in cities, spend too much time on social networking websites. 200 subjects, both boys and girls, were included in the cross-sectional study and were given a 20-item Young's internet addiction test modified for social networking sites. The responses were analyzed using the chi-square test and Fisher's exact test. 24.74% of the students had occasional or frequent problems, while 2.02% of them were experiencing severe problems due to excessive time spent using social networking sites. With the ever increasing popularity of social media, teenagers are devoting significant time to social networking on websites and are prone to getting 'addicted' to this form of online social interaction.
NASA Astrophysics Data System (ADS)
Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.
2017-12-01
Multitemporal observations acquired frequently by satellites with short revisit periods such as the Moderate Resolution Imaging Spectroradiometer (MODIS), is an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic to it, which is used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight day MODIS composite to create a Normalized Difference Vegetation Index (NDVI) time-series of ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The nonlinear estimation based on PF is shown to improve parameter estimation for different land cover types compared to existing techniques that use the Extended Kalman Filter (EKF), due to linearization of the harmonic model. As these parameters are representative of a given land cover, its applicability in near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by considering change as a rare class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated with high spatial resolution change maps of the given regions.
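A compact, self-contained sketch of the estimation idea is shown below: a bootstrap particle filter tracking the mean, amplitude and phase of a seasonal harmonic from a synthetic NDVI-like series; the noise levels, particle count and harmonic form are assumptions for illustration and do not reproduce the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
P_YEAR = 46                                  # eight-day composites per year
T = 3 * P_YEAR
t = np.arange(T)
true_m, true_a, true_phi = 0.45, 0.25, 0.8   # synthetic "land cover" parameters
y = true_m + true_a * np.cos(2 * np.pi * t / P_YEAR + true_phi) + 0.03 * rng.standard_normal(T)

N = 2000                                     # particles over (mean, amplitude, phase)
parts = np.column_stack([rng.uniform(0.2, 0.8, N),
                         rng.uniform(0.05, 0.5, N),
                         rng.uniform(-np.pi, np.pi, N)])
obs_sd = 0.03
walk_sd = np.array([0.002, 0.002, 0.01])     # slow random-walk evolution of the hidden state

for k in range(T):
    parts += walk_sd * rng.standard_normal((N, 3))                 # predict
    pred = parts[:, 0] + parts[:, 1] * np.cos(2 * np.pi * k / P_YEAR + parts[:, 2])
    logw = -0.5 * ((y[k] - pred) / obs_sd) ** 2                    # Gaussian likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    parts = parts[rng.choice(N, N, p=w)]                           # resample

print("estimated (mean, amplitude, phase):", parts.mean(axis=0))   # ~ (0.45, 0.25, 0.8)
```

A change metric of the kind described above can then be formed from the deviation of these hidden parameters between pre- and post-event fits.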
Apprehensions and problems after laryngectomy: Patients' perspective.
Hirani, Ismail; Siddiqui, Atif Hafeez; Muhammad Khyani, Iqbal Abdul
2015-11-01
To evaluate the apprehensions, social, sexual and financial problems in patients with advanced laryngeal cancer after total laryngectomy and the impact of attending a laryngeal club on these problems. The analytical study was conducted at the Dow Medical College and Civil Hospital Karachi from January 1996 to December 2011. Patients with total laryngectomy, operated for advanced laryngeal cancer at various centres of Sindh and Balochistan, attending the Laryngeal Club of Pakistan, situated at Civil Hospital Karachi, were included. All the patients were evaluated through a questionnaire covering their apprehensions regarding the social, sexual and financial impact on their lives after total laryngectomy. Data were analysed using SPSS 16. Of the 125 patients, 120(96%) were males, and 5(4%) were females; all housewives. The overall mean age was 54.8±0.5 years (range: 31-65 years). Further, 92(74%) participants were worried about financial uncertainty, while 84(67%) had regrets over loss of their voice; patients worried about losing family support and facing social rejection were 23(18%) and 15(12%) respectively. Only 7(5%) patients feared losing the sexual relationship with their spouse. All these apprehensions were subdued after attending the Laryngeal Club of Pakistan. Severe financial impact was faced by 55(44%) patients due to loss of job, while 5(4%) had moderate impact due to change of job with lower income and 60(48%) patients had no financial problem. A good 102(82%) participants thought the support from their friends and family was up to their expectations; 98(78%) enjoyed a satisfactory sex life although with a reduced frequency of 1 to 2 intercourses per month; 21(17%) had a frequency of 3-10 per month; and 3(2%) had more than 10 per month. Only 16(13%) patients were not involved in sexual relations with their spouses due to various reasons. The majority of laryngectomised patients expressed apprehensions and showed some social problems after laryngectomy, especially in the initial phase, which improved either with the passage of time or after attending the Laryngectomy Club. The main problem was financial constraints; the majority had good friend and family support and enjoyed a satisfactory sexual relationship with their spouse.
NASA Astrophysics Data System (ADS)
Soleilhac, Antonin; Bertorelle, Franck; Antoine, Rodolphe
2018-03-01
Protein-templated gold nanoclusters (AuNCs) are very attractive due to their unique fluorescence properties. For any future use as in vivo probes, however, a major problem may arise from protein structure changes upon nucleation of an AuNC within the protein. In this work, we propose a simple and reliable fluorescence-based technique for measuring the hydrodynamic size of protein-templated gold nanoclusters. This technique uses the relation between the time-resolved fluorescence anisotropy decay and the hydrodynamic volume, through the rotational correlation time. We determine the molecular size of protein-directed AuNCs with protein templates of increasing size, e.g. insulin, lysozyme, and bovine serum albumin (BSA). Comparing the sizes obtained by other techniques (e.g. dynamic light scattering and small-angle X-ray scattering) for bare proteins and for proteins containing gold clusters allows us to address the volume changes induced either by conformational changes (for BSA) or by the formation of protein dimers (for insulin and lysozyme) during cluster formation and incorporation.
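The core relation used by the technique, the Stokes-Einstein-Debye link between the rotational correlation time and the hydrodynamic volume, can be evaluated directly; the numbers below (water viscosity at room temperature, a 20 ns correlation time) are illustrative assumptions, not values from the study.

```python
import math

k_B = 1.380649e-23      # J/K
T = 298.15              # K
eta = 0.00089           # Pa*s, approximate viscosity of water at 25 C

def hydrodynamic_size(theta_ns):
    """Hydrodynamic volume (nm^3) and radius (nm) from a rotational correlation
    time, via the Stokes-Einstein-Debye relation theta = eta * V_h / (k_B * T)."""
    theta = theta_ns * 1e-9
    V = theta * k_B * T / eta                       # m^3
    r = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)  # m
    return V * 1e27, r * 1e9

V_nm3, r_nm = hydrodynamic_size(20.0)               # hypothetical 20 ns anisotropy decay
print(f"V_h ~ {V_nm3:.0f} nm^3, r_h ~ {r_nm:.1f} nm")
```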
A Sarsa(λ)-based control model for real-time traffic light coordination.
Zhou, Xiaoke; Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei
2014-01-01
Traffic problems often occur because traffic demand from the large number of vehicles exceeds road capacity. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction wants to obtain a larger traffic flow. In the process, junctions form a policy of coordination, as well as constraints on adjacent junctions, to maximize their own interests. A good traffic signal timing policy helps solve the problem. However, as there are so many factors that can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience leaves them unable to adapt to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a reinforcement learning based traffic control approach can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model learns the traffic cost of the vehicles, which considers delay time, the number of waiting vehicles, and the integrated saturation, from its experience in order to determine the optimal actions. The experimental results show an encouraging improvement in traffic control, indicating that the proposed model is capable of facilitating real-time dynamic traffic control.
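To make the learning rule concrete, here is a minimal tabular Sarsa(λ) loop with eligibility traces; the state and action spaces, the placeholder environment and the reward (a stand-in for the negative traffic cost) are invented and much simpler than the paper's traffic model.

```python
import numpy as np

n_states, n_actions = 50, 4                 # e.g. discretised queue levels x signal phases
alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1
Q = np.zeros((n_states, n_actions))
E = np.zeros_like(Q)                        # eligibility traces
rng = np.random.default_rng(1)

def step(s, a):
    """Placeholder environment: returns (reward, next_state)."""
    return -rng.random(), int(rng.integers(n_states))

def eps_greedy(s):
    return int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))

for episode in range(200):
    E[:] = 0.0
    s = int(rng.integers(n_states)); a = eps_greedy(s)
    for _ in range(100):
        r, s2 = step(s, a)
        a2 = eps_greedy(s2)
        delta = r + gamma * Q[s2, a2] - Q[s, a]   # TD error
        E[s, a] += 1.0                            # accumulating trace
        Q += alpha * delta * E                    # update all recently visited pairs
        E *= gamma * lam                          # decay the traces
        s, a = s2, a2
```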
NASA Astrophysics Data System (ADS)
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-01-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. PMID:24829517
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-05-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
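The fixed-point / discrete harmonic balance idea can be illustrated on a scalar surrogate problem; the sketch below is an assumption-laden toy (a single nonlinear ODE x'(t) + g(x(t)) = s(t) with a periodic source), not the paper's 3-D finite element eddy-current formulation, but it shows how a time-independent fixed-point coefficient decouples the harmonics within each nonlinear iteration.

```python
import numpy as np

N, T = 64, 1.0                              # time samples per period, period length
t = np.arange(N) * T / N
omega = 2 * np.pi / T
k = 2 * np.pi * np.fft.fftfreq(N, d=T / N)  # angular frequency of each discrete harmonic

s = 2.0 * np.cos(omega * t)                 # periodic excitation
g = lambda x: x + 0.5 * x**3                # nonlinear "material" law
nu_fp = 2.0                                 # fixed-point coefficient (time independent)

x = np.zeros(N)
for it in range(200):
    r = g(x) - nu_fp * x                               # nonlinear part moved to the source side
    X = np.fft.fft(s - r) / (1j * k + nu_fp)           # each harmonic solved independently
    x_new = np.real(np.fft.ifft(X))
    if np.max(np.abs(x_new - x)) < 1e-10:
        x = x_new
        break
    x = x_new
print("converged after", it + 1, "fixed-point iterations; max|x| =", float(np.max(np.abs(x))))
```

With g monotone and the fixed-point coefficient chosen at least as large as the steepest slope of g, each sweep is a contraction, which is the property that keeps the harmonics uncoupled inside every nonlinear step.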
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from being used only to validate final designs to driving mainstream product development. However, there are still niche application areas, such as oiling simulations, where traditional CFD simulation times are prohibitive for use in product development, which therefore has to rely on experimental methods that are expensive. In this paper a unique example of a sprocket-chain simulation will be presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in application areas with complex geometry, which pose severe challenges to classical finite volume CFD methods due to complex moving geometries, moving meshes and high resolution requirements leading to long simulation times. The simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations and to use them in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX will be presented. This abstract is replacing DFD16-2016-000045.
Wen, Tingxi; Zhang, Zhongnan; Wong, Kelvin K. L.
2016-01-01
Unmanned aerial vehicle (UAV) has been widely used in many industries. In the medical environment, especially in some emergency situations, UAVs play an important role such as the supply of medicines and blood with speed and efficiency. In this paper, we study the problem of multi-objective blood supply by UAVs in such emergency situations. This is a complex problem that includes maintenance of the supply blood’s temperature model during transportation, the UAVs’ scheduling and routes’ planning in case of multiple sites requesting blood, and limited carrying capacity. Most importantly, we need to study the blood’s temperature change due to the external environment, the heating agent (or refrigerant) and time factor during transportation, and propose an optimal method for calculating the mixing proportion of blood and appendage in different circumstances and delivery conditions. Then, by introducing the idea of transportation appendage into the traditional Capacitated Vehicle Routing Problem (CVRP), this new problem is proposed according to the factors of distance and weight. Algorithmically, we use the combination of decomposition-based multi-objective evolutionary algorithm and local search method to perform a series of experiments on the CVRP public dataset. By comparing our technique with the traditional ones, our algorithm can obtain better optimization results and time performance. PMID:27163361
NASA Astrophysics Data System (ADS)
Volkov, D.
2017-12-01
We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
Wen, Tingxi; Zhang, Zhongnan; Wong, Kelvin K L
2016-01-01
Unmanned aerial vehicle (UAV) has been widely used in many industries. In the medical environment, especially in some emergency situations, UAVs play an important role such as the supply of medicines and blood with speed and efficiency. In this paper, we study the problem of multi-objective blood supply by UAVs in such emergency situations. This is a complex problem that includes maintenance of the supply blood's temperature model during transportation, the UAVs' scheduling and routes' planning in case of multiple sites requesting blood, and limited carrying capacity. Most importantly, we need to study the blood's temperature change due to the external environment, the heating agent (or refrigerant) and time factor during transportation, and propose an optimal method for calculating the mixing proportion of blood and appendage in different circumstances and delivery conditions. Then, by introducing the idea of transportation appendage into the traditional Capacitated Vehicle Routing Problem (CVRP), this new problem is proposed according to the factors of distance and weight. Algorithmically, we use the combination of decomposition-based multi-objective evolutionary algorithm and local search method to perform a series of experiments on the CVRP public dataset. By comparing our technique with the traditional ones, our algorithm can obtain better optimization results and time performance.
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
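The radial weighting described above can be illustrated with a small stand-alone sketch; the domain radius, maximum weight and cloning rule below are assumptions for demonstration and are not taken from the DSMC (2007) code itself.

```python
import numpy as np

rng = np.random.default_rng(3)
R_DOMAIN, W_MAX = 0.1, 50.0                          # assumed domain radius and peak weight

def radial_weight(r):
    """Continuous, linear radial weighting: a molecule at radius r represents
    W(r) real molecules, so simulated populations stay manageable near the axis."""
    return 1.0 + (W_MAX - 1.0) * r / R_DOMAIN

def copies_after_move(r_old, r_new):
    """Clone or discard a molecule so that, on average, W_old / W_new copies
    survive when it moves between radii (conserving the represented number)."""
    ratio = radial_weight(r_old) / radial_weight(r_new)
    n = int(ratio)
    return n + (1 if rng.random() < ratio - n else 0)

print(copies_after_move(0.08, 0.02))   # inward move: weight drops, usually duplicated
print(copies_after_move(0.02, 0.08))   # outward move: weight grows, often discarded
```

The abstract's duplication-buffer modification concerns how the extra copies produced by such inward moves are handled, with their velocities varied while the molecular kinetic energy is retained.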
Klügl, Ines; Hiller, Karl-Anton; Landthaler, Michael; Bäumler, Wolfgang
2010-08-01
Millions of people are tattooed. However, the frequency of health problems is unknown. We performed an Internet survey in German-speaking countries. The provenance of tattooed participants (n = 3,411) was evenly distributed in Germany. The participants had many (28%; >4) and large tattoos (36%; ≥900 cm²). After tattooing, the people described skin problems (67.5%) or systemic reactions (6.6%). Four weeks after tattooing, 9% still had health problems. Six percent reported persistent health problems due to the tattoo, of which females (7.3%) were more frequently concerned than males (4.2%). Colored tattoos provoked more short-term skin (p = 0.003) or systemic (p = 0.0001) reactions than black tattoos. Also the size of tattoos and the age at the time of tattooing play a significant role in many health problems. Our results show that millions of people in the Western world supposedly have transient or persisting health problems after tattooing. Owing to the large number and size of the tattoos, tattooists inject several grams of tattoo colorants into the skin, which partly spread in the human body and stay for a lifetime. The latter might cause additional health problems in the long term. Copyright 2010 S. Karger AG, Basel.
The analysis of 146 patients with difficult laparoscopic cholecystectomy.
Bat, Orhan
2015-01-01
Laparoscopic cholecystectomy (LC) is a very commonly performed surgical intervention. Acute or chronic cholecystitis, adhesions due to previous upper abdominal surgery, Mirizzi's syndrome and obesity are common clinical conditions that can be associated with difficult cholecystectomy. In this study, we evaluated and scored the patients with difficult surgical exploration during laparoscopic cholecystectomy. All patients who underwent LC from 2010 to 2015 were retrospectively reviewed. According to intraoperative findings, DLC cases were described and classified. Class I difficulty: adhesion of the omentum majus, transverse colon or duodenum to the fundus of the gallbladder. Class II difficulty: adhesions in Calot's triangle and difficulty in dissection of the cystic artery and cystic duct. Class III difficulty: difficulty in dissection of the gallbladder bed (scleroatrophic gallbladder, hemorrhage from the liver during dissection of the gallbladder, cirrhotic liver). Class IV difficulty: difficulty in exploration of the gallbladder due to intraabdominal adhesions, including technical problems. A total of 146 patients were operated on with DLC. The most common difficulty type was Class I difficulty (88 patients/60.2%). Laparoscopic cholecystectomy was converted to laparotomy in 98 patients. Operation time was found to be related to conversion to open surgery (P<0.05). The wound infection rate was also statistically higher in the conversion group (P<0.05). The operation time was found to be longest with Class II difficulty. The conversion rate to open surgery was also highest in the Class II difficulty group. Class II difficulty, characterized by severe adhesions in Calot's triangle, is the most serious problem among all DLC cases, with a longer operation time and a higher conversion rate.
Stergiannis, Pantelis; Katsoulas, Theodoros; Fildissis, George; Intas, George; Galanis, Peter; Kosta, Natalia; Zidianakis, Vasilios; Baltopoulos, George
2014-01-01
The objective of this study was to assess changes in health-related quality of life (HRQOL) in multiple trauma patients due to motor vehicle crashes during a follow-up period of 2 years after discharge from an intensive care unit (ICU) and the effect of income and financial cost of rehabilitation in HRQOL. The study was a prospective observational study of multiple trauma patients from January 2009 to January 2011 who were hospitalized in a general, medical, and surgical ICU of a district hospital in Athens, Greece. Eighty-five patients with multiple traumas due to motor vehicle crashes and with an ICU stay of more than 24 hours were included in the study. HRQOL was assessed by a general questionnaire, the EuroQol 5D. Increased monthly household income and absence of traumatic brain injuries were associated with an improved EQ-VAS score. The frequency of severe problems in mobility, self-care, usual activities, pain/discomfort, and anxiety/depression decreased over time. The financial cost of rehabilitation was initially high but decreased over time. Severely injured victims of motor vehicle crashes suffer from serious problems in terms of HRQOL which is gradually improved even 2 years after hospital discharge. In addition, HRQOL is significantly related to income. Resources used for rehabilitation decrease over time, but even at 24 months, the patients still use half of the amount as compared with the cost of the first 6 months after trauma.
Costa, G; Pickup, L; Di Martino, V
1988-01-01
This report summarizes the main results of research promoted by the European Foundation for the Improvement of Living and Working Conditions, concerning the impact of commuting on the health and safety of workers. An empirical study, carried out among 1167 industrial Italian workers, shows that "commuters" (workers whose journey from home to work usually does not take less than 45 min in each direction) experienced a more stressed life-style than did "non commuters" (whose journey does not take more than 20 min). Commuting appears for many workers to be a necessity which is imposed by external factors, such as the housing market and job opportunities. Commuting is shown to interfere with patterns of everyday life by restricting free-time and reducing sleeping time. A majority of commuters use public transport mainly because of cost. Public transport commuters have problems due to more changes between modes, idle waiting times and delays leading to late arrival at work. Inside transport modes, commuters suffered discomfort as a result of overcrowding, microclimatic conditions, noise and vibrations. Commuters also reported higher psychological stress scores, more health complaints, essentially of psychosomatic nature, and greater absenteeism from work due to sickness. Commuting, in addition to shiftwork, further increases sleep problems, psychosomatic complaints and difficulties with family and social life. Women commuters were at a greater disadvantage than men, having more family difficulties, more travelling complaints and higher absenteeism.
Parastomal hernias after radical cystectomy and ileal conduit diversion
Donahue, Timothy F.
2016-01-01
Parastomal hernia, defined as an "incisional hernia related to an abdominal wall stoma", is a frequent complication after conduit urinary diversion that can negatively impact quality of life and present a clinically significant problem for many patients. Parastomal hernia (PH) rates may be as high as 65% and while many patients are asymptomatic, in some series up to 30% of patients require surgical intervention due to pain, leakage, ostomy appliance problems, urinary obstruction, and rarely bowel obstruction or strangulation. Local tissue repair, stoma relocation, and mesh repairs have been performed to correct PH, however, long-term results have been disappointing with recurrence rates of 30%–76% reported after these techniques. Due to high recurrence rates and the potential morbidity of PH repair, efforts have been made to prevent PH development at the time of the initial surgery. Randomized trials of circumstomal prophylactic mesh placement at the time of colostomy and ileostomy stoma formation have shown significant reductions in PH rates with acceptably low complication profiles. We have placed prophylactic mesh at the time of ileal conduit creation in patients at high risk for PH development and found it to be safe and effective in reducing the PH rates over the short-term. In this review, we describe the clinical and radiographic definitions of PH, the clinical impact and risk factors associated with its development, and the use of prophylactic mesh placement for patients undergoing ileal conduit urinary diversion with the intent of reducing PH rates. PMID:27437533
Design and Research of the Sewage Treatment Control System
NASA Astrophysics Data System (ADS)
Chu, J.; Hu, W. W.
Due to the rapid development of China's economy, water pollution has become a problem that we have to face. In particular, how to deal with industrial wastewater has become a top priority. In wastewater treatment, a PLC-based control system meets the design requirements in terms of real-time performance, reliability, precision and so on. By integrating sequence control and process control, the PLC offers high reliability, a simple network structure, and convenient, flexible use. The PLC is a powerful tool for small and medium-sized industrial automation. Therefore, a sewage treatment control system that takes the PLC as its core can, to a certain extent, effectively solve the problem of industrial wastewater.
Why don’t you use Evolutionary Algorithms in Big Data?
NASA Astrophysics Data System (ADS)
Stanovov, Vladimir; Brester, Christina; Kolehmainen, Mikko; Semenkina, Olga
2017-02-01
In this paper we raise the question of using evolutionary algorithms in the area of Big Data processing. We show that evolutionary algorithms provide evident advantages due to their high scalability and flexibility, their ability to solve global optimization problems and optimize several criteria at the same time for feature selection, instance selection and other data reduction problems. In particular, we consider the usage of evolutionary algorithms with all kinds of machine learning tools, such as neural networks and fuzzy systems. All our examples prove that Evolutionary Machine Learning is becoming more and more important in data analysis and we expect to see the further development of this field especially in respect to Big Data.
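As one concrete (and deliberately small) example of the data-reduction use case mentioned above, the sketch below runs a simple genetic algorithm for feature selection on synthetic data with a toy nearest-neighbour fitness; the dataset, operators and parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, d_useful = 300, 20, 4
X = rng.standard_normal((n, d))
y = (X[:, :d_useful].sum(axis=1) > 0).astype(int)      # only the first 4 features matter

def fitness(mask):
    """Hold-out 1-NN accuracy minus a small penalty per selected feature."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    tr, te = slice(0, 200), slice(200, None)
    dists = ((Xs[te, None, :] - Xs[None, tr, :]) ** 2).sum(axis=2)
    acc = (y[te] == y[tr][dists.argmin(axis=1)]).mean()
    return acc - 0.01 * mask.sum()

pop = rng.integers(0, 2, size=(30, d))                  # bit mask = selected features
for gen in range(40):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-15:]]                # truncation selection
    children = parents[rng.integers(0, 15, 15)].copy()
    children[rng.random(children.shape) < 0.05] ^= 1    # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))       # should concentrate on 0..3
```

The same population-based loop extends naturally to instance selection or to multi-criteria fitness, which is where the scalability argument of the paper applies.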
Towards Water Sensitive City: Lesson Learned From Bogor Flood Hazard in 2017
NASA Astrophysics Data System (ADS)
Ramdhan, Muhammad; Arifin, Hadi Susilo; Suharnoto, Yuli; Tarigan, Suria Darma
2018-02-01
Bogor is known as a rain city and is located at an altitude of 190-330 meters above sea level. In February 2017 Bogor experienced a series of natural disasters related to the heavy rainfall that fell during that time. The hazard, in the form of flash floods that caused casualties, was shocking given that Bogor is located in the foothills on fairly steep slopes. There is a problem with the drainage system in the city of Bogor. The Australia Indonesia Center, in cooperation with the Bogor city government, held a focus group discussion to seek a permanent solution to these problems, so that similar incidents do not occur in the future.
Analytical study of the effects of clouds on the light produced by lightning
NASA Technical Reports Server (NTRS)
Phanord, Dieudonne D.
1990-01-01
Researchers consider the scattering of visible and infrared light due to lightning by cubic, cylindrical and spherical clouds. The researchers extend to cloud physics the work by Twersky for single and multiple scattering of electromagnetic waves. They solve the interior problem separately to obtain the bulk parameters for the scatterer equivalent to the ensemble of spherical droplets. With the interior solution or the equivalent medium approach, the multiple scattering problem is reduced to that of a single scatterer in isolation. Hence, the computing methods of Wiscombe or Bohren specialized to Mie scattering with the possibility for absorption were used to generate numerical results in short computer time.
Optimization of Airport Surface Traffic: A Case-Study of Incheon International Airport
NASA Technical Reports Server (NTRS)
Eun, Yeonju; Jeon, Daekeun; Lee, Hanbong; Jung, Yoon C.; Zhu, Zhifan; Jeong, Myeongsook; Kim, Hyounkong; Oh, Eunmi; Hong, Sungkwon
2017-01-01
This study aims to develop a controllers decision support tool for departure and surface management of ICN. Airport surface traffic optimization for Incheon International Airport (ICN) in South Korea was studied based on the operational characteristics of ICN and airspace of Korea. For surface traffic optimization, a multiple runway scheduling problem and a taxi scheduling problem were formulated into two Mixed Integer Linear Programming (MILP) optimization models. The Miles-In-Trail (MIT) separation constraint at the departure fix shared by the departure flights from multiple runways and the runway crossing constraints due to the taxi route configuration specific to ICN were incorporated into the runway scheduling and taxiway scheduling problems, respectively. Since the MILP-based optimization model for the multiple runway scheduling problem may be computationally intensive, computation times and delay costs of different solving methods were compared for a practical implementation. This research was a collaboration between Korea Aerospace Research Institute (KARI) and National Aeronautics and Space Administration (NASA).
Optimization of Airport Surface Traffic: A Case-Study of Incheon International Airport
NASA Technical Reports Server (NTRS)
Eun, Yeonju; Jeon, Daekeun; Lee, Hanbong; Jung, Yoon Chul; Zhu, Zhifan; Jeong, Myeong-Sook; Kim, Hyoun Kyoung; Oh, Eunmi; Hong, Sungkwon
2017-01-01
This study aims to develop a controllers' decision support tool for departure and surface management of ICN. Airport surface traffic optimization for Incheon International Airport (ICN) in South Korea was studied based on the operational characteristics of ICN and airspace of Korea. For surface traffic optimization, a multiple runway scheduling problem and a taxi scheduling problem were formulated into two Mixed Integer Linear Programming (MILP) optimization models. The Miles-In-Trail (MIT) separation constraint at the departure fix shared by the departure flights from multiple runways and the runway crossing constraints due to the taxi route configuration specific to ICN were incorporated into the runway scheduling and taxiway scheduling problems, respectively. Since the MILP-based optimization model for the multiple runway scheduling problem may be computationally intensive, computation times and delay costs of different solving methods were compared for a practical implementation. This research was a collaboration between Korea Aerospace Research Institute (KARI) and National Aeronautics and Space Administration (NASA).
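For readers unfamiliar with this class of models, the sketch below sets up a deliberately tiny single-runway departure-sequencing MILP in Python with PuLP, using a fixed inter-departure separation and big-M ordering variables; the flight data, the single separation value and the use of PuLP/CBC are assumptions for illustration, and the paper's models are considerably richer (multiple runways, MIT constraints at a shared departure fix, and runway-crossing constraints).

```python
import pulp

ready = {"F1": 0, "F2": 2, "F3": 3, "F4": 7}    # earliest take-off times (min), illustrative
SEP = 2                                         # required separation between departures (min)
M = 1000                                        # big-M constant for the ordering disjunction

prob = pulp.LpProblem("runway_sequencing", pulp.LpMinimize)
t = {f: pulp.LpVariable(f"t_{f}", lowBound=ready[f]) for f in ready}
prob += pulp.lpSum(t[f] - ready[f] for f in ready)          # minimise total delay

pairs = [(i, j) for i in ready for j in ready if i < j]
y = {(i, j): pulp.LpVariable(f"y_{i}_{j}", cat="Binary") for (i, j) in pairs}
for (i, j) in pairs:
    # either i departs at least SEP before j, or j departs at least SEP before i
    prob += t[j] >= t[i] + SEP - M * (1 - y[(i, j)])
    prob += t[i] >= t[j] + SEP - M * y[(i, j)]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for f in sorted(ready, key=lambda f: t[f].value()):
    print(f, t[f].value())
```

Comparing such an exact formulation against faster heuristics on running time and total delay is essentially the trade-off the study reports for ICN.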
Hybrid General Pattern Search and Simulated Annealing for Industrial Production Planning Problems
NASA Astrophysics Data System (ADS)
Vasant, P.; Barsoum, N.
2010-06-01
In this paper, a hybridization of the GPS (General Pattern Search) method and SA (Simulated Annealing) is incorporated into the optimization process in order to look for the globally optimal solution for the fitness function and decision variables, as well as minimum computational CPU time. The real strength of the SA approach is tested on this case-study problem of industrial production planning. This is due to the great advantage of SA in easily escaping from local minima by accepting uphill moves through a probabilistic procedure in the final stages of the optimization process. Vasant [1] in his Ph.D. thesis provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with non-linear cubic objective functions, eight decision variables and 29 constraints. In this paper, fuzzy technological problems have been solved using hybrid techniques of general pattern search and simulated annealing. The simulated and computational results are compared to various other evolutionary techniques.
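The acceptance rule that gives SA its ability to escape local minima can be shown in a few lines; the toy objective, neighbourhood move and cooling schedule below are illustrative assumptions and are not the paper's fuzzy production-planning model or its hybrid GPS stage.

```python
import math
import random

random.seed(0)
f = lambda x: 0.1 * x * x + 3.0 * math.sin(3.0 * x)   # multimodal toy objective

x, temp = 5.0, 5.0
best_x = x
for _ in range(5000):
    cand = x + random.uniform(-0.5, 0.5)               # neighbourhood move
    delta = f(cand) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand                                       # accept downhill, or uphill with prob. exp(-delta/T)
    if f(x) < f(best_x):
        best_x = x
    temp *= 0.999                                      # geometric cooling
print(best_x, f(best_x))
```

In the hybrid described above, a pattern-search stage would refine such SA iterates locally, while the probabilistic uphill acceptance keeps the combined search from stalling in a local minimum.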
Heart Surgery Waiting Time: Assessing the Effectiveness of an Action.
Badakhshan, Abbas; Arab, Mohammad; Gholipour, Mahin; Behnampour, Naser; Saleki, Saeid
2015-08-01
Waiting time is an index for assessing patient satisfaction, managerial effectiveness and horizontal equity in providing health care. Although the establishment of heart surgery centers is attractive for politicians, they are always faced with the question of to what extent such centers solve patients' problems. The objective of this study was to evaluate factors influencing waiting time in patients of heart surgery centers, and to make recommendations for health-care policy-makers for reducing waiting time and increasing the quality of services from this perspective. This cross-sectional study was performed in 2013. After searching articles on PubMed, Elsevier, Google Scholar, Ovid, Magiran, IranMedex, and SID, a list of several criteria related to waiting time was compiled. Afterwards, data on waiting time were collected with a researcher-structured checklist from 156 hospitalized patients. The data were analyzed with SPSS 16. The Kolmogorov-Smirnov and Shapiro tests were used to determine normality. Due to the non-normal distribution, non-parametric tests such as Kruskal-Wallis and Mann-Whitney were chosen for significance testing, and medians were also reported. Among the studied variables, only economic status had a significant relation with waiting time (P = 0.37). Fifty percent of participants had diabetes, whereas this estimate was 43.58% for high blood pressure. As the cause of delay, 28.2% of patients reported financial problems, 18.6% personal problems and 13.5% a delay in the provision of equipment by the hospital. It seems the studied hospital should review its waiting time arrangements and detach them, as far as possible, from subjective and personal (specialists') decisions. On the other hand, ministries of health and insurance companies should consider more financial support. It is also recommended that hospitals arrange preoperative psychiatric consultations to increase patients' emotional readiness.
Umari, Amjad M.J.; Gorelick, Steven M.
1986-01-01
In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
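The conversion from a complex to a real eigensystem can be demonstrated numerically; the sketch below uses a small random asymmetric matrix as a stand-in for an advection-dispersion coefficient matrix and checks that each conjugate eigenpair can be replaced by a 2x2 real rotation-scaling block, so that the semianalytical evolution exp(At) needs only real arithmetic.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
n = 6
A = -2.0 * np.eye(n) + 0.8 * rng.standard_normal((n, n))   # asymmetric toy coefficient matrix

vals, vecs = np.linalg.eig(A)      # complex eigensystem (conjugate pairs are adjacent)
V = np.zeros((n, n))
B = np.zeros((n, n))
j = 0
while j < n:
    lam, v = vals[j], vecs[:, j]
    if abs(lam.imag) < 1e-12:                     # real eigenvalue: keep as is
        V[:, j] = v.real
        B[j, j] = lam.real
        j += 1
    else:                                         # conjugate pair -> 2x2 real block
        a, b = lam.real, lam.imag                 # lambda = a + i b, eigenvector u + i w
        V[:, j], V[:, j + 1] = v.real, v.imag     # real basis (u, w)
        B[j:j + 2, j:j + 2] = [[a, b], [-b, a]]
        j += 2

print(np.allclose(A @ V, V @ B))                  # A V = V B, with everything real
tt = 0.7
print(np.allclose(expm(A * tt), V @ expm(B * tt) @ np.linalg.inv(V)))
```

In practice each 2x2 block is exponentiated analytically as e^{at} [[cos bt, sin bt], [-sin bt, cos bt]], so small-dispersivity problems can be handled without complex arithmetic and without the error incurred by discarding imaginary components.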
Opinions of women towards cesarean delivery and priority issues of care in the postpartum period.
Kisa, Sezer; Zeyneloğlu, Simge
2016-05-01
This study was conducted in order to determine the opinions of women who had a cesarean delivery and the problems that they faced in the postpartum period. This descriptive study was conducted with 337 women who delivered babies by cesarean section. The data were collected using a semi-structured questionnaire. The results of the study showed that 53.4% of women underwent cesarean delivery for the first time, and 83.1% said that it was the obstetrician's decision to have a cesarean delivery. More than half of the women (61.1%) had a negative experience with cesarean delivery due to postpartum pain (44.7%) and inability to care for their infant (35.9%). The most common problems associated with cesarean delivery were postpartum pain (96.1%), back pain (68.2%), problems passing gas (62.0%), bleeding (56.1%), breastfeeding problems (49.6%) and limitation of movement (43.6%), respectively. Understanding the opinions and problems of women regarding cesarean delivery assists healthcare professionals in identifying better ways to provide appropriate care and support. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chumakova, Lyubov; Rzeznik, Andrew; Rosales, Rodolfo R.
2017-11-01
In many dispersive/conservative wave problems, waves carry energy outside of the domain of interest and never return. Inside the domain of interest, this wave leakage acts as an effective dissipation mechanism, causing solutions to decay. In classical geophysical fluid dynamics problems this scenario occurs in the troposphere, if one assumes a homogeneous stratosphere. In this talk we present several classic GFD problems, where we seek the solution in the troposphere alone. Assuming that upward propagating waves that reach the stratosphere never return, we demonstrate how classic baroclinic modes become leaky, with characteristic decay time-scales that can be calculated. We also show how damping due to wave leakage changes the classic baroclinic instability problem in the presence of shear. This presentation is a part of a joint project. The mathematical approach used here relies on extending the classical concept of group velocity to leaky waves with complex wavenumber and frequency, which will be presented at this meeting by A. Rzeznik in the talk ``Group Velocity for Leaky Waves''. This research is funded by the Royal Soc. of Edinburgh, Scottish Government, and NSF.
Changing perceptions of protected area benefits and problems around Kibale National Park, Uganda.
MacKenzie, Catrina A; Salerno, Jonathan; Hartter, Joel; Chapman, Colin A; Reyna, Rafael; Tumusiime, David Mwesigye; Drake, Michael
2017-09-15
Local residents' changing perceptions of benefits and problems from living next to a protected area in western Uganda are assessed by comparing household survey data from 2006, 2009, and 2012. Findings are contextualized and supported by long-term data sources for tourism, protected area-based employment, tourism revenue sharing, resource access agreements, and problem animal abundance. We found decreasing perceived benefit and increasing perceived problems associated with the protected area over time, with both trends dominated by increased human-wildlife conflict due to recovering elephant numbers. Proportions of households claiming benefit from specific conservation strategies were increasing, but not enough to offset crop raiding. Ecosystem services mitigated perceptions of problems. As human and animal populations rise, wildlife authorities in Sub-Saharan Africa will be challenged to balance perceptions and adapt policies to ensure the continued existence of protected areas. Understanding the dynamic nature of local people's perceptions provides a tool to adapt protected area management plans, prioritize conservation resources, and engage local communities to support protected areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
NASA Astrophysics Data System (ADS)
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
According to the direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capability. (2) The forward projection matrix rather than a Gauss matrix is constructed by the visualization tool generator. (3) The Fourier transform and Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed and the numerical results for the pseudo-sine absorption, two-cube and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
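Orthogonal matching pursuit itself, the sparse-recovery routine named above, can be sketched in a few lines of Python: greedily select the column most correlated with the residual, then re-fit the selected columns by least squares. The random measurement matrix below is only a stand-in for the forward projection matrix built from the radiographic geometry:

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y ≈ A @ x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares re-fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 200))              # under-determined measurement matrix
x_true = np.zeros(200)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.nonzero(omp(A, y, 3))[0])          # recovers the support {5, 17, 42}
```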
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes the labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is initially validated with the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-sized problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
Self-similar solutions to isothermal shock problems
NASA Astrophysics Data System (ADS)
Deschner, Stephan C.; Illenseer, Tobias F.; Duschl, Wolfgang J.
We investigate exact solutions for isothermal shock problems in different one-dimensional geometries. These solutions are given as analytical expressions where possible, or are computed using standard numerical methods for solving ordinary differential equations. We test the numerical solutions against the analytical expressions to verify the correctness of all numerical algorithms. We use similarity methods to derive a system of ordinary differential equations (ODE) yielding exact solutions for power-law density distributions as initial conditions. Further, the system of ODEs accounts for implosion problems (IP) as well as explosion problems (EP) by changing the initial or boundary conditions, respectively. Taking genuinely isothermal approximations into account leads to additional insights into EPs in contrast to earlier models. We neglect a constant initial energy contribution but introduce a parameter to adjust the initial mass distribution of the system. Moreover, we show that due to this parameter a constant initial density is not allowed for isothermal EPs. Reasonable restrictions for this parameter are given. Both the (genuinely) isothermal implosion and the explosion problem are solved for the first time.
Gearing, Robin E; MacKenzie, Michael J; Schwalbe, Craig S; Brewer, Kathryne B; Ibrahim, Rawan W
2013-02-01
This study aimed to establish the prevalence rates of mental health and behavioral problems of Arab youths residing in Jordanian care centers due to family disintegration, maltreatment, or abandonment and to examine how functioning varies by child characteristics and placement history. Child Behavior Checklist and case history data were collected for 70 youths across four Jordanian care centers. Approximately 53% of the adolescents were identified as experiencing mental health problems, and 43% and 46% had high internalizing and externalizing scores, respectively. Ordinary least-squares regression models examining mental health functioning showed that male gender, care entry because of maltreatment, time in care, and transfers were the most significant predictors of problems. Paralleling international research, this study found high levels of mental health needs among institutionalized youths. The impact of transfers on functioning is particularly worrisome, given the standard practice of transferring youths to another facility when they reach age 12. Improving the institutional care model by requiring fewer transfers and offering family-based community alternatives may ameliorate risks of developing mental and behavioral problems.
Part-time sick leave as a treatment method for individuals with musculoskeletal disorders.
Andrén, Daniela; Svensson, Mikael
2012-09-01
There is increasing evidence that staying active is an important part of a recovery process for individuals on sick leave due to musculoskeletal disorders (MSDs). It has been suggested that using part-time sick-leave rather than full-time sick leave will enhance the possibility of full recovery to the workforce, and several countries actively favor this policy. The aim of this paper is to examine if it is beneficial for individuals on sick leave due to MSDs to be on part-time sick leave compared to full-time sick leave. A sample of 1,170 employees from the RFV-LS (register) database of the Social Insurance Agency of Sweden is used. The effect of being on part-time sick leave compared to full-time sick leave is estimated for the probability of returning to work with full recovery of lost work capacity. A two-stage recursive bivariate probit model is used to deal with the endogeneity problem. The results indicate that employees assigned to part-time sick leave do recover to full work capacity with a higher probability than those assigned to full-time sick leave. The average treatment effect of part-time sick leave is 25 percentage points. Considering that part-time sick leave may also be less expensive than assigning individuals to full-time sick leave, this would imply efficiency improvements from assigning individuals, when possible, to part-time sick leave.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandra, Mani; Gammie, Charles F.; Foucart, Francois, E-mail: manic@illinois.edu, E-mail: gammie@illinois.edu, E-mail: fvfoucart@lbl.gov
Hot, diffuse, relativistic plasmas such as sub-Eddington black-hole accretion flows are expected to be collisionless, yet are commonly modeled as a fluid using ideal general relativistic magnetohydrodynamics (GRMHD). Dissipative effects such as heat conduction and viscosity can be important in a collisionless plasma and will potentially alter the dynamics and radiative properties of the flow from that in ideal fluid models; we refer to models that include these processes as Extended GRMHD. Here we describe a new conservative code, grim, that enables all of the above and additional physics to be efficiently incorporated. grim combines time evolution and primitive variable inversion needed for conservative schemes into a single step using an algorithm that only requires the residuals of the governing equations as inputs. This algorithm enables the code to be physics agnostic as well as flexible regarding time-stepping schemes. grim runs on CPUs, as well as on GPUs, using the same code. We formulate a performance model and use it to show that our implementation runs optimally on both architectures. grim correctly captures classical GRMHD test problems as well as a new suite of linear and nonlinear test problems with anisotropic conduction and viscosity in special and general relativity. As tests and example applications, we resolve the shock substructure due to the presence of dissipation, and report on relativistic versions of the magneto-thermal instability and heat flux driven buoyancy instability, which arise due to anisotropic heat conduction, and of the firehose instability, which occurs due to anisotropic pressure (i.e., viscosity). Finally, we show an example integration of an accretion flow around a Kerr black hole, using Extended GRMHD.
Analysis of the lifetime and culling reasons for AI boars.
Knecht, Damian; Jankowska-Mąkosa, Anna; Duziński, Kamil
2017-01-01
The aim of the study was to analyze the lifetime and culling reasons for boars used in insemination centers (AI centers). The data collected from 355 culled boars from 1998 to 2013 included: age at start of semen collection, boar herd life, culling reason, daily gain and lean meat content, and number of ejaculates not meeting sales requirements after dilution. Culling reasons were divided into 7 groups: low semen value (LSV), low or lack of libido (LL), leg problems (LP), infectious diseases (ID), old age (OA), reduced demand for semen from the given boar (RD), and others (OT). The most common culling reasons for boars were LSV (23.7%) and RD (22.5%). It was observed that the lowest daily gains were noted in boars culled due to OA. Boars culled due to OA and RD were maintained in production for the longest time (over 1000 d), for LSV and ID retention was about 700 d, and due to LL below 400 d. The survival probability was over 0.9 until 1.5 yr, and just over 0.2 until 4 yr. The highest relative frequency was observed in the 36th and 42nd mo of life (over 16%). Hazard risk analysis revealed a more than 10 times higher risk of culling in the case of LL, ID or OT, in comparison to OA. The results can be used as a direct point of reference for the identification of emerging problems in AI boar exploitation and the development of an appropriate culling policy in AI centers.
Role of multiple representations in physics problem solving
NASA Astrophysics Data System (ADS)
Maries, Alexandru
This thesis explores the role of multiple representations in introductory physics students' problem solving performance through several investigations. Representations can help students focus on the conceptual aspects of physics and play a major role in effective problem solving. Diagrammatic representations can play a particularly important role in the initial stages of conceptual analysis and planning of the problem solution. Findings suggest that students who draw productive diagrams are more successful problem solvers even if their approach is primarily mathematical. Furthermore, students provided with a diagram of the physical situation presented in a problem sometimes exhibited deteriorated performance. Think-aloud interviews suggest that this deteriorated performance is in part due to reduced conceptual planning time which caused students to jump to the implementation stage without fully understanding the problem and planning problem solution. Another study investigated two interventions aimed at improving introductory students' representational consistency between mathematical and graphical representations and revealed that excessive scaffolding can have a detrimental effect. The detrimental effect was partly due to increased cognitive load brought on by the additional steps and instructions. Moreover, students who exhibited representational consistency also showed improved problem solving performance. The final investigation is centered on a problem solving task designed to provide information about the pedagogical content knowledge (PCK) of graduate student teaching assistants (TAs). In particular, the TAs identified what they considered to be the most common difficulties of introductory physics students related to graphical representations of kinematics concepts as they occur in the Test of Understanding Graphs in Kinematics (TUG-K). As an extension, the Force Concept Inventory (FCI) was also used to assess this aspect of PCK related to knowledge of student difficulties of both physics instructors and TAs. We find that teaching an independent course and recent teaching experience do not correlate with improved PCK. In addition, the performance of American TAs, Chinese TAs and other foreign TAs in identifying common student difficulties both in the context of the TUG-K and in the context of the FCI is similar. Moreover, there were many common difficulties of introductory physics students that were not identified by many instructors and TAs.
Discrete charge diagnostics on Pre-DIRECT COURSE
NASA Astrophysics Data System (ADS)
Guice, R. L.; Bryant, C.
1984-02-01
The Air Force Weapons Laboratory attempted to make 100 time-of-arrival measurements on Pre-DIRECT COURSE. With an 88 percent success rate, the detonation wave propagation within the charge was measured. The top and bottom hemispheres detonated at two different rates. However, the detonation velocities were well within the existing data base for Ammonium-Nitrate Fuel Oil charges. One large jet was observed on the charge but its location should not have caused any problems for ground level measurements. Twenty experimental time-of-arrival crystals were also fielded; however, the results are questionable due to the grounding system of the support structure.
STS-114 Engine Cut-off Sensor Anomaly Technical Consultation Report
NASA Technical Reports Server (NTRS)
Wilson, Timmy R.; Kichak, Robert A.; Ungar, Eugene K.; Cherney, Robert; Rickman, Steve L.
2009-01-01
The NESC consultation team participated in real-time troubleshooting of the Main Propulsion System (MPS) Engine Cutoff (ECO) sensor system failures during STS-114 launch countdown. The team assisted with External Tank (ET) thermal and ECO Point Sensor Box (PSB) circuit analyses, and made real-time inputs to the Space Shuttle Program (SSP) problem resolution teams. Several long-term recommendations resulted. One recommendation was to conduct cryogenic tests of the ECO sensors to validate, or disprove, the theory that variations in circuit impedance due to cryogenic effects on swaged connections within the sensor were the root cause of STS-114 failures.
NASA Technical Reports Server (NTRS)
Chamitoff, Gregory Errol
1992-01-01
Intelligent optimization methods are applied to the problem of real-time flight control for a class of airbreathing hypersonic vehicles (AHSV). The extreme flight conditions that will be encountered by single-stage-to-orbit vehicles, such as the National Aerospace Plane, present a tremendous challenge to the entire spectrum of aerospace technologies. Flight control for these vehicles is particularly difficult due to the combination of nonlinear dynamics, complex constraints, and parametric uncertainty. An approach that utilizes all available a priori and in-flight information to perform robust, real time, short-term trajectory planning is presented.
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
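For comparison, the traditional vector autoregressive Granger test that the canonical-correlation framework improves upon can be run with statsmodels; the synthetic two-channel data below illustrate only this linear baseline, not the kernel method proposed in the paper:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t-1] + rng.normal(scale=0.5)
    y[t] = 0.5 * y[t-1] + 0.4 * x[t-1] + rng.normal(scale=0.5)   # x drives y

data = pd.DataFrame({"x": x, "y": y})
model = VAR(data).fit(maxlags=4, ic="aic")           # fit a linear VAR
print(model.test_causality("y", ["x"], kind="f"))    # x Granger-causes y
print(model.test_causality("x", ["y"], kind="f"))    # no causality expected
```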
Caviola, Sara; Carey, Emma; Mammarella, Irene C.; Szucs, Denes
2017-01-01
We review how stress induction, time pressure manipulations and math anxiety can interfere with or modulate selection of problem-solving strategies (henceforth “strategy selection”) in arithmetical tasks. Nineteen relevant articles were identified, which contain references to strategy selection and time limit (or time manipulations), with some also discussing emotional aspects in mathematical outcomes. Few of these take cognitive processes such as working memory or executive functions into consideration. We conclude that due to the sparsity of available literature our questions can only be partially answered and currently there is not much evidence of clear associations. We identify major gaps in knowledge and raise a series of open questions to guide further research. PMID:28919870
Aerial Refueling Process Rescheduling Under Job Related Disruptions
NASA Technical Reports Server (NTRS)
Kaplan, Sezgin; Rabadi, Ghaith
2011-01-01
The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on the multiple tankers (machines) to minimize the total weighted tardiness. ARSP assumes that the jobs have different release times and due dates. The ARSP is a dynamic environment in which unexpected events may occur. In this paper, rescheduling of the aerial refueling process for a set of jobs is studied to deal with job-related disruptions such as the arrival of new jobs, the departure of an existing job, high deviations in the release times and changes in job priorities. In order to maintain stability and to avoid excessive computation, a partial schedule repair algorithm is developed and its preliminary results are presented.
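A minimal sketch of the objective being rescheduled — total weighted tardiness over tanker-specific job sequences — assuming illustrative release times, processing times, due dates and weights:

```python
def total_weighted_tardiness(schedule, release, proc, due, weight, n_tankers):
    """Score a schedule: 'schedule' maps tanker id -> ordered list of job ids."""
    total = 0.0
    for tanker in range(n_tankers):
        t = 0.0
        for job in schedule.get(tanker, []):
            t = max(t, release[job]) + proc[job]       # cannot start before release
            total += weight[job] * max(0.0, t - due[job])
    return total

# three fighters on two tankers (illustrative data only)
release = {0: 0.0, 1: 2.0, 2: 4.0}
proc    = {0: 5.0, 1: 3.0, 2: 4.0}
due     = {0: 6.0, 1: 4.0, 2: 8.0}
weight  = {0: 1.0, 1: 2.0, 2: 1.0}
print(total_weighted_tardiness({0: [0, 2], 1: [1]}, release, proc, due, weight, 2))  # 3.0
```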
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
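The distinction between projecting at the time-continuous level (Galerkin) and minimizing the time-discrete residual (discrete-optimal, LSPG-like) can be made concrete for a small linear system advanced with backward Euler; this is an illustrative sketch, not the GNAT implementation, and the full-order model and basis size are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, dt, steps = 200, 10, 0.05, 40
A = -np.diag(np.linspace(1.0, 5.0, n)) + 0.01 * rng.normal(size=(n, n))
u0 = rng.normal(size=n)

# reduced basis from snapshots of the full model (POD via thin SVD)
snapshots = [u0]
u = u0.copy()
for _ in range(steps):
    u = np.linalg.solve(np.eye(n) - dt * A, u)          # full backward Euler step
    snapshots.append(u.copy())
V = np.linalg.svd(np.array(snapshots).T, full_matrices=False)[0][:, :r]

M = np.eye(n) - dt * A                                   # time-discrete operator
a_gal = V.T @ u0
a_lspg = V.T @ u0
for _ in range(steps):
    # Galerkin: project the time-continuous equations, then discretize
    a_gal = np.linalg.solve(V.T @ M @ V, a_gal)
    # discrete-optimal: minimize the time-discrete residual ||M V a - V a_old||
    a_lspg = np.linalg.lstsq(M @ V, V @ a_lspg, rcond=None)[0]

print(np.linalg.norm(V @ a_gal - u), np.linalg.norm(V @ a_lspg - u))
```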
Problem behaviour and traumatic dental injuries in adolescents.
Ramchandani, Damini; Marcenes, Wagner; Stansfeld, Stephen A; Bernabé, Eduardo
2016-02-01
To explore the relationship between problem behaviour and traumatic dental injuries (TDI) among 15- to 16-year-old schoolchildren from East London. This cross-sectional study used data from 794 adolescents who participated in phase III of the Research with East London Adolescents Community Health Survey (RELACHS), a school-based prospective study of a representative sample of adolescents. Participants completed a questionnaire and were clinically examined for TDI, overjet and lip coverage. The Strength and Difficulties Questionnaire (SDQ) was used to assess problem behaviour, which provided a total score and five domain scores (emotional symptoms, conduct problems, hyperactivity, peer problems and pro-social behaviour). The association between problem behaviour and TDI was assessed in unadjusted and adjusted logistic regression models. Adjusted models controlled for demographic (sex, age and ethnicity), socio-economic (parental employment) and clinical factors (overjet and lip coverage). The prevalence of TDI was 17% and the prevalence of problem behaviour, according to the SDQ, was 10%. In the adjusted model, adolescents with problem behaviour were 1.87 (95% confidence interval: 1.03-3.37) times more likely to have TDI than those without problem behaviour. In subsequent analysis by SDQ domains, it was found that only peer problems were associated with TDI (OR = 1.78, 95% CI: 1.01-3.14), even after adjustment for confounders. This study found evidence for a relationship between problem behaviour and TDI among adolescents, which was mainly due to peer relationship problems. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Adham, Manal T; Bentley, Peter J
2016-08-01
This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve the problem exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and adapt to changing problems. In the AEA a problem is first decomposed into its relative sub-components; they then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA which groups stations according to journey flows, and the Adaptive AEA which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and prove its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Badeau, Ryan; White, Daniel R.; Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.
2017-12-01
The ability to solve physics problems that require multiple concepts from across the physics curriculum—"synthesis" problems—is often a goal of physics instruction. Three experiments were designed to evaluate the effectiveness of two instructional methods employing worked examples on student performance with synthesis problems; these instructional techniques, analogical comparison and self-explanation, have previously been studied primarily in the context of single-concept problems. Across three experiments with students from introductory calculus-based physics courses, both self-explanation and certain kinds of analogical comparison of worked examples significantly improved student performance on a target synthesis problem, with distinct improvements in recognition of the relevant concepts. More specifically, analogical comparison significantly improved student performance when the comparisons were invoked between worked synthesis examples. In contrast, similar comparisons between corresponding pairs of worked single-concept examples did not significantly improve performance. On a more complicated synthesis problem, self-explanation was significantly more effective than analogical comparison, potentially due to differences in how successfully students encoded the full structure of the worked examples. Finally, we find that the two techniques can be combined for additional benefit, with the trade-off of slightly more time on task.
Civil helicopter propulsion system reliability and engine monitoring technology assessments
NASA Technical Reports Server (NTRS)
Murphy, J. A.; Zuk, J.
1982-01-01
A study to reduce operating costs of helicopters, particularly directed at the maintenance of the propulsion subsystem, is presented. The tasks of the study consisted of problem definition refinement, technology solutions, diagnostic system concepts, and emergency power augmentation. Quantifiable benefits (reduced fuel consumption, on-condition engine maintenance, extended drive system overhaul periods, and longer oil change intervals) would increase the initial cost by $43,000, but the benefit of $24.46 per hour would result in breakeven at 1758 hours. Other benefits not capable of being quantified but perhaps more important include improved aircraft availability due to reduced maintenance time, potential for increased operating limits due to continuous automatic monitoring of gages, and less time and fuel required to make engine power checks. The most important improvement is the on-condition maintenance program, which will require the development of algorithms, equipment, and procedures compatible with all operating environments.
Supersonic projectile models for asynchronous shooter localization
NASA Astrophysics Data System (ADS)
Kozick, Richard J.; Whipps, Gene T.; Ash, Joshua N.
2011-06-01
In this work we consider the localization of a gunshot using a distributed sensor network measuring time differences of arrival between a firearm's muzzle blast and the shockwave induced by a supersonic bullet. This so-called MB-SW approach is desirable because time synchronization is not required between the sensors, however it suffers from increased computational complexity and requires knowledge of the bullet's velocity at all points along its trajectory. While the actual velocity profile of a particular gunshot is unknown, one may use a parameterized model for the velocity profile and simultaneously fit the model and localize the shooter. In this paper we study efficient solutions for the localization problem and identify deceleration models that trade off localization accuracy and computational complexity. We also develop a statistical analysis that includes bias due to mismatch between the true and actual deceleration models and covariance due to additive noise.
A space-efficient quantum computer simulator suitable for high-speed FPGA implementation
NASA Astrophysics Data System (ADS)
Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel
2009-05-01
Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
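One classical space-time tradeoff of this kind is the Feynman path-sum recursion, sketched below purely as an illustration (not the FPGA design described above): the amplitude of a single output bit string is computed without storing the 2^n state vector, at the cost of time exponential in the number of gates:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]

def amplitude(gates, in_bits, out_bits):
    """<out|U_L ... U_1|in>, summing over paths instead of storing the state."""
    if not gates:
        return 1.0 if in_bits == out_bits else 0.0
    U, qubits = gates[-1]
    k = len(qubits)
    row = sum(out_bits[q] << (k - 1 - i) for i, q in enumerate(qubits))
    total = 0.0
    for col in range(2 ** k):                 # intermediate configuration on 'qubits'
        if U[row, col] == 0.0:
            continue
        mid = list(out_bits)
        for i, q in enumerate(qubits):
            mid[q] = (col >> (k - 1 - i)) & 1
        total += U[row, col] * amplitude(gates[:-1], in_bits, tuple(mid))
    return total

# Bell-state circuit on 2 qubits: H on qubit 0, then CNOT(0 -> 1)
circuit = [(H, (0,)), (CNOT, (0, 1))]
for out in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(out, amplitude(circuit, (0, 0), out))   # 1/sqrt(2) on |00> and |11>
```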
NASA Astrophysics Data System (ADS)
Cox, Christopher
Low-order numerical methods are widespread in academic solvers and ubiquitous in industrial solvers due to their robustness and usability. High-order methods are less robust and more complicated to implement; however, they exhibit low numerical dissipation and have the potential to improve the accuracy of flow simulations at a lower computational cost when compared to low-order methods. This motivates our development of a high-order compact method using Huynh's flux reconstruction scheme for solving unsteady incompressible flow on unstructured grids. We use Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. In 2D, an implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation. The high-order solver is extended to 3D and parallelized using MPI. Due to its simplicity, time marching for 3D problems is done explicitly. The feasibility of using the current implicit time stepping scheme for large scale three-dimensional problems with high-order polynomial basis still remains to be seen. We directly use the aforementioned numerical solver to simulate pulsatile flow of a Newtonian blood-analog fluid through a rigid 180-degree curved artery model. One of the most physiologically relevant forces within the cardiovascular system is the wall shear stress. This force is important because atherosclerotic regions are strongly correlated with curvature and branching in the human vasculature, where the shear stress is both oscillatory and multidirectional. Also, the combined effect of curvature and pulsatility in cardiovascular flows produces unsteady vortices. The aim of this research as it relates to cardiovascular fluid dynamics is to predict the spatial and temporal evolution of vortical structures generated by secondary flows, as well as to assess the correlation between multiple vortex pairs and wall shear stress. We use a physiologically (pulsatile) relevant flow rate and generate results using both fully developed and uniform entrance conditions, the latter being motivated by the fact that flow upstream of a curved artery may not have sufficient straight entrance length to become fully developed. Under the two pulsatile inflow conditions, we characterize the morphology and evolution of various vortex pairs and their subsequent effect on relevant haemodynamic wall shear stress metrics.
Quadrat Data for Fermilab Prairie Plant Survey
Quadrat data are available for 2012, 2013, 2015, and 2016. No quadrat data were taken by volunteers in 2014 due to weather problems, or in 2017 due to weather and other problems.
Bergerot, Cristiane Decat; Mitchell, Hannah-Rose; Ashing, Kimlin Tam; Kim, Youngmee
2017-06-01
Monitoring distress assessment in cancer patients during the treatment phase is a component of good quality care practice. Yet, there is a dearth of prospective studies examining distress. In an attempt to begin filling this gap and inform clinical practice, we conducted a prospective, longitudinal study examining changes in distress (anxiety, depression, and problems in living) by age and gender and the roles of age and gender in predicting distress. Newly diagnosed Brazilian cancer patients (N = 548) were assessed at three time points during chemotherapy. Age and gender were identified on the first day of chemotherapy (T1); anxiety, depression, and problems in living were self-reported at T1, the planned midway point (T2), and the last day of chemotherapy (T3). At T1, 37 and 17% of patients reported clinically significant levels of anxiety and depression, respectively. At T3, the prevalence was reduced to 4.6% for anxiety and 5.1% for depression (p < .001). Patients 40-55 years, across all time points, reported greater anxiety and practical problems than patients >70 years (p < .03). Female patients reported greater emotional, physical, and family problems than their male counterparts (p < .04). For most patients, elevated levels of distress noted in the beginning of treatment subsided by the time of treatment completion. However, middle-aged and female patients continued to report heightened distress. Evidence-based psychosocial intervention offered to at risk patients during early phases of the treatment may provide distress relief and improve outcomes over the illness trajectory while preventing psychosocial and physical morbidity due to untreated chronic distress.
Problems on the Theory of Heat Resistance of Alloys
1960-07-26
same material under real service conditions is conceded; in this case, the shapes and dimensions of the product, the vibrations, the sharp and ... as the load duration factor). The introduction of a newly created material sometimes proves unsuccessful due to the fact that, though meeting all ... structural nonuniformity of real solids on the mechanism of the development of deformation in them and on their mechanical properties; (d) the influence of
A Novel Numerical Method for Fuzzy Boundary Value Problems
NASA Astrophysics Data System (ADS)
Can, E.; Bayrak, M. A.; Hicdurmaz
2016-05-01
In the present paper, a new numerical method is proposed for solving fuzzy differential equations, which are utilized for modeling problems in science and engineering. The fuzzy approach is selected due to its important applications in processing uncertain or subjective information in mathematical models of physical problems. A second-order fuzzy linear boundary value problem is considered in particular due to its important applications in physics. Moreover, numerical experiments are presented to show the effectiveness of the proposed numerical method on specific physical problems such as heat conduction in an infinite plate and a fin.
Practical solutions for reducing container ships' waiting times at ports using simulation model
NASA Astrophysics Data System (ADS)
Sheikholeslami, Abdorreza; Ilati, Gholamreza; Yeganeh, Yones Eftekhari
2013-12-01
The main challenge for container ports is the planning required for berthing container ships while docked in port. The growth of containerization is creating problems for ports and container terminals as they reach the capacity limits of various resources, which increasingly leads to traffic and port congestion. Good planning and management of container terminal operations reduces waiting time for liner ships. Reducing the waiting time improves the terminal's productivity and decreases the port's difficulties. Two important keys to reducing waiting time with berth allocation are determining suitable access channel depths and increasing the number of berths, which in this paper are studied and analyzed as practical solutions. Simulation-based analysis is the only way to understand how various resources interact with each other and how they affect the berthing time of ships. We used the Enterprise Dynamics software to produce simulation models due to the complexity and nature of the problems. We further present a case study of berth allocation simulation for the biggest container terminal in Iran, and the optimum access channel depth and number of berths are obtained from the simulation results. The results show a significant reduction in the waiting time for container ships and can be useful for major functions in the operations and development of container ship terminals.
OMA analysis of a launcher under operational conditions with time-varying properties
NASA Astrophysics Data System (ADS)
Eugeni, M.; Coppotelli, G.; Mastroddi, F.; Gaudenzi, P.; Muller, S.; Troclet, B.
2018-05-01
The objective of this paper is the investigation of the capability of operational modal analysis approaches to deal with time-varying system in the low-frequency domain. Specifically, the problem of the identification of the dynamic properties of a launch vehicle, working under actual operative conditions, is studied. Two OMA methods are considered: the frequency-domain decomposition and the Hilbert transform method. It is demonstrated that both OMA approaches allow the time-tracking of modal parameters, namely, natural frequencies, damping ratios, and mode shapes, from the response accelerations only recorded during actual flight tests of a launcher characterized by a large mass variation due to fuel burning typical of the first phase of the flight.
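The frequency-domain decomposition step can be sketched with scipy, assuming the response accelerations are available as channel time series: build the cross power spectral density matrix and track its first singular value over frequency, whose peaks indicate natural frequencies. Applying this to short sliding windows would give the time-tracking discussed above; the data here are synthetic:

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_value(acc, fs, nperseg=1024):
    """acc: (n_channels, n_samples) response accelerations."""
    n_ch = acc.shape[0]
    freqs, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(freqs))])
    return freqs, s1          # peaks of s1 locate the natural frequencies

# two synthetic channels dominated by a 12 Hz mode plus noise
fs = 256.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
acc = np.vstack([np.sin(2 * np.pi * 12 * t) + 0.3 * rng.normal(size=t.size),
                 0.7 * np.sin(2 * np.pi * 12 * t + 0.4) + 0.3 * rng.normal(size=t.size)])
freqs, s1 = fdd_first_singular_value(acc, fs)
print(freqs[np.argmax(s1)])    # ~12 Hz
```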
NASA Astrophysics Data System (ADS)
Gonçalves, L. D.; Rocco, E. M.; de Moraes, R. V.
2013-10-01
A study evaluating the influence of the lunar gravitational potential, modeled by spherical harmonics, on the gravity acceleration is accomplished according to the model presented in Konopliv (2001). This model provides the components x, y and z of the gravity acceleration at each moment of time along the artificial satellite orbit and enables the spherical harmonic degree and order to be considered up to 100. Through a comparison between the gravity acceleration from a central field and the gravity acceleration provided by Konopliv's model, the disturbing velocity increment applied to the vehicle is obtained. Then, through the inverse problem, the Keplerian elements of the perturbed orbit of the satellite are calculated, allowing analysis of the orbital motion. Transfer maneuvers and orbital correction of lunar satellites are simulated considering the disturbance due to the non-uniform gravitational potential of the Moon, utilizing continuous thrust and trajectory control in closed loop. The simulations are performed using the Spacecraft Trajectory Simulator (STRS), Rocco (2008), which evaluates the behavior of the orbital elements, fuel consumption and thrust applied to the satellite over time.
Collective effects in force generation by multiple cytoskeletal filaments pushing an obstacle
NASA Astrophysics Data System (ADS)
Aparna, J. S.; Das, Dipjyoti; Padinhateeri, Ranjith; Das, Dibyendu
2015-09-01
We report here recent findings that multiple cytoskeletal filaments (assumed rigid) pushing an obstacle typically generate more force than just the sum of the forces due to individual ones. This interesting phenomenon, due to the hydrolysis process being out of equilibrium, escaped attention in previous experimental and theoretical literature. We first demonstrate this numerically within a constant force ensemble, for a well known model of cytoskeletal filament dynamics with random mechanism of hydrolysis. Two methods of detecting the departure from additivity of the collective stall force, namely from the force-velocity curve in the growing phase, and from the average collapse time versus force curve in the bounded phase, is discussed. Since experiments have already been done for a similar system of multiple microtubules in a harmonic optical trap, we study the problem theoretically under harmonic force. We show that within the varying harmonic force ensemble too, the mean collective stall force of N filaments is greater than N times the mean stall force due to a single filament; the actual extent of departure is a function of the monomer concentration.
NASA Technical Reports Server (NTRS)
Probst, D.; Jensen, L.
1991-01-01
Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We answer the question, is it possible to control state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems, and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully-implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.
A minimum propellant solution to an orbit-to-orbit transfer using a low thrust propulsion system
NASA Technical Reports Server (NTRS)
Cobb, Shannon S.
1991-01-01
The Space Exploration Initiative is considering the use of low thrust (nuclear electric, solar electric) and intermediate thrust (nuclear thermal) propulsion systems for transfer to Mars and back. Due to the duration of such a mission, a low thrust minimum-fuel solution is of interest; a savings of fuel can be substantial if the propulsion system is allowed to be turned off and back on. This switching of the propulsion system helps distinguish the minimal-fuel problem from the well-known minimum-time problem. Optimal orbit transfers are also of interest to the development of a guidance system for orbital maneuvering vehicles which will be needed, for example, to deliver cargoes to the Space Station Freedom. The problem of optimizing trajectories for an orbit-to-orbit transfer with minimum-fuel expenditure using a low thrust propulsion system is addressed.
Ludwig, T; Kern, P; Bongards, M; Wolf, C
2011-01-01
The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
Pricing Resources in LTE Networks through Multiobjective Optimization
Lai, Yung-Liang; Jiang, Jehn-Ruey
2014-01-01
The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid “user churn,” which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889
Pricing resources in LTE networks through multiobjective optimization.
Lai, Yung-Liang; Jiang, Jehn-Ruey
2014-01-01
The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid "user churn," which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution.
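A minimal sketch of an NSGA-II formulation with two conflicting objectives, written with the pymoo library; the single price variable, the demand curve, and the profit and satisfaction functions below are hypothetical stand-ins rather than the paper's pricing model:

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class PricingProblem(ElementwiseProblem):
    """Decision variable: price per resource block (illustrative bounds)."""
    def __init__(self):
        super().__init__(n_var=1, n_obj=2, xl=np.array([0.1]), xu=np.array([10.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        price = x[0]
        demand = 100.0 / (1.0 + price)           # hypothetical demand curve
        profit = price * demand                   # operator profit (to maximize)
        satisfaction = demand / 100.0             # user satisfaction (to maximize)
        out["F"] = [-profit, -satisfaction]       # pymoo minimizes, so negate

res = minimize(PricingProblem(), NSGA2(pop_size=40), ("n_gen", 50), verbose=False)
print(res.F[:5])                                  # a slice of the Pareto front
```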
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Thomas W.; Quach, Tu-Thach; Detry, Richard Joseph
Complex Adaptive Systems of Systems, or CASoS, are vastly complex ecological, sociological, economic and/or technical systems which we must understand to design a secure future for the nation and the world. Perturbations/disruptions in CASoS have the potential for far-reaching effects due to pervasive interdependencies and attendant vulnerabilities to cascades in associated systems. Phoenix was initiated to address this high-impact problem space as engineers. Our overarching goals are maximizing security, maximizing health, and minimizing risk. We design interventions, or problem solutions, that influence CASoS to achieve specific aspirations. Through application to real-world problems, Phoenix is evolving the principles and discipline of CASoS Engineering while growing a community of practice and the CASoS engineers to populate it. Both grounded in reality and working to extend our understanding and control of that reality, Phoenix is at the same time a solution within a CASoS and a CASoS itself.
Genomic big data hitting the storage bottleneck.
Papageorgiou, Louis; Eleni, Picasi; Raftopoulou, Sofia; Mantaiou, Meropi; Megalooikonomou, Vasileios; Vlachakis, Dimitrios
2018-01-01
During the last decades, there has been a vast data explosion in bioinformatics. Big data centres are trying to face this data crisis, reaching high storage capacity levels. Although several scientific giants are examining how to handle the enormous pile of information in their cupboards, the problem remains unsolved. On a daily basis, extensive information is permanently lost due to infrastructure and storage space problems. The motivation for sequencing has fallen behind. Sometimes, the time that is spent solving storage space problems is longer than the time dedicated to collecting and analysing data. To bring sequencing back to the foreground, scientists have to get past such obstacles and find alternative ways to approach the issue of data volume. The scientific community is experiencing a data crisis era in which out-of-the-box solutions may ease the typical research workflow, until technological development meets the needs of bioinformatics.
A survey of outpatient visits in a United States Army forward unit during Operation Desert Shield.
Wasserman, G M; Martin, B L; Hyams, K C; Merrill, B R; Oaks, H G; McAdoo, H A
1997-06-01
Reports suggest that deployed soldiers during Operations Desert Shield and Desert Storm remained healthy, but primary care data are limited. We reviewed the outpatient visit surveillance data from the 3d Armored Cavalry Regiment to obtain information regarding soldiers' health in the field. Nontraumatic orthopedic problems accounted for the highest incidence of primary health care visits, followed by unintended injuries, gastrointestinal, respiratory, and dermatologic conditions. Visits for heat injuries, sexually transmitted diseases, unexplained fever, and psychiatric problems were low, probably due to preventive measures. These results suggest that increased prevention to decrease orthopedic problems and unintended injuries may substantially reduce outpatient visits during future deployments. Medical surveillance during future deployments can be improved by taking advantage of current advances in technology to facilitate patient data retrieval and provide timely information to first- and second-echelon medical personnel.
Ant Colony Optimization With Local Search for Dynamic Traveling Salesman Problems.
Mavrovouniotis, Michalis; Muller, Felipe M; Yang, Shengxiang
2016-06-13
For a dynamic traveling salesman problem (DTSP), the weights (or traveling times) between two cities (or nodes) may be subject to changes. Ant colony optimization (ACO) algorithms have proved to be powerful methods to tackle such problems due to their adaptation capabilities. It has been shown that the integration of local search operators can significantly improve the performance of ACO. In this paper, a memetic ACO algorithm, where a local search operator (called unstring and string) is integrated into ACO, is proposed to address DTSPs. The best solution from ACO is passed to the local search operator, which removes and inserts cities in such a way that improves the solution quality. The proposed memetic ACO algorithm is designed to address both symmetric and asymmetric DTSPs. The experimental results show the efficiency of the proposed memetic algorithm for addressing DTSPs in comparison with other state-of-the-art algorithms.
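The unstring-and-string operator can be approximated by a remove-and-reinsert move: take each city out of the best tour and splice it back at the cheapest position. The sketch below is a simplified stand-in for the operator used in the paper, applied to a random Euclidean instance:

```python
import numpy as np

def tour_length(tour, D):
    return sum(D[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def remove_and_reinsert(tour, D):
    """One pass of an unstring-and-string style local search."""
    best = list(tour)
    for city in list(tour):
        partial = [c for c in best if c != city]
        # try every reinsertion position and keep the cheapest resulting tour
        candidates = [partial[:i] + [city] + partial[i:] for i in range(len(partial) + 1)]
        best = min(candidates + [best], key=lambda t: tour_length(t, D))
    return best

rng = np.random.default_rng(3)
pts = rng.random((12, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # travel-time matrix
tour = list(range(12))
improved = remove_and_reinsert(tour, D)
print(tour_length(tour, D), "->", tour_length(improved, D))
```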
Optimal matching for prostate brachytherapy seed localization with dimension reduction.
Lee, Junghoon; Labat, Christian; Jain, Ameet K; Song, Danny Y; Burdette, Everette C; Fichtinger, Gabor; Prince, Jerry L
2009-01-01
In prostate brachytherapy, x-ray fluoroscopy has been used for intra-operative dosimetry to provide qualitative assessment of implant quality. More recent developments have made possible 3D localization of the implanted radioactive seeds. This is usually modeled as an assignment problem and solved by resolving the correspondence of seeds. It is, however, NP-hard, and the problem is even harder in practice due to the significant number of hidden seeds. In this paper, we propose an algorithm that can find an optimal solution from multiple projection images with hidden seeds. It solves an equivalent problem with reduced dimensional complexity, thus allowing us to find an optimal solution in polynomial time. Simulation results show the robustness of the algorithm. It was validated on 5 phantom and 18 patient datasets, successfully localizing the seeds with detection rate of ≥97.6% and reconstruction error of ≤1.2 mm. This is considered to be clinically excellent performance.
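The correspondence step at the heart of this formulation can be illustrated, in a drastically simplified two-view setting without hidden seeds, with the Hungarian solver in scipy; the one-dimensional cost model below is a toy placeholder for the actual projection geometry:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
seeds = rng.random((10, 3))                       # unknown 3D seed positions

# two simplified "projections": drop one coordinate each, shuffle the second view
view_a = seeds[:, :2]
perm = rng.permutation(10)
view_b = seeds[perm][:, [0, 2]] + 0.01 * rng.normal(size=(10, 2))

# toy cost: disagreement in the shared x-coordinate between the two views
cost = np.abs(view_a[:, [0]] - view_b[:, 0][None, :])
rows, cols = linear_sum_assignment(cost)          # minimum-cost correspondence
print(np.mean(perm[cols] == rows))                # fraction of seeds matched correctly
```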
Skin problems in amputees: a descriptive study.
Koc, Erol; Tunca, Mustafa; Akar, Ahmet; Erbil, A Hakan; Demiralp, Bahtiyar; Arca, Ercan
2008-05-01
Skin problems are common in amputee patients. These problems may restrict the normal use of a prosthetic limb. We aimed to determine the range, incidence, causes and patterns of dermatological problems seen in a population of amputees. One hundred and forty-two amputees were enrolled in the study. Age, sex, age at the time of amputation, level of amputation, reason for amputation, and type of prosthesis were noted. Dermatological problems were recorded. Stumps were swabbed for bacteriological and mycological examination, and patch tests were performed in suspected patients. Of these 142 patients, 139 (97.9%) were males and 3 (2.1%) were females. The reasons for amputation in the majority of the cases were wounds due to mine explosions (n = 114, 80.3%) and gunshot wounds (n = 19, 13.4%). The other reasons were arterial diseases, traffic accidents, congenital absence of the tibia, and vascular complications of diabetes. At least one skin problem was detected in 105 (73.9%) of the 142 cases. Positive reactions to allergens were detected in 28 (43%) of the 65 cases with dermatitis. Bacterial infection was detected in 12 patients and fungal infection in 4 patients. Our descriptive study shows that skin problems have a high prevalence, up to 73.9%, in amputee patients. This high percentage indicates that dermatological problems are important in amputees. Early recognition and treatment of these problems can prevent the amputee's mental, social, and economic losses.
NASA Astrophysics Data System (ADS)
Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.
2015-02-01
HT-PEM fuel cells suffer from performance losses due to degradation effects. Therefore, the durability of HT-PEM is currently an important focus of research and development. In this paper, a novel approach is presented for an integrated short-term and long-term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short-term and long-term effects are commonly modeled separately due to their different time scales. However, in accelerated lifetime testing, long-term degradation effects have a crucial impact on the short-term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open voltage cycle test on an HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is performed in order to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to a conventional simulation approach, without a significant loss of accuracy. The approach promises a comprehensive perspective considering short-term dynamic behavior and long-term degradation effects.
The Reduced Basis Method in Geosciences: Practical examples for numerical forward simulations
NASA Astrophysics Data System (ADS)
Degen, D.; Veroy, K.; Wellmann, F.
2017-12-01
Due to the highly heterogeneous character of the earth's subsurface, the complex coupling of thermal, hydrological, mechanical, and chemical processes, and the limited accessibility, we face high-dimensional problems associated with high uncertainties in geosciences. Performing the necessary uncertainty quantifications with a reasonable number of parameters is often not possible due to the high-dimensional character of the problem. Therefore, we present the reduced basis (RB) method, a model order reduction (MOR) technique that constructs low-order approximations to, for instance, the finite element (FE) space. We use the RB method to address these computationally challenging simulations because it significantly reduces the degrees of freedom. The RB method is decomposed into an offline and an online stage, allowing the expensive pre-computations to be made beforehand so that real-time results can be obtained during field campaigns. Generally, the RB approach is most beneficial in the many-query and real-time context. We illustrate the advantages of the RB method for the field of geosciences through two examples of numerical forward simulations. The first example, a geothermal conduction problem, demonstrates the implementation of the RB method for a steady-state case. The second example, a Darcy flow problem, shows the benefits for transient scenarios. In both cases, a quality evaluation of the approximations is given. Additionally, the runtimes for both the FE and the RB simulations are compared. We emphasize the advantages of this method for repetitive simulations by showing the speed-up of the RB solution in contrast to the FE solution. Finally, we demonstrate how the implementation can be used on high-performance computing (HPC) infrastructures and evaluate its performance for such infrastructures. In particular, we point out its scalability, yielding optimal usage on HPC infrastructures and normal workstations.
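A minimal sketch of the offline/online split mentioned above, assuming a generic parametrized linear system rather than the geothermal or Darcy problems of the paper: snapshots of full-order solutions are compressed into a basis by SVD (offline), and new parameter values are then handled by a small Galerkin-projected system (online). Error estimation and affine parameter decompositions, which a practical RB implementation requires, are omitted.

```python
import numpy as np

def offline_stage(snapshots, tol=1e-6):
    """Build a reduced basis from full-order (FE-like) snapshot solutions via SVD."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1 - tol) + 1
    return U[:, :r]                        # basis V, shape (n_dof, r), r << n_dof

def online_stage(V, A, b):
    """Galerkin projection: solve the small r x r system instead of the n x n one."""
    Ar = V.T @ A @ V
    br = V.T @ b
    return V @ np.linalg.solve(Ar, br)     # reduced solution lifted back to full space

# toy steady conduction-like problem: A(mu) = A0 + mu*A1, illustrative matrices only
n = 200
A0 = np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
A1 = np.diag(np.linspace(0.1, 1.0, n))
b = np.ones(n)
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, b)
                             for mu in np.linspace(0.5, 5, 10)])   # offline solves
V = offline_stage(snapshots)
u_rb = online_stage(V, A0 + 2.3 * A1, b)   # fast online solve for a new parameter
```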
Arefnasab, Zahra; Ghanei, Mostafa; Noorbala, Ahmad Ali; Alipour, Ahmad; Babamahmoodi, Farhang; Babamahmoodi, Abdolreza; Salehi, Maryam
2013-09-01
Studies have shown that Mindfulness Based Stress Reduction (MBSR) has positive effects on the physical and psychological dimensions of chronic illnesses. In this study, for the first time, we examine the effect of this technique on quality of life and pulmonary function in chemically pulmonary injured veterans who have chronic pulmonary problems, psychological problems and low quality of life. Forty male pulmonary injured veterans were randomly assigned to two groups of 20 participants (MBSR and a wait-list (WL) control). The MBSR group then received an 8-week intervention. We evaluated quality of life (using the SF-36 questionnaire) and spirometry parameters twice, before and after the intervention, in both groups. We used a mixed factorial analysis of variance for each dependent variable; when a significant interaction effect was found, we used a paired-sample t-test to compare the before- and after-intervention data within each group, and an independent-sample t-test to compare the after-intervention data of the two groups. The MBSR group, compared to the WL group, improved the SF-36 total score (F(1,38)=12.09, P=0.001), "Role limitations due to physical problems" (F(1,38)=6.92, P=0.01), "Role limitations due to emotional problems" (F(1,38)=7.75, P=0.008), "Social functioning" (F(1,38)=9.89, P=0.003), "Mental health" (F(1,38)=15.93, P<0.001), "Vitality" (F(1,38)=40.03, P≤0.001), and "Pain" (F(1,38)=27.60, P≤0.001). MBSR had no significant effect on FEV1 (F(1,38)=0.03, P=0.85), FVC (F(1,38)=0.16, P=0.69) or FEV1/FVC (F(1,38)=2.21, P=0.14). MBSR can improve quality of life but not lung function in chemically pulmonary injured veterans.
Application of symbolic and algebraic manipulation software in solving applied mechanics problems
NASA Technical Reports Server (NTRS)
Tsai, Wen-Lang; Kikuchi, Noboru
1993-01-01
As its name implies, symbolic and algebraic manipulation is an operational tool which not only can retain symbols throughout computations but also can express results in terms of symbols. This report starts with a history of symbolic and algebraic manipulators and a review of the literature. With the help of selected examples, the capabilities of symbolic and algebraic manipulators are demonstrated. Applications to problems of applied mechanics are then presented: the application of automatic formulation to applied mechanics problems, application to a materially nonlinear problem (rigid-plastic ring compression) by the finite element method (FEM), and application to plate problems by FEM. The advantages and difficulties, contributions, education, and perspectives of symbolic and algebraic manipulation are discussed. It is well known that there exist some fundamental difficulties in symbolic and algebraic manipulation, such as internal swelling and mathematical limitation. A remedy for these difficulties is proposed, and the three applications mentioned are solved successfully. For example, the closed-form solution of the stiffness matrix of the four-node isoparametric quadrilateral element for the 2-D elasticity problem was not available before. Due to the work presented, its automatic construction becomes feasible. In addition, a newly found advantage of applying symbolic and algebraic manipulation is believed to be crucial in improving the efficiency of program execution in the future. This will substantially shorten the response time of a system. It is very significant for certain systems, such as missile and high speed aircraft systems, in which time plays an important role.
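As a small, hedged example of the kind of closed-form element computation discussed above (using SymPy rather than the manipulators reviewed in the report, and a Laplace-type integrand instead of the full 2-D elasticity operator), one stiffness-matrix entry of a four-node bilinear element over a rectangle with half-widths a and b can be kept fully symbolic:

```python
import sympy as sp

xi, eta, a, b = sp.symbols('xi eta a b', positive=True)

# Bilinear shape functions of the four-node quadrilateral on [-1, 1] x [-1, 1]
N = [(1 - xi) * (1 - eta) / 4, (1 + xi) * (1 - eta) / 4,
     (1 + xi) * (1 + eta) / 4, (1 - xi) * (1 + eta) / 4]

# One representative stiffness-type integral, retained symbolically in a and b
# instead of being evaluated numerically for a particular element.
k11 = sp.integrate(sp.integrate(
          sp.diff(N[0], xi)**2 / a**2 + sp.diff(N[0], eta)**2 / b**2,
          (xi, -1, 1)), (eta, -1, 1)) * a * b
print(sp.simplify(k11))   # closed-form entry, equal to b/(3*a) + a/(3*b)
```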
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background: We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results: We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Conclusion: Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-11-02
We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
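The calibration setting described above can be sketched with a toy two-state model; here SciPy's differential evolution stands in for the scatter-search metaheuristic of the paper, purely to illustrate the least-squares objective built on repeated ODE simulations. The model equations, parameter values, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def model(t, y, k1, k2):
    """Toy two-state dynamic model: A -> B -> degradation."""
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]

# synthetic "measurements" generated with assumed true parameters (0.8, 0.3)
t_obs = np.linspace(0, 10, 20)
true = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=(0.8, 0.3)).y
data = true + 0.01 * np.random.default_rng(1).standard_normal(true.shape)

def cost(theta):
    sim = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(theta)).y
    return np.sum((sim - data) ** 2)        # least-squares calibration objective

result = differential_evolution(cost, bounds=[(0.01, 5), (0.01, 5)], seed=1)
print(result.x)   # estimated (k1, k2), close to (0.8, 0.3)
```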
Suicide related ideation and behavior among Canadian gay and bisexual men: a syndemic analysis.
Ferlatte, Olivier; Dulai, Joshun; Hottes, Travis Salway; Trussler, Terry; Marchand, Rick
2015-07-02
While several studies have demonstrated that gay and bisexual men are at increased risk of suicide, less attention has been given to the processes that generate the inherent inequity with the mainstream population. This study tested whether syndemic theory can explain the excess suicide burden in a sample of Canadian gay and bisexual men. Syndemic theory accounts for co-occurring and mutually reinforcing epidemics suffered by vulnerable groups due to the effects of social marginalization. This study used data from Sex Now 2011, a cross-sectional survey of Canadian gay and bisexual men (n = 8382). The analysis measured the extent to which anti-gay marginalization and several psychosocial health problems are associated with suicide-related ideation and attempts. Since psychosocial health problems were hypothesized to have an additive effect on suicide-related ideation and attempts, the analysis calculated the effect of accumulated psychosocial health problems on suicide behavior. Suicide ideation and attempts were positively associated with each individual marginalization indicator (verbal violence, physical violence, bullying, sexual violence and work discrimination) and psychosocial health problems (smoking, party drugs, depression, anxiety, STIs, HIV risk and HIV). Furthermore, prevalence of suicide ideation and attempts increased with each added psychosocial health problem. Those who reported 3 or more had 6.90 (5.47-8.70) times the odds of experiencing suicide ideation and 16.29 (9.82-27.02) times the odds of a suicide attempt compared to those with no psychosocial health problems. This investigation suggests that syndemics is a useful theory for studying suicide behavior among gay and bisexual men. Moreover, the findings highlight a need to address gay and bisexual men's health problems holistically and the urgent need to reduce this population's experience with marginalization and violence.
NASA Astrophysics Data System (ADS)
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
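To make the per-time-step linearization concrete, the sketch below sets up one such step as a small linear program (using scipy.optimize.linprog rather than a specialized linear network programming solver); the variables, coefficients, and penalty weights are invented placeholders, not values from the framework described above.

```python
from scipy.optimize import linprog

# One simulation step: allocate reservoir releases x1 (to hydropower) and
# x2 (to a water demand node), penalising any unmet water or energy demand.
storage = 80.0          # available water this step (hm^3), illustrative
water_demand = 50.0     # downstream demand (hm^3)
energy_factor = 0.35    # energy produced per unit release (GWh / hm^3)
energy_demand = 20.0    # energy demand this step (GWh)

# decision variables: [x1, x2, unmet_water, unmet_energy]
c = [0.0, 0.0, 10.0, 5.0]                       # penalise deficits only
A_eq = [[0.0, 1.0, 1.0, 0.0],                   # x2 + unmet_water = water demand
        [energy_factor, 0.0, 0.0, 1.0]]         # energy balance with deficit
b_eq = [water_demand, energy_demand]
A_ub = [[1.0, 1.0, 0.0, 0.0]]                   # total release <= available storage
b_ub = [storage]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x)   # releases and deficits for this single time step
```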
NASA Astrophysics Data System (ADS)
Hartatiek; Yudyanto; Haryoto, Dwi
2017-05-01
A Special Theory of Relativity handbook has been successfully developed to guide students' tutorial activity in the Modern Physics course. Students' low problem-solving ability was addressed by providing tutorials in addition to the lecture class, because the limited class time during the course does not allow students enough exercises to develop their problem-solving ability. The explicit problem-solving based tutorial handbook was written around five problem-solving strategies: (1) focus on the problem, (2) picture the physical facts, (3) plan the solution, (4) solve the problem, and (5) check the result. This research and development (R&D) study consisted of 3 main steps: (1) preliminary study, (2) draft I product development, and (3) product validation. The developed draft product was validated by experts to measure the feasibility of the material and to predict the effect of the tutorials by means of questionnaires with a scale of 1 to 4. The students' problem-solving ability in the Special Theory of Relativity showed very good qualification, implying that the tutorials, supported by the tutorial handbook, increased students' problem-solving ability. The empirical test revealed that the developed handbook had a significant effect on improving students' concept mastery and problem-solving ability. Both students' concept mastery and problem-solving ability were in the middle category, with gains of 0.31 and 0.41, respectively.
Time-symmetric integration in astrophysics
NASA Astrophysics Data System (ADS)
Hernandez, David M.; Bertschinger, Edmund
2018-04-01
Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives secular energy error increase for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
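A minimal fixed-step sketch of the trapezoidal rule applied to the pendulum, of the kind analysed above, is given below; it simply integrates for many steps and reports the accumulated energy error. The step size, initial condition, and use of a generic nonlinear solver for the implicit update are assumptions, and adaptive or time-symmetrized stepping is not included.

```python
import numpy as np
from scipy.optimize import fsolve

def pendulum(z):                        # z = (q, p); H = p^2/2 - cos(q)
    q, p = z
    return np.array([p, -np.sin(q)])

def trapezoidal_step(z, h):
    """Implicit, time-reversible (but non-symplectic) trapezoidal rule."""
    f0 = pendulum(z)
    g = lambda z1: z1 - z - 0.5 * h * (f0 + pendulum(z1))
    return fsolve(g, z + h * f0)        # solve the implicit update equation

def energy(z):
    q, p = z
    return 0.5 * p**2 - np.cos(q)

z, h = np.array([1.5, 0.0]), 0.1
E0 = energy(z)
for _ in range(10000):
    z = trapezoidal_step(z, h)
print(energy(z) - E0)                   # accumulated energy error after many steps
```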
Porthé, Victoria; Vargas, Ingrid; Ronda, Elena; Malmusi, Davide; Bosch, Lola; Vázquez, M Luisa
2017-06-02
To analyse changes in health professionals' and immigrant users' perceptions of the quality of care provided to the immigrant population during the crisis. A qualitative descriptive-interpretative and exploratory study was conducted in two areas of Catalonia. Semi-structured individual interviews were used with a theoretical sample of medical (n=24) and administrative (n=10) professionals in primary care (PC) and secondary care (SC), and immigrant users (n=20). Thematic analysis was conducted and the results were triangulated. Problems related to technical and interpersonal quality emerged from the discourse of both professionals and immigrants. These problems were attributed to cutbacks during the economic crisis. Regarding technical quality, respondents reported an increase in erroneous or non-specific diagnoses, inappropriate use of diagnostic tests and non-specific treatments, due to reduction in consultation times as a result of cuts in human resources. With regard to interpersonal quality, professionals reported less empathy, and users also reported worse communication, due to changes in professionals' working conditions and users' attitudes. Finally, a reduction in the resolution capacity of the health services emerged: professionals described unnecessary repeated PC visits and limited responses in SC, while young immigrants reported an insufficient response to their health problems. The results indicate a deterioration in perceived technical and interpersonal quality during the economic crisis, due to cutbacks mainly in human resources. These changes affect the whole population, but especially immigrants.
Choubisa, Shanti Lal; Choubisa, Darshana
2016-04-01
Hydrofluorosis in humans and domestic animals is a worldwide health problem caused by prolonged fluoride exposure through the drinking of fluoride-contaminated water. In recent years, however, due to rapid industrialization in India, diverse serious health problems caused by fluoride pollution are on the rise among industrial workers, residents and domestic animals living in industrial areas. A number of coal-burning and industrial activities, such as power-generating stations, welding operations and the manufacturing or production of steel, iron, aluminum, zinc, phosphorus, chemical fertilizers, bricks, glass, plastic, cement, and hydrofluoric acid, discharge fluoride in both gaseous and particulate/dust forms into the surrounding environment. This creates industrial fluoride pollution and is an important cause of occupational exposure to fluoride in several countries, including India. Industrially emitted fluoride contaminates not only the surrounding soil, air, and water but also the vegetation, crops and many other biotic communities on which humans and animals generally depend for food. Long-term inhalation or ingestion of industrial fluoride also causes serious health problems in the forms of industrial and neighborhood fluorosis. The present communication critically reviews the research conducted so far in India on chronic industrial fluoride intoxication or poisoning (industrial and neighborhood fluorosis) in humans and various species of domestic animals due to prolonged industrial fluoride exposure or pollution. We also focus on the various bio-indicators and bio-markers of chronic industrial fluoride intoxication or pollution.
[Pathophysiological aspects of wound healing in normal and diabetic foot].
Maksimova, N V; Lyundup, A V; Lubimov, R O; Melnichenko, G A; Nikolenko, V N
2014-01-01
The main cause of the long-term healing of ulcers in patients with diabetic foot is considered to be direct mechanical damage when walking, due to reduced sensitivity caused by neuropathy, hyperglycemia, infection and peripheral artery disease. These factors determine the standard approaches to the treatment of diabetic foot, which include offloading, glycemic control, debridement of ulcers, antibiotic therapy and revascularization. Recently, however, disturbances in the skin healing process in diabetes have been recognized as an additional factor affecting the time to healing in patients with diabetic foot. Improved understanding and correction of the cellular, molecular and biochemical abnormalities in chronic wounds, in combination with standard of care, affords new ground for solving the problem of ulcer healing in diabetes.
Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas
2017-04-01
Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. However, these methods can be associated with huge computational costs that in practice limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
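A compact sketch of the surrogate-in-the-loop idea described above, with a cheap analytic stand-in for the full-waveform GPR forward model and a small scikit-learn network as the surrogate; unlike the paper, the modeling error introduced by the surrogate is not quantified probabilistically here but only crudely absorbed into an inflated data-noise term. All functions and constants are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# expensive "physics" forward model (stand-in for full-waveform modelling + picking)
def forward(m):
    return np.array([np.sum(m**2), np.sum(np.sin(m))])

# offline: train a fast surrogate on forward-model evaluations
rng = np.random.default_rng(0)
M_train = rng.uniform(-1, 1, size=(2000, 5))
D_train = np.array([forward(m) for m in M_train])
surrogate = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
surrogate.fit(M_train, D_train)

# online: Metropolis sampling that only ever calls the cheap surrogate
d_obs = forward(np.full(5, 0.3)) + 0.01 * rng.standard_normal(2)
sigma = 0.05                            # data noise, inflated to absorb surrogate error
def loglike(m):
    resid = surrogate.predict(m[None, :])[0] - d_obs
    return -0.5 * np.sum(resid**2) / sigma**2

m, ll, samples = np.zeros(5), None, []
ll = loglike(m)
for _ in range(5000):
    prop = m + 0.05 * rng.standard_normal(5)
    ll_prop = loglike(prop)
    if np.log(rng.random()) < ll_prop - ll:      # Metropolis accept/reject
        m, ll = prop, ll_prop
    samples.append(m.copy())
```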
Patients' experiences of everyday life after lung transplantation.
Thomsen, Doris; Jensen, Birte Østergaard
2009-12-01
To investigate the experiences of everyday life after lung transplantation of patients with previous chronic obstructive pulmonary disease (COPD). Compared with patients transplanted due to other indications, those with COPD prior to lung transplantation report more problems in the form of shortness of breath, fatigue, sexual problems, insomnia and increased appetite. In addition, they are often faced with problems returning to normal working life. How these problems influence the patient's everyday life is unknown. An exploratory qualitative study. Ten COPD patients (five females and five males) aged 51-69 and more than six months post transplantation were interviewed using a semi-structured interview guide. All interviews were tape-recorded, transcribed verbatim and analysed using qualitative content analysis. The analysis revealed four themes of experience: a second chance; an ordinary life without chronic rejection; even minor daily activities take time with chronic rejection; and a need for support and knowledge. These themes were considered important by the participants for their situation and daily life. This is the first study describing the experiences of everyday life after lung transplantation of patients with COPD prior to surgery. The findings highlight the importance of addressing these patients' experiences of gratitude, positive life orientation and informational needs in relation to everyday life. Health professionals should be aware of the kinds of problems both women and men may experience a long time after the lung transplantation. These constitute a basic knowledge of a patient's everyday life that is important when planning individual counselling and rehabilitation.
A Real-Time Greedy-Index Dispatching Policy for using PEVs to Provide Frequency Regulation Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ke, Xinda; Wu, Di; Lu, Ning
This article presents a real-time greedy-index dispatching policy (GIDP) for using plug-in electric vehicles (PEVs) to provide frequency regulation services. A new service cost allocation mechanism is proposed to award PEVs based on the amount of service they provided, while considering compensations for delayed-charging and reduction of battery lifetime due to participation of the service. The GIDP transforms the optimal dispatch problem from a high-dimensional space into a one-dimensional space while preserving the solution optimality. When solving the transformed problem in real-time, the global optimality of the GIDP solution can be guaranteed by mathematically proved “indexability”. Because the GIDP index can be calculated upon the PEV’s arrival and used for the entire decision making process till its departure, the computational burden is minimized and the complexity of the aggregator dispatch process is significantly reduced. Finally, simulation results are used to evaluate the proposed GIDP, and to demonstrate the potential profitability from providing frequency regulation service by using PEVs.
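The one-dimensional structure of the dispatch described above can be sketched as follows; the index values here are arbitrary placeholders, since the paper's actual index (and its indexability proof) is not reproduced, and the sketch only shows how a precomputed per-PEV index turns the dispatch into a simple greedy ranking.

```python
from dataclasses import dataclass

@dataclass
class PEV:
    name: str
    index: float        # priority index computed once, at arrival (placeholder values)
    max_kw: float       # maximum regulation capacity the vehicle can offer

def dispatch(pevs, regulation_kw):
    """Greedy index policy: rank PEVs once by index, then allocate capacity in order."""
    plan, remaining = {}, regulation_kw
    for ev in sorted(pevs, key=lambda ev: ev.index, reverse=True):
        take = min(ev.max_kw, remaining)
        plan[ev.name] = take
        remaining -= take
        if remaining <= 0:
            break
    return plan

fleet = [PEV("ev1", 0.9, 3.0), PEV("ev2", 0.4, 6.0), PEV("ev3", 0.7, 2.0)]
print(dispatch(fleet, 7.0))   # {'ev1': 3.0, 'ev3': 2.0, 'ev2': 2.0}
```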
Series resonant converter with auxiliary winding turns: analysis, design and implementation
NASA Astrophysics Data System (ADS)
Lin, Bor-Ren
2018-05-01
Conventional series resonant converters have been researched and applied in high-efficiency power units due to the benefit of their low switching losses. The main problems of series resonant converters are wide frequency variation and high circulating current. Thus, the resonant converter is limited to a narrow input voltage range, and a large input capacitor is normally adopted in commercial power units to meet the minimum hold-up time requirement when AC power is off. To overcome these problems, a resonant converter with auxiliary secondary windings is presented in this paper to achieve high voltage gain in the low input voltage case, such as during the hold-up time when utility power is off. Since the high voltage gain is used only in the low input voltage case, the frequency variation of the proposed converter is reduced compared to the conventional resonant converter. Compared to the conventional resonant converter, the hold-up time of the proposed converter is more than 40 ms. A larger magnetising inductance of the transformer is used to reduce the circulating current losses. Finally, a laboratory prototype is constructed and experiments are provided to verify the converter performance.
Application of lean six sigma to waste minimization in cigarette paper industry
NASA Astrophysics Data System (ADS)
Syahputri, K.; Sari, R. M.; Anizar; Tarigan, I. R.; Siregar, I.
2018-02-01
The cigarette paper industry is one that is always experiencing increasing demand from consumers. Consumer expectations for the products produced have also increased, both in terms of quality and quantity. The company continuously improves the quality of its products by trying to minimize nonconformity and waste and to improve the efficiency of the company's whole production process. In this cigarette paper industry, the defect rate is above the company's defect tolerance of 10% of the production amount per month. Another problem is that the production time is too long due to the many non-value-added activities on the production floor. To overcome this problem, it is necessary to improve the cigarette paper production process and minimize production time by reducing non-value-added activities. The improvement was carried out with Lean Six Sigma, a combination of the Lean and Six Sigma concepts with the DMAIC method (Define, Measure, Analyze, Improve, Control). With this Lean approach, a proposed total production time of 1479.13 minutes was obtained, with process cycle efficiency increased by 12.64%.
A Real-Time Greedy-Index Dispatching Policy for using PEVs to Provide Frequency Regulation Service
Ke, Xinda; Wu, Di; Lu, Ning
2017-09-18
This article presents a real-time greedy-index dispatching policy (GIDP) for using plug-in electric vehicles (PEVs) to provide frequency regulation services. A new service cost allocation mechanism is proposed to award PEVs based on the amount of service they provided, while considering compensations for delayed-charging and reduction of battery lifetime due to participation of the service. The GIDP transforms the optimal dispatch problem from a high-dimensional space into a one-dimensional space while preserving the solution optimality. When solving the transformed problem in real-time, the global optimality of the GIDP solution can be guaranteed by mathematically proved “indexability”. Because the GIDP index can be calculated upon the PEV’s arrival and used for the entire decision making process till its departure, the computational burden is minimized and the complexity of the aggregator dispatch process is significantly reduced. Finally, simulation results are used to evaluate the proposed GIDP, and to demonstrate the potential profitability from providing frequency regulation service by using PEVs.
García Fernández, Francisco Pedro; López Casanova, Pablo; Pancorbo Hidalgo, Pedro Luis; Verdú Soriano, José
2009-01-01
Throughout the course of human history many people have been affected by the presence of chronic wounds. Millions of anonymous people have suffered bed sores, varicose ulcers, arterial ulcers or neuropathic ulcers, but there have been some famous people who, from time to time, have removed these lesions from their cloak of invisibility. In our day and age, every time a famous person suffers from these wounds, we observe how the means of communication publicize this health problem. However, famous people also suffered from these wounds in the past. In this article, the authors review historical figures who died due to these feared sores. Kings and saints alike have been affected by this problem. Specifically, the authors focus on six historical figures: three kings, one composer and two saints; the authors analyze the influence of chronic wounds as a cause of their deaths. This article was submitted at the VII National Symposium on Bed Sores and Chronic Wounds and at the First Latin American Congress on Ulcers and Wounds.
Yan-Jun Liu; Shu Li; Shaocheng Tong; Chen, C L Philip
2017-07-01
In this paper, an adaptive control approach based on neural approximation is developed for a class of uncertain nonlinear discrete-time (DT) systems. The main characteristic of the considered systems is that they can be viewed as a class of multi-input multi-output systems in the nonstrict feedback structure. A similar control problem for this class of systems has been addressed in the past, but only for continuous-time systems. Due to the complexity of the system structure, the controller design and the stability analysis become more difficult. To stabilize this class of systems, a new recursive procedure is developed, and the effect caused by the noncausal problem in the nonstrict feedback DT structure is handled using a semirecurrent neural approximation. Based on the Lyapunov difference approach, it is proved that all the signals of the closed-loop system are semiglobally uniformly ultimately bounded, and a good tracking performance can be guaranteed. The feasibility of the proposed controllers is validated by a simulation example.
NASA Astrophysics Data System (ADS)
Tamura, Tetsuro; Kawaguchi, Masaharu; Kawai, Hidenori; Tao, Tao
2017-11-01
The connection between a meso-scale model and a micro-scale large eddy simulation (LES) is significant when simulating micro-scale meteorological problems, such as strong convective events due to typhoons or tornadoes, using LES. In these problems the mean velocity profiles and the mean wind directions change with time according to the movement of the typhoons or tornadoes. However, a fine-grid micro-scale LES cannot be connected directly to a coarse-grid meso-scale WRF. In LES, when the grid is suddenly refined at the interface of nested grids normal to the mean advection, the resolved shear stresses decrease due to interpolation errors and the delayed generation of the smaller-scale turbulence that can be resolved on the finer mesh. For the estimation of wind gust disasters, the peak wind acting on buildings and structures has to be correctly predicted. In the case of a meteorological model, the velocity fluctuations tend to show diffusive variation lacking the high-frequency components, due to numerical filtering effects. In order to predict the peak value of wind velocity with good accuracy, this paper proposes an LES-based method for generating the higher-frequency components of the velocity and temperature fields obtained by a meteorological model.
Real-time image annotation by manifold-based biased Fisher discriminant analysis
NASA Astrophysics Data System (ADS)
Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming
2008-01-01
Automatic linguistic annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot be extended to real-time online usage due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, co-training based manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from existing annotation methods, MBFDA views image annotation from a novel eigen semantic feature (corresponding to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN expansion; 2. one-to-all SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.
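As a plain illustration of discriminant-based semantic feature reduction (without the manifold bias or co-training components of MBFDA), the following sketch projects synthetic image features onto the (n_classes - 1)-dimensional Fisher discriminant space and runs a cheap classifier there as a stand-in for fast online annotation; all data are randomly generated placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

# toy data: 300 images, 128-D visual features, 3 keyword classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(100, 128)) for c in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], 100)

# supervised dimension reduction: project onto (n_classes - 1) discriminant axes
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
Z = lda.transform(X)                       # 128-D visual features -> 2-D semantic space

# a cheap classifier in the reduced space stands in for fast online annotation
print(KNeighborsClassifier().fit(Z, y).score(Z, y))
```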
A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows
NASA Astrophysics Data System (ADS)
Wall, Clifton; Pierce, Charles D.; Moin, Parviz
2002-09-01
A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
Problems with the Fraser report Chapter 1: Pitfalls in BMI time trend analysis.
Lo, Ernest
2014-11-05
The first chapter of the Fraser report "Obesity in Canada: Overstated Problems, Misguided Policy Solutions" presents a flawed and misleading analysis of BMI time trends. The objective of this commentary is to provide a tutorial on BMI time trend analysis through the examination of these flaws. Three issues are discussed: 1. Spotting regions of confidence interval overlap is a statistically flawed method of assessing trend; regression methods which measure the behaviour of the data as a whole are preferred. 2. Temporal stability in overweight (25≤BMI<30) prevalence must be interpreted in the context of the underlying population BMI distribution. 3. BMI is considered reliable for tracking population-level weight trends due to its high correlation with body fat percentage. BMI-defined obesity prevalence represents a conservative underestimate of the population at risk. The findings of the Fraser report Chapter 1 are either refuted or substantially mitigated once the above issues are accounted for, and we do not find that the 'Canadian situation largely lacks a disconcerting or negative trend', as claimed. It is hoped that this commentary will help guide public health professionals who need to interpret, or wish to perform their own, time trend analyses of BMI.
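Point 1 above can be illustrated with a short regression sketch: rather than comparing confidence-interval overlap between pairs of survey years, the trend is estimated from the whole series by ordinary least squares. The prevalence values below are made-up placeholders, not Canadian survey data.

```python
import numpy as np
import statsmodels.api as sm

# illustrative (made-up) survey estimates of obesity prevalence (%) by year
years = np.array([2000, 2003, 2006, 2009, 2012])
prev = np.array([20.1, 21.0, 22.3, 23.1, 24.0])

X = sm.add_constant(years - years.min())          # intercept + time since first survey
fit = sm.OLS(prev, X).fit()
slope, p_value = fit.params[1], fit.pvalues[1]
print(f"trend = {slope:.2f} percentage points/year, p = {p_value:.3f}")
```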
Adaptive NN controller design for a class of nonlinear MIMO discrete-time systems.
Liu, Yan-Jun; Tang, Li; Tong, Shaocheng; Chen, C L Philip
2015-05-01
An adaptive neural network tracking control is studied for a class of multiple-input multiple-output (MIMO) nonlinear systems. The studied systems are in discrete-time form and discretized dead-zone inputs are considered. In addition, the studied MIMO systems are composed of N subsystems, and each subsystem contains unknown functions and external disturbance. Due to the complicated framework of the discrete-time systems, the existence of the dead zone and the noncausal problem in discrete time, controlling such a class of systems is difficult. To overcome the noncausal problem, the studied systems are transformed, by defining coordinate transformations, into a special form which is suitable for the backstepping design. Radial basis function NNs are utilized to approximate the unknown functions of the systems. The adaptation laws and the controllers are designed based on the transformed systems. By using the Lyapunov method, it is proved that the closed-loop system is stable in the sense that all the signals are semiglobally uniformly ultimately bounded and the tracking errors converge to a bounded compact set. Simulation examples and comparisons with previous approaches are provided to illustrate the effectiveness of the proposed control algorithm.
Time and length scales within a fire and implications for numerical simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
TIESZEN,SHELDON R.
2000-02-02
A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order-of-magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale, advection time scales vary as the length scale, and diffusion time scales vary as the square of the length scale. Due to this variation with length scale, each process is dominant over a given range. The relationship between buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution of fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required, as only two to three decades of length scale are captured by solution of discretized conservation equations. By whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation for practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
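The scaling relations quoted above translate directly into order-of-magnitude estimates; the short sketch below evaluates them over several decades of length scale using representative (assumed) values for gravity, the kinematic viscosity of air, and a reference advection velocity.

```python
import numpy as np

g, nu, U = 9.81, 1.5e-5, 1.0    # gravity (m/s^2), kinematic viscosity of air (m^2/s), advection velocity (m/s)

for L in (1e-3, 1e-2, 1e-1, 1.0, 10.0):     # length scale in metres
    t_buoy = np.sqrt(L / g)     # buoyant time scale   ~ sqrt(L)
    t_adv = L / U               # advection time scale ~ L
    t_diff = L**2 / nu          # diffusion time scale ~ L^2
    print(f"L={L:>6}: buoyancy {t_buoy:.2e} s  advection {t_adv:.2e} s  diffusion {t_diff:.2e} s")
```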
NASA Astrophysics Data System (ADS)
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For the homogeneous FGLE, we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in the time variable. Due to the use of the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
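A low-order, one-dimensional sketch of the split-step idea with a spectrally evaluated fractional Laplacian is shown below for a Ginzburg-Landau-type equation; it uses simple Strang splitting with an explicit nonlinear sub-step, so it does not reach the fourth-order accuracy of the schemes proposed in the paper, and the equation, parameters, and initial condition are illustrative.

```python
import numpy as np

# 1-D toy model with a fractional Laplacian:  u_t = -(-Δ)^(α/2) u + u - |u|^2 u
N, L, alpha, dt, steps = 256, 20.0, 1.5, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
symbol = np.abs(k) ** alpha                 # Fourier symbol of (-Δ)^(α/2)

u = np.exp(-x**2).astype(complex)           # illustrative initial condition

def nonlinear(u, tau):
    # one explicit sub-step for u_t = u - |u|^2 u (kept deliberately simple)
    return u + tau * (u - np.abs(u)**2 * u)

for _ in range(steps):
    u = nonlinear(u, dt / 2)                                   # half nonlinear step
    u = np.fft.ifft(np.exp(-dt * symbol) * np.fft.fft(u))      # exact linear step in Fourier space
    u = nonlinear(u, dt / 2)                                   # half nonlinear step
print(np.max(np.abs(u)))
```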
Gender differences in 16-year trends in assault- and police-related problems due to drinking.
Timko, Christine; Moos, Bernice S; Moos, Rudolf H
2009-09-01
This study examined the frequency and predictors of physical assault and having trouble with the police due to drinking over 16 years among women and men who, at baseline, were untreated for their alcohol use disorder. Predictors examined were the personal characteristics of impulsivity, self-efficacy, and problem-solving and emotional-discharge coping, as well as outpatient treatment and Alcoholics Anonymous (AA) participation. Women and men were similar on rates of perpetrating assault due to drinking, but men were more likely to have had trouble with the police due to drinking. Respondents who, at baseline, were more impulsive and relied more on emotional discharge coping, and less on problem-solving coping, assaulted others more frequently during the first year of follow-up. Similarly, less problem-solving coping at baseline was related to having had trouble with the police more often at one and 16 years due to drinking. The association between impulsivity and more frequent assault was stronger for women, whereas associations of self-efficacy and problem-solving coping with less frequent assault and police trouble were stronger for men. Participation in AA was also associated with a lower likelihood of having trouble with the police at one year, especially for men. Interventions aimed at decreasing impulsivity and emotional discharge coping, and bolstering self-efficacy and problem-solving coping, during substance abuse treatment, and encouragement to become involved in AA, may be helpful in reducing assaultive and other illegal behaviors.
Hydrothermal carbonization of rice husk for fuel upgrading
NASA Astrophysics Data System (ADS)
Suteerawattananonda, N.; Kongkaew, N.; Patumsawad, S.
2018-01-01
Biomass is popularly used as renewable energy. In Thailand, rice is the most consumed agricultural product, and agricultural residues from rice husk can be an energy resource. However, alkali and alkaline earth materials (AAEMs) in biomass ash cause corrosion and erosion problems in heat exchanger equipment, while the acidity of the ash contributes to slagging and agglomeration problems. Reduction of alkali and alkaline earth materials can minimize these problems. To pursue this reduction in biomass ash, the hydrothermal carbonization process was selected. Thai rice husk was used as the sample to compare the results of the treatment. The rice husk was heated at temperatures ranging from 180°C to 250°C and operating pressures ranging from 12 bar to 42 bar, with a holding reaction time of 1 hour. The proximate analysis results show that the mass percentage of fixed carbon increases 2-fold, while volatile matter decreases by 40% and ash content decreases by 11% as the temperature increases. Meanwhile, the X-ray fluorescence (XRF) analysis results show that the alkali and alkaline earth materials are reduced.
Modified artificial bee colony for the vehicle routing problems with time windows.
Alzaqebah, Malek; Abdullah, Salwani; Jawarneh, Sana
2016-01-01
The natural behaviour of the honeybee has attracted the attention of researchers in recent years, and several algorithms have been developed that mimic swarm behaviour to solve optimisation problems. This paper introduces an artificial bee colony (ABC) algorithm for the vehicle routing problem with time windows (VRPTW). A Modified ABC algorithm is proposed to improve the solution quality of the original ABC. The high exploration ability of the ABC slows down its convergence speed, which may be due to the mechanism used by scout bees in replacing abandoned (unimproved) solutions with new ones. In the Modified ABC, a list of abandoned solutions is kept so that the scout bees can memorise the abandoned solutions; a scout bee then selects a solution from the list based on roulette wheel selection and replaces it with a new solution built from random routes selected from the best solution. The performance of the Modified ABC is evaluated on the Solomon benchmark datasets and compared with the original ABC. The computational results demonstrate that the Modified ABC outperforms the original ABC and also produces good solutions when compared with the best-known results in the literature. Computational investigations show that the proposed algorithm is a good and promising approach for the VRPTW.
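The selection step added in the Modified ABC can be sketched as a standard roulette wheel over the abandoned-solution list; the inverse-cost fitness used below is an assumption made for illustration, and the subsequent replacement of the selected solution by routes taken from the best solution is not shown.

```python
import random

def roulette_select(abandoned, costs):
    """Pick one abandoned VRPTW solution, with probability inversely
    proportional to its cost (better solutions are more likely to be reused)."""
    fitness = [1.0 / c for c in costs]
    pick = random.uniform(0, sum(fitness))
    acc = 0.0
    for sol, f in zip(abandoned, fitness):
        acc += f
        if acc >= pick:
            return sol
    return abandoned[-1]

abandoned_list = [["r1", "r2"], ["r3"], ["r4", "r5", "r6"]]   # illustrative route sets
costs = [820.0, 905.0, 760.0]                                 # illustrative route costs
print(roulette_select(abandoned_list, costs))
```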
De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc
2011-02-01
In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging and techniques, whereby the eddy currents interact with the biological system, like the alteration of the neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditionality and speeds up the numerical convergence. This paper shows how IIM is based on IM and what are the advantages. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency, compared to the traditional IM and is therefore a useful tool for accurate and fast TMS simulations.
Beyond deficits: intimate partner violence, maternal parenting, and child behavior over time.
Greeson, Megan R; Kennedy, Angie C; Bybee, Deborah I; Beeble, Marisa; Adams, Adrienne E; Sullivan, Cris
2014-09-01
Exposure to intimate partner violence (IPV) has negative consequences for children's well-being and behavior. Much of the research on parenting in the context of IPV has focused on whether and how IPV victimization may negatively shape maternal parenting, and how parenting may in turn negatively influence child behavior, resulting in a deficit model of mothering in the context of IPV. However, extant research has yet to untangle the interrelationships among the constructs and test whether the negative effects of IPV on child behavior are indeed attributable to IPV affecting mothers' parenting. The current study employed path analysis to examine the relationships among IPV, mothers' parenting practices, and their children's externalizing behaviors over three waves of data collection among a sample of 160 women with physically abusive partners. Findings indicate that women who reported higher levels of IPV also reported higher levels of behavior problems in their children at the next time point. When parenting practices were examined individually as mediators of the relationship between IPV and child behavior over time, one type of parenting was significant, such that higher IPV led to higher authoritative parenting and lower child behavior problems [corrected]. On the other hand, there was no evidence that higher levels of IPV contributed to more child behavior problems due to maternal parenting. Instead, IPV had a significant cumulative indirect effect on child behavior via the stability of both IPV and behavior over time. Implications for promoting women's and children's well-being in the context of IPV are discussed.
Psychoanalytic peregrinations. III: Confusion of tongues, psychoanalyst as translator.
Chessick, Richard D
2002-01-01
A variety of problems cause a confusion of tongues between the psychoanalyst and the patient. In this sense the psychoanalyst faces the same problems as the translator of a text from one language to another. Examples are given of confusion due to cultural differences, confusion due to translation differences among translators, confusion due to translator prejudice or ignorance, confusion due to ambiguous visual cues and images, and confusion due to an inherently ambiguous text. It is because of this unavoidable confusion that the humanistic sciences cannot in principle achieve the mathematical exactness of the natural sciences and should not be expected to do so, or condemned because they do not.
Problematic use of social networking sites among urban school going teenagers
Meena, Parth Singh; Mittal, Pankaj Kumar; Solanki, Ram Kumar
2012-01-01
Background: Social networking sites like Facebook, Orkut and Twitter are virtual communities where users can create individual public profiles, interact with real-life friends and meet other people based on shared interests. An exponential rise in the usage of social networking sites has been seen within the last few years. Their ease of use and immediate gratification effect on users has changed the way people in general, and students in particular, spend their time. Young adults, particularly teenagers, tend to be unaware of just how much time they really spend on social networking sites. Negative correlates of social networking site usage include a decrease in real-life social community participation and academic achievement, as well as relationship problems, each of which may be indicative of potential addiction. Aims: The aim of the study was to find out whether teenagers, especially those living in cities, spend too much time on social networking websites. Materials and Methods: 200 subjects, both boys and girls, were included in the cross-sectional study and were given a 20-item Young's Internet Addiction Test modified for social networking sites. The responses were analyzed using the chi-square test and Fisher's exact test. Results: 24.74% of the students were having occasional or frequent problems, while 2.02% of them were experiencing severe problems due to excessive time spent using social networking sites. Conclusion: With the ever increasing popularity of social media, teenagers are devoting significant time to social networking on websites and are prone to becoming 'addicted' to this form of online social interaction. PMID:24250039
Mapping on Slope Seepage Problem using Electrical Resistivity Imaging (ERI)
NASA Astrophysics Data System (ADS)
Hazreek, Z. A. M.; Nizam, Z. M.; Aziman, M.; Dan, M. F. Md; Shaylinda, M. Z. N.; Faizal, T. B. M.; Aishah, M. A. N.; Ambak, K.; Rosli, S.; Rais, Y.; Ashraf, M. I. M.; Alel, M. N. A.
2018-04-01
The stability of a slope may be influenced by several factors, such as its geomaterial properties, geometry and environmental factors. A slope affected by seepage loses strength, which promotes failure. In the past, slope seepage mapping suffered from several limitations related to cost, time and data coverage. Conventional engineering tools for detecting or mapping seepage on slopes experience these problems, particularly for large and high-elevation slope designs. Hence, this study introduces geophysical tools for slope seepage mapping based on the electrical resistivity method. Two spread lines of electrical resistivity imaging were performed on the slope crest using ABEM SAS 4000 equipment. The data acquisition configuration was based on long and short arrangements, a Schlumberger array and an equal electrode spacing interval of 2.5 m. Raw data obtained from the acquisition were analyzed using RES2DINV software. Both resistivity results show that the slope studied consists of three different anomalies representing top soil (200 – 1000 Ωm), perched water (10 – 100 Ωm) and a hard/dry layer (> 200 Ωm). It was found that the seepage problem on the slope studied was derived from perched water zones with electrical resistivity values of 10 – 100 Ωm. The perched water zone was detected at 6 m depth from ground level, with a thickness varying from 5 m upwards. The resistivity results showed good agreement with borehole data, the geological map and site observations, thus verifying the resistivity interpretation. Hence, this study has shown that electrical resistivity imaging is applicable to slope seepage mapping and is efficient in terms of cost, time, data coverage and sustainability.
Navier-Stokes simulations of unsteady transonic flow phenomena
NASA Technical Reports Server (NTRS)
Atwood, C. A.
1992-01-01
Numerical simulations of two classes of unsteady flows are obtained via the Navier-Stokes equations: a blast-wave/target interaction problem class and a transonic cavity flow problem class. The method developed for the viscous blast-wave/target interaction problem assumes a laminar, perfect gas implemented in a structured finite-volume framework. The approximately factored implicit scheme uses Newton subiterations to obtain the spatially and temporally second-order accurate time history of the blast-waves with stationary targets. The inviscid flux is evaluated using either of two upwind techniques, while the full viscous terms are computed by central differencing. Comparisons of unsteady numerical, analytical, and experimental results are made in two and three dimensions for Couette flows, a starting shock-tunnel, and a shock-tube blockage study. The results show accurate wave speed resolution and nonoscillatory discontinuity capturing of the predominantly inviscid flows. Viscous effects were increasingly significant at large post-interaction times. While the blast-wave/target interaction problem benefits from high-resolution methods applied to the Euler terms, the transonic cavity flow problem requires the use of an efficient scheme implemented in a geometrically flexible overset mesh environment. Hence, the Reynolds-averaged Navier-Stokes equations implemented in a diagonal form are applied to the cavity flow class of problems. Comparisons between numerical and experimental results are made in two dimensions for free shear layers and both rectangular and quieted cavities, and in three dimensions for Stratospheric Observatory For Infrared Astronomy (SOFIA) geometries. The acoustic behavior of the rectangular and three-dimensional cavity flows compares well with experiment in terms of frequency, magnitude, and quieting trends. However, there is a more rapid decrease in computed acoustic energy with frequency than observed experimentally owing to numerical dissipation. In addition, optical phase distortion due to the time-varying density field is modelled using geometrical constructs. The computed optical distortion trends compare with the experimentally inferred result, but underpredict the fluctuating phase difference magnitude.
New Y2K problem for mask making (or, Surviving mask data problems after 2000)
NASA Astrophysics Data System (ADS)
Sturgeon, Roger
1999-08-01
The Y2K problem has analogies in the mask-making world. Just as with the Y2K problem, where a date field has just two bytes for the year, there are some cases of mask-making data in which the file size cannot exceed 2 gigabytes. Where a two-digit date field can only unambiguously use a limited range of values (00 to 99), design coordinates can only cover a range of about 4 billion values, which is getting a little uncomfortable for all of the new applications. In retrospect, with a degree of foresight and planning the Y2K date problem could have been easily solved if new encodings had been allowed in the two-digit field. Likewise, in the mask-making industry we currently have the opportunity to achieve far superior data compression if we allow some new forms of data encoding in our data. But this will require universal agreement. The correct way to look at the Y2K problem is that some information was left out of the data stream due to common understandings that made the additional information superfluous. But as the year 2000 approaches, it has become widely recognized that missing data need to be stated explicitly, and any ambiguities in the representation of the data will need to be eliminated with precise specifications. In a similar way, old mask data generation methods have had numerous flaws that we have been able to ignore for a long time. But now is the time to fix these flaws and provide extended capabilities. What is not yet clear is whether the old data generation methods can be modified to meet these developing needs. Unilateral action is not likely to lead to much progress, so some united effort is required by all interested parties if success is to be achieved in the brief time that remains.
Web Based Information System for Job Training Activities Using Personal Extreme Programming (PXP)
NASA Astrophysics Data System (ADS)
Asri, S. A.; Sunaya, I. G. A. M.; Rudiastari, E.; Setiawan, W.
2018-01-01
Job training is one of the subjects in a university or polytechnic that involves many users and reporting activities. Time and distance became problems for users in reporting and carrying out obligatory tasks during job training, due to the location where the job training took place. This research developed a web-based job training information system to overcome these problems. The system was developed using Personal Extreme Programming (PXP). PXP is an agile method that combines Extreme Programming (XP) and the Personal Software Process (PSP). The information system was developed and tested; regarding system functionality, 24% of users strongly agree, 74% agree, 1% disagree and 0% strongly disagree.
Financing care for the uninsured: the dilemma vexes New Jersey hospitals and payers.
Wells, E V
1996-05-01
New Jersey's diverse constituencies and special interest groups don't usually agree on a public policy issue. However, almost everyone in the public policy arena agrees that hospitals should treat people who show up in emergency departments with problems requiring medical attention. For over a decade, Garden State policymakers, payers, and providers have faced the dilemma of excess demand on hospitals that treat the uninsured. This demand has risen due to increasing health care costs, development of costly technology, state deregulation of hospital payments, and employers' reluctance to insure workers and their families coupled with a mobile workforce holding part-time and seasonal jobs. The fiscal solvency of inner-city hospitals is threatened yet the problem continues to elude resolution.
A simple anaesthetic and monitoring system for magnetic resonance imaging.
Rejger, V S; Cohn, B F; Vielvoye, G J; de Raadt, F B
1989-09-01
Clinical magnetic resonance imaging (MRI) is a digital tomographic technique which utilizes radio waves emitted by hydrogen protons in a powerful magnetic field to form an image of soft-tissue structures and abnormalities within the body. Unfortunately, because of the relatively long scanning time required and the narrow deep confines of the MRI tunnel and Faraday cage, some patients cannot be examined without the use of heavy sedation or general anaesthesia. Due to poor access to the patient and the strong magnetic field, several problems arise in monitoring and administering anaesthesia during this procedure. In this presentation these problems and their solutions, as resolved by our institution, are discussed. Of particular interest is the anaesthesia circuit specifically adapted for use during MRI scanning.
Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body
NASA Astrophysics Data System (ADS)
Wang, Xijing; Li, Jisheng
With the development trends for modern satellites towards both macro-scale and micro-scale, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions. As the development of computer technology enables new optimal algorithms to be used continuously, a powerful tool for solving the problem is provided. Many papers about attitude adjustment have been published; the spacecraft configurations considered are rigid bodies with flexible parts or gyrostat-type systems, and the objective function usually involves minimum time or minimum fuel. During earlier satellite missions, attitude acquisition was achieved by using momentum exchange devices, performed by a sequential single-axis slewing strategy. Recently, the simultaneous three-axis minimum-time maneuver (reorientation) problem has been studied by many researchers. It is important to study the minimum-time maneuver of a rigid spacecraft within onboard power limits, both for its academic value and because of potential space applications such as surveying multiple targets in space. The minimum-time maneuver of a rigid spacecraft is a basic problem because the solutions for maneuvering flexible spacecraft are based on the solution to the rigid-body slew problem. A new method for the open-loop solution of a rigid spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for moving the spacecraft from one state to another can be determined. There is a difference between the single-axis and multi-axis cases. For the single-axis case an analytical solution is possible, and the switching line passing through the state-space origin is parabolic. For the multi-axis case, an analytical solution is impossible due to the dynamic coupling between the axes, and the problem must be solved numerically. Modern research has shown that Euler-axis rotations are, in general, only quasi-time-optimal. On the basis of the minimum value principle, the problem of reorienting an inertially symmetric spacecraft with a time cost function from an initial state of rest to a final state of rest is considered, and the solution is stated below. Firstly, the essential conditions for solving the problem are deduced with the minimum value principle. The necessary conditions for optimality yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case, the solution is a bang-bang maneuver; the control profile is characterized by saturated controls for the entire maneuver. Singular control may exist, but it is singular only in a mathematical sense: by physical reasoning, the larger the magnitude of the control torque, the shorter the time, so saturated controls are also used in the singular case. Secondly, the controls are always at their maximum, so the key problem is to determine the switching points; the original problem thus reduces to finding the switching times. By adjusting the switch on/off times, a genetic algorithm, a robust optimization method, is used to determine the switching structure without the gyroscopic coupling. The traditional GA is improved upon in this research. The homotopy method for solving the nonlinear algebraic equations is based on rigorous topological continuation theory. Based on the idea of homotopy, relaxation parameters are introduced, and the switching points are determined with simulated annealing.
Computer simulation results using a rigid body show that the new method is feasible and efficient. A practical method of computing approximate solutions to the time-optimal control switch times for rigid-body reorientation has been developed.
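For the single-axis case mentioned above, the rest-to-rest time-optimal solution reduces to a bang-bang profile on a double integrator with a single switch at mid-maneuver. The sketch below illustrates only that textbook special case (the inertia, torque limit and slew angle are arbitrary example values); it is not the authors' multi-axis genetic-algorithm/homotopy method.

```python
import numpy as np

def single_axis_bang_bang(theta_f, inertia, u_max):
    """Rest-to-rest, single-axis, time-optimal slew on a double integrator.

    Full positive torque for the first half, full negative torque for the
    second half; the single switch occurs at t_f / 2.
    """
    t_f = 2.0 * np.sqrt(theta_f * inertia / u_max)   # total maneuver time
    t_switch = 0.5 * t_f                             # switch point
    return t_f, t_switch

# Example values (illustrative only): 30 deg slew, 1200 kg m^2, 0.5 N m
t_f, t_sw = single_axis_bang_bang(np.deg2rad(30.0), 1200.0, 0.5)
print(f"maneuver time = {t_f:.1f} s, switch at {t_sw:.1f} s")
```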
2011-10-28
of municipal police, politicians and other officials having been investigated or removed from duty in the last five years suggests the problem was... removed or resigned. See Chuck Neubauer, "Mexican Prosecutors Step Down Amid Purge," The Washington Times, August 2, 2011, http://www.washingtontimes...financial regimes in Mexico have posed a barrier to success. Too few meaningful cartel members are effectively brought to justice due to
A Summary of the Naval Postgraduate School Research Program and Recent Publications
1990-09-01
principles to divide the spectrum of MATLAB computer program on a 386-type a wide-band spread-spectrum signal into sub- computer. Because of the high rf...original in time and a large data sample was required. An signal. Effects due the fiber optic pickup array extended version of MATLAB that allows and...application, such as orbital mechanics and weather prediction. Professor Gragg has also developed numerous MATLAB programs for linear programming problems
Spectral and correlation analysis with applications to middle-atmosphere radars
NASA Technical Reports Server (NTRS)
Rastogi, Prabhat K.
1989-01-01
The correlation and spectral analysis methods for uniformly sampled stationary random signals, the estimation of their spectral moments, and problems arising due to nonstationarity are reviewed. Some of these methods are already in routine use in atmospheric radar experiments. Other methods based on the maximum entropy principle and time series models have been used in analyzing data, but are just beginning to receive attention in the analysis of radar signals. These methods are also briefly discussed.
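As a rough illustration of the spectral-moment estimation mentioned here, the sketch below computes the zeroth, first and second moments (total power, mean Doppler frequency, spectral width) of a power spectrum. It is a generic example on a synthetic spectrum, not one of the specific estimators reviewed in the report.

```python
import numpy as np

def spectral_moments(freqs, power):
    """Zeroth, first and second moments of a Doppler power spectrum."""
    m0 = np.sum(power)                                            # total power
    mean_f = np.sum(freqs * power) / m0                           # mean Doppler frequency
    width = np.sqrt(np.sum((freqs - mean_f) ** 2 * power) / m0)   # spectral width
    return m0, mean_f, width

# Synthetic spectrum: Gaussian echo centred at 50 Hz (illustrative only)
freqs = np.linspace(-500.0, 500.0, 1024)
power = np.exp(-0.5 * ((freqs - 50.0) / 20.0) ** 2)
m0, mean_f, width = spectral_moments(freqs, power)
print(f"power = {m0:.2f}, mean Doppler = {mean_f:.1f} Hz, width = {width:.1f} Hz")
```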
The cutoff phenomenon in finite Markov chains.
Diaconis, P
1996-01-01
Natural mixing processes modeled by Markov chains often show a sharp cutoff in their convergence to long-time behavior. This paper presents problems where the cutoff can be proved (card shuffling, the Ehrenfests' urn). It shows that chains with polynomial growth (drunkard's walk) do not show cutoffs. The best general understanding of such cutoffs (high multiplicity of second eigenvalues due to symmetry) is explored. Examples are given where the symmetry is broken but the cutoff phenomenon persists. PMID:11607633
Applications of Sharp Interface Method for Flow Dynamics, Scattering and Control Problems
2012-07-30
Reynolds number, Advances in Applied Mathematics and Mechanics, to appear. 17. K. Ito and K. Kunisch, Optimal Control of Parabolic Variational ...provides more precise and detailed sensitivity of the solution and describes the dynamical change due to the variation in the Reynolds number. The immersed... Inequalities , Journal de Math. Pures et Appl, 93 (2010), no. 4, 329-360. 18. K. Ito and K. Kunisch, Semi-smooth Newton Methods for Time-Optimal Control for a
JPRS Report, East Asia, Southeast Asia, Vietnam: TAP CHI CONG SAN, No. 6, June 1989
1989-12-21
outstanding cultural figures such as Truong Chinh , Tran Huy Lieu, Nguyen Van To, Le Thuoc, and Ca Van Thinh were mentioned. At the same time, a few...Individualism and the Struggle To Overcome It [Nguyen Chi My; article not translated] 16 The Prestige of Leaders [ Tran Ngoc Khue] 17...Continuing Reform in General Education [Nguyen Quang Vinh] 18 Exchange of Opinions The Problem of Building a Marxist-Leninist Psychology [ Tran Due Thao
A Sarsa(λ)-Based Control Model for Real-Time Traffic Light Coordination
Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei
2014-01-01
Traffic problems often occur because traffic demand from the growing number of vehicles exceeds road capacity. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction wants to obtain a larger traffic flow. In the process, junctions form a policy of coordination as well as constraints for adjacent junctions to maximize their own interests. A good traffic signal timing policy is helpful in solving the problem. However, as there are so many factors that can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience leaves them unable to adapt to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a reinforcement learning based traffic control approach can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model obtains the traffic cost of each vehicle, which considers delay time, the number of waiting vehicles, and the integrated saturation, and learns from its experience to determine the optimal actions. The experiment results show an inspiring improvement in traffic control, indicating that the proposed model is capable of facilitating real-time dynamic traffic control. PMID:24592183
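The core of a Sarsa(λ) controller such as the one described is the eligibility-trace update; a minimal tabular sketch is given below. The state/action encoding and the reward (negative waiting time) are placeholders for illustration, not the paper's actual traffic model.

```python
import numpy as np

def sarsa_lambda_step(Q, E, s, a, r, s_next, a_next,
                      alpha=0.1, gamma=0.95, lam=0.8):
    """One tabular Sarsa(lambda) update with accumulating eligibility traces."""
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]   # TD error
    E[s, a] += 1.0                                    # bump trace for (s, a)
    Q += alpha * delta * E                            # update all visited pairs
    E *= gamma * lam                                  # decay all traces
    return Q, E

# Toy setup: 10 junction states, 2 signal-phase actions (placeholder encoding)
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
E = np.zeros_like(Q)
rng = np.random.default_rng(0)

s, a = 0, 0
for _ in range(100):
    s_next = int(rng.integers(n_states))
    r = -rng.uniform(0.0, 30.0)          # e.g. negative average waiting time
    a_next = int(np.argmax(Q[s_next]))   # greedy choice, for the sketch only
    Q, E = sarsa_lambda_step(Q, E, s, a, r, s_next, a_next)
    s, a = s_next, a_next
```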
Vidor, Emmanuel; Soubeyrand, Benoit
2016-12-01
The manufacture of DTP-backboned combination vaccines is complex, and vaccine quality is evaluated by both batch composition and conformance of manufacturing history. Since their first availability, both the manufacturing regulations for DTP combination vaccines and their demand have evolved significantly. This has resulted in a constant need to modify manufacturing and quality control processes. Areas covered: Regulations that govern the manufacture of complex vaccines can be inconsistent between countries and need to be aligned with the regulatory requirements that apply in all countries of distribution. Changes in product mix and quantities can lead to uncertainty in vaccine supply maintenance. These problems are discussed in the context of the importance of these products as essential public health tools. Expert commentary: Increasing demand for complex vaccines globally has led to problems in supply due to intrinsically complex manufacturing and regulatory procedures. Vaccine manufacturers are fully engaged in the resolution of these challenges, but currently changes in demand need ideally to be anticipated approximately 3 years in advance due to long production cycle times.
A New Remote Health-Care System Based on Moving Robot Intended for the Elderly at Home
Zhou, Bing; Wu, Kaige; Wang, Jing; Chen, Gang; Ji, Bo; Liu, Siying
2018-01-01
Nowadays, due to the growing need for remote care and the constantly increasing popularity of mobile devices, a large number of mobile applications for remote care support have been developed. Although mobile phones are very suitable for young people, there are still many problems related to remote health care of the elderly. Due to hearing loss or limited mobility, it is difficult for the elderly to contact their families or doctors via real-time video call. In this paper, we introduce a new remote health-care system based on moving robots intended for the elderly at home. Since the proposed system is an online system, the elderly can contact their families and doctors quickly, anytime and anywhere. Besides calls, our system involves accurate indoor object detection algorithms and automatic health data collection, which are not included in existing remote care systems. Therefore, the proposed system solves some challenging problems related to elderly care. The experiment has shown that the proposed care system achieves excellent performance and provides a good user experience. PMID:29599949
Mariottini, Gian Luigi; Giacco, Elisabetta; Pane, Luigi
2008-01-01
The toxicity of Cnidaria is a subject of concern due to its influence on humans. In particular, jellyfish blooms can strongly affect human economic activities, such as bathing, fishery and tourism, as well as public health. The stinging structures of Cnidaria (nematocysts) produce remarkable effects on human skin, such as erythema, swelling, burning and vesicles, and at times more severe dermonecrotic, cardio- and neurotoxic effects, which are particularly dangerous in sensitive subjects. In several zones the toxicity of jellyfish is a very important health problem, and this has stimulated research on these organisms; to date, toxicological research on Cnidarian venoms in the Mediterranean region is not well developed, owing to the weak potency of the venoms of the jellyfish and anemones living in this area. In spite of this, during the last decades several problems have also been caused in the Mediterranean by stings following Cnidarian blooms, mainly of Pelagia noctiluca (Forsskål, 1775), which is known to be the most venomous Mediterranean jellyfish. This paper reviews the knowledge on this jellyfish species, particularly considering its occurrence and toxicity. PMID:19005582
Fujisawa, Jun-ichi
2015-05-14
Interfacial charge-transfer (ICT) transitions are expected to be a novel charge-separation mechanism for efficient photovoltaic conversion featuring one-step charge separation without energy loss. Photovoltaic conversion due to ICT transitions has been investigated using several TiO2-organic hybrid materials that show organic-to-inorganic ICT transitions in the visible region. In applications of ICT transitions to photovoltaic conversion, there is a significant problem that rapid carrier recombination is caused by organic-inorganic electronic coupling that is necessary for the ICT transitions. In order to solve this problem, in this work, I have theoretically studied light-to-current conversions due to the ICT transitions on the basis of the Marcus theory with density functional theory (DFT) and time-dependent DFT (TD-DFT) calculations. An apparent correlation between the reported incident photon-to-current conversion efficiencies (IPCE) and calculated reorganization energies was clearly found, in which the IPCE increases with decreasing the reorganization energy consistent with the Marcus theory in the inverted region. This activation-energy dependence was systematically explained by the equation formulated by the Marcus theory based on a simple excited-state kinetic scheme. This result indicates that the reduction of the reorganization energy can suppress the carrier recombination and enhance the IPCE. The reorganization energy is predominantly governed by the structural change in the chemical-adsorption moiety between the ground and ICT excited states. This work provides crucial knowledge for efficient photovoltaic conversion due to ICT transitions.
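The correlation discussed above follows the standard Marcus expression for a non-adiabatic electron-transfer rate; the sketch below shows how a recombination rate computed from that expression falls as the reorganization energy decreases in the inverted region. The electronic coupling and driving force are arbitrary example values, not those of the TiO2-organic systems studied.

```python
import numpy as np

HBAR = 6.582e-16   # eV s
KB_T = 0.02585     # eV at ~300 K

def marcus_rate(coupling, delta_g, lam):
    """Non-adiabatic Marcus electron-transfer rate (all energies in eV)."""
    prefactor = (2.0 * np.pi / HBAR) * coupling**2 / np.sqrt(4.0 * np.pi * lam * KB_T)
    return prefactor * np.exp(-(delta_g + lam) ** 2 / (4.0 * lam * KB_T))

# Recombination in the inverted region: the rate drops as lambda decreases
delta_g = -1.5                      # driving force (eV), illustrative
for lam in (0.8, 0.5, 0.3, 0.2):    # reorganization energies (eV), illustrative
    print(f"lambda = {lam:.1f} eV -> k = {marcus_rate(0.01, delta_g, lam):.3e} s^-1")
```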
Tackling the malaria problem in the South-East Asia Region: need for a change in policy?
Bharati, Kaushik; Ganguly, N K
2013-01-01
Malaria is largely neglected in the South-East Asia Region (SEAR), although it has the highest number of people susceptible to the disease. Malaria in the SEAR exhibits special epidemiological characteristics such as "forest malaria" and malaria due to migration across international borders. The Greater Mekong Subregion (GMS) has been a focal-point for the emergence of drug resistant malaria. With the recent emergence of artemisinin resistance, coupled with the limited availability of insecticides, malaria control efforts in the SEAR face a steep challenge. Indirect man-made factors such as climate change, as well as direct man-made factors such as the circulation of counterfeit drugs have added to the problem. Increased monitoring, surveillance, pharmacovigilance as well as cross-border collaboration are required to address these problems. Regional networking and data-sharing will keep all stakeholders updated about the status of various malaria control programmes in the SEAR. Cutting-edge technologies such as GIS/GPS (geographical information system/global positioning system) systems and mobile phones can provide information in "real-time". A holistic and sustained approach to malaria control by integrated vector management (IVM) is suggested, in which all the stakeholder countries work collaboratively as a consortium. This approach will address the malaria problem in a collective manner so that malaria control can be sustained over time.
Magnus integrators on multicore CPUs and GPUs
NASA Astrophysics Data System (ADS)
Auer, N.; Einkemmer, L.; Kandolf, P.; Ostermann, A.
2018-07-01
In the present paper we consider numerical methods to solve the discrete Schrödinger equation with a time dependent Hamiltonian (motivated by problems encountered in the study of spin systems). We will consider both short-range interactions, which lead to evolution equations involving sparse matrices, and long-range interactions, which lead to dense matrices. Both of these settings show very different computational characteristics. We use Magnus integrators for time integration and employ a framework based on Leja interpolation to compute the resulting action of the matrix exponential. We consider both traditional Magnus integrators (which are extensively used for these types of problems in the literature) as well as the recently developed commutator-free Magnus integrators and implement them on modern CPU and GPU (graphics processing unit) based systems. We find that GPUs can yield a significant speed-up (up to a factor of 10 in the dense case) for these types of problems. In the sparse case GPUs are only advantageous for large problem sizes and the achieved speed-ups are more modest. In most cases the commutator-free variant is superior but especially on the GPU this advantage is rather small. In fact, none of the advantage of commutator-free methods on GPUs (and on multi-core CPUs) is due to the elimination of commutators. This has important consequences for the design of more efficient numerical methods.
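A second-order Magnus step for a time-dependent Schrödinger equation amounts to applying exp(-i Δt H(t + Δt/2)) to the state vector. The sketch below uses SciPy's action-of-matrix-exponential routine in place of the Leja-interpolation framework described in the paper, and a small random Hermitian matrix stands in for a spin-system Hamiltonian; it is an illustration of the integrator structure only.

```python
import numpy as np
from scipy.sparse.linalg import expm_multiply

def hamiltonian(t, H0):
    """Toy time-dependent Hermitian matrix standing in for a spin Hamiltonian."""
    return H0 + np.cos(t) * np.eye(H0.shape[0])

def magnus2_step(psi, t, dt, H):
    """Second-order Magnus (exponential midpoint) step for i dpsi/dt = H(t) psi."""
    H_mid = H(t + 0.5 * dt)
    return expm_multiply(-1j * dt * H_mid, psi)

n = 64
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
H0 = 0.5 * (A + A.T)                         # symmetric, hence Hermitian
psi = np.zeros(n, dtype=complex)
psi[0] = 1.0

t, dt = 0.0, 0.01
for _ in range(100):
    psi = magnus2_step(psi, t, dt, lambda s: hamiltonian(s, H0))
    t += dt
print("norm after propagation:", np.linalg.norm(psi))   # stays ~1 (unitary step)
```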
Discreteness of time in the evolution of the universe
NASA Astrophysics Data System (ADS)
Faizal, Mir; Ali, Ahmed Farag; Das, Saurya
2017-04-01
In this paper, we will first derive the Wheeler-DeWitt equation for the generalized geometry which occurs in M-theory. Then we will observe that M2-branes act as probes for this generalized geometry, and as M2-branes have an extended structure, this extended structure will limit the resolution to which the generalized geometry can be defined. We will demonstrate that this will deform the Wheeler-DeWitt equation for the generalized geometry. We analyze such a deformed Wheeler-DeWitt equation in the minisuperspace approximation, and observe that this deformation can be used as a solution to the problem of time. This is because the deformation gives rise to time crystals in our universe due to the spontaneous breaking of time reparametrization invariance.
Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zheng, Yang; Chen, Xihao; Zhu, Rui
2017-07-01
Frequency hopping (FH) signal is widely adopted by military communications as a kind of low probability interception signal. Therefore, it is very important to research the FH signal detection algorithm. The existing detection algorithm of FH signals based on the time-frequency analysis cannot satisfy the time and frequency resolution requirement at the same time due to the influence of window function. In order to solve this problem, an algorithm based on wavelet decomposition and Hilbert-Huang transform (HHT) was proposed. The proposed algorithm removes the noise of the received signals by wavelet decomposition and detects the FH signals by Hilbert-Huang transform. Simulation results show the proposed algorithm takes into account both the time resolution and the frequency resolution. Correspondingly, the accuracy of FH signals detection can be improved.
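As a rough sketch of the processing chain described (wavelet denoising followed by Hilbert-based instantaneous-frequency estimation), the code below uses PyWavelets and SciPy. The synthetic two-hop signal, the wavelet choice and the threshold rule are assumptions for illustration, and a plain Hilbert transform stands in for the full Hilbert-Huang decomposition used in the paper.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 8000.0
t = np.arange(0, 0.2, 1.0 / fs)
# Synthetic FH signal: hops from 500 Hz to 1500 Hz halfway through, plus noise
clean = np.where(t < 0.1, np.sin(2 * np.pi * 500 * t), np.sin(2 * np.pi * 1500 * t))
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Wavelet denoising: soft-threshold the detail coefficients
coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise level estimate
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))           # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: t.size]

# Instantaneous frequency from the analytic signal
phase = np.unwrap(np.angle(hilbert(denoised)))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
print("median IF, first half :", np.median(inst_freq[: t.size // 2]))
print("median IF, second half:", np.median(inst_freq[t.size // 2 :]))
```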
The analysis of 146 patients with difficult laparoscopic cholecystectomy
Bat, Orhan
2015-01-01
Introduction: Laparoscopic cholecystectomy (LC) is a very commonly performed surgical intervention. Acute or chronic cholecystitis, adhesions due to previous upper abdominal surgery, Mirizzi's syndrome and obesity are common clinical conditions that can be associated with difficult cholecystectomy. In this study, we evaluated and scored the patients with difficult surgical exploration during laparoscopic cholecystectomy. Material and Method: All patients who underwent LC from 2010 to 2015 were retrospectively reviewed. According to intraoperative findings, difficult laparoscopic cholecystectomy (DLC) cases were described and classified. Class I difficulty: adhesion of the omentum majus, transverse colon or duodenum to the fundus of the gallbladder. Class II difficulty: adhesions in Calot's triangle and difficulty in dissection of the cystic artery and cystic duct. Class III difficulty: difficulty in dissection of the gallbladder bed (scleroatrophic gallbladder, hemorrhage from the liver during dissection of the gallbladder, cirrhotic liver). Class IV difficulty: difficulty in exploration of the gallbladder due to intra-abdominal adhesions, including technical problems. Results: A total of 146 patients were operated on with DLC. The most common difficulty type was Class I (88 patients, 60.2%). Laparoscopic cholecystectomy was converted to laparotomy in 98 patients. Operation time was found to be related to conversion to open surgery (P<0.05). The wound infection rate was also statistically higher in the conversion group (P<0.05). The operation time was longest with Class II difficulty, and the conversion rate to open surgery was also highest in the Class II difficulty group. Conclusion: Class II difficulty, characterized by severe adhesions in Calot's triangle, is the most serious problem among all DLC cases; these patients have a longer operation time and a higher conversion rate. PMID:26629124
A weight based genetic algorithm for selecting views
NASA Astrophysics Data System (ADS)
Talebian, Seyed H.; Kareem, Sameem A.
2013-03-01
A data warehouse is a technology designed to support decision making. A data warehouse is built by extracting a large amount of data from different operational systems, transforming it to a consistent form and loading it into a central repository. The type of queries in a data warehouse environment differs from those in operational systems. In contrast to operational systems, the analytical queries issued in data warehouses involve summarization of a large volume of data and therefore would normally take a long time to answer. On the other hand, the results of these queries must be returned quickly to enable managers to make decisions in as short a time as possible. As a result, an essential need in this environment is improving the performance of queries. One of the most popular methods for this task is utilizing pre-computed query results. In this method, whenever a new query is submitted by the user, instead of calculating the query on the fly against a large underlying database, the pre-computed results, or views, are used to answer it. Although the ideal option would be pre-computing and saving all possible views, in practice this is not a feasible choice, due to the disk space constraint and the overhead of view updates. Therefore, we need to select a subset of the possible views to store on disk. The problem of selecting the right subset of views is considered an important challenge in data warehousing. In this paper we suggest a Weight Based Genetic Algorithm (WBGA) for solving the view selection problem with two objectives.
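The abstract does not give the algorithmic details of the WBGA, but a generic genetic algorithm for view selection under a space constraint typically looks like the sketch below. The benefit and size values are invented, and the fitness function and operators are illustrative placeholders, not the authors' weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical candidate views: query-saving benefit and storage size
benefit = rng.uniform(10, 100, size=20)
size = rng.uniform(1, 10, size=20)
capacity = 40.0                       # disk space budget (illustrative)

def fitness(chrom):
    """Total benefit of the selected views, heavily penalised if over capacity."""
    total_size = np.sum(size * chrom)
    penalty = max(0.0, total_size - capacity) * 1e3
    return np.sum(benefit * chrom) - penalty

pop = rng.integers(0, 2, size=(50, 20))              # random initial population
for _ in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-25:]]          # truncation selection
    cut = int(rng.integers(1, 19))                   # one-point crossover position
    kids = np.vstack([np.concatenate([parents[rng.integers(25)][:cut],
                                      parents[rng.integers(25)][cut:]])
                      for _ in range(25)])
    mutate = rng.random(kids.shape) < 0.02           # bit-flip mutation
    kids = np.where(mutate, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected views:", np.flatnonzero(best), "total size:", float(np.sum(size * best)))
```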
Patients with mental problems - the most defenseless travellers.
Felkai, Peter; Kurimay, Tamas
2017-09-01
Severe mental illness occurring abroad is a difficult situation for patients, their families, and for the local medical community. Patients with mental problems are doubly stigmatized, due to their mental illness and because they are foreigners in an unfamiliar country. The appropriate treatment is often delayed, while patients are often dealt with in a manner that violates their human rights. Moreover, repatriation - which is vital in this case - is often delayed due to the lack of international protocols for the transportation and treatment of mentally ill travelers. The authors analyzed several factors related to acute mental health problems during travel: the etiology of symptoms, the appropriate treatment possibilities abroad, and the medical evacuation and repatriation of the psychotic patient. The article presents a brief review of travel-related mental disorders, the epidemiology of mental health issues faced by travelers, and the significance of pre-travel advice for these patients. The first problem is to recognize (and redress) the particular challenges faced by a psychotic patient in a strange country. The second challenge is to prepare the patients, often in a poor psychiatric state, for medical evacuation by commercial aircraft. Another important question is the best way to take the patient through customs and security control. All of these, as yet unresolved, issues can make the mental patient virtually defenseless. Although timely repatriation of a mentally ill patient is vital and urgent, most travel insurance policies exclude treatment and repatriation costs incurred due to acute mental illness. The high cost of treatment and repatriation must be paid by the patient or their family, which could lead to severe financial strain or insolvency. Changing the approaches taken by the local mental health care community, police, airport security, and insurance companies remains a challenge for psychiatrists.
Plagiarism Due to Misunderstanding: Online Instructor Perceptions
ERIC Educational Resources Information Center
Greenberger, Scott; Holbeck, Rick; Steele, John; Dyer, Thomas
2016-01-01
Plagiarism is an ongoing problem in higher education. This problem exists in both online and face-to-face modalities. The literature indicates that there are three ways higher education institutions define plagiarism, which includes theft, deception, and misunderstanding. Plagiarism due to misunderstanding has received less attention in the…
Interior Noise Reduction by Adaptive Feedback Vibration Control
NASA Technical Reports Server (NTRS)
Lim, Tae W.
1998-01-01
The objective of this project is to investigate the possible use of adaptive digital filtering techniques in simultaneous, multiple-mode identification of the modal parameters of a vibrating structure in real-time. It is intended that the results obtained from this project will be used for state estimation needed in adaptive structural acoustics control. The work done in this project is basically an extension of the work on real-time single mode identification, which was performed successfully using a digital signal processor (DSP) at NASA, Langley. Initially, in this investigation the single mode identification work was duplicated on a different processor, namely the Texas Instruments TMS32OC40 DSP. The system identification results for the single mode case were very good. Then an algorithm for simultaneous two mode identification was developed and tested using analytical simulation. When it successfully performed the expected tasks, it was implemented in real-time on the DSP system to identify the first two modes of vibration of a cantilever aluminum beam. The results of the simultaneous two mode case were good but some problems were identified related to frequency warping and spurious mode identification. The frequency warping problem was found to be due to the bilinear transformation used in the algorithm to convert the system transfer function from the continuous-time domain to the discrete-time domain. An alternative approach was developed to rectify the problem. The spurious mode identification problem was found to be associated with high sampling rates. Noise in the signal is suspected to be the cause of this problem but further investigation will be needed to clarify the cause. For simultaneous identification of more than two modes, it was found that theoretically an adaptive digital filter can be designed to identify the required number of modes, but the algebra became very complex which made it impossible to implement in the DSP system used in this study. The on-line identification algorithm developed in this research will be useful in constructing a state estimator for feedback vibration control.
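The frequency-warping issue noted above is a known property of the bilinear transformation: a continuous-time frequency ω_a maps to the discrete-time frequency ω_d = (2/T) arctan(ω_a T / 2), so identified modal frequencies are compressed, especially near the Nyquist limit, unless prewarping is applied. A small numeric illustration is sketched below; the sample rate and modal frequencies are arbitrary example values, not those of the beam experiment.

```python
import numpy as np

def warped_frequency(f_analog, fs):
    """Digital frequency (Hz) that the bilinear transform maps f_analog onto."""
    T = 1.0 / fs
    w_d = (2.0 / T) * np.arctan(2.0 * np.pi * f_analog * T / 2.0)
    return w_d / (2.0 * np.pi)

def prewarped_frequency(f_digital, fs):
    """Analog frequency to use so that the bilinear transform lands on f_digital."""
    T = 1.0 / fs
    w_a = (2.0 / T) * np.tan(2.0 * np.pi * f_digital * T / 2.0)
    return w_a / (2.0 * np.pi)

fs = 2000.0                               # example sampling rate (Hz)
for f in (50.0, 200.0, 600.0):            # example modal frequencies (Hz)
    print(f"{f:6.1f} Hz -> warped to {warped_frequency(f, fs):7.2f} Hz, "
          f"prewarp to {prewarped_frequency(f, fs):7.2f} Hz")
```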
NASA Astrophysics Data System (ADS)
Genco, Filippo
Damage to plasma-facing components (PFCs) due to various plasma instabilities is still a major concern for the successful development of fusion energy and represents a significant research obstacle for the community. It is of great importance to fully understand the behavior and lifetime expectancy of PFCs under both low-energy cycles during normal operation and highly energetic events such as disruptions, Edge-Localized Modes (ELMs), Vertical Displacement Events (VDEs), and runaway electrons (REs). The consequences of these energetic dumps, with energy fluxes ranging from 10 MJ/m2 up to 200 MJ/m2 applied over very short periods (0.1 to 5 ms), can be catastrophic for both safety and economic reasons. These phenomena can cause a) a large temperature increase in the target material; b) consequent melting, evaporation and erosion losses due to the extremely high heat fluxes; c) possible structural damage and permanent degradation of the entire bulk material, with probable burnout of the coolant tubes; and d) plasma contamination, with transport of target material into the chamber far from where it originated. The modeling of off-normal events such as disruptions and ELMs requires the simultaneous solution in time of three main problems: a) the heat transfer in the plasma-facing component; b) the interaction of the vapor produced from the surface with the incoming plasma particles; and c) the transport of the radiation produced in the vapor-plasma cloud. In addition, the moving-boundary problem at the material surface has to be considered and solved. Considering a carbon divertor as the target, there are two moving boundaries, since for the given conditions carbon does not melt: the plasma front and the moving eroded material surface. The current solution methods for this problem use finite differences and a moving coordinate system based on the Crank-Nicolson method and the Alternating Directions Implicit (ADI) method. Currently, Particle-In-Cell (PIC) methods are widely used for solving complex dynamics problems involving distorted plasma hydrodynamics and plasma physics. The PIC method solves the hydrodynamic equations by solving all field equations while simultaneously tracking "sample particles" or pseudo-particles (representative of the much more numerous real ones) as they move under the influence of diffusion or magnetic forces. The advantage of PIC techniques over the more classical Lagrangian finite-difference methods lies in the fact that detailed information about the particles is available at all times, and mass and momentum transport values are constantly provided. This allows the behavior of the plasma to be well described with a relatively small number of particles, even in the presence of highly distorted flows, without losing accuracy. The radiation transport equation is solved at each time step, calculating the opacity and emissivity coefficients for each cell. Photon radiation continuum and line fluxes are also calculated over the entire domain and provide useful information for the overall energy calculation of the system, which in the end yields the total erosion values and the lifetime of the target material. In this thesis, a new code named HEIGHTS-PIC has been created, using a new approach to the PIC technique to solve the three physics problems involved, integrating each of them as a continuum and providing insight into the plasma behavior, its evolution in time and the physical understanding of the very complex phenomena taking place.
The results produced with the models are compared with the well-known and benchmarked HEIGHTS package and also with existing experimental results especially produced in Russia at the TRINITI facility. Comparisons with LASER experiments are also discussed.
Can a quantum state over time resemble a quantum state at a single time?
NASA Astrophysics Data System (ADS)
Horsman, Dominic; Heunen, Chris; Pusey, Matthew F.; Barrett, Jonathan; Spekkens, Robert W.
2017-09-01
The standard formalism of quantum theory treats space and time in fundamentally different ways. In particular, a composite system at a given time is represented by a joint state, but the formalism does not prescribe a joint state for a composite of systems at different times. If there were a way of defining such a joint state, this would potentially permit a more even-handed treatment of space and time, and would strengthen the existing analogy between quantum states and classical probability distributions. Under the assumption that the joint state over time is an operator on the tensor product of single-time Hilbert spaces, we analyse various proposals for such a joint state, including one due to Leifer and Spekkens, one due to Fitzsimons, Jones and Vedral, and another based on discrete Wigner functions. Finding various problems with each, we identify five criteria for a quantum joint state over time to satisfy if it is to play a role similar to the standard joint state for a composite system: that it is a Hermitian operator on the tensor product of the single-time Hilbert spaces; that it represents probabilistic mixing appropriately; that it has the appropriate classical limit; that it has the appropriate single-time marginals; that composing over multiple time steps is associative. We show that no construction satisfies all these requirements. If Hermiticity is dropped, then there is an essentially unique construction that satisfies the remaining four criteria.
Medical-Grade Channel Access and Admission Control in 802.11e EDCA for Healthcare Applications
Son, Sunghwa; Park, Kyung-Joon; Park, Eun-Chan
2016-01-01
In this paper, we deal with the problem of assuring medical-grade quality of service (QoS) for real-time medical applications in wireless healthcare systems based on IEEE 802.11e. Firstly, we show that the differentiated channel access of IEEE 802.11e cannot effectively assure medical-grade QoS because of priority inversion. To resolve this problem, we propose an efficient channel access algorithm. The proposed algorithm adjusts arbitrary inter-frame space (AIFS) in the IEEE 802.11e protocol depending on the QoS measurement of medical traffic, to provide differentiated near-absolute priority for medical traffic. In addition, based on rigorous capacity analysis, we propose an admission control scheme that can avoid performance degradation due to network overload. Via extensive simulations, we show that the proposed mechanism strictly assures the medical-grade QoS and improves the throughput of low-priority traffic by more than several times compared to the conventional IEEE 802.11e. PMID:27490666
Remote Operations of the Deep Space Network Radio Science Subsystem
NASA Astrophysics Data System (ADS)
Caetta, J.; Asmar, S.; Abbate, S.; Connally, M.; Goltz, G.
1998-04-01
The capability for scientists to remotely control systems located at the Deep Space Network facilities only recently has been incorporated in the design and implementation of new equipment. However, time lines for the implementation, distribution, and operational readiness of such systems can extend much farther into the future than the users can wait. The Radio Science Systems Group was faced with just that circumstance; new hardware was not scheduled to become operational for several years, but the increasing number of experiments and configurations for Cassini, Galileo, Mars missions, and other flight projects made that time frame impractical because of the associated increasing risk of not acquiring critical data. Therefore, a method of interfacing with the current radio science subsystem has been developed and used with a high degree of success, although with occasional problems due to this capability not having been originally designed into the system. This article discusses both the method and the problems involved in integrating this new (remote) method of control with a legacy system.
NASA Astrophysics Data System (ADS)
Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng
2018-02-01
A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.
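The reformulation mentioned, casting a non-convex power-flow model as a second-order cone program, is usually handed to an off-the-shelf conic solver. A generic SOCP of the kind such reformulations typically produce is sketched below with CVXPY; the data are random placeholders, not a distribution-network model.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 8, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)
d = rng.standard_normal(n)

x = cp.Variable(n)
# Generic SOCP: minimise a linear cost subject to a second-order cone constraint,
# the constraint class produced by conic relaxations of power-flow equations.
constraints = [cp.norm(A @ x + b, 2) <= d @ x + 10.0,
               x >= -1.0,
               x <= 1.0]
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print("status:", prob.status, "optimal value:", prob.value)
```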
Bandages and difficulty with bathing: introducing Seal-Tight.
Lindsay, Ellie
2005-06-01
Patients with compression bandages experience difficulty with bathing due to the possibility that bandages may become wet and affect the wound. Bandage and dressing changes resulting from accidental wetting also cost the NHS considerable time and money. This product focus highlights the social and psychological impact on the patient when they are unable to bathe and offers a solution to the problem. Seal-Tight is a product that has been newly placed on the drug tariff, making it widely available to all patients who wear bandages (or plaster casts). Seal-Tight enables the patient to bathe, in some cases for the first time for months or even years.
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which will enable the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their shortcomings and the development of constrained time stepping algorithms, as well as illustrate the results of several numerical experiments which benchmark the new procedure.
NASA Astrophysics Data System (ADS)
Zupan, Jure
1995-04-01
All problems that are in some way linked to handling multivariate experiments versus multivariate responses can be approached by the group of methods that has recently become known as artificial neural network (ANN) techniques. In this lecture, the types of problems that can be solved by ANN techniques, rather than the ANN techniques themselves, will be addressed first. This issue is rather important because ANN techniques can be used for a very broad range of problems, and choosing the wrong method can often result either in a failure to produce an effective solution or in very time-consuming and ineffective handling. Among the types of problems that can be solved by different ANN techniques, classification, mapping, look-up tables, and modelling will be emphasized and discussed. Because all the mentioned problems can also be solved by standard techniques, special emphasis will be placed on the advantages and drawbacks of employing different ANN techniques. Because the range of possible uses of ANNs is so broad, even a very specific problem can be solved by many different ANN architectures, or even using different learning strategies within ANNs. In the second part, the main learning strategies and the corresponding choices of ANN architecture will be discussed. In this part, the parameters and some guidelines on how to select the method and design the ANNs will be shown using examples of reported ANN applications in chemistry. The ANN learning strategies discussed will be back-propagation of errors, Kohonen learning, and counter-propagation learning. The potential user of ANNs should first consider the problem and then inspect the availability of the data, and the data themselves, to decide which ANN method is best suited. In this respect, the amount of data, the dimensionality of the measurement space, and the form of the data (alphanumeric entries, binary, real, or even mixed forms of data) are crucial. After considering all these factors, the appropriate neural network architecture can be determined. Additionally, the selection of the optimal ANN involves the determination of specific internal parameters such as the learning rate, the momentum term, the neighbourhood function, the time-dependent decrease of corrections, etc. Even after all these decisions have been made, the learning procedure itself is not a straightforward task. Here, the division of the entire ensemble of data into three data sets - training, control and test sets - is crucial.
Schermerhorn, Alice C; D'Onofrio, Brian M; Slutske, Wendy S; Emery, Robert E; Turkheimer, Eric; Harden, K Paige; Heath, Andrew C; Martin, Nicholas G
2012-12-01
Previous studies have found that child attention-deficit/hyperactivity disorder (ADHD) is associated with more parental marital problems. However, the reasons for this association are unclear. The association might be due to genetic or environmental confounds that contribute to both marital problems and ADHD. Data were drawn from the Australian Twin Registry, including 1,296 individual twins, their spouses, and offspring. We studied adult twins who were discordant for offspring ADHD. Using a discordant twin pairs design, we examined the extent to which genetic and environmental confounds, as well as measured parental and offspring characteristics, explain the ADHD-marital problems association. Offspring ADHD predicted parental divorce and marital conflict. The associations were also robust when comparing differentially exposed identical twins to control for unmeasured genetic and environmental factors, when controlling for measured maternal and paternal psychopathology, when restricting the sample based on timing of parental divorce and ADHD onset, and when controlling for other forms of offspring psychopathology. Each of these controls rules out alternative explanations for the association. The results of the current study converge with those of prior research in suggesting that factors directly associated with offspring ADHD increase parental marital problems.
A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan
NASA Astrophysics Data System (ADS)
Bhongade, A. S.; Khodke, P. M.
2014-04-01
Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though the scheduling of such problems is solved using heuristics, the available solution approaches can provide solutions only for moderately sized problems due to the large computation time required. In this work, a scheduling approach is developed for such a flow-shop manufacturing system having machining workstations followed by assembly workstations. The initial schedule is generated using the Disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large-sized problems. The GA is found to give near-optimal solutions, based on the deviation of the makespan from the lower bound. The lower bound of the makespan of such problems is estimated, and the percentage deviation of the makespan from the lower bound is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (with up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain an optimal makespan.
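For reference, the makespan of a given job sequence in a permutation flow shop follows the standard completion-time recurrence C[i][k] = max(C[i-1][k], C[i][k-1]) + p[job][k]; a small sketch is below. The processing-time matrix is a random placeholder, and the assembly stage of the paper's problem is not modelled.

```python
import numpy as np

def makespan(perm, proc):
    """Makespan of a permutation flow shop.

    proc[j][k] = processing time of job j on machine k; all jobs visit the
    machines in the same order, in the sequence given by perm.
    """
    n_jobs, n_mach = proc.shape
    C = np.zeros((n_jobs, n_mach))
    for i, j in enumerate(perm):
        for k in range(n_mach):
            prev_job = C[i - 1, k] if i > 0 else 0.0    # same machine, previous job
            prev_mach = C[i, k - 1] if k > 0 else 0.0   # same job, previous machine
            C[i, k] = max(prev_job, prev_mach) + proc[j, k]
    return C[-1, -1]

rng = np.random.default_rng(3)
proc = rng.integers(1, 20, size=(6, 4)).astype(float)   # 6 jobs, 4 machines (toy data)
perm = list(range(6))
print("makespan of the identity sequence:", makespan(perm, proc))
```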
Complex fuzzy soft expert sets
NASA Astrophysics Data System (ADS)
Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak
2017-04-01
Complex fuzzy sets and their accompanying theory, although in their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, there are two major problems inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they do not have a mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, which is a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides having the added advantage of allowing the users to know the opinions of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.
Mortimer, Duncan; Segal, Leonie
2006-01-01
To propose methods for the inclusion of within-family external effects in clinical and economic evaluations. To demonstrate the extent of bias due to the exclusion of within-family external effects when measuring the relative performance of interventions for problem drinking and alcohol dependence. The timing and magnitude of treatment effects are modified to accommodate the external health-related quality of life impact of having a problem or dependent drinker in the family home. The inclusion of within-family external effects reduces cost per QALY estimates of interventions for problem drinking and alcohol dependence thereby improving the performance of all evaluated interventions. In addition, the inclusion of within-family external effects improves the relative performance of interventions targeted at those with moderate-to-severe alcohol dependence as compared to interventions targeted at less severe alcohol problems. Failure to take account of external effects in clinical and economic evaluations results in an uneven playing field. Interventions with readily quantifiable health benefits (where social costs and benefits are predominantly comprised of private costs and benefits) are at a distinct advantage when competing for public funding against interventions with quantitatively important external effects.
NASA Technical Reports Server (NTRS)
Vajingortin, L. D.; Roisman, W. P.
1991-01-01
The problem of ensuring the required quality of products and/or technological processes often becomes more difficult due to the fact that there is no general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time taken to finish complex, vital articles. To create this theory, one has to overcome a number of difficulties and to solve the following tasks: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; finding a new technique for assigning tolerances to primary factors with regard to economic, technological, and other criteria, the technique being based on the solution of the main problem; and the well-reasoned assignment of nominal values for the primary factors which serve as the basis for creating tolerances. Each of the above listed tasks is of independent importance. An attempt is made to give solutions for this problem. The above problem, of ensuring quality in a mathematically formalized setting, is called the multiple inverse problem.
Schermerhorn, Alice C.; D’Onofrio, Brian M.; Slutske, Wendy S.; Emery, Robert E.; Turkheimer, Eric; Harden, K. Paige; Heath, Andrew C.; Martin, Nicholas G.
2013-01-01
Background Previous studies have found that child attention-deficit/hyperactivity disorder (ADHD) is associated with more parental marital problems. The reasons for this association are unclear, however. The association might be due to genetic or environmental confounds that contribute to both marital problems and ADHD. Method Data were drawn from the Australian Twin Registry, including 1296 individual twins, their spouses, and offspring. We studied adult twins who were discordant for offspring ADHD. Using a discordant twin pairs design, we examined the extent to which genetic and environmental confounds, as well as measured parental and offspring characteristics, explain the ADHD-marital problems association. Results Offspring ADHD predicted parental divorce and marital conflict. The associations were also robust when comparing differentially exposed identical twins to control for unmeasured genetic and environmental factors, when controlling for measured maternal and paternal psychopathology, when restricting the sample based on timing of parental divorce and ADHD onset, and when controlling for other forms of offspring psychopathology. Each of these controls rules out alternative explanations for the association. Conclusion The results of the current study converge with those of prior research in suggesting that factors directly associated with offspring ADHD increase parental marital problems. PMID:22958575
A Description of Suicides in the Army National Guard During 2007-2014 and Associated Risk Factors.
Griffith, James
2017-06-01
Suicide, due to its increased occurrence in recent years, has been a chief concern of the U.S. military. While there have been many published studies on the topic, conspicuously absent are studies that have included reserve military personnel. To fill this gap, this study reports descriptive statistics of personnel information and events surrounding 706 Army National Guard suicides that had occurred from 2007 through 2014. Comparative personnel information for random samples of nonsuicides for similar years (8 years, 1,000 cases per year) allowed examining factors associated most with suicide. Findings were very similar to those observed in the active duty Army and civilian populations. Primary risk factors for suicide were as follows: age (young), gender (male), and race/ethnicity (White). Most suicides occurred in nonmilitary status (86%) involving personal firearms (72%). Most frequent events surrounding the suicide were as follows: poor military performance (36% of all suicides), parent-family relationship problems (28%), substance abuse (27%), past behavioral health problem (20%), current behavioral health problems (10%), income problems (22%), and full-time employment problems (18%). Implications of findings for suicide prevention are discussed.
Monitoring risk-adjusted medical outcomes allowing for changes over time.
Steiner, Stefan H; Mackay, R Jock
2014-10-01
We consider the problem of monitoring and comparing medical outcomes, such as surgical performance, over time. Performance is subject to change due to a variety of reasons including patient heterogeneity, learning, deteriorating skills due to aging, etc. For instance, we expect inexperienced surgeons to improve their skills with practice. We propose a graphical method to monitor surgical performance that incorporates risk adjustment to account for patient heterogeneity. The procedure gives more weight to recent outcomes and down-weights the influence of outcomes further in the past. The chart is clinically interpretable as it plots an estimate of the failure rate for a "standard" patient. The chart also includes a measure of uncertainty in this estimate. We can implement the method using historical data or start from scratch. As the monitoring proceeds, we can base the estimated failure rate on a known risk model or use the observed outcomes to update the risk model as time passes. We illustrate the proposed method with an example from cardiac surgery. © The Author 2013. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
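As a hedged illustration of the kind of quantity such a chart might plot (this is not the authors' estimator; the function name, smoothing weight, and example values below are assumptions), one simple risk-adjusted scheme keeps an exponentially weighted average of observed-minus-expected outcomes and adds it to the failure probability of a reference "standard" patient; the (1 − λ) factor applied at every step is what down-weights outcomes further in the past.

```python
def ewma_standardized_rate(outcomes, expected_probs, standard_risk, lam=0.05):
    """Exponentially weighted, risk-adjusted failure-rate estimate for a
    'standard' patient.  Each case contributes its observed-minus-expected
    outcome, so recent cases dominate and a failure in a low-risk patient
    moves the estimate more than one in a high-risk patient.
    """
    excess = 0.0                      # EWMA of observed minus expected failures
    history = []
    for y, p in zip(outcomes, expected_probs):
        excess = (1 - lam) * excess + lam * (y - p)
        history.append(standard_risk + excess)
    return history

# Illustrative data only: five cases with model-predicted risks and outcomes.
rates = ewma_standardized_rate(outcomes=[0, 1, 0, 0, 1],
                               expected_probs=[0.10, 0.30, 0.05, 0.20, 0.40],
                               standard_risk=0.15)
```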
Soleilhac, Antonin; Bertorelle, Franck; Antoine, Rodolphe
2018-03-15
Protein-templated gold nanoclusters (AuNCs) are very attractive due to their unique fluorescence properties. A major problem, however, may arise from protein structure changes upon nucleation of an AuNC within the protein, for instance for any future use as in vivo probes. In this work, we propose a simple and reliable fluorescence-based technique for measuring the hydrodynamic size of protein-templated gold nanoclusters. The technique uses the relation between the time-resolved fluorescence anisotropy decay and the hydrodynamic volume, through the rotational correlation time. We determine the molecular size of protein-directed AuNCs with protein templates of increasing size, e.g. insulin, lysozyme, and bovine serum albumin (BSA). Comparison of the sizes obtained by other techniques (e.g. dynamic light scattering and small-angle X-ray scattering) between bare proteins and proteins containing gold clusters allows us to address the volume changes induced either by conformational changes (for BSA) or by the formation of protein dimers (for insulin and lysozyme) during cluster formation and incorporation. Copyright © 2017 Elsevier B.V. All rights reserved.
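The link between the rotational correlation time and the hydrodynamic volume invoked here is commonly written through the Stokes-Einstein-Debye relation, θ_rot = ηV_h/(k_B·T). A minimal sketch of that conversion follows; the temperature, viscosity, and the 20 ns example value are illustrative assumptions, not values taken from this work.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius_nm(theta_rot_ns, temperature_k=298.0, viscosity_pa_s=0.89e-3):
    """Estimate a hydrodynamic radius from a rotational correlation time via
    the Stokes-Einstein-Debye relation  theta = eta * V_h / (k_B * T),
    assuming a spherical rotor.
    """
    v_h = theta_rot_ns * 1e-9 * K_B * temperature_k / viscosity_pa_s   # volume, m^3
    r_m = (3.0 * v_h / (4.0 * math.pi)) ** (1.0 / 3.0)                 # radius, m
    return r_m * 1e9

# Illustrative value only: a ~20 ns correlation time in water at 25 °C.
print(round(hydrodynamic_radius_nm(20.0), 2), "nm")
```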
Chan, Wai; Smith, Leann E.; Greenberg, Jan S.; Hong, Jinkuk; Mailick, Marsha R.
2017-01-01
The present investigation explored long-term relationships of behavioral symptoms of adolescents and adults with developmental disabilities with the mental health of their mothers. Fragile X premutation carrier mothers of an adolescent or adult child with fragile X syndrome (n = 95), and mothers of a grown child with autism (n = 213) were included. Behavioral symptoms at Time 1 were hypothesized to predict maternal depressive symptoms at Time 3 via maternal executive dysfunction at Time 2. Results provided support for the mediating pathway of executive dysfunction. Additionally, the association of behavioral symptoms with executive dysfunction differed across the two groups, suggesting that premutation carriers may be more susceptible to caregiving stress due to their genotype. PMID:28095060
Predictors of Change in Self-Reported Social Networks among Homeless Young People
Falci, Christina D.; Whitbeck, Les B.; Hoyt, Dan R.; Rose, Trina
2011-01-01
This research investigates changes in the social network size and composition of 351 homeless adolescents over three years. Findings show that network size decreases over time. Homeless youth with a conduct disorder begin street life with small networks that remain small over time. Caregiver abuse is associated with smaller emotional networks due to fewer home ties, especially to parents, and a more rapid loss of emotional home ties over time. Homeless youth with major depression start out with small networks, but are more likely to maintain network ties. Youth with substance abuse problems are more likely to maintain instrumental home ties. Finally, homeless adolescents tend to reconnect with their parents for instrumental aid and to form romantic relationships that provide emotional support. PMID:22121332
Provable classically intractable sampling with measurement-based computation in constant time
NASA Astrophysics Data System (ADS)
Sanders, Stephen; Miller, Jacob; Miyake, Akimasa
We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements. Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131, USA.
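As a rough feel for the sampling task itself (not the authors' MQC protocol, its byproduct operators, or its verification scheme), a generic IQP-style circuit can be brute-force simulated for a handful of qubits; the phase polynomial below is a hypothetical example, and the exponential cost of this direct simulation is exactly what large-scale classical intractability arguments rely on.

```python
import numpy as np
from itertools import product

def iqp_output_distribution(n, phase_fn):
    """Brute-force output distribution of an IQP-style circuit
    H^n · D · H^n |0...0>, where D applies the phase exp(i*phase_fn(z)) to
    each computational basis state z.  Exponential cost; illustration only.
    """
    dim = 2 ** n
    amps = np.ones(dim, dtype=complex) / np.sqrt(dim)   # after first Hadamard layer
    for idx, z in enumerate(product((0, 1), repeat=n)):
        amps[idx] *= np.exp(1j * phase_fn(z))            # diagonal phase layer
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    full_h = h
    for _ in range(n - 1):
        full_h = np.kron(full_h, h)                      # second Hadamard layer
    amps = full_h @ amps
    return np.abs(amps) ** 2

# Hypothetical phase polynomial: pi/4 per bit plus pi/2 per adjacent pair.
probs = iqp_output_distribution(
    3, lambda z: np.pi / 4 * sum(z) + np.pi / 2 * sum(z[i] * z[i + 1] for i in range(2)))
samples = np.random.choice(len(probs), size=5, p=probs / probs.sum())
```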
Nonequilibrium statistical mechanics Brussels-Austin style
NASA Astrophysics Data System (ADS)
Bishop, Robert C.
The fundamental problem on which Ilya Prigogine and the Brussels-Austin Group have focused can be stated briefly as follows. Our observations indicate that there is an arrow of time in our experience of the world (e.g., decay of unstable radioactive atoms like uranium, or the mixing of cream in coffee). Most of the fundamental equations of physics are time reversible, however, presenting an apparent conflict between our theoretical descriptions and experimental observations. Many have thought that the observed arrow of time was either an artifact of our observations or due to very special initial conditions. An alternative approach, followed by the Brussels-Austin Group, is to consider the observed direction of time to be a basic physical phenomenon due to the dynamics of physical systems. This essay focuses mainly on recent developments in the Brussels-Austin Group after the mid-1980s. The fundamental concerns are the same as in their earlier approaches (subdynamics, similarity transformations), but the contemporary approach utilizes rigged Hilbert space (whereas the older approaches used Hilbert space). While the emphasis on nonequilibrium statistical mechanics remains the same, their more recent approach addresses the physical features of large Poincaré systems, nonlinear dynamics and the mathematical tools necessary to analyze them.
Rapid dissolution of propofol emulsions under sink conditions.
Damitz, Robert; Chauhan, Anuj
2015-03-15
Pain accompanying intravenous injections of propofol is a major problem in anesthesia. The pain is ascribed to the interaction of propofol with the local vasculature and could be affected by rapid dissolution of the emulsion formulation to release the drug. In this paper, we measure the dissolution of propofol emulsions, including the commercial formulation Diprivan(®). We image the turbidity of blood protein sink solutions after emulsions are injected. The images are digitized, and the drug release times are estimated from the pixel intensity data for a range of starting emulsion droplet sizes. Drug release times are compared to a mechanistic model. After injection, pixel intensity or turbidity decreases due to reductions in emulsion droplet size. Drug release times can still be measured even if the emulsion does not completely dissolve, as with Diprivan(®). Both pure propofol emulsions and Diprivan(®) release drug very rapidly (under five seconds). Reducing the emulsion droplet size significantly increases the drug release rate. The observed drug release times are slightly longer than the model predictions, likely due to imperfect mixing. Drug release from emulsions occurs very rapidly after injection, which could be a contributing factor to pain on injection of propofol emulsions. Copyright © 2015. Published by Elsevier B.V.
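As a hedged sketch of how a release time could be read off a digitized intensity trace (not the authors' image-analysis pipeline; the threshold and background handling below are illustrative assumptions):

```python
import numpy as np

def release_time(times_s, intensities, fraction=0.1):
    """Return the first time at which the background-corrected pixel intensity
    drops below `fraction` of its initial value; None if it never does.
    The 10% threshold is an illustrative choice, not taken from the paper.
    """
    intensities = np.asarray(intensities, dtype=float)
    corrected = intensities - intensities.min()        # crude background removal
    if corrected[0] <= 0:
        return None
    below = np.nonzero(corrected <= fraction * corrected[0])[0]
    return times_s[below[0]] if below.size else None

# Illustrative trace only: intensity decaying over a few seconds.
t = np.linspace(0.0, 10.0, 101)
print(release_time(t, 100.0 * np.exp(-t) + 5.0))
```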
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brink, J.
Crude oil (c. 10,700 BOPD) was produced through temporary topside facilities in the Rolf Field offshore Denmark from January 7th to September 17th, 1986. These simple, unmanned, remote-controlled facilities were a low-cost solution to a problem caused by delays of the permanent topside facilities. Project execution time was two months from the start of conceptual design until start-up of oil production. Installation work was performed from a jack-up drilling rig, in part simultaneously with drilling operations. Materials and equipment installed were obtained with very short delivery times. The facilities, which were certified by a Certification Society and approved by the Danish Authorities, included all necessary safety features. Total costs for the facilities amounted to c. 1 million US$ (excluding rig time for installation). Due to their simplicity, high reliability of the production system was obtained. Availability of the facilities for the entire period from start-up was 99.6 percent. The facilities were manned 3.2 percent of the total operating time, mainly due to wireline work for reservoir monitoring. It is considered that the experience with the concept applied for early production from the Rolf Field could form the basis for more simple and cost-effective topside facilities for minor offshore fields.
Robust Landing Using Time-to-Collision Measurement with Actuator Saturation
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Matthies, Larry
2009-01-01
This paper considers a landing problem for an MAV that uses only a monocular camera for guidance. Although this sensor cannot measure the absolute distance to the target, optical flow algorithms provide the time-to-collision to the target. Existing work has applied simple proportional feedback control to simple dynamics and demonstrated its potential. However, due to the singularity in the time-to-collision measurement around the target, this feedback could require an infinite control action. This paper extends the approach to nonlinear dynamics. In particular, we explicitly consider the saturation of the actuator and include the effect of aerodynamic drag. It is shown that convergence to the target is guaranteed from a set of initial conditions, and the boundaries of such initial conditions in the state space are obtained numerically. The paper then introduces parametric uncertainties in the vehicle model and in the time-to-collision measurements. Using an argument similar to the nominal case, robust convergence to the target is proven, but the region of attraction is shown to shrink due to the uncertainties. Numerical simulation validates these theoretical results.
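The closed-loop behaviour analysed here can be sketched with a simple one-dimensional simulation: proportional control on the measured time-to-collision, explicit saturation of the commanded acceleration, and a quadratic drag term. The gains, limits, and drag coefficient below are illustrative assumptions, not the paper's vehicle model or control law.

```python
import numpy as np

def simulate_ttc_landing(d0, v0, tau_ref=2.0, k=1.5, a_max=3.0, drag=0.05,
                         dt=0.01, t_end=30.0):
    """Proportional control on time-to-collision (tau = distance / closing
    speed) with actuator saturation and quadratic drag.  Returns arrays of
    time, remaining distance, and closing speed.
    """
    d, v = float(d0), float(v0)
    ts, ds, vs = [], [], []
    for step in range(int(t_end / dt)):
        ts.append(step * dt)
        ds.append(d)
        vs.append(v)
        if d <= 0.0 or v <= 0.0:
            break                                       # reached or stopped short of target
        tau = d / v                                     # time-to-collision from optical flow
        a_cmd = k * (tau - tau_ref)                     # speed up if tau is large, brake if small
        a_cmd = float(np.clip(a_cmd, -a_max, a_max))    # actuator saturation
        v += (a_cmd - drag * v * abs(v)) * dt           # quadratic aerodynamic drag
        d -= v * dt
    return np.array(ts), np.array(ds), np.array(vs)

# Illustrative run: start 20 m from the target, closing at 4 m/s.
t, d, v = simulate_ttc_landing(20.0, 4.0)
```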
Hard Real-Time: C++ Versus RTSJ
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Reinholtz, William K.
2004-01-01
In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that is not a deterrent for veteran real-time programmers, for whom explicit memory management is common, the lack of certain language features in RTSJ (and Java) makes manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggest that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.
Toward interactive scheduling systems for managing medical resources.
Oddi, A; Cesta, A
2000-10-01
Managers of medico-hospital facilities face two general problems when allocating resources to activities: (1) finding an agreement among several contrasting requirements; and (2) managing dynamic and uncertain situations in which constraints suddenly change over time due to medical needs. This paper describes the results of research aimed at applying constraint-based scheduling techniques to the management of medical resources. A mixed-initiative problem-solving approach is adopted, in which a user and a decision support system interact to incrementally achieve a satisfactory solution to the problem. A running prototype called Interactive Scheduler is described, which offers a set of functionalities for mixed-initiative interaction to cope with medical resource management. Interactive Scheduler is endowed with a representation schema for describing the medical environment, a set of algorithms that address the specific problems of the domain, and an innovative interaction module that supports the dialogue between the support system and its user. A particular contribution of this work is the explicit representation of constraint violations and the definition of scheduling algorithms that aim at minimizing the number of constraint violations in a solution.
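A minimal sketch of what an explicit representation of constraint violations can look like (illustrative only; the data layout and the capacity rule are assumptions, not Interactive Scheduler's schema) is a count of overloaded resource/time slots that a repair heuristic would then try to drive down:

```python
from collections import defaultdict

def capacity_violations(activities, capacity):
    """Count how far demand exceeds capacity, per resource and time unit.

    activities : iterable of (resource, start, end) with integer time units
    capacity   : dict mapping resource -> units available
    Returns the total number of overloaded (resource, time) slots, an explicit
    violation count that a scheduling heuristic could try to minimize.
    """
    load = defaultdict(int)
    for res, start, end in activities:
        for t in range(start, end):
            load[(res, t)] += 1
    return sum(max(0, n - capacity[res]) for (res, t), n in load.items())

# Illustrative data: two overlapping requests for one MRI slot, capacity 1.
print(capacity_violations([("mri", 9, 11), ("mri", 10, 12), ("xray", 9, 10)],
                          {"mri": 1, "xray": 1}))   # -> 1 overloaded hour
```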
Swallowing Disorders in Schizophrenia.
Kulkarni, Deepika P; Kamath, Vandan D; Stewart, Jonathan T
2017-08-01
Disorders of swallowing are poorly characterized but quite common in schizophrenia. They are a source of considerable morbidity and mortality in this population, generally as a result of either acute asphyxia from airway obstruction or more insidious aspiration and pneumonia. The death rate from acute asphyxia may be as high as one hundred times that of the general population. Most swallowing disorders in schizophrenia seem to fall into one of two categories, changes in eating and swallowing due to the illness itself and changes related to psychotropic medications. Behavioral changes related to the illness are poorly understood and often involve eating too quickly or taking inappropriately large boluses of food. Iatrogenic problems are mostly related to drug-induced extrapyramidal side effects, including drug-induced parkinsonism, dystonia, and tardive dyskinesia, but may also include xerostomia, sialorrhea, and changes related to sedation. This paper will provide an overview of common swallowing problems encountered in patients with schizophrenia, their pathophysiology, and management. While there is a scarcity of quality evidence in the literature, a thorough history and examination will generally elucidate the predominant problem or problems, often leading to effective management strategies.
Problems and methods of calculating the Legendre functions of arbitrary degree and order
NASA Astrophysics Data System (ADS)
Novikova, Elena; Dmitrenko, Alexander
2016-12-01
The known standard recursion methods for computing the fully normalized associated Legendre functions do not give the necessary precision under the IEEE 754-2008 standard, which creates problems of underflow and overflow. An analysis of the calculation of the Legendre functions shows that underflow is not dangerous by itself. The main problem, the one that generates gross errors in the calculations, is the effect of "absolute zero": once it appears in a forward column recursion, "absolute zero" converts to zero all values that are multiplied by it, regardless of whether a zero result of the multiplication is real or not. Three methods of calculating the Legendre functions that remove the effect of "absolute zero" from the calculations are discussed here. These methods are also of interest because they have almost no limit on the maximum degree of the Legendre functions. It is shown that the numerical accuracy of the three methods is the same, but the CPU time for calculating the Legendre functions with the Fukushima method is minimal; the Fukushima method is therefore the best. Its main advantage is computational speed, which is an important factor when calculating as many Legendre functions as the 2,401,336 required for EGM2008.
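For context, the sketch below shows a textbook forward column recursion for the fully normalized associated Legendre functions in plain double precision; at very high orders the sectorial seed underflows to zero, after which every later value in the column is destroyed, which is the "absolute zero" effect described above. This is the problematic baseline, not the Fukushima method or the other remedies discussed in the paper.

```python
import numpy as np

def fn_alf_column(nmax, m, theta):
    """Fully normalized associated Legendre functions P̄_{n,m}(cos θ) for
    n = m..nmax at one colatitude, via the standard forward column recursion.
    Plain double precision: for very high m the sectorial seed underflows to
    zero and every later value in the column is lost.
    """
    t, u = np.cos(theta), np.sin(theta)
    pmm = 1.0                                   # sectorial seed P̄_{m,m}
    if m > 0:
        pmm = np.sqrt(3.0) * u
        for k in range(2, m + 1):
            pmm *= u * np.sqrt((2.0 * k + 1.0) / (2.0 * k))
    col = np.zeros(nmax - m + 1)
    col[0] = pmm
    p_prev, p = 0.0, pmm
    for n in range(m + 1, nmax + 1):
        a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
        b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                    / ((n - m) * (n + m) * (2.0 * n - 3.0)))
        p_prev, p = p, a * t * p - b * p_prev    # degree recursion at fixed order
        col[n - m] = p
    return col

# Illustrative call: column of order 10 up to degree 60 at colatitude 45 degrees.
column = fn_alf_column(60, 10, np.pi / 4)
```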
Early Warning of Food Security Crises in Urban Areas: The Case of Harare, Zimbabwe, 2007
NASA Technical Reports Server (NTRS)
Brown, Molly E.; Funk, Christopher C.
2008-01-01
In 2007, the citizens of Harare, Zimbabwe began experiencing an intense food security crisis. The crisis, caused by a complex mix of poor government policies, high inflation rates, and production declines due to drought, resulted in a massive increase in the number of food insecure people in Harare. The international humanitarian aid response to this crisis was largely successful due to the early agreement among donors and humanitarian aid officials as to the size and nature of the problem. Remote sensing enabled an early and decisive movement of resources, greatly assisting the delivery of food aid in a timely manner. Remote sensing data gave a clear and compelling assessment of significant crop production shortfalls and provided donors of humanitarian assistance a single number around which they could come to agreement. This use of remote sensing data typifies how remote sensing may be used in early warning systems in Africa.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present the detailed techniques employed for the GPU implementation. The authors also use this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics.

Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in the PP, is accomplished using the multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on the CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace-step scheme is adopted here to solve the MP. A head and neck (H and N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method.

Results: The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes.

Conclusions: The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
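As a small, hedged illustration of the master-problem solver named above, the Barzilai-Borwein method chooses its step length from successive iterates and gradients; the projected-gradient sketch below applies it to a toy nonnegative least-squares problem and is not the authors' GPU implementation or their subspace-step variant.

```python
import numpy as np

def projected_bb_descent(grad, x0, iters=200, step0=1e-3):
    """Projected gradient descent with Barzilai-Borwein (BB1) step lengths.
    `grad` returns the gradient of a smooth objective; iterates are kept
    nonnegative, as aperture intensities would be in a fluence-type problem.
    """
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    g = grad(x)
    step = step0
    for _ in range(iters):
        x_new = np.maximum(x - step * g, 0.0)            # gradient step + projection
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = float(s @ y)
        step = float(s @ s) / denom if denom > 1e-12 else step0   # BB1 step length
        x, g = x_new, g_new
    return x

# Toy quadratic example: minimize ||A x - b||^2 subject to x >= 0 (illustrative only).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(40, 10)), rng.normal(size=40)
x_opt = projected_bb_descent(lambda x: 2.0 * A.T @ (A @ x - b), np.zeros(10))
```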