A fixed energy fixed angle inverse scattering in interior transmission problem
NASA Astrophysics Data System (ADS)
Chen, Lung-Hui
2017-06-01
We study the inverse acoustic scattering problem in mathematical physics. The problem is to recover the index of refraction in an inhomogeneous medium by measuring the scattered wave fields in the far field. We transform the problem to the interior transmission problem in the study of the Helmholtz equation. We establish an inverse uniqueness result for the scatterer from knowledge of a fixed interior transmission eigenvalue. By expanding the solution in a series of spherical harmonics in the far field, we can uniquely determine the perturbation source for radially symmetric perturbations.
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of the fission source distribution during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.
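The linear fission-source approximation described above can be made concrete with a one-group, one-precursor toy model: over a macro time step, the precursor equation dC/dt = βF(t) − λC is integrated with F(t) interpolated linearly between its endpoint values. A minimal sketch (illustrative parameter names and values, not the authors' implementation):

```python
def precursor_over_macro_step(c0, f0, f1, beta, lam, dt_macro, n_sub=1000):
    """Integrate the one-precursor equation dC/dt = beta*F(t) - lam*C over
    one macro time step, with the fission source F(t) interpolated
    linearly between its endpoint values f0 and f1 (midpoint RK2)."""
    c = c0
    h = dt_macro / n_sub
    for i in range(n_sub):
        f_i = f0 + (f1 - f0) * (i * h) / dt_macro            # F at sub-step start
        f_mid = f0 + (f1 - f0) * ((i + 0.5) * h) / dt_macro  # F at sub-step midpoint
        c_half = c + 0.5 * h * (beta * f_i - lam * c)
        c = c + h * (beta * f_mid - lam * c_half)
    return c
```

At steady state (constant F with C = βF/λ) the update leaves C unchanged, which is a convenient sanity check.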
libFLASM: a software library for fixed-length approximate string matching.
Ayad, Lorraine A K; Pissis, Solon P P; Retha, Ahmad
2016-11-10
Approximate string matching is the problem of finding all factors of a given text that are at a distance at most k from a given pattern. Fixed-length approximate string matching is the problem of finding all factors of a text of length n that are at a distance at most k from any factor of length ℓ of a pattern of length m. There exist bit-vector techniques to solve the fixed-length approximate string matching problem in time [Formula: see text] and space [Formula: see text] under the edit and Hamming distance models, where w is the size of the computer word; as such, these techniques are independent of the distance threshold k and the alphabet size. Fixed-length approximate string matching is a generalisation of approximate string matching and, hence, has numerous direct applications in computational molecular biology and elsewhere. We present and make available libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching under both the edit and the Hamming distance models. Moreover, we describe how fixed-length approximate string matching is applied to solve real problems by incorporating libFLASM into established applications for multiple circular sequence alignment as well as single and structured motif extraction. Specifically, we describe how it can be used to improve the accuracy of multiple circular sequence alignment in terms of the inferred likelihood-based phylogenies; and we also describe how it is used to efficiently find motifs in molecular sequences representing regulatory or functional regions. A comparison of the library's performance with other algorithms shows that it is competitive, especially with increasing distance thresholds. Fixed-length approximate string matching is a generalisation of the classic approximate string matching problem. We present libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching.
The extensive experimental results presented here suggest that other applications could benefit from using libFLASM, and thus further maintenance and development of libFLASM is desirable.
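For readers who want to check small cases, the problem the library solves can be stated as a naive quadratic-time reference implementation under the Hamming distance model — far slower than the bit-vector techniques described above, and illustrative only, not libFLASM's actual API:

```python
def flasm_hamming(text, pattern, ell, k):
    """Return (i, j) pairs where text[i:i+ell] and pattern[j:j+ell]
    differ in at most k positions (Hamming distance model)."""
    hits = []
    for i in range(len(text) - ell + 1):
        for j in range(len(pattern) - ell + 1):
            d = sum(a != b for a, b in zip(text[i:i+ell], pattern[j:j+ell]))
            if d <= k:
                hits.append((i, j))
    return hits
```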
Willert, Jeffrey; Park, H.; Taitano, William
2015-11-01
High-order/low-order (or moment-based acceleration) algorithms have been used to significantly accelerate the solution to the neutron transport k-eigenvalue problem over the past several years. Recently, the nonlinear diffusion acceleration algorithm has been extended to solve fixed-source problems with anisotropic scattering sources. In this paper, we demonstrate that we can extend this algorithm to k-eigenvalue problems in which the scattering source is anisotropic and a significant acceleration can be achieved. Lastly, we demonstrate that the low-order, diffusion-like eigenvalue problem can be solved efficiently using a technique known as nonlinear elimination.
The pressure distribution for biharmonic transmitting array: theoretical study
NASA Astrophysics Data System (ADS)
Baranowska, A.
2005-03-01
The aim of the paper is a theoretical analysis of the finite-amplitude wave interaction problem for a biharmonic transmitting array. We assume that the array consists of 16 circular pistons of the same dimensions that are grouped in two sections. Two different arrangements of radiating elements were considered. In this situation the radiating surface is non-continuous and without axial symmetry. The mathematical model was built on the basis of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. To solve the problem, the finite-difference method was applied. The on-axis pressure amplitude of waves of different frequencies as a function of distance from the source, the transverse pressure distribution of these waves at fixed distances from the source, and their pressure amplitude distribution at fixed planes were examined. In particular, changes of the normalized pressure amplitude for the difference frequency were studied. The paper presents the mathematical model and some results of theoretical investigations obtained for different values of the source parameters.
Equivalent source modeling of the core magnetic field using Magsat data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Estes, R. H.
1983-01-01
Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal-area distributions at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable in accuracy to the standard spherical harmonic approach. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds approximately to a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
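The parameter counts quoted above are consistent with the standard rule that a spherical harmonic expansion of an internal field to degree and order N has N(N+2) Gauss coefficients, since each degree n contributes 2n+1 terms:

```python
def sh_param_count(n_max):
    """Number of Gauss coefficients for an internal-field spherical
    harmonic expansion to degree and order n_max.
    Each degree n contributes 2n+1 coefficients; the sum is n_max*(n_max+2)."""
    return sum(2 * n + 1 for n in range(1, n_max + 1))
```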
NASA Technical Reports Server (NTRS)
Usher, P. D.
1971-01-01
The almucantar radio telescope development and characteristics are presented. The radio telescope consists of a paraboloidal reflector free to rotate in azimuth but limited in altitude between two fixed angles from the zenith. The fixed angles are designed so that sources lying between two small circles parallel to the horizon (almucantars) are accessible at any one instant. Basic geometrical considerations in the almucantar design are presented. The capabilities of the almucantar telescope for source counting and for monitoring, which are essential to a resolution of the cosmological problem, are described.
Accuracy of six elastic impression materials used for complete-arch fixed partial dentures.
Stauffer, J P; Meyer, J M; Nally, J N
1976-04-01
1. The accuracy of four types of impression materials used to make a complete-arch fixed partial denture was evaluated by visual comparison and indirect measurement methods. 2. None of the tested materials allows safe finishing of a complete-arch fixed partial denture on a cast poured from one single master impression. 3. All of the tested materials can be used for impressions for a complete-arch fixed partial denture provided it is not finished on one single cast. Errors can be avoided by making a new impression with the fitted castings in place. Assembly and soldering should be done on the second cast. 4. In making the master fixed partial denture for this study, inaccurate soldering was a problem that was overcome with the use of epoxy glue. Hence, soldering seems to be a major source of inaccuracy for every fixed partial denture.
Effort-reward imbalance and its association with health among permanent and fixed-term workers
2010-01-01
Background In the past decade, the changing labor market seems to have moved away from traditional standard employment and has begun to support a variety of non-standard forms of work in its place. The purpose of our study was to compare the degree of job stress, sources of job stress, and the association of high job stress with health among permanent and fixed-term workers. Methods Our study subjects were 709 male workers aged 30 to 49 years in a suburb of Tokyo, Japan. In 2008, we conducted a cross-sectional study to compare job stress using an effort-reward imbalance (ERI) model questionnaire. Lifestyles, subjective symptoms, and body mass index were also observed from the 2008 health check-up data. Results The rate of job stress of the high-risk group measured by the ERI questionnaire was not different between permanent and fixed-term workers. However, the content of the ERI components differed. Permanent workers were distressed more by effort, overwork, or job demand, while fixed-term workers were distressed more by their job insecurity. Moreover, higher ERI was associated with the existence of subjective symptoms (OR = 2.07, 95% CI: 1.42-3.03) and obesity (OR = 2.84, 95% CI: 1.78-4.53) in fixed-term workers, while this tendency was not found in permanent workers. Conclusions Our study showed that workers with different employment types, permanent and fixed-term, have dissimilar sources of job stress even though their degree of job stress seems to be the same. High ERI was associated with existing subjective symptoms and obesity in fixed-term workers. Therefore, understanding the different sources of job stress and their association with health among permanent and fixed-term workers should be considered to prevent further health problems. PMID:21054838
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
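The flavor of a second-order in time dissipative gradient-like system can be sketched on a toy quadratic objective, discretized with a damped semi-implicit (symplectic-style) Euler scheme. This is a hedged illustration of the general idea only, not the authors' scheme for the PDE-constrained problem:

```python
import numpy as np

def damped_symplectic_minimize(grad, x0, eta=1.0, h=0.1, steps=2000):
    """Discretize x'' + eta*x' = -grad(f)(x) with a damped semi-implicit
    Euler scheme: update the velocity first, then the position."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = (v - h * grad(x)) / (1.0 + h * eta)  # implicit treatment of damping
        x = x + h * v
    return x

# toy strictly convex quadratic f(x) = 0.5*||A x - b||^2, minimum at x = [1, 3]
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
grad = lambda x: A.T @ (A @ x - b)
x_star = damped_symplectic_minimize(grad, [0.0, 0.0])
```

The damping term dissipates kinetic energy, so the trajectory settles at the minimizer instead of oscillating indefinitely.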
ERIC Educational Resources Information Center
Rathje, William L.
1989-01-01
Presents a historical perspective of garbage management, including an exploration of what is cited as myths about garbage that guide our thinking in dealing with solid waste disposal problems. A partial list of topics includes quantity of garbage, landfills, toxic wastes, technological fixes, economics, source reduction, resource recovery,…
SPH for impact force and ricochet behavior of water-entry bodies
NASA Astrophysics Data System (ADS)
Omidvar, Pourya; Farghadani, Omid; Nikeghbali, Pooyan
The numerical modeling of fluid interaction with a bouncing body has many applications in science and engineering. In this paper, the problem of water impact of a body on a free surface is investigated, where the fixed ghost boundary condition is added to the open-source code SPHysics2D to rectify the oscillations in pressure distributions obtained with the repulsive boundary condition. First, after introducing the SPH methodology and the boundary condition options, the still water problem is simulated using the two types of boundary conditions. It is shown that the fixed ghost boundary condition gives a better result for the hydrostatic pressure. Then, the dam-break problem, which is a benchmark test case in SPH, is simulated and compared with available data. In order to show the behavior of the hydrostatic forces on bodies, a fixed/floating cylinder is placed on the free surface, looking carefully at the force and heave profiles. Finally, the impact of a body on the free surface is successfully simulated for different impact angles and velocities.
JPL Year 2000 Project. A Project Manager's Observations: Y2k
NASA Technical Reports Server (NTRS)
Mathison, Richard P. (Technical Monitor)
1999-01-01
This paper presents observations from a project manager on the Y2K problem. The topics include: 1) Agenda; 2) Scope; 3) Project Organization; 4) The Fixes; 5) The Toughest Part; 6) Validation versus Time; and 7) Information Sources. This paper is in viewgraph form.
Enhancement of SPES source performances.
Fagotti, E; Palmieri, A; Ren, X
2008-02-01
Installation of the SPES source at LNL was finished in July 2006 and the first beam was extracted in September 2006. Commissioning results confirmed very good performance in terms of extracted current density. Conversely, source reliability was very poor due to glow-discharge phenomena, which were caused by the ion source's axial magnetic field protruding into the high-voltage column. This problem was fixed by replacing the stainless steel plasma electrode support with a ferromagnetic one. This new configuration required us to recalculate the ion source solenoid positions and fields in order to recover the correct resonance pattern. Details of the magnetic simulations and experimental results of high-voltage column shielding are presented.
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE), when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the small effect that the selection of a particular metaheuristic and variations in its operational parameters have on this optimization problem.
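The fixed-evaluation-budget protocol can be mimicked on a toy objective: each solver is charged per call to the objective function, so different runs are comparable at equal computational cost. A minimal simulated annealing sketch with a hard evaluation budget (illustrative only; not the study's EEG objective or parameter settings):

```python
import math, random

def simulated_annealing(f, x0, budget=500, step=0.5, t0=1.0):
    """Minimize f with a hard budget on objective evaluations,
    so different solvers can be compared at equal computational cost."""
    random.seed(0)
    x, fx = x0, f(x0)
    evals = 1
    best, fbest = x, fx
    while evals < budget:
        cand = x + random.uniform(-step, step)  # random local proposal
        fc = f(cand)
        evals += 1
        t = t0 * (1 - evals / budget)           # linear cooling schedule
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc                    # accept (Metropolis rule)
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest, evals

best, fbest, evals = simulated_annealing(lambda x: (x - 2.0) ** 2, 0.0)
```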
Galileo Attitude Determination: Experiences with a Rotating Star Scanner
NASA Technical Reports Server (NTRS)
Merken, L.; Singh, G.
1991-01-01
The Galileo experience with a rotating star scanner is discussed in terms of problems encountered in flight, solutions implemented, and lessons learned. An overview of the Galileo project and the attitude and articulation control subsystem is given, and the star scanner hardware and relevant software algorithms are detailed. The star scanner is the sole source of inertial attitude reference for this spacecraft. Problem symptoms observed in flight are discussed in terms of effects on spacecraft performance and safety. Sources of these problems include contributions from flight software idiosyncrasies and inadequate validation of the ground procedures used to identify target stars for use by the autonomous on-board star identification algorithm. Problem fixes (some already implemented and some only proposed) are discussed. A general conclusion is drawn regarding the inherent difficulty of performing simulation tests to validate algorithms which are highly sensitive to external inputs of statistically 'rare' events.
Efficient dynamic optimization of logic programs
NASA Technical Reports Server (NTRS)
Laird, Phil
1992-01-01
A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.
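A small concrete instance of the restructuring-for-expected-performance idea: if clause i costs c_i to try and succeeds with independent probability p_i, the expected cost of trying clauses in order until one succeeds is minimized by sorting on the ratio c_i/p_i, a classical greedy rule. This simplified model is illustrative only, not the paper's formal setting:

```python
from itertools import permutations

def expected_cost(order, cost, prob):
    """Expected total cost of trying clauses in 'order' until one succeeds."""
    total, p_none = 0.0, 1.0
    for i in order:
        total += p_none * cost[i]   # pay cost[i] only if all earlier clauses failed
        p_none *= 1.0 - prob[i]
    return total

def reorder(cost, prob):
    """Greedy rule: ascending cost/probability ratio (prob[i] > 0 assumed)."""
    return sorted(range(len(cost)), key=lambda i: cost[i] / prob[i])
```

An adjacent-interchange argument shows the rule is optimal: putting clause i before j is better exactly when c_i p_j <= c_j p_i.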
Homogenization of the Brush Problem with a Source Term in L^1
NASA Astrophysics Data System (ADS)
Gaudiello, Antonio; Guibé, Olivier; Murat, François
2017-07-01
We consider a domain which has the form of a brush in 3D or the form of a comb in 2D, i.e. an open set which is composed of cylindrical vertical teeth distributed over a fixed basis. All the teeth have a similar fixed height; their cross sections can vary from one tooth to another and are not supposed to be smooth; moreover the teeth can be adjacent, i.e. they can share parts of their boundaries. The diameter of every tooth is supposed to be less than or equal to ɛ, and the asymptotic volume fraction of the teeth (as ɛ tends to zero) is supposed to be bounded from below away from zero, but no periodicity is assumed on the distribution of the teeth. In this domain we study the asymptotic behavior (as ɛ tends to zero) of the solution of a second order elliptic equation with a zeroth order term which is bounded from below away from zero, when the homogeneous Neumann boundary condition is satisfied on the whole of the boundary. First, we revisit the problem where the source term belongs to L^2. This is a classical problem, but our homogenization result takes place in a geometry which is more general than the ones which have been considered before. Moreover we prove a corrector result which is new. Then, we study the case where the source term belongs to L^1. Working in the framework of renormalized solutions and introducing a definition of renormalized solutions for degenerate elliptic equations where only the vertical derivative is involved (such a definition is new), we identify the limit problem and prove a corrector result.
NASA Astrophysics Data System (ADS)
Shoemaker, Christine; Wan, Ying
2016-04-01
Optimization of nonlinear water resources management problems that involve a mixture of fixed costs (e.g. construction cost for a well) and variable costs (e.g. cost per gallon of water pumped) has not been well addressed, because prior algorithms for the resulting nonlinear mixed-integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed-integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, because of a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed-integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open-source codes in Matlab and Python ("pySOT" on Bitbucket).
UltraPse: A Universal and Extensible Software Platform for Representing Biological Sequences.
Du, Pu-Feng; Zhao, Wei; Miao, Yang-Yang; Wei, Le-Yi; Wang, Likun
2017-11-14
With the avalanche of biological sequences in public databases, one of the most challenging problems in computational biology is to predict their biological functions and cellular attributes. Most of the existing prediction algorithms can only handle fixed-length numerical vectors. Therefore, it is important to be able to represent biological sequences with various lengths using fixed-length numerical vectors. Although several algorithms, as well as software implementations, have been developed to address this problem, these existing programs can only provide a fixed number of representation modes. Every time a new sequence representation mode is developed, a new program will be needed. In this paper, we propose the UltraPse as a universal software platform for this problem. The function of the UltraPse is not only to generate various existing sequence representation modes, but also to simplify all future programming works in developing novel representation modes. The extensibility of UltraPse is particularly enhanced. It allows the users to define their own representation mode, their own physicochemical properties, or even their own types of biological sequences. Moreover, UltraPse is also the fastest software of its kind. The source code package, as well as the executables for both Linux and Windows platforms, can be downloaded from the GitHub repository.
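The core idea of fixed-length representation is easy to illustrate with the simplest mode, amino acid composition: any protein sequence maps to a 20-dimensional frequency vector regardless of its length. This is a toy sketch of the concept only; UltraPse itself supports many richer, user-definable modes.

```python
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def composition_vector(seq):
    """Map a protein sequence of any length to a fixed-length
    20-dimensional vector of residue frequencies."""
    counts = {a: 0 for a in ALPHABET}
    for residue in seq:
        if residue in counts:
            counts[residue] += 1
    n = max(1, sum(counts.values()))  # guard against empty sequences
    return [counts[a] / n for a in ALPHABET]
```

Because the output length is fixed by the alphabet rather than the sequence, vectors from sequences of very different lengths can be fed to the same prediction algorithm.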
Yang, Jiao-lan; Chen, Dong-qing; Li, Shu-min; Yue, Yin-ling; Jin, Xin; Zhao, Bing-cheng; Ying, Bo
2010-02-05
The fluorosis derived from coal burning is a very serious problem in China. By using fluorine-fixing technology during coal burning, we are able to reduce the release of fluorides from coal at the source, in order to reduce pollution of the surrounding environment by coal-burning pollutants as well as decrease the intake and accumulation of fluorine in the human body. The aim of this study was to conduct a pilot experiment on the efficiency of a calcium-based fluorine-fixing material during coal burning, to demonstrate and promote the technology based on laboratory research. A proper amount of calcium-based fluorine sorbent was added to high-fluorine coal to form briquettes, so that the fluorine in high-fluorine coal can be fixed in the coal slag and its release into the atmosphere reduced. We measured the various components of the briquettes and the fluorine in the coal slag, as well as the concentrations of indoor air pollutants, including fluoride, sulfur dioxide and respirable particulate matter (RPM), and evaluated the fluorine-fixing efficiency of the calcium-based fluorine sorbents and the levels of indoor air pollutants. Pilot experiments on fluorine-fixing efficiency during coal burning, as well as its demonstration and promotion, were carried out separately in Guiding and Longli Counties of Guizhou Province, two areas with coal-burning fluorosis problems. When the calcium-based fluorine sorbent mixed coal was made into honeycomb briquettes, the average fluorine-fixing ratio in the pilot experiment was 71.8%. When the calcium-based fluorine-fixing bituminous coal was made into coal balls, the average fluorine-fixing ratio was 77.3%. The indoor air concentrations of fluoride, sulfur dioxide and PM10 decreased significantly. There was a 10% increase in the cost of briquettes due to the addition of the calcium-based fluorine sorbent.
The preparation process of the calcium-based fluorine-fixing briquette is simple, the briquettes burn readily, and the process is applicable to regions with abundant bituminous coal. As a small-scale application, villagers may make fluorine-fixing coal balls or briquettes by themselves, achieving optimum fluorine-fixing efficiency, reducing indoor air pollutants, and providing environmental and social benefits.
Ambiguity Resolution for Phase-Based 3-D Source Localization under Fixed Uniform Circular Array.
Chen, Xin; Liu, Zhen; Wei, Xizhang
2017-05-11
Under a fixed uniform circular array (UCA), 3-D parameter estimation of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in a recently proposed phase-based algorithm. In this paper, by using the centro-symmetry of a UCA with an even number of sensors, the source's angles and range can be decoupled, and a novel algorithm named subarray grouping and ambiguity searching (SGAS) is introduced to resolve the angle ambiguity. In the SGAS algorithm, each subarray, formed by two pairs of centro-symmetric sensors, obtains a batch of results under different ambiguities, and by searching for the nearest values among subarrays, which always correspond to the correct ambiguity, rough angle estimation with no ambiguity is realized. Then, the unambiguous angles are employed to resolve the phase ambiguity in a phase-based 3-D parameter estimation algorithm, and the source's range, as well as more precise angles, can be obtained. Moreover, to improve the practical performance of SGAS, the optimal structure of the subarrays and subarray selection criteria are further investigated. Simulation results demonstrate the satisfactory performance of the proposed method in 3-D source localization.
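The ambiguity-searching step can be illustrated in one dimension: a wrapped phase measurement is consistent with a whole family of candidates φ + 2πn, and the candidates from two differently scaled measurements that lie nearest each other normally identify the correct integers. A toy sketch of this nearest-value principle, not the actual SGAS subarray geometry:

```python
import math

def unwrap_by_nearest(phi1, phi2, scale1, scale2, n_max=5):
    """Each wrapped phase phi_k implies candidates (phi_k + 2*pi*n) / scale_k
    for the underlying parameter. Return (gap, u1, u2) for the candidate
    pair with the smallest disagreement, which normally pins down the
    correct integer ambiguities."""
    best = None
    for n1 in range(-n_max, n_max + 1):
        u1 = (phi1 + 2 * math.pi * n1) / scale1
        for n2 in range(-n_max, n_max + 1):
            u2 = (phi2 + 2 * math.pi * n2) / scale2
            gap = abs(u1 - u2)
            if best is None or gap < best[0]:
                best = (gap, u1, u2)
    return best
```

With incommensurate scales (here 9 and 7), only the correct ambiguity pair makes the two candidate values coincide.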
A comprehensive approach to reactive power scheduling in restructured power systems
NASA Astrophysics Data System (ADS)
Shukla, Meera
Financial constraints, regulatory pressure, and the need for more economical power transfers have increased the loading of interconnected transmission systems. As a consequence, power systems have been operated close to their maximum power transfer capability limits, making them more vulnerable to voltage instability events. The problem of voltage collapse, characterized by a severe local voltage depression, is generally believed to be associated with inadequate VAr support at key buses. The goal of reactive power planning is to maintain a high level of voltage security through the installation of properly sized and located reactive sources and their optimal scheduling. In the case of vertically operated power systems, the reactive requirement of the system is normally satisfied by using all of its reactive sources. But in different scenarios of restructured power systems, one may consider a fixed amount of reactive power exchange through tie lines. The reviewed literature suggests a need for optimal scheduling of reactive power generation under fixed inter-area reactive power exchange. The present work proposed a novel approach for reactive power source placement and a novel approach for its scheduling. The VAr source placement technique was based on the property of system connectivity. This is followed by the development of an optimal reactive power dispatch formulation which facilitates fixed inter-area tie-line reactive power exchange. This formulation used a Line Flow-Based (LFB) model of power flow analysis, and determined the generation schedule for fixed inter-area tie-line reactive power exchange. Different operating scenarios were studied to analyze the impact of the VAr management approach for vertically operated and restructured power systems. The system loadability, losses, generation, and the cost of generation were the performance measures used to study the impact of the VAr management strategy. The novel approach was demonstrated on the IEEE 30-bus system.
Generation and Radiation of Acoustic Waves from a 2D Shear Layer
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2000-01-01
A thin free shear layer containing an inflection point in the mean velocity profile is inherently unstable. Disturbances in the flow field can excite the unstable behavior of a shear layer, if the appropriate combination of frequencies and shear layer thicknesses exists, causing instability waves to grow. For other combinations of frequencies and thicknesses, these instability waves remain neutral in amplitude or decay in the downstream direction. A growing instability wave radiates noise when its phase velocity becomes supersonic relative to the ambient speed of sound. This occurs primarily when the mean jet flow velocity is supersonic. Thus, the small disturbances in the flow, which themselves may generate noise, have created an additional noise source. The purpose of this problem is to test the ability of computational aeroacoustics (CAA) to compute this additional noise source. The problem is idealized such that the exciting disturbance is a fixed, known acoustic source pulsating at a single frequency. The source is placed inside a 2D jet with parallel flow; hence, the shear layer thickness is constant. With the source amplitude small enough, the problem is governed by a set of linear equations given in dimensional form.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in both criticality and fixed source modes, but fixed source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
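The central trick — assigning fixed-length codewords to quantizer outputs and entropy-coding their bits independently — can be sketched by estimating per-position bit probabilities; the idealized coded rate is then the sum of the bit positions' binary entropies. This sketch computes that idealized rate rather than implementing a full arithmetic coder:

```python
import math

def bitwise_rate(symbols, width):
    """Treat each symbol as a fixed-length 'width'-bit codeword and
    estimate the ideal coded bits per symbol when each bit position
    is arithmetic-coded independently with its empirical probability."""
    n = len(symbols)
    rate = 0.0
    for pos in range(width):
        ones = sum((s >> pos) & 1 for s in symbols)
        p = ones / n
        if 0.0 < p < 1.0:  # binary entropy of this bit position
            rate += -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return rate
```

For sources concentrated on small codeword values, the high-order bit positions are almost always zero, so they contribute almost nothing to the coded rate, which is where the compression comes from.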
From psychiatric ward to the streets and shelters.
Forchuk, C; Russell, G; Kingston-Macclure, S; Turner, K; Dill, S
2006-06-01
The issue of discharge from a hospital ward to the streets is seldom explored in the literature, but is all too commonly experienced by individuals with psychiatric disorders. The Community University Research Alliance on Housing and Mental Health sought to determine how frequently people were discharged from psychiatric wards to shelters or the street in London, Ontario, Canada. A number of data sources were accessed to determine instances of discharges to shelters or the street. Data were analysed to determine the number of moves occurring between hospital and shelter or no fixed address. All datasets revealed that discharge to shelters or the street occurred regularly, and all data sources likely underestimate the extent of the problem. This type of discharge occurred at least 194 times in 2002 in London, Ontario, Canada. Policies that contribute to this problem include income-support policies, the reduction in psychiatric hospital beds and the lack of community supports. Without recognition, this problem is at risk of remaining invisible with no further improvements to the situation.
Hewitt, Tanya Anne; Chreim, Samia
2015-05-01
Practitioners frequently encounter safety problems that they themselves can resolve on the spot. We ask: when faced with such a problem, do practitioners fix it in the moment and forget about it, or do they fix it in the moment and report it? We consider factors underlying these two approaches. We used a qualitative case study design employing in-depth interviews with 40 healthcare practitioners in a tertiary care hospital in Ontario, Canada. We conducted a thematic analysis, and compared the findings with the literature. 'Fixing and forgetting' was the main choice that most practitioners made in situations where they faced problems that they themselves could resolve. These situations included (A) handling near misses, which were seen as unworthy of reporting since they did not result in actual harm to the patient, (B) prioritising solving individual patients' safety problems, which were viewed as unique or one-time events and (C) encountering re-occurring safety problems, which were framed as inevitable, routine events. In only a few instances was 'fixing and reporting' mentioned as a way that the providers dealt with problems that they could resolve. We found that generally healthcare providers do not prioritise reporting if a safety problem is fixed. We argue that fixing and forgetting patient safety problems encountered may not serve patient safety as well as fixing and reporting. The latter approach aligns with recent calls for patient safety to be more preventive. We consider implications for practice.
Satellite sound broadcasting system, portable reception
NASA Technical Reports Server (NTRS)
Golshan, Nasser; Vaisnys, Arvydas
1990-01-01
Studies are underway at JPL in the emerging area of Satellite Sound Broadcast Service (SSBS) for direct reception by low-cost portable, semi-portable, mobile, and fixed radio receivers. This paper addresses portable reception of digital broadcasts of monophonic audio with source material band-limited to 5 kHz (source audio comparable to commercial AM broadcasting). The proposed system provides transmission robustness, uniformity of performance over the coverage area, and excellent frequency reuse. Propagation problems associated with indoor portable reception are considered in detail, and innovative antenna concepts are suggested to mitigate these problems. It is shown that, with the proper combination of technologies, a single medium-power satellite can provide substantial direct satellite audio broadcast capability to CONUS in the UHF or L bands, for high-quality portable indoor reception by low-cost radio receivers.
Sources of spurious force oscillations from an immersed boundary method for moving-body problems
NASA Astrophysics Data System (ADS)
Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo
2011-04-01
When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes a fluid point due to the body motion. The addition of a mass source/sink together with momentum forcing, proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150], reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid due to the body motion. The magnitude of the velocity discontinuity decreases with decreasing grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the time step size.
A survey of methods of feasible directions for the solution of optimal control problems
NASA Technical Reports Server (NTRS)
Polak, E.
1972-01-01
Three methods of feasible directions for optimal control are reviewed. These methods are an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a Zoutendijk method. The categories of continuous optimal control problems are: (1) fixed time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed time problems with inequality state space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This allows taking into account the correlations among resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validation of its results.
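The propagation-of-distributions step can be sketched as follows. The resistance values, uncertainties, and correlation coefficient below are illustrative placeholders, not the paper's 25 Ω SPRT data: correlated input resistances are drawn from a multivariate Gaussian and pushed through the measurement model, here simply the resistance ratio W.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (not the paper's data): SPRT resistance at the unknown
# temperature and at a fixed point, with correlated standard uncertainties.
mean = np.array([35.21, 25.55])          # ohms: R(T), R(fixed point)
u = np.array([2.0e-4, 1.5e-4])           # standard uncertainties (ohms)
rho = 0.8                                 # assumed correlation coefficient
cov = np.array([[u[0]**2,       rho*u[0]*u[1]],
                [rho*u[0]*u[1], u[1]**2      ]])

# Propagation of distributions: draw correlated inputs, push them through
# the measurement model (the resistance ratio W), summarize the output.
N = 200_000
R = rng.multivariate_normal(mean, cov, size=N)
W = R[:, 0] / R[:, 1]
print(f"W = {W.mean():.6f} +/- {W.std(ddof=1):.2e}")
```

With a positive correlation, much of the common variation cancels in the ratio, so u(W) comes out smaller than an uncorrelated propagation would give; dropping the off-diagonal terms of `cov` reproduces the uncorrelated case.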
Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)
NASA Astrophysics Data System (ADS)
Kasibhatla, P.
2004-12-01
In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
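A minimal sketch of the MCMC approach, under stand-in assumptions: a toy 5-observation, 2-source linear transport matrix, and a Laplace prior chosen precisely because the resulting posterior has no closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear "transport": 2 surface sources seen at 5 observation sites.
H = rng.uniform(0.1, 1.0, size=(5, 2))            # stand-in Jacobian
s_true = np.array([3.0, 1.5])
y = H @ s_true + rng.normal(0, 0.05, 5)           # noisy observations

def log_post(s, sigma=0.05, tau=5.0):
    # Gaussian likelihood + Laplace prior: a non-Gaussian combination with
    # no closed-form posterior, which is exactly where MCMC earns its keep.
    r = y - H @ s
    return -0.5 * np.sum(r**2) / sigma**2 - np.sum(np.abs(s)) / tau

# Random-walk Metropolis sampler
s, lp, chain = np.zeros(2), log_post(np.zeros(2)), []
for it in range(20_000):
    prop = s + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        s, lp = prop, lp_prop
    if it >= 5_000:                                # discard burn-in
        chain.append(s)
chain = np.array(chain)
print("posterior mean:", chain.mean(axis=0), "true:", s_true)
```

The chain's moments (mean, spread, quantiles) summarize the a posteriori pdf without ever requiring it in closed form; swapping in a different prior only changes `log_post`.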
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; LeMaster, Daniel A.
2012-06-01
Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.
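A heavily simplified sketch of the scene-based idea, not the authors' algorithm: if platform motion makes every pixel see the same scene statistics over time, the per-pixel temporal mean isolates an offset-only FPN estimate (the constant-statistics assumption).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: a larger static scene viewed through a moving 64x64
# window, plus a per-pixel offset fixed-pattern noise (FPN).
H, W, T = 64, 64, 200
scene = rng.uniform(0, 1, (H + 20, W + 20))
fpn = rng.normal(0, 0.1, (H, W))                   # the pattern to estimate

frames = np.empty((T, H, W))
for t in range(T):
    dy, dx = rng.integers(0, 20, size=2)           # platform motion per frame
    frames[t] = scene[dy:dy + H, dx:dx + W] + fpn

# Constant-statistics idea: with enough motion, every pixel sees the same
# scene statistics, so the per-pixel temporal mean isolates the offset FPN.
offset_est = frames.mean(axis=0)
offset_est -= offset_est.mean()                    # zero-mean: preserve radiometry
corrected = frames - offset_est

err = offset_est - (fpn - fpn.mean())              # pattern left after correction
print(f"FPN rms {fpn.std():.3f} -> residual rms {err.std():.3f}")
```

Removing only the zero-mean part of the estimate is one way to preserve overall radiometry; the residual shrinks as the number of temporal samples grows, which mirrors the low-sample-count requirement discussed above.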
Reactor Application for Coaching Newbies
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-06-17
RACCOON is a MOOSE-based reactor physics application designed to engage undergraduate and first-year graduate students. The code contains capabilities to solve the multigroup neutron diffusion equation in eigenvalue and fixed source form and will soon provide simple thermal feedback. These capabilities are sufficient to solve example problems found in Duderstadt & Hamilton (the typical textbook of senior-level reactor physics classes). RACCOON does not contain any advanced capabilities as found in YAK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moussa, Jonathan E.
2013-05-13
This piece of software is a new feature implemented inside an existing open-source library. Specifically, it is a new implementation of a density functional (HSE, short for Heyd-Scuseria-Ernzerhof) for a repository of density functionals, the libxc library. It fixes some numerical problems with existing implementations, as outlined in a scientific paper recently submitted for publication. Density functionals are components of electronic structure simulations, which model properties of electrons inside molecules and crystals.
Radiation Source Mapping with Bayesian Inverse Methods
Hykes, Joshua M.; Azmy, Yousry Y.
2017-03-22
In this work, we present a method to map the spectral and spatial distributions of radioactive sources using a limited number of detectors. Locating and identifying radioactive materials is important for border monitoring, in accounting for special nuclear material in processing facilities, and in cleanup operations following a radioactive material spill. Most methods to analyze these types of problems make restrictive assumptions about the distribution of the source. In contrast, the source mapping method presented here allows an arbitrary three-dimensional distribution in space and a gamma peak distribution in energy. To apply the method, the problem is cast as an inverse problem where the system’s geometry and material composition are known and fixed, while the radiation source distribution is sought. A probabilistic Bayesian approach is used to solve the resulting inverse problem since the system of equations is ill-posed. The posterior is maximized with a Newton optimization method. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint, discrete ordinates flux solutions, obtained in this work by the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes form the linear mapping from the state space to the response space. The test of the method’s success is simultaneously locating a set of ¹³⁷Cs and ⁶⁰Co gamma sources in a room. This test problem is solved using experimental measurements that we collected for this purpose. Because of the weak sources available for use in the experiment, some of the expected photopeaks were not distinguishable from the Compton continuum. However, by supplanting 14 flawed measurements (out of a total of 69) with synthetic responses computed by MCNP, the proof-of-principle source mapping was successful.
The locations of the sources were predicted within 25 cm for two of the sources and 90 cm for the third, in a room with an ~4 x 4 m floor plan. Finally, the predicted source intensities were within a factor of ten of their true values.
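The Bayesian machinery described above can be sketched on a toy linear problem. Here the adjoint-flux response matrix is replaced by a random stand-in, and because both likelihood and prior are taken Gaussian, the log-posterior is quadratic, so a single Newton step (the regularized normal equations) yields the MAP source map and the inverse Hessian gives the confidence estimates.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in: 69 detector responses, 30 source voxels. A plays the role
# of the adjoint-flux mapping from source space to response space.
n_det, n_src = 69, 30
A = rng.uniform(0, 1, (n_det, n_src))
s_true = np.zeros(n_src)
s_true[[5, 17, 26]] = [4.0, 2.5, 1.0]              # three hidden sources
d = A @ s_true + rng.normal(0, 0.05, n_det)        # noisy measurements

# Gaussian likelihood + Gaussian prior => quadratic log-posterior, so one
# Newton step (the regularized normal equations) reaches the MAP exactly.
sig, tau = 0.05, 1.0
Hess = A.T @ A / sig**2 + np.eye(n_src) / tau**2   # posterior Hessian
s_map = np.linalg.solve(Hess, A.T @ d / sig**2)

u = np.sqrt(np.diag(np.linalg.inv(Hess)))          # per-voxel confidence
print("strongest voxels:", sorted(np.argsort(s_map)[-3:].tolist()))
```

With a non-Gaussian likelihood (e.g. Poisson counting statistics, as in the real detector problem) the posterior is no longer quadratic and the Newton step must be iterated, but the structure of the update is the same.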
Parameter estimation for slit-type scanning sensors
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Rolfe, E. G.
1981-01-01
The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
Fugazzotto, P A; Kirsch, A; Ackermann, K L; Neuendorff, G
1999-01-01
Numerous problems have been reported following various therapies used to attach natural teeth to implants beneath a fixed prosthesis. This study documents the results of 843 consecutive patients treated with 1,206 natural tooth/implant-supported prostheses utilizing 3,096 screw-fixed attachments. After 3 to 14 years in function, only 9 intrusion problems were noted. All problems were associated with fractured or lost screws. This report demonstrates the efficacy of such a treatment approach when a natural tooth/implant-supported fixed prosthesis is contemplated.
Meshless method for solving fixed boundary problem of plasma equilibrium
NASA Astrophysics Data System (ADS)
Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi
2015-07-01
This study solves the Grad-Shafranov equation with a fixed plasma boundary by utilizing a meshless method for the first time. Previous studies have utilized the finite element method (FEM) to solve an equilibrium inside the fixed separatrix. In order to avoid the difficulties of FEM (such as mesh generation, coding difficulty, and expensive calculation cost), this study focuses on meshless methods, especially the RBF-MFS and Kansa's method, to solve the fixed boundary problem. The results showed that the CPU time of the meshless methods was ten to one hundred times shorter than that of FEM for the same accuracy.
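Kansa's method can be illustrated on a one-dimensional model problem; this sketch is not the paper's Grad-Shafranov solver. Multiquadric basis functions are collocated on scattered nodes, with PDE rows at interior nodes and boundary-condition rows at the endpoints, and no mesh appears anywhere.

```python
import numpy as np

# Kansa's (unsymmetric RBF collocation) method on a 1-D model problem:
# u'' = f on (0,1), u(0) = u(1) = 0, manufactured so u(x) = sin(pi x).
n, c = 25, 0.1                           # nodes, multiquadric shape parameter
x = np.linspace(0.0, 1.0, n)
f = lambda t: -np.pi**2 * np.sin(np.pi * t)

phi = lambda t, tj: np.sqrt((t - tj)**2 + c**2)      # multiquadric RBF
phi_xx = lambda t, tj: c**2 / phi(t, tj)**3          # its exact 2nd derivative

T, TJ = np.meshgrid(x, x, indexing="ij")
A = phi_xx(T, TJ)                        # PDE rows at every node...
A[0, :] = phi(x[0], x)                   # ...replaced by boundary rows
A[-1, :] = phi(x[-1], x)
b = f(x)
b[0] = b[-1] = 0.0                       # Dirichlet data

lam = np.linalg.solve(A, b)              # collocation: one dense solve

xe = np.linspace(0.0, 1.0, 101)
u = phi(xe[:, None], x[None, :]) @ lam
err = np.max(np.abs(u - np.sin(np.pi * xe)))
print(f"max error on [0,1]: {err:.2e}")
```

The shape parameter c trades accuracy against conditioning of the dense collocation matrix, which is the practical tuning knob of Kansa-type methods; the two-dimensional Grad-Shafranov case replaces `phi_xx` with the RBF Laplacian but is otherwise structurally identical.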
Placing an upper limit on cryptic marine sulphur cycling.
Johnston, D T; Gill, B C; Masterson, A; Beirne, E; Casciotti, K L; Knapp, A N; Berelson, W
2014-09-25
A quantitative understanding of sources and sinks of fixed nitrogen in low-oxygen waters is required to explain the role of oxygen-minimum zones (OMZs) in controlling the fixed nitrogen inventory of the global ocean. Apparent imbalances in geochemical nitrogen budgets have spurred numerous studies to measure the contributions of heterotrophic and autotrophic N₂-producing metabolisms (denitrification and anaerobic ammonia oxidation, respectively). Recently, 'cryptic' sulphur cycling was proposed as a partial solution to the fundamental biogeochemical problem of closing marine fixed-nitrogen budgets in intensely oxygen-deficient regions. The degree to which the cryptic sulphur cycle can fuel a loss of fixed nitrogen in the modern ocean requires the quantification of sulphur recycling in OMZ settings. Here we provide a new constraint for OMZ sulphate reduction based on isotopic profiles of oxygen (¹⁸O/¹⁶O) and sulphur (³³S/³²S, ³⁴S/³²S) in seawater sulphate through oxygenated open-ocean and OMZ-bearing water columns. When coupled with observations and models of sulphate isotope dynamics and data-constrained model estimates of OMZ water-mass residence time, we find that previous estimates for sulphur-driven remineralization and loss of fixed nitrogen from the oceans are near the upper limit for what is possible given in situ sulphate isotope data.
PDQ-8 reference manual (LWBR development program)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfiefer, C J; Spitz, C J
1978-05-01
The PDQ-8 program is designed to solve the neutron diffusion, depletion problem in one, two, or three dimensions on the CDC-6600 and CDC-7600 computers. The three dimensional spatial calculation may be either explicit or discontinuous trial function synthesis. Up to five lethargy groups are permitted. The fast group treatment may be simplified P(3), and the thermal neutrons may be represented by a single group or a pair of overlapping groups. Adjoint, fixed source, one iteration, additive fixed source, eigenvalue, and boundary value calculations may be performed. The HARMONY system is used for cross section variation and generalized depletion chain solutions. The depletion is a combination gross block depletion for all nuclides as well as a fine block depletion for a specified subset of the nuclides. The geometries available include rectangular, cylindrical, spherical, hexagonal, and a very general quadrilateral geometry with diagonal interfaces. All geometries allow variable mesh in all dimensions. Various control searches as well as temperature and xenon feedbacks are provided. The synthesis spatial solution time is dependent on the number of trial functions used and the number of gross blocks. The PDQ-8 program is used at Bettis on a production basis for solving diffusion-depletion problems. The report describes the various features of the program and then separately describes the input required to utilize these features.
NASA Technical Reports Server (NTRS)
Fieno, D.
1972-01-01
The perturbation theory for fixed sources was applied to radiation shielding problems to determine changes in neutron and gamma ray doses due to changes in various shield layers. For a given source and detector position the perturbation method enables dose derivatives due to all layer changes to be determined from one forward and one inhomogeneous adjoint calculation. The direct approach requires two forward calculations for the derivative due to a single layer change. Hence, the perturbation method for obtaining dose derivatives permits an appreciable savings in computation for a multilayered shield. For an illustrative problem, a comparison was made of the fractional change in the dose per unit change in the thickness of each shield layer as calculated by perturbation theory and by successive direct calculations; excellent agreement was obtained between the two methods.
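The computational saving can be demonstrated on a toy discretized model. The sketch below is an assumption-laden stand-in for the transport calculation: the shield is a random linear system A(p) x = s, and one forward plus one adjoint solve yields the dose derivative with respect to every layer parameter at once, matching per-parameter finite differences.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy discretized "shield": dose D(p) = d @ x with A(p) x = s, where each
# parameter p_i scales one layer matrix B_i (a stand-in for layer thickness).
n, m = 40, 3
A0 = 4.0 * np.eye(n) + rng.normal(0, 0.1, (n, n))
B = rng.normal(0, 0.1, (m, n, n))
s = rng.uniform(0, 1, n)                  # fixed source
d = rng.uniform(0, 1, n)                  # detector weighting
p = np.array([0.5, 1.0, 1.5])

def solve(pv):
    return np.linalg.solve(A0 + np.tensordot(pv, B, axes=1), s)

# Perturbation-theory route: ONE forward and ONE adjoint solve give the
# dose derivative with respect to every layer parameter at once,
# via dD/dp_i = -y @ B_i @ x with A(p)^T y = d.
x = solve(p)
y = np.linalg.solve((A0 + np.tensordot(p, B, axes=1)).T, d)   # adjoint
grad_adj = np.array([-(y @ (B[i] @ x)) for i in range(m)])

# Direct route: one extra forward solve per layer (finite differences).
eps = 1e-6
grad_fd = np.array([(d @ solve(p + eps * np.eye(m)[i]) - d @ x) / eps
                    for i in range(m)])
print(np.allclose(grad_adj, grad_fd, rtol=1e-4))   # True
```

With m layers, the direct route costs m extra forward solves while the adjoint route costs one, which is the "appreciable savings" noted above for multilayered shields.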
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
A restricted Steiner tree problem is solved by Geometric Method II
NASA Astrophysics Data System (ADS)
Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu
2013-03-01
The minimum Steiner tree problem has a wide application background, in areas such as transportation systems, communication networks, pipeline design, and VLSI. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to study restricted special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L and we seek a vertex on L such that the length of the tree is minimal. By the definition and the complexity of the Steiner tree problem, the complexity of this restricted problem is also NP-complete. In Part I, we considered the restricted Steiner tree problem with two fixed vertices. Naturally, we now consider the restricted Steiner tree problem with three fixed vertices, and we again use the geometric method to solve the problem.
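The simplest restricted instance already shows the structure of the search along L. This is a sketch, not the paper's three-vertex geometric construction: with the fixed vertices on one side of L and no additional Steiner points, the total length of the star joining a point on L to all fixed vertices is convex in that point's coordinate, so ternary search finds the optimum.

```python
import math

# Fixed vertices, all strictly above the line L (taken as the x-axis).
pts = [(0.0, 1.0), (4.0, 2.0), (2.0, 3.0)]

def length(x):
    """Total length of the star joining point (x, 0) on L to every vertex."""
    return sum(math.hypot(x - px, py) for px, py in pts)

# length(x) is a sum of convex functions, hence convex in x:
# ternary search narrows the bracket around the minimizer.
lo, hi = -10.0, 10.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if length(m1) < length(m2):
        hi = m2
    else:
        lo = m1
x_opt = (lo + hi) / 2
print(f"optimal point on L: ({x_opt:.4f}, 0), total length {length(x_opt):.4f}")
```

Allowing extra Steiner points (120-degree junctions) shortens the tree further, which is where the geometric construction of the paper takes over from this convex one-dimensional search.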
Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M
2016-08-01
One of the most commonly used therapies for patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for placement of the DBS electrodes has become an intensive research area. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validated using a priori information about the location of the source, that is, the STN. Second, we investigate whether EEG or MEG is better suited to mapping DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
NASA Astrophysics Data System (ADS)
Sakamoto, Hiroki; Yamamoto, Toshihiro
2017-09-01
This paper presents improvement and performance evaluation of the "perturbation source method", which is one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can be easily extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed source calculation for the unperturbed system. A set of perturbation particles is started at the collision points in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.
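The correlated sampling baseline mentioned above can be sketched on a toy slab-transmission problem. This is an illustration of that baseline, not the paper's perturbation source method: reusing the same random numbers for the unperturbed and perturbed systems makes the difference estimator far less noisy than differencing two independent runs.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy problem: transmission through a slab of thickness t; a history
# transmits if its sampled free path -ln(xi)/sigma exceeds t.
N, t = 100_000, 2.0
sig, dsig = 1.0, 0.05                   # cross section and its perturbation

xi = rng.uniform(size=N)
trans = -np.log(xi) / sig > t
trans_pert = -np.log(xi) / (sig + dsig) > t        # SAME random numbers

# Correlated sampling: per-history differences are mostly zero.
d_corr = (trans_pert.astype(float) - trans.astype(float)).mean()

xi2 = rng.uniform(size=N)                           # fresh random numbers
d_indep = (-np.log(xi2) / (sig + dsig) > t).mean() - trans.mean()

exact = np.exp(-(sig + dsig) * t) - np.exp(-sig * t)
print(f"exact {exact:+.5f}  correlated {d_corr:+.5f}  independent {d_indep:+.5f}")
```

Because the per-history correlated difference is nonzero only for the few histories that change outcome, its variance is far below that of the independent-run difference, which is the effect the paper's comparisons quantify.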
Dismantling of Radium-226 Coal Level Gauges: Encountered Problems and How to Solve
DOE Office of Scientific and Technical Information (OSTI.GOV)
Punnachaiya, M.; Nuanjan, P.; Moombansao, K.
2006-07-01
This paper describes techniques for dismantling disused sealed Radium-226 (Ra-226) coal level gauges for which the source specifications and documents were not available, including problems that occurred during the dismantling stage and the decisions made to overcome those obstacles. The 2 mCi (20 pieces), 6 mCi (20 pieces) and 6.6 mCi (30 pieces) hemispherical, lead-filled Ra-226 coal level gauges were used in industrial applications for electric power generation. All sources needed to be dismantled for further conditioning as requested by the International Atomic Energy Agency (IAEA). One of the 2 mCi Ra-226 sources was dismantled under the supervision of an IAEA expert. Before the conditioning period, each of the 6 mCi and 6.6 mCi sources was dismantled and inspected. It was found that the coal level gauges had two different source types: the 2 mCi and 6.6 mCi gauges used a sealed cylindrical source (2 cm diameter x 2 cm length) locked with a spring in a lead housing, while the 6 mCi gauge used a capsule embedded inside a source holder stud assembly in a lead-filled housing. Dismantling the Ra-226 coal level gauges comprised 6 operational steps: confirmation of the surface dose rate for each source activity, calculation of working time within the effective occupational dose limit, cutting the weld of the lead container with an electrical blade, confirmation of the embedded Ra-226 capsule size using a radiation scanning technique and gamma radiography, automatic sawing of the source holder stud assembly, and transferring the source to storage in a lead safe box. The embedded length of the 6 mCi Ra-226 capsule within its 2 cm diameter x 14.7 cm length stud assembly was identified; the results from the scanning technique and radiographic film revealed an embedded source length of about 2 cm, so all the 6 mCi sources were safely cut at 3 cm using the automatic saw.
Another problem occurred when one of the 6.6 mCi spring-type sources became stuck inside its housing because the spring was deformed and there had previously been a leak in the inner source housing: during manufacturing, the lead fill for shielding passed through this small hole and fixed the deformed spring together with the source. The circular surface of the inner hole was measured and slowly drilled at a diameter of 2.2 cm behind the shielding until the spring and the fixed lead sheet were cut, after which the source could finally be hammered out. The surface dose rate of the coal level gauges before weld cutting was 10-15 mR/hr and the highest dose rate at the position of the weld cutter was 2.5 mR/hr. The total time for each weld cutting and automatic sawing was 2-3 minutes and 1 minute, respectively. Each source was individually and safely transferred to storage in a lead safe box using a 1-meter-long tong and a light container with a 1-meter-long handle. The total time for dismantling the 70 Ra-226 pieces, including the encountered problems and their troubleshooting, was 4 days of operation, in which the total doses received by the 18 operators ranged from 1 to 38 μSv. The dismantling team safely completed the activities within the effective dose limit for occupational exposure of 20 mSv/year (80 μSv/day).
Finite-Time and Fixed-Time Cluster Synchronization With or Without Pinning Control.
Liu, Xiwei; Chen, Tianping
2018-01-01
In this paper, the finite-time and fixed-time cluster synchronization problem for complex networks with or without pinning control is discussed. Finite-time (or fixed-time) synchronization has been a hot topic in recent years; it means that the network can achieve synchronization in finite time, where the settling time depends on the initial values for finite-time synchronization (or is bounded by a constant for any initial values for fixed-time synchronization). To realize finite-time and fixed-time cluster synchronization, some simple distributed protocols with or without pinning control are designed and their effectiveness is rigorously proved. Several sufficient criteria are also obtained to clarify the effects of coupling terms on finite-time and fixed-time cluster synchronization. In particular, when the cluster number is one, cluster synchronization becomes the complete synchronization problem; when the network has only one node, the coupling term between nodes disappears and the synchronization problem becomes the simplest master-slave case, which also includes the stability problem for nonlinear systems such as neural networks. All these cases are also discussed. Finally, numerical simulations are presented to demonstrate the correctness of the obtained theoretical results.
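A minimal simulation of the finite-time idea, using the standard fractional-power coupling rather than the paper's exact controllers: on a ring network, the rule x_i' = sum_j a_ij sign(x_j - x_i)|x_j - x_i|^alpha with 0 < alpha < 1 drives the spread of the states to zero in finite time.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ring of 6 coupled nodes under a fractional-power consensus protocol.
n, alpha, dt = 6, 0.5, 1e-3
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0   # ring coupling weights

x = rng.uniform(-5, 5, n)
t = 0.0
while np.ptp(x) > 1e-4 and t < 50.0:
    diff = x[None, :] - x[:, None]                # diff[i, j] = x_j - x_i
    # x_i' = sum_j a_ij * sign(x_j - x_i) * |x_j - x_i|**alpha, 0 < alpha < 1
    x = x + dt * (A * np.sign(diff) * np.abs(diff)**alpha).sum(axis=1)
    t += dt
print(f"spread {np.ptp(x):.1e} at t = {t:.2f}")   # reached in finite time
```

With alpha = 1 this reduces to ordinary linear consensus, which only converges asymptotically; the fractional power is what gives a finite settling time, and the fixed-time variants add terms that bound that time independently of the initial values.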
Father absence due to migration and child illness in rural Mexico.
Schmeer, Kammi
2009-10-01
Little research to date has assessed the importance of the presence of fathers in the household for protecting child health, particularly in developing country contexts. Although divorce and non-marital childbearing are low in many developing countries, migration is a potentially important source of father absence that has yet to be studied in relation to child health. This study utilizes prospective, longitudinal data from Mexico to assess whether father absence due to migration is associated with increased child illness in poor, rural communities. Rural Mexico provides a setting where child illness is related to more serious health problems, and where migration is an important source of father absence. Both state- and individual-level fixed effects regression analyses are used to estimate the relationship between father absence due to migration and child illness while controlling for unobserved contextual and individual characteristics. The state-level models illustrate that the odds of children being ill are 39% higher for any illness and 51% higher for diarrhea when fathers are absent compared with when fathers are present in the household. The individual-level fixed effects models support these findings, indicating that, in the context of rural Mexico, fathers may be important sources of support for ensuring the healthy development of young children.
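The core logic of individual fixed-effects estimation, differencing out time-invariant unobservables that correlate with the regressor, can be sketched with simulated linear data. The study itself fits logistic models of child illness; everything below, including the coefficient values, is an illustrative assumption, not the paper's data or specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_obs = 50, 8
unit = np.repeat(np.arange(n_units), n_obs)

alpha = rng.normal(0, 2, n_units)                      # unobserved unit effects
x = rng.normal(0, 1, n_units * n_obs) + 0.5 * alpha[unit]  # x correlated with alpha
y = 1.5 * x + alpha[unit] + rng.normal(0, 1, n_units * n_obs)

# Pooled OLS is biased upward because x correlates with the unit effects.
b_pooled = np.polyfit(x, y, 1)[0]

# Within (fixed-effects) transformation: demean each variable within its unit,
# which removes alpha entirely before estimating the slope.
def demean(v):
    means = np.bincount(unit, v) / n_obs
    return v - means[unit]

xd, yd = demean(x), demean(y)
b_fe = (xd @ yd) / (xd @ xd)
```

The pooled slope absorbs the bias from the unit effects (about 2.5 in this setup), while the within estimator recovers a value near the true coefficient of 1.5.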
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang
2017-07-01
Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity-resolving approach is proposed for phase-based algorithms of 3-D source localization (azimuth angle, elevation angle, and range). In the proposed method, using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain the actual phase differences and the corresponding source angles. Then, the unambiguous angles are used to estimate the source's range with a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of the search step size and SNR on the performance of ambiguity resolution and demonstrate the satisfactory estimation performance of the proposed method.
Li, Xia; Guo, Meifang; Su, Yongfu
2016-01-01
In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. As an application, the split common null point problem for maximal monotone operators in Banach spaces is considered, and strong convergence theorems for finding a solution of this problem are derived. The iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.
NASA Astrophysics Data System (ADS)
Kendall, C.; Silva, S. R.; Doctor, D. H.; Wankel, S. D.; Chang, C. C.; Bergamaschi, B. A.; Kratzer, C. R.; Dahlgren, R. A.; Fleenor, W. E.
2005-12-01
Understanding the sources and sinks for organics and nitrate is critical for devising effective strategies to reduce their loads in ecosystems and mitigate local problems of low dissolved oxygen levels and/or production of disinfection byproducts during water treatment. Since isotopic techniques are effective methods for quantifying the sources and sinks of organics and nutrients, we have analyzed particulate organic matter (POM), nitrate, dissolved inorganic and organic carbon (DIC, DOC), and water isotope samples from selected sites since 2000. Our studies indicate that isotope data are a useful adjunct to traditional methods for assessing and monitoring sources of organics and nutrients. The original sampling in 2000-2001 used the classical fixed-site and fixed-time interval sampling approach, where sites on the San Joaquin River and its major tributaries were sampled bimonthly from July to October for chemistry and isotopes (Kratzer et al., 2004: see URL below). Subsequently, samples were collected during 4 transects along the San Joaquin River (10/02, 3/03, 9/03, and 7/04); the first and last of these transects extended through the Delta to the Bay. Several sites were sampled during diel studies in 8/04 and 7/05. Although fixed-site sampling is the norm in watershed studies, we have found that isotope and chemical data collected during longitudinal transects of the river and diel sampling of several sites along short river reaches have been more useful in convincing colleagues that isotope measurements are extremely useful adjuncts to traditional methods for assessing and monitoring sources of organics and nutrients during ecosystem restoration programs. Furthermore, we have concluded that while the obvious value of isotopes for water resources management is to tell us things about water resources that we didn't know before, what convinces the skeptic is when the isotopes tell us things about water resources that contradict what we thought we knew before. 
This work will highlight these and other insights developed using varied sampling strategies, and suggest guidelines for how to approach future studies in biologically active and human impacted rivers like the San Joaquin River system.
H2, fixed architecture, control design for large scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1990-01-01
The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
Finding fixed satellite service orbital allotments with a k-permutation algorithm
NASA Technical Reports Server (NTRS)
Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.
1990-01-01
A satellite system synthesis problem, the satellite location problem (SLP), is addressed. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the fixed satellite service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: the problem of ordering the satellites and the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, has been developed to find solutions to SLPs. Solutions to small sample problems are presented and analyzed on the basis of calculated interferences.
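The paper's two-level view, first choose an ordering of the satellites, then locate them given that ordering, can be mimicked in a few lines on a toy instance. The desired longitudes, separation requirement, greedy location rule, and deviation-based cost below are illustrative simplifications, not the authors' mixed-integer model or interference calculation.

```python
from itertools import permutations

# Toy SLP: each satellite has a desired longitude (degrees East, hypothetical);
# allotments must keep a minimum angular separation. For a FIXED ordering,
# a greedy west-to-east sweep solves the location subproblem (it never moves
# a satellite west of its desired slot); we then search over orderings.
desired = {"A": 10.0, "B": 11.0, "C": 15.0}
min_sep = 2.0

def locate(order):
    longs, prev = {}, -1e9
    for s in order:
        longs[s] = max(desired[s], prev + min_sep)
        prev = longs[s]
    return longs

def cost(longs):
    # Total deviation from desired slots (a stand-in for interference).
    return sum(abs(longs[s] - desired[s]) for s in longs)

best = min((locate(p) for p in permutations(desired)), key=cost)
```

For this instance the best ordering keeps A and C at their desired slots and shifts B east by one degree; a real SLP would score allotments by calculated interference rather than longitude deviation, and a k-permutation heuristic would explore only a subset of orderings.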
Shift Verification and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G
2016-09-07
This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.
Clinical Nursing Records Study
1991-08-01
In-depth assessment of current AMEDD nursing documentation system used in fixed facilities; 2 - 4) development, implementation and assessment of...used in fixed facilities to: a) identify system problems; b) identify potential solutions to problems; c) set priorities for problem resolution; d...enhance compatibility between any "hard copy" forms the group might develop and automation requirements. Discussions were also held with personnel from
Hyperboloidal evolution of test fields in three spatial dimensions
NASA Astrophysics Data System (ADS)
Zenginoǧlu, Anıl; Kidder, Lawrence E.
2010-06-01
We present the numerical implementation of a clean solution to the outer boundary and radiation extraction problems within the 3+1 formalism for hyperbolic partial differential equations on a given background. Our approach is based on compactification at null infinity in hyperboloidal scri-fixing coordinates. We report numerical tests for the particular example of a scalar wave equation on Minkowski and Schwarzschild backgrounds. We address issues related to the implementation of the hyperboloidal approach for the Einstein equations, such as nonlinear source functions, matching, and evaluation of formally singular terms at null infinity.
Transforming graph states using single-qubit operations.
Dahlberg, Axel; Wehner, Stephanie
2018-07-13
Stabilizer states form an important class of states in quantum information, and are of central importance in quantum error correction. Here, we provide an algorithm for deciding whether one stabilizer (target) state can be obtained from another stabilizer (source) state by single-qubit Clifford operations (LC), single-qubit Pauli measurements (LPM) and classical communication (CC) between sites holding the individual qubits. What is more, we provide a recipe to obtain the sequence of LC+LPM+CC operations which prepare the desired target state from the source state, and show how these operations can be applied in parallel to reach the target state in constant time. Our algorithm has applications in quantum networks and quantum computing, and can also serve as a design tool, for example, to find transformations between quantum error correcting codes. We provide a software implementation of our algorithm that makes this tool easier to apply. A key insight leading to our algorithm is to show that the problem is equivalent to one in graph theory, which is to decide whether some graph G' is a vertex-minor of another graph G. The vertex-minor problem is, in general, [Formula: see text]-Complete, but can be solved efficiently on graphs which are not too complex. A measure of the complexity of a graph is the rank-width, which equals the Schmidt-rank width of a subclass of stabilizer states called graph states, and thus intuitively is a measure of entanglement. Here, we show that the vertex-minor problem can be solved in time O(|G|^3), where |G| is the size of the graph G, whenever the rank-width of G and the size of G' are bounded. Our algorithm is based on techniques by Courcelle for solving fixed-parameter tractable problems, where here the relevant fixed parameter is the rank-width.
The second half of this paper serves as an accessible but far from exhaustive introduction to these concepts that could be useful for many other problems in quantum information. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).
Non-linear vibrating systems excited by a nonideal energy source with a large slope characteristic
NASA Astrophysics Data System (ADS)
González-Carbajal, Javier; Domínguez, Jaime
2017-11-01
This paper revisits the problem of an unbalanced motor attached to a fixed frame by means of a nonlinear spring and a linear damper. The excitation provided by the motor is, in general, nonideal, which means it is affected by the vibratory response. Since the system behaviour is highly dependent on the order of magnitude of the motor characteristic slope, the case of large slope is considered herein. Perturbation methods are applied to the system of equations, which allows transforming the original 4D system into a much simpler 2D system. The fixed points of this reduced system and their stability are carefully studied. We find the existence of a Hopf bifurcation which, to the authors' knowledge, has not been addressed before in the literature. These analytical results are supported by numerical simulations. We also compare our approach and results with those published by other authors.
Airborne gravimetry, altimetry, and GPS navigation errors
NASA Technical Reports Server (NTRS)
Colombo, Oscar L.
1992-01-01
Proper interpretation of airborne gravimetry and altimetry requires good knowledge of aircraft trajectory. Recent advances in precise navigation with differential GPS have made it possible to measure gravity from the air with accuracies of a few milligals, and to obtain altimeter profiles of terrain or sea surface correct to one decimeter. These developments are opening otherwise inaccessible regions to detailed geophysical mapping. Navigation with GPS presents some problems that grow worse with increasing distance from a fixed receiver: the effect of errors in tropospheric refraction correction, GPS ephemerides, and the coordinates of the fixed receivers. Ionospheric refraction and orbit error complicate ambiguity resolution. Optimal navigation should treat all error sources as unknowns, together with the instantaneous vehicle position. To do so, fast and reliable numerical techniques are needed: efficient and stable Kalman filter-smoother algorithms, together with data compression and, sometimes, the use of simplified dynamics.
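The idea of treating error sources as extra unknowns alongside the vehicle state can be sketched with a 1-D toy Kalman filter: the state is augmented with a constant range bias (standing in for, say, residual tropospheric delay), and a fixed reference receiver supplies a measurement that makes the bias observable. All matrices and noise levels are illustrative assumptions, not a real GPS processing setup.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1.0, 200
F = np.array([[1.0, dt, 0.0],    # state: position, velocity, range bias
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 1.0],   # rover pseudorange sees position + bias
              [0.0, 0.0, 1.0]])  # fixed reference receiver sees the bias alone
Q = np.diag([1e-4, 1e-4, 1e-8])  # small process noise; bias nearly constant
R = np.diag([0.25, 0.25])        # measurement noise variance (std 0.5)

x_true = np.array([0.0, 1.0, 3.0])   # true bias = 3.0
x, P = np.zeros(3), np.eye(3) * 100.0
for _ in range(n):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, 0.5, 2)
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    x = x + K @ (z - H @ x)                            # update
    P = (np.eye(3) - K @ H) @ P
```

After a few hundred epochs the bias estimate x[2] converges near its true value. Dropping the reference-receiver row from H makes position and bias jointly unobservable, which is one way to see why differential GPS needs the fixed station.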
An ideal sealed source life-cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tompkins, Joseph Andrew
2009-01-01
In the last 40 years, barriers to compliant and timely disposition of radioactive sealed sources have become apparent. The story starts with the explosive growth of nuclear gauging technologies in the 1960s. Dozens of companies in the US manufactured sources, and many more created nuclear solutions to industrial gauging problems. Today we do not yet know how many Cat 1, 2, or 3 sources there are in the US. There are, at minimum, tens of thousands of sources, perhaps hundreds of thousands. Affordable transportation solutions to consolidate all of these sources, and disposition pathways for them, do not exist. The root problem seems to be the lack of a necessary regulatory framework, which has allowed all of these problems to accumulate with no national plan for solving them. In the 1960s, Pu-238 displaced Pu-239 for most neutron and alpha source applications. In the 1970s, the availability of inexpensive Am-241 resulted in a proliferation of low-energy gamma sources used in nuclear gauging, well logging, pacemakers, and X-ray fluorescence applications, for example. In the 1980s, rapid expansion of worldwide petroleum exploration spread Am-241 sources to international locations. Improvements in technology and regulation resulted in a change in isotopic distribution as Am-241 made Pu-239 and Pu-238 obsolete. Many early nuclear gauge technologies have themselves been made obsolete as they were replaced by non-nuclear technologies. With uncertainties in end-of-life source disposition and increased requirements for sealed source security, nuclear gauging technology is now the last choice for modern process engineering gauging solutions. Over the same period, much was learned about licensing LLW disposition facilities, as evident in the closure of early disposition facilities like Maxey Flats.
The current difficulties in sealed source disposition start with adoption of the NLLW policy act of 1985, which created the state LLW compact system we have today. This regulation created a new regulatory framework seen as promising at the time. However, we now recognize that, despite the good intentions, the 1985 act has not solved any source disposition problems. The answer to these sealed source disposition problems is to adopt a philosophy to correct these regulatory issues, determine an interim solution, execute that solution until there is a minimal backlog of sources to deal with, and then let the mechanisms we have created solve this problem into the foreseeable future. The primary philosophical tenet of the ideal sealed source life cycle follows: do not allow the creation (or importation) of any source whose use cannot be justified, which cannot be affordably shipped, or that does not have a well-delineated and affordable disposition pathway. The path forward dictates that we fix the problem by embracing the ideal source life cycle. In figure 1, we can see some of the elements of the ideal source life cycle. The life cycle is broken down into four portions: manufacture, use, consolidation, and disposition. These four arbitrary elements allow us to focus on the ideal life-cycle phases that every source should go through between manufacture and final disposition. As we examine the various phases of the sealed source life cycle, we pick specific examples and explore the adoption of the ideal life-cycle model.
Wang, Leimin; Zeng, Zhigang; Hu, Junhao; Wang, Xiaoping
2017-03-01
This paper addresses the controller design problem for global fixed-time synchronization of delayed neural networks (DNNs) with discontinuous activations. To solve this problem, adaptive control and state feedback control laws are designed. Based on the two controllers and two lemmas, the error system is proved to be globally asymptotically stable and even fixed-time stable. Moreover, some sufficient and easily checked conditions are derived to guarantee the global synchronization of the drive and response systems in fixed time. It is noted that the settling-time functional for fixed-time synchronization is independent of initial conditions. Our fixed-time synchronization results contain the finite-time results as special cases obtained by choosing different values in the two controllers. Finally, the theoretical results are supported by numerical simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nursing Home Price and Quality Responses to Publicly Reported Quality Information
Clement, Jan P; Bazzoli, Gloria J; Zhao, Mei
2012-01-01
Objective To assess whether the release of Nursing Home Compare (NHC) data affected self-pay per diem prices and quality of care. Data Sources Primary data sources are the Annual Survey of Wisconsin Nursing Homes for 2001–2003, Online Survey and Certification Reporting System, NHC, and Area Resource File. Study Design We estimated fixed effects models with robust standard errors of per diem self-pay charge and quality before and after NHC. Principal Findings After NHC, low-quality nursing homes raised their prices by a small but significant amount and decreased their use of restraints but did not reduce pressure sores. Mid-level and high-quality nursing homes did not significantly increase self-pay prices after NHC nor consistently change quality. Conclusions Our findings suggest that the release of quality information affected nursing home behavior, especially pricing and quality decisions among low-quality facilities. Policy makers should continue to monitor quality and prices for self-pay residents and scrutinize low-quality homes over time to see whether they are on a pathway to improve quality. In addition, policy makers should not expect public reporting to result in quick fixes to nursing home quality problems. PMID:22092366
On Determining if Tree-based Networks Contain Fixed Trees.
Anaya, Maria; Anipchenko-Ulaj, Olga; Ashfaq, Aisha; Chiu, Joyce; Kaiser, Mahedi; Ohsawa, Max Shoji; Owen, Megan; Pavlechko, Ella; St John, Katherine; Suleria, Shivam; Thompson, Keith; Yap, Corrine
2016-05-01
We address an open question of Francis and Steel about phylogenetic networks and trees. They give a polynomial time algorithm to decide if a phylogenetic network, N, is tree-based and pose the problem: given a fixed tree T and network N, is N based on T? We show that it is [Formula: see text]-hard to decide, by reduction from 3-Dimensional Matching (3DM) and further that the problem is fixed-parameter tractable.
Evaluation and Testing of the ADVANTG Code on SNM Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.
2013-09-24
Pacific Northwest National Laboratory (PNNL) has been tasked with evaluating the effectiveness of ORNL's new hybrid transport code, ADVANTG, on scenarios of interest to our NA-22 sponsor, specifically detection of diversion of special nuclear material (SNM). PNNL staff determined that acquisition and installation of ADVANTG was relatively straightforward for a code in its phase of development, but probably not yet sufficient for mass distribution to the general user. PNNL staff also determined that, with little effort, ADVANTG generated weight windows that typically worked for the problems and produced results consistent with MCNP. With the slightly greater effort of choosing a finer mesh around detectors or sample reaction tally regions, the figure of merit (FOM) could be further improved in most cases. This does take some limited knowledge of deterministic transport methods. The FOM could also be increased by limiting the energy range for a tally to the energy region of greatest interest. It was then found that an MCNP run with the full energy range for the tally showed improved statistics in the region used for the ADVANTG run. The specific case of interest chosen by the sponsor is the CIPN project from Los Alamos National Laboratory (LANL), an active interrogation, non-destructive assay (NDA) technique to quantify the fissile content in a spent fuel assembly that is also sensitive to cases of material diversion. Unfortunately, weight windows for the CIPN problem cannot currently be properly generated with ADVANTG due to inadequate accommodations for source definition. ADVANTG requires that a fixed neutron source be defined within the problem and cannot account for neutron multiplication. As such, it is rendered useless in active interrogation scenarios. It is also interesting to note that this is a difficult problem to solve and that the automated weight windows generator in MCNP actually slowed down the problem.
Therefore, PNNL determined that there is no effective tool available for speeding up MCNP on problems such as the CIPN scenario. With regard to the benchmark scenarios, ADVANTG performed very well for most of the difficult, long-running, standard radiation detection scenarios. Specifically, run-time speedups were observed for spatially large scenarios, or those having significant shielding or scattering geometries. ADVANTG performed on par with existing codes for moderately sized scenarios, or those with little to moderate shielding or multiple paths to the detectors. ADVANTG ran slower than MCNP for very simple, spatially small cases with little to no shielding that run very quickly anyway. Lastly, ADVANTG could not solve problems that did not consist of fixed source-to-detector geometries. For example, it could not solve scenarios with multiple detectors or secondary particles, such as active interrogation, neutron-induced gamma, or fission neutrons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papalexopoulos, A.; Hansen, C.; Perrino, D.
This project examined the impact of renewable energy sources, which have zero incremental energy costs, on the sustainability of conventional generation. The resulting “missing money” problem refers to market outcomes in which infra-marginal energy revenues in excess of operations and maintenance (O&M) costs are systematically lower than the amortized costs of new entry for a marginal generator. The problem is caused by two related factors: (1) conventional generation is dispatched less, and (2) the price that conventional generation receives for its energy is lower. This lower revenue stream may not be sufficient to cover both the variable and fixed costs of conventional generation. In fact, this study showed that higher wind penetrations in the Electric Reliability Council of Texas (ERCOT) system could cause many conventional generators to become uneconomic.
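The mechanism can be made concrete with a toy merit-order market (all capacities, costs, and unit names below are hypothetical): adding zero-marginal-cost wind both displaces conventional output and lowers the clearing price, shrinking the infra-marginal rent available to cover fixed costs.

```python
def clear(demand_mw, units):
    """Dispatch units in merit order; return clearing price and dispatch.

    units: iterable of (name, capacity_MW, marginal_cost) tuples.
    The clearing price is the marginal cost of the last dispatched unit.
    """
    price, dispatch, remaining = 0.0, {}, demand_mw
    for name, cap, mc in sorted(units, key=lambda u: u[2]):
        take = min(cap, remaining)
        if take > 0:
            dispatch[name] = take
            price = mc
            remaining -= take
    return price, dispatch

fleet = [("coal", 100, 20.0), ("gas", 100, 40.0)]  # $/MWh, hypothetical
wind = [("wind", 50, 0.0)]

p0, d0 = clear(150, fleet)          # without wind
p1, d1 = clear(150, wind + fleet)   # with wind
rent0 = (p0 - 20.0) * d0["coal"]    # coal's infra-marginal energy rent
rent1 = (p1 - 20.0) * d1["coal"]
```

In this toy case coal's infra-marginal rent collapses even though its own output and costs barely change, because wind pushes the marginal (price-setting) unit down the supply curve.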
NASA Astrophysics Data System (ADS)
de Oliveira, Lília M.; Santos, Nádia A. P.; Maillard, Philippe
2013-10-01
Non-point source pollution (NPSP) is perhaps the leading cause of water quality problems and one of the most challenging environmental issues, given the difficulty of modeling and controlling it. In this article, we applied the Manning equation, a hydraulic concept, to improve models of non-point source pollution and to determine its influence, as a function of slope and land cover roughness, on runoff reaching the stream. In our study the equation is somewhat taken out of its usual context and applied to the flow of an entire watershed. A digital elevation model (DEM) from the SRTM satellite was used to compute the slope, and data from the RapidEye satellite constellation were used to produce a land cover map later transformed into a roughness surface. The methodology is applied to a 1433 km² watershed in Southeast Brazil mostly covered by forest, pasture, urban areas and wetlands. The model was used to create slope buffers of varying width in which the proportions of land cover and the roughness coefficients were obtained. Next we correlated these data, through regression, with four water quality parameters measured in situ: nitrate, phosphorus, faecal coliform and turbidity. We compared our results with those obtained using fixed buffers. It was found that the slope buffers outperformed the fixed buffers, with coefficients of determination higher by up to 15%.
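Manning's equation itself is compact. A sketch in SI units shows how higher land-cover roughness slows overland flow; the roughness values below are handbook-style illustrations, not coefficients taken from the article.

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity in m/s from Manning's equation (SI form):
    V = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Same geometry and slope, different cover (illustrative n values):
v_pasture = manning_velocity(0.035, 1.0, 0.001)  # short grass
v_forest = manning_velocity(0.10, 1.0, 0.001)    # dense woody cover
```

Rougher cover (larger n) means slower runoff and hence more opportunity for a buffer to retain pollutants before they reach the stream, which is the intuition behind weighting buffers by slope and roughness.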
A study of poultry processing plant noise characteristics and potential noise control techniques
NASA Technical Reports Server (NTRS)
Wyvill, J. C.; Jape, A. D.; Moriarity, L. J.; Atkins, R. D.
1980-01-01
The noise environment in a typical poultry processing plant was characterized by developing noise contours for two representative plants: Central Soya of Athens, Inc., Athens, Georgia, and Tip Top Poultry, Inc., Marietta, Georgia. Contour information was restricted to the evisceration area of both plants because nearly 60 percent of all process employees are stationed in this area during a normal work shift. Both plant evisceration areas were composed of tile walls, sheet metal ceilings, and concrete floors. Processing was performed in an assembly-line fashion in which the birds travel through the area on overhead shackles while personnel remain at fixed stations. Processing machinery was present throughout the area. In general, the poultry processing noise problem is the result of loud sources and reflective surfaces. Within the evisceration area, it can be concluded that only a few major sources (lung guns, a chiller component, and hock cutters) are responsible for essentially all direct and reverberant sound pressure levels currently observed during normal operations. Consequently, any effort to reduce the noise problem must first address the sound power output of these sources and/or the absorptive qualities of the room.
A k-permutation algorithm for Fixed Satellite Service orbital allotments
NASA Technical Reports Server (NTRS)
Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.
1988-01-01
A satellite system synthesis problem, the satellite location problem (SLP), is addressed in this paper. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the Fixed Satellite Service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: (1) the problem of ordering the satellites and (2) the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, that has been developed to find solutions to SLPs formulated in the manner suggested is described. Solutions to small example problems are presented and analyzed.
NASA Technical Reports Server (NTRS)
Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Norris, Jeffrey S.; Morris, John R.
2011-01-01
MSLICE Sequencing is a graphical tool for writing sequences and integrating them into RML files, as well as for producing SCMF files for uplink. When operated in a testbed environment, it also supports uplinking these SCMF files to the testbed via Chill. The software features a free-form textual sequence editor with syntax coloring and automatic content assistance (including command and argument completion proposals), complete with types, value ranges, units, and descriptions from the command dictionary that appear as they are typed. The sequence editor also has a "field mode" that allows tabbing between arguments and displays the type/range/units/description for each argument as it is edited. Color-coded error and warning annotations on problematic tokens are included, as well as indications of problems that are not visible in the current scroll range. "Quick fix" suggestions are made for resolving problems, and all the features afforded by modern source editors are also included, such as copy/cut/paste, undo/redo, and a sophisticated find-and-replace system optionally using regular expressions. The software offers a full XML editor for RML files, which features syntax coloring, content assistance, and problem annotations as above. There is a form-based "detail view" that allows structured editing of command arguments and sequence parameters when preferred. The "project view" shows the user's "workspace" as a tree of "resources" (projects, folders, and files) that can subsequently be opened in editors by double-clicking. Files can be added, deleted, and dragged-dropped/copied-pasted between folders or projects, and these operations are undoable and redoable. A "problems view" contains a tabular list of all problems in the current workspace. Double-clicking on any row in the table opens an editor for the appropriate sequence, scrolling to the specific line with the problem and highlighting the problematic characters.
From there, one can invoke "quick fix" as described above to resolve the issue. Once resolved, saving the file causes the problem to be removed from the problem view.
In-situ X-ray diffraction system using sources and detectors at fixed angular positions
Gibson, David M [Voorheesville, NY; Gibson, Walter M [Voorheesville, NY; Huang, Huapeng [Latham, NY
2007-06-26
An x-ray diffraction technique for measuring a known characteristic of a sample of a material in an in-situ state. The technique uses an x-ray source emitting substantially divergent x-ray radiation, with a collimating optic disposed with respect to the source to produce a substantially parallel beam of x-ray radiation by receiving and redirecting the divergent paths of the divergent radiation. A first x-ray detector collects radiation diffracted from the sample; the source and detector are fixed, during operation, in position relative to each other and in at least one dimension relative to the sample according to a priori knowledge about the known characteristic of the sample. A second x-ray detector may be fixed relative to the first x-ray detector according to the same a priori knowledge, especially in a phase-monitoring embodiment of the present invention.
NASA Astrophysics Data System (ADS)
Bollati, Julieta; Tarzia, Domingo A.
2018-04-01
Recently, in Tarzia (Thermal Sci 21A:1-11, 2017), an equivalence between the temperature and convective boundary conditions at the fixed face, under a certain restriction, was obtained for the classical two-phase Lamé-Clapeyron-Stefan problem. Motivated by this article, we study the two-phase Stefan problem for a semi-infinite material with a latent heat defined as a power function of the position and a convective boundary condition at the fixed face. An exact solution is constructed using Kummer functions in the case that an inequality for the convective transfer coefficient is satisfied, generalizing recent works for the corresponding one-phase free boundary problem. We also consider the limit of our problem as that coefficient goes to infinity, obtaining a new free boundary problem which has been recently studied in Zhou et al. (J Eng Math 2017. https://doi.org/10.1007/s10665-017-9921-y).
Adaptive sampling of information in perceptual decision-making.
Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H
2013-01-01
In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
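The proportional-allocation rule stated in this abstract can be sketched as follows. This is a minimal illustration under the assumption that the observer minimizes the summed variance of the sample means, sigma_i^2 / t_i, subject to a fixed total time; the function name and the two-source example are ours, not the paper's.

```python
import numpy as np

def allocate_time(sigmas, total_time):
    """Split a fixed time budget across noisy sources so that the summed
    variance of the sample means, sum(sigma_i**2 / t_i), is minimized.
    The Lagrangian solution gives t_i proportional to sigma_i."""
    sigmas = np.asarray(sigmas, dtype=float)
    return total_time * sigmas / sigmas.sum()

# Two motion patterns: the second has twice the noise standard deviation,
# so the optimal observer spends twice as long sampling it.
t = allocate_time([1.0, 2.0], total_time=3.0)
print(t)  # [1. 2.]
```

The rule matches the paper's qualitative finding: more time goes to the noisier source, in proportion to its noise level.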
Störmer method for a problem of point injection of charged particles into a magnetic dipole field
NASA Astrophysics Data System (ADS)
Kolesnikov, E. K.
2017-03-01
The problem of point injection of charged particles into a magnetic dipole field was considered. Analytical expressions were obtained by the Störmer method for the regions of allowed momenta of charged particles at arbitrary points of the dipole field for a given position of the point source of particles. It was found that, for a fixed location of the studied point, there is a specific structure of the coordinate space in the form of a set of seven regions, where the injector location in each region corresponds to a definite form of the allowed momentum region at the studied point. It was shown that the boundaries of the allowed regions in four of the mentioned regions are surfaces of revolution of conic sections.
NASA Astrophysics Data System (ADS)
Greene, Casey S.; Hill, Douglas P.; Moore, Jason H.
The relationship between interindividual variation in our genomes and variation in our susceptibility to common diseases is expected to be complex with multiple interacting genetic factors. A central goal of human genetics is to identify which DNA sequence variations predict disease risk in human populations. Our success in this endeavour will depend critically on the development and implementation of computational intelligence methods that are able to embrace, rather than ignore, the complexity of the genotype to phenotype relationship. To this end, we have developed a computational evolution system (CES) to discover genetic models of disease susceptibility involving complex relationships between DNA sequence variations. The CES approach is hierarchically organized and is capable of evolving operators of any arbitrary complexity. The ability to evolve operators distinguishes this approach from artificial evolution approaches using fixed operators such as mutation and recombination. Our previous studies have shown that a CES that can utilize expert knowledge about the problem in evolved operators significantly outperforms a CES unable to use this knowledge. This environmental sensing of external sources of biological or statistical knowledge is important when the search space is both rugged and large as in the genetic analysis of complex diseases. We show here that the CES is also capable of evolving operators which exploit one of several sources of expert knowledge to solve the problem. This is important for both the discovery of highly fit genetic models and because the particular source of expert knowledge used by evolved operators may provide additional information about the problem itself. This study brings us a step closer to a CES that can solve complex problems in human genetics in addition to discovering genetic models of disease.
Dissociation Predicts Later Attention Problems in Sexually Abused Children
ERIC Educational Resources Information Center
Kaplow, Julie B.; Hall, Erin; Koenen, Karestan C.; Dodge, Kenneth A.; Amaya-Jackson, Lisa
2008-01-01
Objective: The goals of this research are to develop and test a prospective model of attention problems in sexually abused children that includes fixed variables (e.g., gender), trauma, and disclosure-related pathways. Methods: At Time 1, fixed variables, trauma variables, and stress reactions upon disclosure were assessed in 156 children aged…
High order multi-grid methods to solve the Poisson equation
NASA Technical Reports Server (NTRS)
Schaffer, S.
1981-01-01
High order multigrid methods based on finite difference discretization of the model problem are examined. A fixed high order FMG-FAS multigrid algorithm is described, along with the high order methods themselves, and results are presented on four problems using each method with the same underlying fixed FMG-FAS algorithm.
Renaissance of the ~1 TeV Fixed-Target Program
NASA Astrophysics Data System (ADS)
Adams, T.; Appel, J. A.; Arms, K. E.; Balantekin, A. B.; Conrad, J. M.; Cooper, P. S.; Djurcic, Z.; Dunwoodie, W.; Engelfried, J.; Fisher, P. H.; Gottschalk, E.; de Gouvea, A.; Heller, K.; Ignarra, C. M.; Karagiorgi, G.; Kwan, S.; Loinaz, W. A.; Meadows, B.; Moore, R.; Morfín, J. G.; Naples, D.; Nienaber, P.; Pate, S. F.; Papavassiliou, V.; Petrov, A. A.; Purohit, M. V.; Ray, H.; Russ, J.; Schwartz, A. J.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Spitz, J.; Syphers, M. J.; Tait, T. M. P.; Vannucci, F.
This document describes the physics potential of a new fixed-target program based on a ~1 TeV proton source. Two proton sources are potentially available in the future: the existing Tevatron at Fermilab, which can provide 800 GeV protons for fixed-target physics, and a possible upgrade to the SPS at CERN, called SPS+, which would produce 1 TeV protons on target. In this paper we use an example Tevatron fixed-target program to illustrate the high discovery potential possible in the charm and neutrino sectors. We highlight examples which are either unique to the program or difficult to accomplish at other venues.
Renaissance of the ~ 1-TeV Fixed-Target Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, T.; /Florida State U.; Appel, J.A.
2011-12-02
This document describes the physics potential of a new fixed-target program based on a ~1 TeV proton source. Two proton sources are potentially available in the future: the existing Tevatron at Fermilab, which can provide 800 GeV protons for fixed-target physics, and a possible upgrade to the SPS at CERN, called SPS+, which would produce 1 TeV protons on target. In this paper we use an example Tevatron fixed-target program to illustrate the high discovery potential possible in the charm and neutrino sectors. We highlight examples which are either unique to the program or difficult to accomplish at other venues.
Multistep integration formulas for the numerical integration of the satellite problem
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Tapley, B. D.
1981-01-01
The use of two Class 2, fixed-mesh, fixed-order, multistep integration packages of the PECE type for the numerical integration of the second-order, nonlinear, ordinary differential equation of the satellite orbit problem is described. These two methods are referred to as the general and the second-sum formulations. The derivation of the basic equations which characterize each formulation and the role of these equations in the PECE algorithm are discussed. Possible starting procedures are examined which may be used to supply the initial set of values required by the fixed-mesh multistep integrators. The results of the general and second-sum integrators are compared to the results of various fixed-step and variable-step integrators.
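The PECE (Predict-Evaluate-Correct-Evaluate) cycle named in this abstract can be sketched with a second-order Adams pair on a fixed mesh; this is an illustrative toy, not the paper's general or second-sum formulation, and the test equation y' = -y is ours.

```python
import math

def pece_ab2_am2(f, t0, y0, h, steps):
    """Fixed-mesh, fixed-order PECE sketch: 2nd-order Adams-Bashforth
    predictor with a trapezoidal Adams-Moulton corrector. A single
    midpoint (RK2) step supplies the extra starting value that any
    multistep integrator requires."""
    t1 = t0 + h
    y1 = y0 + h * f(t0 + h / 2, y0 + h / 2 * f(t0, y0))  # starting procedure
    ts, ys = [t0, t1], [y0, y1]
    fm1, fm0 = f(t0, y0), f(t1, y1)
    for _ in range(steps - 1):
        t, y = ts[-1], ys[-1]
        yp = y + h / 2 * (3 * fm0 - fm1)   # Predict (AB2)
        fp = f(t + h, yp)                  # Evaluate
        yc = y + h / 2 * (fp + fm0)        # Correct (AM, trapezoidal)
        fm1, fm0 = fm0, f(t + h, yc)       # Evaluate (final derivative)
        ts.append(t + h)
        ys.append(yc)
    return ts, ys

# Integrate y' = -y from y(0) = 1 to t = 1; exact solution is exp(-t).
ts, ys = pece_ab2_am2(lambda t, y: -y, 0.0, 1.0, h=0.01, steps=100)
print(abs(ys[-1] - math.exp(-ts[-1])))  # small O(h^2) error, well below 1e-4
```

The second-order pair keeps the cost at two derivative evaluations per step, which is the economy that motivates PECE-type multistep integrators for orbit problems.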
New Boundary Constraints for Elliptic Systems used in Grid Generation Problems
NASA Technical Reports Server (NTRS)
Kaul, Upender K.; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper discusses new boundary constraints for elliptic partial differential equations as used in grid generation problems in generalized curvilinear coordinate systems. These constraints, based on the principle of local conservation of thermal energy in the vicinity of the boundaries, are derived using Green's Theorem. They uniquely determine the so-called decay parameters in the source terms of these elliptic systems. These constraints are designed for boundary-clustered grids where large gradients in physical quantities need to be resolved adequately. It is observed that the present formulation also works satisfactorily for mild clustering. Therefore, a closure for the decay parameter specification for elliptic grid generation problems has been provided, resulting in a fully automated elliptic grid generation technique. Thus, there is no need for a parametric study of these decay parameters, since the new constraints fix them uniquely. It is also shown that for Neumann-type boundary conditions, these boundary constraints uniquely determine the solution to the internal elliptic problem, thus eliminating the non-uniqueness of the solution of an internal Neumann boundary value grid generation problem.
Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.
Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk
2017-01-01
Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed-time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed-time budget. In particular, we analyze Manhattan and Euclidean TSP instances and Randomized Local Search (RLS), (1+1) EA and (1+[Formula: see text]) EA algorithms for the TSP in a smoothed complexity setting, and derive the lower bounds of the expected fitness gain for a specified number of generations.
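The fixed-budget view analyzed in this abstract can be made concrete with a small sketch of Randomized Local Search on a Euclidean TSP instance: run for a fixed number of evaluations and measure the fitness (tour length) reached. The 2-opt neighborhood and the instance are illustrative assumptions on our part, not the paper's exact setup.

```python
import random

def tour_length(points, tour):
    """Total Euclidean length of the closed tour visiting points in order."""
    n = len(tour)
    return sum(
        ((points[tour[i]][0] - points[tour[(i + 1) % n]][0]) ** 2 +
         (points[tour[i]][1] - points[tour[(i + 1) % n]][1]) ** 2) ** 0.5
        for i in range(n))

def rls_2opt(points, budget, seed=0):
    """Elitist Randomized Local Search under a fixed evaluation budget:
    propose one random 2-opt segment reversal per iteration and accept
    it only if the tour does not get longer."""
    rng = random.Random(seed)
    tour = list(range(len(points)))
    best = tour_length(points, tour)
    for _ in range(budget):
        i, j = sorted(rng.sample(range(len(points)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(points, cand)
        if cand_len <= best:
            tour, best = cand, cand_len
    return tour, best

# A random 20-city instance; fitness can only improve under elitism.
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(20)]
start_len = tour_length(pts, list(range(20)))
tour, best = rls_2opt(pts, budget=2000)
print(best <= start_len)  # True
```

A fixed-budget analysis asks how large the gap `start_len - best` is expected to be after exactly `budget` iterations, rather than how long RLS needs to reach an optimum.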
Review of the inverse scattering problem at fixed energy in quantum mechanics
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments involving a beam of particles at a nonrelativistic energy incident on a target are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system, in which the interaction is represented by one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
NASA Technical Reports Server (NTRS)
Tarras, A.
1987-01-01
The problem of stabilization/pole placement under structural constraints in large-scale linear systems is discussed. The existence of a solution to this problem is expressed in terms of fixed modes. The aim is to provide a bibliographic survey of the available results concerning fixed modes (characterization, elimination, control structure selection to avoid them, and control design in their absence) and to present the author's contribution to this problem, which can be summarized as the use of the mode sensitivity concept to detect or avoid fixed modes, the use of vibrational control to stabilize them, and the addition of parametric robustness considerations to design an optimal decentralized robust control.
NASA Technical Reports Server (NTRS)
Turner, J. W. (Inventor)
1973-01-01
A measurement system is described for providing an indication of a varying physical quantity represented by, or converted to, a variable frequency signal. Timing pulses are obtained marking the duration of a fixed number, or set, of cycles of the sampled signal, and these timing pulses are employed to control the period of counting of cycles of a higher, fixed, and known frequency source. The counts of cycles obtained from the fixed frequency source provide a precise measurement of the average frequency of each set of cycles sampled, and thus successive discrete values of the quantity being measured. The frequency of the known frequency source is chosen such that each measurement is presented as a direct digital representation of the quantity measured.
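The counting scheme above reduces to one formula: the set of n signal cycles lasts counts / f_ref seconds, so the average signal frequency is n * f_ref / counts. A minimal sketch, with the numeric values chosen by us for illustration:

```python
def average_frequency(n_signal_cycles, ref_counts, f_ref):
    """Period-counting frequency measurement: the duration of a fixed set
    of n signal cycles is measured by counting cycles of a known reference
    source, so the average signal frequency is n * f_ref / counts."""
    return n_signal_cycles * f_ref / ref_counts

# A 1 MHz reference accumulates 25,000 counts while 100 signal cycles
# elapse: the set lasted 25 ms, so the average frequency is 4 kHz.
print(average_frequency(100, 25_000, 1_000_000))  # 4000.0
```

Choosing f_ref as a suitable multiple of the display scale is what makes the raw count read directly as the measured quantity, as the abstract notes.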
Application of hierarchical Bayesian unmixing models in river sediment source apportionment
NASA Astrophysics Data System (ADS)
Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice
2016-04-01
Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of MixSIAR, a new hierarchical Bayesian unmixing model developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes.
Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in sediment fingerprinting context and (4) transparency of the process and replication of model results by other users.
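The core unmixing idea, recovering source proportions from tracer signatures, can be shown with a deliberately simple non-Bayesian sketch. All tracer values are hypothetical, and a hierarchical Bayesian model such as MixSIAR would instead place priors on the proportions and propagate uncertainty rather than solve a single least-squares system.

```python
import numpy as np

# Hypothetical tracer concentrations (rows: tracers, columns: sources).
A = np.array([[10.0, 2.0],
              [1.5, 6.0],
              [4.0, 4.5]])
true_w = np.array([0.3, 0.7])
mixture = A @ true_w  # tracer signature of the downstream sediment mixture

# Least-squares unmixing with a sum-to-one constraint appended as an
# extra, heavily weighted equation.
W = 1e3
A_aug = np.vstack([A, W * np.ones(2)])
b_aug = np.append(mixture, W * 1.0)
w, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print(np.round(w, 3))  # [0.3 0.7]
```

With noisy tracers and many sources the point estimate alone is misleading, which is the motivation for the Bayesian treatment discussed in the abstract.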
NASA Astrophysics Data System (ADS)
Kuipers, J.; Ueda, T.; Vermaseren, J. A. M.; Vollinga, J.
2013-05-01
We present version 4.0 of the symbolic manipulation system FORM. The most important new features are manipulation of rational polynomials and the factorization of expressions. Many other new functions and commands are also added; some of them are very general, while others are designed for building specific high-level packages, such as one for Gröbner bases. Also new is the checkpoint facility, which allows for periodic backups during long calculations. Finally, FORM 4.0 has become available as open source under the GNU General Public License version 3. Program summary: Program title: FORM. Catalogue identifier: AEOT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3. No. of lines in distributed program, including test data, etc.: 151,599. No. of bytes in distributed program, including test data, etc.: 1,078,748. Distribution format: tar.gz. Programming language: The FORM language; FORM itself is programmed in a mixture of C and C++. Computer: All. Operating system: UNIX, LINUX, Mac OS, Windows. Classification: 5. Nature of problem: FORM defines a symbolic manipulation language in which the emphasis lies on fast processing of very large formulas. It has been used successfully for many calculations in Quantum Field Theory and mathematics. In speed and size of formulas that can be handled, it outperforms other systems typically by an order of magnitude. Special in this version: Version 4.0 contains many new features, most importantly factorization and rational arithmetic; the program has also become open source under the GPL. Solution method: See "Nature of problem" above. Additional comments: The code in CPC is for reference; users are encouraged to obtain the most recent sources from www.nikhef.nl/form/formcvs.php because of frequent bug fixes.
Morgan, R L; Salzberg, C L
1992-01-01
Two studies investigated effects of video-assisted training on employment-related social skills of adults with severe mental retardation. In video-assisted training, participants discriminated a model's behavior on videotape and received feedback from the trainer for responses to questions about video scenes. In the first study, 3 adults in an employment program participated in video-assisted training to request their supervisor's assistance when encountering work problems. Results indicated that participants discriminated the target behavior on video but effects did not generalize to the work setting for 2 participants until they rehearsed the behavior. In the second study, 2 participants were taught to fix and report four work problems using video-assisted procedures. Results indicated that after participants rehearsed how to fix and report one or two work problems, they began to fix and report the remaining problems with video-assisted training alone. PMID:1378826
The impacts of non-renewable and renewable energy on CO2 emissions in Turkey.
Bulut, Umit
2017-06-01
As a result of large increases in CO2 emissions in the last few decades, many papers in the energy economics literature have examined the relationship between renewable energy and CO2 emissions, because, as a clean energy source, renewable energy can reduce CO2 emissions and mitigate the environmental problems stemming from their increase. These papers, however, employ fixed-parameter estimation methods, and the time-varying effects of non-renewable and renewable energy consumption/production on greenhouse gas emissions are ignored. In order to fill this gap in the literature, this paper examines the effects of non-renewable and renewable energy on CO2 emissions in Turkey over the period 1970-2013 by employing both fixed-parameter and time-varying-parameter estimation methods. The estimations reveal that CO2 emissions are positively related to both non-renewable and renewable energy in Turkey. Since policy makers expect renewable energy to decrease CO2 emissions, this paper argues that renewable energy has not been able to satisfy those expectations, even though fewer CO2 emissions arise from electricity produced with renewable sources. In conclusion, the paper argues that policy makers should implement long-term energy policies in Turkey.
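The fixed-parameter versus time-varying-parameter contrast drawn in this abstract can be illustrated with the simplest time-varying estimator, a rolling-window OLS slope. This is a generic sketch on synthetic data, not the paper's estimator or dataset.

```python
import numpy as np

def rolling_ols_slope(x, y, window):
    """Re-estimate a simple OLS slope over a rolling window instead of
    assuming one fixed coefficient for the whole sample, so the estimated
    effect is allowed to drift over time."""
    slopes = []
    for t in range(window, len(x) + 1):
        xs, ys = x[t - window:t], y[t - window:t]
        xc, yc = xs - xs.mean(), ys - ys.mean()
        slopes.append(float(xc @ yc / (xc @ xc)))
    return slopes

# Synthetic data whose true slope doubles halfway through the sample:
# a single fixed-parameter fit would average the two regimes away.
x = np.arange(40, dtype=float)
y = np.where(x < 20, 1.0 * x, 2.0 * x - 20.0)
s = rolling_ols_slope(x, y, window=10)
print(round(s[0], 2), round(s[-1], 2))  # 1.0 2.0
```

The rolling estimate recovers the regime change that a fixed-parameter regression would hide, which is precisely the gap the paper targets.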
ERIC Educational Resources Information Center
Kuntz, Aaron M.; Petrovic, John E.
2018-01-01
In this article we consider the material dimensions of schooling as constitutive of the possibilities inherent in "fixing" education. We begin by mapping out the problem of "fixing education," pointing to the necrophilic tendencies of contemporary education--a desire to kill what otherwise might be life-giving. In this sense,…
Effects of coating rectangular microscopic electrophoresis chamber with methylcellulose
NASA Technical Reports Server (NTRS)
Plank, L. D.
1985-01-01
One of the biggest problems in obtaining high accuracy in microscopic electrophoresis is the parabolic flow of liquid in the chamber due to electroosmotic backflow during application of the electric field. In chambers with glass walls, the source of polarization leading to electroosmosis is the negative charge of the silicate and other ions that form the wall structure. It was found by Hjerten, who used a rotating 3.0 mm capillary tube for free zone electrophoresis, that precisely neutralizing this charge was extremely difficult, but that if a neutral polymer matrix (formaldehyde-fixed methylcellulose) was formed over the glass (quartz) wall, the double layer was displaced and the viscosity at the shear plane increased, so that electroosmotic flow could be eliminated. Experiments were designed to determine the reliability with which methylcellulose coating of the Zeiss Cytopherometer chamber reduced electroosmotic backflow, and the effect of the coating on the accuracy of cell electrophoretic mobility (EPM) determinations. Fixed rat erythrocytes (RBC) were used as test particles.
New collinear twist-3 analysis of transverse SSA: Toward a resolution for the sign-mismatch problem
Kanazawa, Koichi; Pitonyak, Daniel; Koike, Yuji; ...
2014-10-19
We present a new collinear twist-3 analysis of the transverse SSA A_N at RHIC. We use the TMD Sivers/Collins function to fix some of the relevant collinear twist-3 functions and perform a fit of the RHIC data with other parameterized twist-3 functions. This allows us to keep consistency among the descriptions of pp collisions, SIDIS, and e+e- annihilation, and thus could provide a unified description of the spin asymmetries in the low- and high-P_T processes. In conclusion, by taking into account the twist-3 fragmentation contribution, we show for the first time that this contribution could be the main source of A_N in pp↑ → hX and that its inclusion could provide a solution for the sign-mismatch problem.
A new patent-based approach for technology mapping in the pharmaceutical domain.
Russo, Davide; Montecchi, Tiziano; Carrara, Paolo
2013-09-01
The key factor in decision-making is the quality of the information collected and processed in the problem analysis. In most cases, patents represent a very important source of information. The main problem is how to extract such information from a huge corpus of documents with high recall and precision, and in a short time. This article demonstrates a patent search and classification method, called the Knowledge Organizing Module, which consists of creating, almost automatically, a pool of patents based on polysemy expansion and homonymy disambiguation. Once the pool is built, an automatic patent technology landscape is produced for establishing the state of the art of a product and exploring competing alternative treatments and/or possible technological opportunities. An exemplary case study dealing with a patent analysis in the field of verruca treatments is provided.
Fixed Point Results for G-α-Contractive Maps with Application to Boundary Value Problems
Roshan, Jamal Rezaei
2014-01-01
We unify the concepts of G-metric, metric-like, and b-metric to define a new notion of generalized b-metric-like space and discuss its topological and structural properties. In addition, certain fixed point theorems for two classes of G-α-admissible contractive mappings in such spaces are obtained, and some new fixed point results are derived in the corresponding partially ordered space. Moreover, some examples and an application to the existence of a solution for the first-order periodic boundary value problem are provided to illustrate the usability of the obtained results. PMID:24895655
Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base
NASA Astrophysics Data System (ADS)
Savage, B.; Snoke, J. A.
2017-12-01
The Seismic Analysis Code (SAC) is the result of the toil of many developers over an almost 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size, from 16 to 32 in the 1980s and from 32 to 64 in 2009, as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, has tracked changes in hardware and operating systems, including the advent of Linux in the early 1990s, the emergence and demise of Sun/Solaris, variants of OS X processors (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in the source code and associated comments. A major concern while improving and maintaining a routinely used legacy code is the fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Lab). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. Nevertheless, there have been thousands of downloads a year of the package, either source code or binaries for specific systems. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we have fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC, including the use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was set up for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion.
For the past thirty-plus years, SAC files have contained a fixed-length header. Time- and distance-related values are stored in single precision, which has become a problem as the precision desired for data has increased relative to thirty years ago. A future goal is to address this precision problem in a backward-compatible manner. We would also like to transition SAC to a more open-source license.
Development of an expert system for power quality advisement using CLIPS 6.0
NASA Technical Reports Server (NTRS)
Chandrasekaran, A.; Sarma, P. R. R.; Sundaram, Ashok
1994-01-01
Proliferation of power electronic devices has brought in its wake both deterioration in and demand for quality power supply from the utilities. Power quality problems become apparent when a user's equipment or systems maloperate or fail. Since power quality concerns arise from a wide variety of sources and problem fixes are best achieved through the expertise of field engineers, the development of an expert system for power quality advisement is an attractive and cost-effective solution for utility applications. Such an expert system gives an understanding of the adverse effects of power-quality-related problems on the system and can help in finding remedial solutions. The paper reports the design of a power quality advisement expert system being developed using CLIPS 6.0. A brief outline of the power quality concerns is first presented. A description of the knowledge base is given next, and details of the actual implementation, including screen output from the program, are provided.
NASA Astrophysics Data System (ADS)
Ragon, Théa; Sladen, Anthony; Simons, Mark
2018-05-01
The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. 
Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.
NASA Astrophysics Data System (ADS)
Ferrari, Alessia; Vacondio, Renato; Dazzi, Susanna; Mignosa, Paolo
2017-09-01
A novel augmented Riemann Solver capable of handling porosity discontinuities in 1D and 2D Shallow Water Equation (SWE) models is presented. With the aim of accurately approximating the porosity source term, a Generalized Riemann Problem is derived by adding an additional fictitious equation to the SWE system and imposing mass and momentum conservation across the porosity discontinuity. The modified Shallow Water Equations are theoretically investigated, and the implementation of an augmented Roe Solver in a 1D Godunov-type finite volume scheme is presented. Robust treatment of transonic flows is ensured by introducing an entropy fix based on the wave pattern of the Generalized Riemann Problem. An Exact Riemann Solver is also derived in order to validate the numerical model. As an extension of the 1D scheme, an analogous 2D numerical model is also derived and validated through test cases with radial symmetry. The capability of the 1D and 2D numerical models to capture different wave patterns is assessed against several Riemann Problems.
Anderson Acceleration for Fixed-Point Iterations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Homer F.
The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
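Anderson acceleration itself can be sketched compactly: keep a short window of previous iterates of x = g(x) and combine them with least-squares weights that minimize the linearized residual. The following is a minimal illustrative implementation; the window size m and the test maps are our choices, not anything specified by the grant report.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=100):
    """Anderson acceleration for x = g(x): combine the last m+1
    iterates with weights that minimize the linearized residual."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, G = [x], [g(x)]            # histories of iterates and g-values
    x = G[0]
    for _ in range(max_iter):
        gx = g(x)
        X.append(x); G.append(gx)
        if len(X) > m + 1:        # sliding window of m+1 entries
            X.pop(0); G.pop(0)
        F = np.array([gk - xk for xk, gk in zip(X, G)])  # residuals
        dF = (F[1:] - F[:-1]).T   # residual differences as columns
        gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
        n = len(F)
        alpha = np.zeros(n)       # mixing weights; they sum to 1
        alpha[0] = gamma[0]
        if n > 2:
            alpha[1:-1] = gamma[1:] - gamma[:-1]
        alpha[-1] += 1.0 - gamma[-1]
        x_new = sum(a * gk for a, gk in zip(alpha, G))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

On a linear contraction the method recovers the fixed point essentially exactly; on nonlinear maps such as x = cos(x) it converges in far fewer iterations than plain Picard iteration.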
Scalable problems and memory bounded speedup
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Ni, Lionel M.
1992-01-01
In this paper three models of parallel speedup are studied: fixed-size speedup, fixed-time speedup, and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimates. The other set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
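The three simplified speedup laws reduce to one-line formulas. A small sketch, with f denoting the sequential fraction, p the processor count, and G(p) the factor by which the parallel workload grows to fill available memory (following the usual formulation of the memory-bounded model):

```python
def amdahl(f, p):
    """Fixed-size speedup with sequential fraction f on p processors."""
    return 1.0 / (f + (1.0 - f) / p)

def gustafson(f, p):
    """Fixed-time (scaled) speedup."""
    return f + (1.0 - f) * p

def memory_bounded(f, p, G):
    """Memory-bounded (Sun-Ni) speedup; G(p) scales the parallel
    workload as memory grows with p."""
    g = G(p)
    return (f + (1.0 - f) * g) / (f + (1.0 - f) * g / p)
```

Setting G(p) = 1 (fixed problem size) recovers Amdahl's law, and G(p) = p (workload scales linearly with p) recovers Gustafson's scaled speedup, which is exactly the special-case structure the abstract describes.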
How convincing is a matching Y-chromosome profile?
2017-01-01
The introduction of forensic autosomal DNA profiles was controversial, but the problems were successfully addressed, and DNA profiling has gone on to revolutionise forensic science. Y-chromosome profiles are valuable when there is a mixture of male-source and female-source DNA, and interest centres on the identity of the male source(s) of the DNA. The problem of evaluating evidential weight is even more challenging for Y profiles than for autosomal profiles. Numerous approaches have been proposed, but they fail to deal adequately with the fact that men with matching Y-profiles are related in extended patrilineal clans, many of which may not be represented in available databases. The higher mutation rates of modern profiling kits have led to increased discriminatory power but they have also exacerbated the problem of fairly conveying evidential value. Because the relevant population is difficult to define, yet the number of matching relatives is fixed as population size varies, it is typically infeasible to derive population-based match probabilities relevant to a specific crime. We propose a conceptually simple solution, based on a simulation model and software to approximate the distribution of the number of males with a matching Y profile. We show that this distribution is robust to different values for the variance in reproductive success and the population growth rate. We also use importance sampling reweighting to derive the distribution of the number of matching males conditional on a database frequency, finding that this conditioning typically has only a modest impact. We illustrate the use of our approach to quantify the value of Y profile evidence for a court in a way that is both scientifically valid and easily comprehensible by a judge or juror. PMID:29099833
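The paper's calibrated simulation model is not reproduced here, but the core idea, simulating a patrilineal clan and counting males whose Y profile still matches the founder's, can be sketched. The Poisson offspring assumption, the non-overlapping generations, and all parameter values below are illustrative stand-ins, not the paper's model.

```python
import numpy as np

def matching_clan_size(generations=10, mu=0.02, mean_sons=1.05, rng=None):
    """One realization of a patrilineal clan: count living males whose
    Y profile still matches the founder's (no mutation anywhere on
    their male line). mu is the per-meiosis probability that the
    profile mutates; sons per male are Poisson(mean_sons)."""
    rng = rng or np.random.default_rng()
    current = np.array([True])                    # founder matches himself
    for _ in range(generations):
        sons = rng.poisson(mean_sons, size=current.size)
        fathers_match = np.repeat(current, sons)  # sons inherit the flag
        mutated = rng.random(fathers_match.size) < mu
        current = fathers_match & ~mutated
        if current.size == 0:                     # clan went extinct
            break
    return int(current.sum())
```

Repeating this simulation many times approximates the distribution of the number of matching males, which is the quantity the paper proposes reporting in place of a population-based match probability.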
NASA Astrophysics Data System (ADS)
Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders
The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V |, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes and propose two mixed iterative algorithms. None of these algorithms needs any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
A dual method for optimal control problems with initial and final boundary constraints.
NASA Technical Reports Server (NTRS)
Pironneau, O.; Polak, E.
1973-01-01
This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.
NASA Astrophysics Data System (ADS)
Fu, Junjie; Wang, Jin-zhi
2017-09-01
In this paper, we study finite-time consensus problems with globally bounded convergence time, also known as fixed-time consensus problems, for multi-agent systems subject to directed communication graphs. Two new distributed control strategies are proposed such that leaderless and leader-follower consensus are achieved with convergence time independent of the initial conditions of the agents. Fixed-time formation generation and formation tracking problems are also solved as generalizations. Simulation examples are provided to demonstrate the performance of the new controllers.
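A common family of fixed-time consensus protocols combines a signed power with exponent below one (yielding finite-time convergence) and one above one (bounding that time independently of the initial state). The sketch below simulates such a protocol with forward Euler on an undirected complete graph; it is a generic illustration of the mechanism, not the specific controllers proposed in the paper.

```python
import numpy as np

def sig(y, a):
    """Signed power |y|**a * sign(y), standard in fixed-time protocols."""
    return np.sign(y) * np.abs(y) ** a

def simulate_consensus(x0, A, alpha=0.5, beta=1.5, dt=1e-3, T=5.0):
    """Euler simulation of u_i = -sum_j a_ij (sig(d,alpha) + sig(d,beta))
    with d = x_i - x_j; A is the (weighted) adjacency matrix."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(int(T / dt)):
        D = x[:, None] - x[None, :]          # pairwise disagreements
        x += -dt * (A * (sig(D, alpha) + sig(D, beta))).sum(axis=1)
    return x
```

For a symmetric graph the protocol preserves the state average, so the agents converge to the mean of their initial conditions.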
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhymbek, Meiram Erkanatuly; Yessirkegenov, Nurgissa Amankeldiuly; Sadybekov, Makhmud Abdysametovich
2015-09-18
In the current paper, the problem of bending vibrations of a beam whose binding at the right end is unknown and not available for visual inspection is studied. The main objective is to study an inverse problem: recover the unknown boundary conditions, i.e., the conditions fixing the right end of the beam, from additional spectral data. In this work, unlike many others, the additional data chosen are the first natural frequencies (eigenvalues) of two new problems corresponding to the problem of bending vibrations of a beam with loads of different weights at the central point.
NASA Astrophysics Data System (ADS)
Grafarend, E. W.; Heck, B.; Knickmeyer, E. H.
1985-03-01
Various formulations of the geodetic fixed and free boundary value problem are presented, depending upon the type of boundary data. For the free problem, boundary data of type astronomical latitude, astronomical longitude and a pair of the triplet potential, zero and first-order vertical gradient of gravity are presupposed. For the fixed problem, either the potential or gravity or the vertical gradient of gravity is assumed to be given on the boundary. The potential and its derivatives on the boundary surface are linearized with respect to a reference potential and a reference surface by Taylor expansion. The Eulerian and Lagrangean concepts of a perturbation theory of the nonlinear geodetic boundary value problem are reviewed. Finally the boundary value problems are solved by Hilbert space techniques leading to new generalized Stokes and Hotine functions. Reduced Stokes and Hotine functions are recommended for numerical reasons. For the case of a boundary surface representing the topography a base representation of the solution is achieved by solving an infinite dimensional system of equations. This system of equations is obtained by means of the product-sum-formula for scalar surface spherical harmonics with Wigner 3j-coefficients.
Li, Shanlin; Li, Maoqin
2015-01-01
We consider an integrated production and distribution scheduling problem faced by a typical make-to-order manufacturer which relies on a third-party logistics (3PL) provider for finished product delivery to customers. In the beginning of a planning horizon, the manufacturer has received a set of orders to be processed on a single production line. Completed orders are delivered to customers by a finite number of vehicles provided by the 3PL company which follows a fixed daily or weekly shipping schedule such that the vehicles have fixed departure dates which are not part of the decisions. The problem is to find a feasible schedule that minimizes one of the following objective functions when processing times and weights are oppositely ordered: (1) the total weight of late orders and (2) the number of vehicles used subject to the condition that the total weight of late orders is minimum. We show that both problems are solvable in polynomial time. PMID:25785285
Teaching an Old Dog an Old Trick: FREE-FIX and Free-Boundary Axisymmetric MHD Equilibrium
NASA Astrophysics Data System (ADS)
Guazzotto, Luca
2015-11-01
A common task in plasma physics research is the calculation of an axisymmetric equilibrium for tokamak modeling. The main unknown of the problem is the magnetic poloidal flux ψ. The easiest approach is to assign the shape of the plasma and only solve the equilibrium problem in the plasma / closed-field-lines region (the "fixed-boundary approach"). Often, one may also need the vacuum fields, i.e. the equilibrium in the open-field-lines region, requiring either coil currents or ψ on some closed curve outside the plasma to be assigned (the "free-boundary approach"). Going from one approach to the other is a textbook problem, involving the calculation of Green's functions and surface integrals in the plasma. However, no tools are readily available to perform this task. Here we present a code (FREE-FIX) to compute a boundary condition for a free-boundary equilibrium given only the corresponding fixed-boundary equilibrium. An improvement to the standard solution method, allowing for much faster calculations, is presented. Applications are discussed. PPPL fund 245139 and DOE grant G00009102.
DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.
Chen, Zhuo; Luo, Yi; Mesgarani, Nima
2017-03-01
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation by creating attractor points in the high-dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model differs from prior works in that it implements end-to-end training and does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
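The attractor computation itself is simple linear algebra: centroids of the embeddings under the ideal source assignments, followed by a similarity-based mask. A minimal numpy sketch (the shapes and the softmax mask are illustrative choices; the fixed-attractor test-time variant would reuse stored centroids instead of recomputing them):

```python
import numpy as np

def attractor_masks(V, Y):
    """V: (TF, D) embeddings of time-frequency bins; Y: (TF, S) one-hot
    ideal source assignments. Attractors are the per-source centroids
    of the embeddings; masks are the softmax similarity of each bin to
    each attractor."""
    counts = np.maximum(Y.sum(axis=0)[:, None], 1)   # avoid divide-by-zero
    A = (Y.T @ V) / counts                           # (S, D) attractors
    logits = V @ A.T                                 # (TF, S) similarities
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Each bin's mask row sums to one, so the masks partition the mixture energy softly among the sources.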
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique; all techniques use near-field beamforming weightings focused at source locations, estimated from spherical-wave array manifold vectors with spatial windows. Sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations, both in anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
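The minimum variance distortionless response weights have a closed form given a measurement covariance and an array manifold (steering) vector; with a spherical-wave manifold this becomes the near-field focused version the abstract refers to. A small sketch with illustrative geometry (the array layout, wavelength, and source positions below are our assumptions):

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR: w = R^-1 d / (d^H R^-1 d). Unit gain toward the focus
    point, minimum output power from everything else."""
    Ri_d = np.linalg.solve(R, d)
    return Ri_d / (d.conj() @ Ri_d)

def nearfield_steering(mic_pos, src_pos, wavelength):
    """Spherical-wave (near-field) array manifold vector for a point
    source at src_pos; mic_pos is an (n, 3) array of positions."""
    r = np.linalg.norm(mic_pos - src_pos, axis=1)
    return np.exp(-2j * np.pi * r / wavelength) / r
```

By construction the weights satisfy the distortionless constraint w^H d = 1, and among all weights satisfying it they minimize the output power w^H R w, which is what suppresses the side lobe contributions.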
Xia, Xinghui; Wu, Qiong; Zhu, Baotong; Zhao, Pujun; Zhang, Shangwei; Yang, Lingyan
2015-08-01
We applied a mixing model based on stable isotopic δ¹³C, δ¹⁵N, and C:N ratios to estimate the contributions of multiple sources to sediment nitrogen. We also developed a conceptual model describing and analyzing the impacts of climate change on nitrogen enrichment. These two models were applied in Miyun Reservoir to analyze the contribution of climate change to the variations in sediment nitrogen sources, based on two ²¹⁰Pb- and ¹³⁷Cs-dated sediment cores. The results showed that during the past 50 years, the average contributions of soil and fertilizer, submerged macrophytes, N₂-fixing phytoplankton, and non-N₂-fixing phytoplankton were 40.7%, 40.3%, 11.8%, and 7.2%, respectively. In addition, total nitrogen (TN) contents in sediment showed significant increasing trends from 1960 to 2010, and sediment nitrogen of both submerged macrophyte and phytoplankton sources exhibited significant increasing trends during the past 50 years. In contrast, soil and fertilizer sources showed a significant decreasing trend from 1990 to 2010. According to the changing trend of N₂-fixing phytoplankton, changes of temperature and sunshine duration accounted for at least 43% of the trend in sediment nitrogen enrichment over the past 50 years. Regression analysis of the climatic factors on nitrogen sources showed that the contributions of precipitation, temperature, and sunshine duration to the variations in sediment nitrogen sources ranged from 18.5% to 60.3%. The study demonstrates that the mixing model provides a robust method for calculating the contribution of multiple nitrogen sources in sediment, and it also suggests that N₂-fixing phytoplankton could be regarded as an important response factor for assessing the impacts of climate change on nitrogen enrichment. Copyright © 2015 Elsevier B.V. All rights reserved.
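The isotopic mass-balance mixing model amounts to a small linear system: the source signatures, weighted by unknown fractions, must reproduce the mixture signature, with the fractions summing to one. A minimal unconstrained least-squares sketch (published mixing models additionally enforce positivity and propagate uncertainty; the signature values in the test are invented for illustration):

```python
import numpy as np

def source_fractions(source_sigs, mixture_sig):
    """Linear mass-balance mixing: find fractions f with
       sum_i f_i * sig_i = sig_mix   and   sum_i f_i = 1,
    solved in the least-squares sense."""
    S = np.asarray(source_sigs, dtype=float)   # (n_sources, n_tracers)
    A = np.vstack([S.T,                        # one row per tracer balance
                   np.ones(len(S))])           # fractions sum to one
    b = np.append(np.asarray(mixture_sig, dtype=float), 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f
```

With three tracers (δ¹³C, δ¹⁵N, C:N) plus the sum-to-one constraint, up to four sources can be resolved exactly when the signatures are linearly independent, which matches the four-source budget reported in the abstract.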
NASA Astrophysics Data System (ADS)
Steinhaus, David W.; Kline, John V.; Bieniewski, Thomas M.; Dow, Grove S.; Apel, Charles T.
1980-11-01
An all-mirror optical system is used to direct the light from a variety of spectroscopic sources to two 2-m spectrographs that are placed on either side of a sturdy vertical mounting plate. The gratings were chosen so that the first spectrograph covers the ultraviolet spectral region, and the second spectrograph covers the ultraviolet, visible, and near-infrared regions. With over 2.5 m of focal curves, each ultraviolet line is available at more than one place; thus, problems with close lines can be overcome. The signals from a possible maximum of 256 photoelectric detectors go to a small computer for reading and calculation of the element abundances. To our knowledge, no other direct-reading spectrograph has more than about 100 fixed detectors. With an inductively-coupled-plasma source, our calibration curves and detection limits are similar to those of other workers using a direct-reading spectrograph.
A New Source Biasing Approach in ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevill, Aaron M; Mosher, Scott W
2012-01-01
The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an "unfair game". This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
NASA Astrophysics Data System (ADS)
Storti, Mario A.; Nigro, Norberto M.; Paz, Rodrigo R.; Dalcín, Lisandro D.
2009-03-01
In this paper some results on the convergence of the Gauss-Seidel iteration when solving fluid/structure interaction problems with strong coupling via fixed point iteration are presented. The flow-induced vibration of a flat plate aligned with the flow direction at supersonic Mach number is studied. The precision of different predictor schemes and the influence of the partitioned strong coupling on stability is discussed.
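A partitioned strong-coupling loop of this kind is a plain fixed-point iteration over the interface state, typically stabilized by under-relaxation. The sketch below uses abstract fluid and structure maps and a constant relaxation factor; the paper's contribution concerns when and how fast such a Gauss-Seidel iteration converges, not this generic loop, and the toy linear maps in the test are our own.

```python
import numpy as np

def partitioned_gauss_seidel(fluid, structure, d0, omega=0.7,
                             tol=1e-12, max_iter=500):
    """Strong coupling by block Gauss-Seidel fixed-point iteration:
    fluid(d) maps an interface displacement to an interface load,
    structure(load) maps it back to a displacement. A constant
    under-relaxation factor omega stabilizes the iteration."""
    d = np.asarray(d0, dtype=float)
    for k in range(1, max_iter + 1):
        d_gs = structure(fluid(d))                 # one Gauss-Seidel sweep
        d_new = (1 - omega) * d + omega * d_gs     # under-relaxation
        if np.linalg.norm(d_new - d) < tol * (1 + np.linalg.norm(d)):
            return d_new, k
        d = d_new
    return d, max_iter
```

The iteration converges when the spectral radius of the relaxed composite map stays below one; the relaxation factor trades robustness for speed, which is why predictor quality matters in the strongly coupled setting studied in the paper.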
Noise power spectrum of the fixed pattern noise in digital radiography detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dong Sik, E-mail: dskim@hufs.ac.kr; Kim, Eun
Purpose: The fixed pattern noise in radiography image detectors is caused by various sources. Multiple readout circuits with gate drivers and charge amplifiers are used to efficiently acquire the pixel voltage signals. However, the multiple circuits are not identical and thus yield nonuniform system gains. Nonuniform sensitivities are also produced from local variations in the charge collection elements. Furthermore, in phosphor-based detectors, the optical scattering at the top surface of the columnar CsI growth, the grain boundaries, and the disorder structure causes spatial sensitivity variations. These nonuniform gains or sensitivities cause fixed pattern noise and degrade the detector performance, even though the noise problem can be partially alleviated by using gain correction techniques. Hence, in order to develop good detectors, comparative analysis of the energy spectrum of the fixed pattern noise is important. Methods: In order to observe the energy spectrum of the fixed pattern noise, a normalized noise power spectrum (NNPS) of the fixed pattern noise is considered in this paper. Since the fixed pattern noise is mainly caused by the nonuniform gains, we call the spectrum the gain NNPS. We first asymptotically observe the gain NNPS and then formulate two relationships to calculate the gain NNPS based on a nonuniform-gain model. Since the gain NNPS values are quite low compared to the usual NNPS, measuring such a low NNPS value is difficult. By using the average of the uniform exposure images, a robust measuring method for the gain NNPS is proposed in this paper. Results: By using the proposed measuring method, the gain NNPS curves of several prototypes of general radiography and mammography detectors were measured to analyze their fixed pattern noise properties. We notice that a direct detector, which is based on the a-Se photoconductor, showed lower gain NNPS than the indirect-detector case, which is based on the CsI scintillator. By comparing the gain NNPS curves of the indirect detectors, we could analyze the scintillator properties depending on the techniques for the scintillator surface processing. Conclusions: A robust measuring method for the NNPS of the fixed pattern noise of a radiography detector is proposed in this paper. The method can measure a stable gain NNPS curve, even though the fixed pattern noise level is quite low. From the measured gain NNPS curves, we can compare and analyze the detector properties in terms of producing the fixed pattern noise.
ERIC Educational Resources Information Center
Lubyanaya, Alexandra V.; Izmailov, Airat M.; Nikulina, Ekaterina Y.; Shaposhnikov, Vladislav A.
2016-01-01
The purpose of this article is to investigate the problem, which stems from non-current fixed assets affecting profitability and asset management efficiency. Tangible assets, intangible assets and financial assets are all included in non-current fixed assets. The aim of the research is to identify the impact of estimates and valuation in…
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
Gallastegi, Mara; Guxens, Mònica; Jiménez-Zabala, Ana; Calvente, Irene; Fernández, Marta; Birks, Laura; Struchen, Benjamin; Vrijheid, Martine; Estarlich, Marisa; Fernández, Mariana F; Torrent, Maties; Ballester, Ferrán; Aurrekoetxea, Juan J; Ibarluzea, Jesús; Guerra, David; González, Julián; Röösli, Martin; Santa-Marina, Loreto
2016-02-18
Analysis of the association between exposure to electromagnetic fields of non-ionising radiation (EMF-NIR) and health in children and adolescents is hindered by the limited availability of data, mainly due to the difficulties on the exposure assessment. This study protocol describes the methodologies used for characterising exposure of children to EMF-NIR in the INMA (INfancia y Medio Ambiente- Environment and Childhood) Project, a prospective cohort study. Indirect (proximity to emission sources, questionnaires on sources use and geospatial propagation models) and direct methods (spot and fixed longer-term measurements and personal measurements) were conducted in order to assess exposure levels of study participants aged between 7 and 18 years old. The methodology used varies depending on the frequency of the EMF-NIR and the environment (homes, schools and parks). Questionnaires assessed the use of sources contributing both to Extremely Low Frequency (ELF) and Radiofrequency (RF) exposure levels. Geospatial propagation models (NISMap) are implemented and validated for environmental outdoor sources of RFs using spot measurements. Spot and fixed longer-term ELF and RF measurements were done in the environments where children spend most of the time. Moreover, personal measurements were taken in order to assess individual exposure to RF. The exposure data are used to explore their relationships with proximity and/or use of EMF-NIR sources. Characterisation of the EMF-NIR exposure by this combination of methods is intended to overcome problems encountered in other research. The assessment of exposure of INMA cohort children and adolescents living in different regions of Spain to the full frequency range of EMF-NIR extends the characterisation of environmental exposures in this cohort. 
Together with other data obtained in the project on socioeconomic and family characteristics and on the development of the children and adolescents, this will make it possible to evaluate the complex interaction between health outcomes in children and adolescents and the various environmental factors that surround them.
User's guide to four-body and three-body trajectory optimization programs
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1974-01-01
A collection of computer programs and subroutines written in FORTRAN to calculate 4-body (sun-earth-moon-space) and 3-body (earth-moon-space) optimal trajectories is presented. The programs incorporate a variable-step integration technique and a quadrature formula to correct single-step errors. The programs provide the capability to solve the initial value problem; the two-point boundary value problem of a transfer from a given initial position to a given final position in fixed time; the optimal 2-impulse transfer from an earth parking orbit of given inclination to a given final position and velocity in fixed time; and the optimal 3-impulse transfer from a given position to a given final position and velocity in fixed time.
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems as occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with large number of variables.
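The flavor of an OC fixed-point update can be shown on a toy separable problem with the required monotonicity (objective increasing and constraint decreasing in each variable). This instance and its resizing rule are our illustration of the general idea, not the paper's formulation:

```python
import numpy as np

def oc_resize(a, c, g0, eta=0.5, iters=50):
    """Optimality-criteria fixed point for the toy monotone problem
       min  sum_i a_i x_i   s.t.   sum_i c_i / x_i <= g0,  x_i > 0.
    The resizing rule x_i <- x_i * (c_i/(a_i x_i^2))**eta drives the
    stationarity ratio to 1; a global rescaling keeps the constraint
    active, playing the role of the Lagrange multiplier update."""
    a, c = np.asarray(a, dtype=float), np.asarray(c, dtype=float)
    x = np.ones_like(a)
    for _ in range(iters):
        B = (c / (a * x ** 2)) ** eta            # OC ratio (= 1 at optimum)
        x = x * B * np.sum(c / (x * B)) / g0     # rescale: sum c_i/x_i = g0
    return x
```

The per-pass cost is one elementwise update plus one reduction, which illustrates the abstract's point that the work per iteration is essentially independent of the number of variables (no active-set bookkeeping or line search).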
Coordinate transformations and gauges in the relativistic astronomical reference systems
NASA Astrophysics Data System (ADS)
Tao, J.-H.; Huang, T.-Y.; Han, C.-H.
2000-11-01
This paper applies a fully post-Newtonian theory (Damour et al. 1991, 1992, 1993, 1994) to the problem of gauge in relativistic reference systems. Gauge fixing is necessary when the precision of time measurement and its applications reaches 10^-16 or better. We give a general procedure for fixing the gauges of gravitational potentials in both the global and local coordinate systems, and for determining the gauge functions in all the coordinate transformations. We demonstrate that gauge fixing in a gravitational N-body problem can be solved by fixing the gauge of the self-gravitational potential of each body and the gauge function in the coordinate transformation between the global and local coordinate systems. We also show that these gauge functions can be chosen to make all the coordinate systems harmonic, or any other gauge as required, no matter what gauge is chosen for the self-gravitational potential of each body.
1989-06-09
... theorem and the Perron-Frobenius theorem in matrix theory. We use the Hahn-Banach theorem and do not use any fixed-point related concepts. ... Isac, G., "Fixed point theorems on convex cones, generalized pseudo-contractive mappings and the complementarity problem". ... In (I) and (II), ∂f(x)° denotes the negative polar cone of ∂f(x); these conditions are respectively called "inward" and "outward". Indeed, when X is convex ...
ERIC Educational Resources Information Center
Rittner-Heir, Robbin
2000-01-01
Examines the problem of acoustics in school classrooms; the problems it creates for student learning, particularly for students with hearing problems; and the impediments to achieving acceptable acoustical levels for school classrooms. Acoustic guidelines are explored and some remedies for fixing sound problems are highlighted. (GR)
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
NASA Astrophysics Data System (ADS)
Guillemot, Christine; Siohan, Pierre
2005-12-01
Multimedia transmission over time-varying wireless channels presents a number of challenges beyond the capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM), capturing dependencies between the source and channel coding components, sets the foundation for the optimal design of joint decoding techniques for source and channel codes. The problem has been addressed extensively in the research community by considering both the fixed-length codes (FLC) and the variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties because the segmentation of the received bitstream into source symbols is random. This paper surveys recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed in designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in applying the turbo principle to JSCD of VLC-encoded sources, as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations on real image and video decoding systems.
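To make the random-segmentation difficulty concrete, here is a minimal, hypothetical prefix-free VLC decoder (the codebook and names are illustrative, not from any standard): symbol boundaries depend on the decoded data, so a single corrupted bit can shift every later boundary, which is the desynchronization that JSCD with soft information tries to combat.

```python
def vlc_decode(bits, codebook):
    """Prefix-free VLC decoding: symbol boundaries are data-dependent,
    so one bit error can desynchronize the rest of the stream."""
    inv = {code: sym for sym, code in codebook.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:               # a complete codeword ends here
            out.append(inv[buf])
            buf = ""
    return out, buf                  # non-empty buf -> trailing bits unresolved
```

With the toy codebook `{'a': '0', 'b': '10', 'c': '11'}`, the stream `"01011"` decodes to `a b c`, while flipping the first bit yields an entirely different symbol sequence.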
A numerical analysis of phase-change problems including natural convection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Y.; Faghri, A.
1990-08-01
Fixed grid solutions for phase-change problems remove the need to satisfy conditions at the phase-change front and can be easily extended to multidimensional problems. The two most important and widely used methods are enthalpy methods and temperature-based equivalent heat capacity methods. Both methods in this group have advantages and disadvantages. Enthalpy methods (Shamsundar and Sparrow, 1975; Voller and Prakash, 1987; Cao et al., 1989) are flexible and can handle phase-change problems occurring both at a single temperature and over a temperature range. The drawback of this method is that although the predicted temperature distributions and melting fronts are reasonable, the predicted time history of the temperature at a typical grid point may have some oscillations. The temperature-based fixed grid methods (Morgan, 1981; Hsiao and Chung, 1984) have no such time history problems and are more convenient with conjugate problems involving an adjacent wall, but have to deal with the severe nonlinearity of the governing equations when the phase-change temperature range is small. In this paper, a new temperature-based fixed-grid formulation is proposed, and the reason that the original equivalent heat capacity model is subject to such restrictions on the time step, mesh size, and the phase-change temperature range will also be discussed.
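As a hedged sketch of the enthalpy-method idea (not the authors' formulation, and without the convection terms of the paper), a one-dimensional explicit fixed-grid update for an isothermal phase change at T_m = 0 with unit heat capacity might look like:

```python
import numpy as np

def enthalpy_to_temp(H, L=1.0, c=1.0, Tm=0.0):
    """Temperature from enthalpy for an isothermal phase change at Tm:
    solid for H < 0, liquid for H > L, at Tm while latent heat is absorbed."""
    return np.where(H < 0, Tm + H / c,
           np.where(H > L, Tm + (H - L) / c, Tm))

def step(H, dt, dx, k=1.0, T_left=1.0, T_right=-1.0):
    """One explicit fixed-grid enthalpy update with Dirichlet end values."""
    T = enthalpy_to_temp(H)
    T = np.concatenate(([T_left], T, [T_right]))   # ghost boundary values
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    return H + dt * k * lap                        # dH/dt = k * d2T/dx2
```

Because the enthalpy absorbs the latent heat L, no front condition is imposed: a cell simply sits at T_m while its enthalpy passes through the mushy range, which is the flexibility the abstract attributes to enthalpy methods.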
A fully Sinc-Galerkin method for Euler-Bernoulli beam models
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.; Lund, J.
1990-01-01
A fully Sinc-Galerkin method in both space and time is presented for fourth-order time-dependent partial differential equations with fixed and cantilever boundary conditions. The Sinc discretizations for the second-order temporal problem and the fourth-order spatial problems are presented. Alternate formulations for variable parameter fourth-order problems are given which prove to be especially useful when applying the forward techniques to parameter recovery problems. The discrete systems corresponding to the time-dependent partial differential equations of interest are then formulated. Computational issues are discussed and a robust and efficient algorithm for solving the resulting matrix system is outlined. Numerical results which highlight the method are given for problems with both analytic and singular solutions as well as fixed and cantilever boundary conditions.
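The building block of any Sinc-Galerkin discretization is the cardinal sinc basis. The sketch below (function names are ours) gives the k-th basis function S(k,h)(x) = sinc((x - kh)/h) and a truncated cardinal interpolant; note that each basis function is 1 at its own node and 0 at every other node:

```python
import numpy as np

def sinc_basis(k, h, x):
    """k-th cardinal sinc basis function S(k,h)(x) = sinc((x - k h)/h).
    np.sinc uses the normalized convention sin(pi t)/(pi t)."""
    return np.sinc((x - k * h) / h)

def sinc_interpolant(f, h, K, x):
    """Truncated Whittaker cardinal interpolant sum_{|k|<=K} f(kh) S(k,h)(x)."""
    return sum(f(k * h) * sinc_basis(k, h, x) for k in range(-K, K + 1))
```

In the Sinc-Galerkin method these basis functions (composed with suitable conformal maps) serve as both trial and test functions; the cardinality property is what makes the node values the unknowns.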
Computational alternatives to obtain time optimal jet engine control. M.S. Thesis
NASA Technical Reports Server (NTRS)
Basso, R. J.; Leake, R. J.
1976-01-01
Two computational methods are presented for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine described by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms which can solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems considered here have fixed right ends and free time. Dynamic programming is defined on a standard problem, yielding a successive-approximation solution to the time-optimal problem of interest. A feedback control law is obtained and then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state-variable and control constraints.
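Before any adaptation to control constraints, the Fletcher-Reeves recurrence itself is short. A minimal sketch with an Armijo backtracking line search (purely illustrative; the thesis adapts this to the constrained control problem):

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=500):
    """Plain Fletcher-Reeves conjugate gradient with Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart on a non-descent d
            d = -g
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5                   # backtrack until Armijo holds
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # the Fletcher-Reeves beta
        d, g = -g_new + beta * d, g_new
    return x
```

The only difference from steepest descent is the `beta` term, which reuses the previous direction; on quadratics with exact line searches this gives finite termination.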
Fixed site neutralization model programmer's manual. Volume II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engi, D.; Chapman, L.D.; Judnick, W.
This report relates to protection of nuclear materials at nuclear facilities. This volume presents the source listings for the Fixed Site Neutralization Model and its supporting modules, the Plex Preprocessor and the Data Preprocessor. (DLC)
`Earth-ionosphere' mode controlled source electromagnetic method
NASA Astrophysics Data System (ADS)
Li, Diquan; Di, Qingyun; Wang, Miaoyue; Nobes, David
2015-09-01
In traditional artificial-source electromagnetic exploration, the effects of the ionosphere and of the displacement current (DC) in the air are neglected, and only the geoelectrical structure of the earth's crust and upper mantle is considered, as in controlled source audio-frequency magnetotellurics (CSAMT). By employing a transmitter (less than 30 kW) to generate source fields, the CSAMT method overcomes the problems associated with the weak natural electromagnetic (EM) fields used in magnetotellurics. However, the transmitter must be moved between surveys, and the source-receiver offset is limited to less than about 20 km by the available emission energy. We put forward a new idea: a fixed artificial source (greater than 200 kW) is used, with the source located in a high-resistivity region to ensure high emission efficiency, so that, as long as the source is strong enough, the artificial EM signal can be observed at distances of several thousand kilometres. Previous studies have provided evidence to support this idea; they used the `earth-ionosphere' mode in modeling the EM fields with offsets up to a thousand kilometres. Such EM fields still have a signal/noise ratio of over 10-20 dB, which means that a new EM method with a fixed source is feasible. However, in those calculations the DC, which plays a very important role at large offsets, was neglected. This paper derives the formulae of the `earth-ionosphere' mode with a horizontal electric dipole source, with the DC retained. We present three-layer modeling results to illustrate the basic EM field characteristics under the `earth-ionosphere' mode. As the offset increases, the contribution of the conduction current decreases and, when the DC and the ionosphere are taken into account, the EM field attenuation decreases. We also quantitatively compare the predicted and observed data.
The comparison of these results with the data reveals excellent agreement between experiment and theory. The DC and the ionosphere affect the EM fields, but the impedances (the ratio of E to H) are unaffected. This means that we need to include ionosphere and DC effects to model the EM field amplitudes accurately when choosing measurement parameters, but we do not need to include these complications when interpreting the data for the Earth conductivity.
Fixing health care before it fixes us.
Kotlikoff, Laurence J
2009-02-01
The current American health care system is beyond repair. The problems of the health care system are delineated in this discussion. The current health care system needs to be replaced in its entirety with a new system that provides every American with first-rate, first-tier medicine and that doesn't drive our nation broke. The author describes a 10-point Medical Security System, which he proposes will address the problems of the current health care system.
An improved least cost routing approach for WDM optical network without wavelength converters
NASA Astrophysics Data System (ADS)
Bonani, Luiz H.; Forghani-elahabad, Majid
2016-12-01
The routing and wavelength assignment (RWA) problem has long been an attractive problem in optical networks, and consequently several algorithms have been proposed in the literature to solve it. The best-known techniques for the dynamic routing subproblem are fixed routing, fixed-alternate routing, and adaptive routing. The first leads to a high blocking probability (BP), and the last involves high computational complexity and requires substantial support from the control and management protocols. The second offers a trade-off between performance and complexity, and hence we take it as the basis for improvement in this work. Considering the RWA problem in a wavelength-routed optical network with no wavelength converters, an improved technique is proposed for the routing subproblem in order to decrease the BP of the network. Following the fixed-alternate approach, the first k shortest paths (SPs) between each node pair are determined. We then rearrange the SPs according to a newly defined cost for the links and paths. Upon the arrival of a connection request, the sorted paths are checked consecutively for an available wavelength according to the most-used technique. We implement our proposed algorithm and the least-hop fixed-alternate algorithm to show how the rearrangement of SPs contributes to a lower BP in the network. The numerical results demonstrate the efficiency of our proposed algorithm in comparison with the others, considering different numbers of available wavelengths.
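The path-scan plus most-used wavelength check described above can be sketched as follows (data structures and names are our assumptions; the paper's link/path cost used for pre-sorting the paths is not reproduced here). Without converters, the same wavelength must be free on every link of the path (wavelength continuity):

```python
def route_and_assign(paths, wavelength_free, usage_count):
    """Fixed-alternate RWA sketch: scan pre-sorted candidate paths; on each,
    try wavelengths most-used first; a wavelength is usable only if free on
    every link of the path (no converters -> wavelength continuity)."""
    for path in paths:                       # paths pre-sorted by cost
        for w in sorted(usage_count, key=usage_count.get, reverse=True):
            if all(w in wavelength_free[link] for link in path):
                for link in path:            # reserve the wavelength
                    wavelength_free[link].discard(w)
                usage_count[w] += 1
                return path, w
    return None                              # blocked request
```

The most-used ordering packs lightpaths onto already-busy wavelengths, keeping less-used wavelengths free end-to-end for future requests, which is why it tends to lower blocking compared with random assignment.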
Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...
2016-05-20
We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
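Christianson's reverse-accumulation idea can be sketched in a few lines: iterate the forward fixed point x = f(x, p) to convergence, then run a second fixed-point iteration for the adjoint variable instead of taping every forward iteration, which is where the memory saving comes from. A toy sketch under strong simplifications (dense explicit Jacobians; the real land-ice adjoint is far more involved):

```python
import numpy as np

def fixed_point_adjoint(f, dfdx, dfdp, p, x0, gbar, tol=1e-12, n=500):
    """Reverse accumulation for a fixed point x* of x = f(x, p):
    forward sweep to x*, adjoint sweep w = dfdx^T w + gbar,
    then dJ/dp = dfdp^T w for an objective J with dJ/dx = gbar at x*."""
    x = x0
    for _ in range(n):                       # forward fixed-point sweep
        x_new = f(x, p)
        done = np.linalg.norm(x_new - x) < tol
        x = x_new
        if done:
            break
    A = dfdx(x, p)
    w = np.zeros_like(gbar)
    for _ in range(n):                       # adjoint fixed-point sweep
        w_new = A.T @ w + gbar
        done = np.linalg.norm(w_new - w) < tol
        w = w_new
        if done:
            break
    return x, dfdp(x, p).T @ w
```

Only the converged state x* is needed for the adjoint sweep, so the memory cost is independent of the number of forward iterations; the adjoint sweep converges at the same rate as the forward one.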
On the Treatment of Fixed and Sunk Costs in the Principles Textbooks
ERIC Educational Resources Information Center
Colander, David
2004-01-01
The author argues that, although the standard principles level treatment of fixed and sunk costs has problems, it is logically consistent as long as all fixed costs are assumed to be sunk costs. As long as the instructor makes that assumption clear to students, the costs of making the changes recently suggested by X. Henry Wang and Bill Z. Yang in…
A new mathematical adjoint for the modified SAAF-SN equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Wang, Yaqi; Martineau, Richard
2015-01-01
We present a new adjoint FEM weak form, which can be directly used for evaluating the mathematical adjoint, suitable for perturbation calculations, of the self-adjoint angular flux SN equations (SAAF-SN) without construction and transposition of the underlying coefficient matrix. Stabilization schemes incorporated in the described SAAF-SN method make the mathematical adjoint distinct from the physical adjoint, i.e. the solution of the continuous adjoint equation with SAAF-SN. This weak form is implemented in RattleSnake, the MOOSE (Multiphysics Object-Oriented Simulation Environment) based transport solver. Numerical results verify the correctness of the implementation and show its utility for both fixed-source and eigenvalue problems.
[Public health in major socio-economic crisis].
Cosmacini, G
2014-01-01
The term "crisis" in different cultures (such as ancient Greece or China) can have a positive meaning, since it indicates a time of growth, change and opportunity. Over the centuries there have been times of severe economic and social crisis that led to the implementation of major reforms and improved population health. Nowadays, despite the new economic crisis which has also affected health care for its rising costs, health economics does not hesitate to affirm the importance of key objectives such as prevention and medical assistance. Prevention is not prediction. Prevention means "going upstream" and fixing a problem at the source; the goal is to reduce diseases' effects, causes and risk factors, thereby reducing the prevalence of costly medical conditions.
Deep Learning Based Binaural Speech Separation in Reverberant Environments.
Zhang, Xueliang; Wang, DeLiang
2017-05-01
Speech signals are usually degraded by room reverberation and additive noise in real environments. This paper focuses on separating a target speech signal in reverberant conditions from binaural inputs. Binaural separation is formulated as a supervised learning problem, and we employ deep learning to map from both spatial and spectral features to a training target. With binaural inputs, we first apply a fixed beamformer and then extract several spectral features. A new spatial feature is proposed and extracted to complement the spectral features. The training target is the recently suggested ideal ratio mask. Systematic evaluations and comparisons show that the proposed system achieves very good separation performance and substantially outperforms related algorithms in challenging multi-source and reverberant environments.
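The ideal ratio mask used as the training target has a standard closed form per time-frequency unit. A short sketch (the exponent `beta` and the mixture-phase reuse are common conventions, not necessarily the paper's exact setup):

```python
import numpy as np

def ideal_ratio_mask(speech_power, noise_power, beta=0.5):
    """IRM training target per T-F unit: (S / (S + N))^beta, in [0, 1]."""
    return (speech_power / (speech_power + noise_power)) ** beta

def apply_mask(mixture_stft, mask):
    """Apply an estimated mask to the mixture magnitude, reusing the
    mixture phase (a common reconstruction choice)."""
    return mask * np.abs(mixture_stft) * np.exp(1j * np.angle(mixture_stft))
```

At test time the network predicts the mask from the spatial and spectral features; multiplying the mixture spectrogram by the predicted mask and inverting the STFT yields the separated speech.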
Assessing technical performance at diverse ambulatory care sites.
Osterweis, M; Bryant, E
1978-01-01
The purpose of the large study reported here was to develop and test methods for assessing the quality of health care that would be broadly applicable to diverse ambulatory care organizations for periodic comparative review. Methodological features included the use of an age-sex stratified random sampling scheme, dependence on medical records as the source of data, a fixed study period year, use of Kessner's tracer methodology (including not only acute and chronic diseases but also screening and immunization rates as indicators), and a fixed tracer matrix at all test sites. This combination of methods proved more efficacious in estimating certain parameters for the total patient populations at each site (including utilization patterns, screening, and immunization rates) and the process of care for acute conditions than it did in examining the process of care for the selected chronic condition. It was found that the actual process of care at all three sites for the three acute conditions (streptococcal pharyngitis, urinary tract infection, and iron deficiency anemia) often differed from the expected process in terms of both diagnostic procedures and treatment. For hypertension, the chronic disease tracer, medical records were frequently a deficient data source from which to draw conclusions about the adequacy of treatment. Several aspects of the study methodology were found to be detrimental to between-site comparisons of the process of care for chronic disease management. The use of an age-sex stratified random sampling scheme resulted in the identification of too few cases of hypertension at some sites for analytic purposes, thereby necessitating supplementary sampling by diagnosis. The use of a fixed study period year resulted in an arbitrary starting point in the course of the disease. 
Furthermore, in light of the diverse sociodemographic characteristics of the patient populations, the use of a fixed matrix of tracer conditions for all test sites is questionable. The discussion centers on these and other problems encountered in attempting to compare technical performance within diverse ambulatory care organizations and provides some guidelines as to the utility of alternative methods for assessing the quality of health care.
NASA Technical Reports Server (NTRS)
Liu, S. C.; Cicerone, R. J.; Donahue, T. M.; Chameides, W. L.
1977-01-01
The terrestrial and marine nitrogen cycles are examined in an attempt to clarify how the atmospheric content of N2O is controlled. We review available data on the various reservoirs of fixed nitrogen and the transfer rates between the reservoirs, and estimate how the reservoir contents and transfer rates can change under man's influence. The sources, sinks and lifetime of atmospheric N2O are not well understood. Based on our limited knowledge of the stability of atmospheric N2O, we conclude that future growth in the usage of industrial fixed nitrogen fertilizers could cause a 1% to 2% global ozone reduction in the next 50 years. However, centuries from now the ozone layer could be reduced by as much as 10% if soils are the major source of atmospheric N2O.
Adolescent mental health and earnings inequalities in adulthood: evidence from the Young-HUNT Study.
Evensen, Miriam; Lyngstad, Torkild Hovde; Melkevik, Ole; Reneflot, Anne; Mykletun, Arnstein
2017-02-01
Previous studies have shown that adolescent mental health problems are associated with lower employment probabilities and a higher risk of unemployment. The evidence on how earnings are affected is much weaker, and few studies have addressed whether any association reflects unobserved characteristics, or whether the consequences of mental health problems vary across the earnings distribution. A population-based Norwegian health survey linked to administrative registry data (N=7885) was used to estimate how adolescents' mental health problems (separate indicators of internalising, conduct, and attention problems, and total sum scores) affect earnings in young adulthood (at age ≥30). We used linear regression with fixed-effects models comparing either students within schools or siblings within families. Unconditional quantile regressions were used to explore differentials across the earnings distribution. Mental health problems in adolescence reduce average earnings in adulthood, and the associations are robust to controls for observed family background and school fixed effects. For some, but not all, mental health problems, the associations are also robust in sibling fixed-effects models, where all stable family factors are controlled. Further, we found much larger earnings losses below the 25th centile. Adolescent mental health problems reduce adult earnings, especially among individuals in the lower tail of the earnings distribution. Preventing mental health problems in adolescence may increase future earnings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
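The sibling fixed-effects comparison amounts to a within transformation: demean outcome and regressors within each family, then run OLS, which sweeps out all stable family-level factors. A minimal sketch on simulated data (illustrative only, not the study's actual specification):

```python
import numpy as np

def within_transform(y, X, group):
    """Demean y and X within groups (e.g. siblings within families);
    OLS on the transformed data is the fixed-effects estimator."""
    y = np.asarray(y, float); X = np.asarray(X, float)
    yd, Xd = y.copy(), X.copy()
    for g in np.unique(group):
        m = group == g
        yd[m] -= y[m].mean()
        Xd[m] -= X[m].mean(axis=0)
    return yd, Xd

def fe_ols(y, X, group):
    yd, Xd = within_transform(y, X, group)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta
```

In the toy test below, a family-level confounder shifts earnings by a constant within each family; pooled OLS would be biased, but the within estimator recovers the true slope.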
Peninsula transportation district commission route deviation feasibility study.
DOT National Transportation Integrated Search
1998-11-01
Many urban transit providers are faced with the problem of declining ridership on traditional fixed route services in low density suburban areas. As a result, many fixed route services in such areas are not economically viable for the transit provider...
Microscopical analysis of synovial fluid wear debris from failing CoCr hip prostheses
NASA Astrophysics Data System (ADS)
Ward, M. B.; Brown, A. P.; Cox, A.; Curry, A.; Denton, J.
2010-07-01
Metal-on-metal hip joint prostheses are now commonly implanted in patients with hip problems. Although hip replacements largely go ahead problem free, some complications can arise, such as infection immediately after surgery and aseptic necrosis caused by vascular complications due to surgery. A recent observation made at Manchester is that some cobalt-chromium (CoCr) implants are causing chronic pain, with the source as yet unidentified. This form of replacement failure is independent of surgeon or hospital, and so some underlying body/implant interface process is thought to be the problem. When the synovial fluid from a failed joint is examined, particles of metal (wear debris) can be found. Transmission Electron Microscopy (TEM) has been used to look at fixed and sectioned samples of the synovial fluid, and this has identified fine (< 100 nm) metal and metal oxide particles within the fluid. TEM EDX and Electron Energy Loss Spectroscopy (EELS) have been employed to examine the composition of the particles, showing them to be chromium rich. This gives rise to concern that the failure mechanism may be associated with the debris.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
NASA Astrophysics Data System (ADS)
Katzav, Eytan
2013-04-01
In this paper, a mode of using the Dynamic Renormalization Group (DRG) method is suggested in order to cope with inconsistent results obtained when applying it to a continuous family of one-dimensional nonlocal models. The key observation is that the correct fixed-point dynamical system has to be identified during the analysis in order to account for all the relevant terms that are generated under renormalization. This is well established for static problems but poorly implemented in dynamical ones. An application of this approach to a nonlocal extension of the Kardar-Parisi-Zhang equation resolves certain problems in one dimension; namely, obviously problematic predictions are eliminated and the existing exact analytic results are recovered.
Unary probabilistic and quantum automata on promise problems
NASA Astrophysics Data System (ADS)
Gainutdinova, Aida; Yakaryılmaz, Abuzer
2018-02-01
We continue the systematic investigation of probabilistic and quantum finite automata (PFAs and QFAs) on promise problems by focusing on unary languages. We show that bounded-error unary QFAs are more powerful than bounded-error unary PFAs and that, contrary to the binary language case, the computational power of Las-Vegas QFAs and bounded-error PFAs is equivalent to the computational power of deterministic finite automata (DFAs). Then, we present a new family of unary promise problems defined with two parameters such that, when one parameter is fixed, QFAs can be exponentially more succinct than PFAs, and, when the other parameter is fixed, PFAs can be exponentially more succinct than DFAs.
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius.
Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
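The inverse step described above, a chi-squared fit of an equivalent dipole to electrode potentials, can be sketched as a scan over candidate locations with a linear solve for the moment, since the potential is linear in the moment at a fixed location. Everything here (infinite homogeneous medium, geometry, conductivity) is an illustrative simplification of the study's bounded-medium setup:

```python
import numpy as np

def dipole_potentials(r0, p, electrodes, sigma=1.0):
    """Infinite-medium point-dipole potential at each electrode:
    V = p . (r - r0) / (4 pi sigma |r - r0|^3)."""
    d = electrodes - r0                        # (n, 3)
    r3 = np.linalg.norm(d, axis=1) ** 3
    return d @ p / (4.0 * np.pi * sigma * r3)

def fit_equivalent_dipole(V, electrodes, candidates, sigma=1.0):
    """Scan candidate locations; the moment enters linearly, so solve a
    3-parameter least squares at each location and keep the best chi^2."""
    best = (np.inf, None, None)
    for r0 in candidates:
        d = electrodes - r0
        G = d / (4.0 * np.pi * sigma *
                 np.linalg.norm(d, axis=1, keepdims=True) ** 3)
        p, *_ = np.linalg.lstsq(G, V, rcond=None)
        chi2 = np.sum((G @ p - V) ** 2)
        if chi2 < best[0]:
            best = (chi2, r0, p)
    return best                                # (chi2, location, moment)
```

For a genuinely distributed source, no candidate location drives the residual to zero, and the best-fit location traces the kind of systematic trajectory the abstract analyses.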
Wall shear stress fixed points in blood flow
NASA Astrophysics Data System (ADS)
Arzani, Amirhossein; Shadden, Shawn
2017-11-01
Patient-specific computational fluid dynamics produces large datasets, and wall shear stress (WSS) is one of the most important parameters due to its close connection with the biological processes at the wall. While some studies have investigated WSS vectorial features, the WSS fixed points have not received much attention. In this talk, we will discuss the importance of WSS fixed points from three viewpoints. First, we will review how WSS fixed points relate to the flow physics away from the wall. Second, we will discuss how certain types of WSS fixed points lead to high biochemical surface concentration in cardiovascular mass transport problems. Finally, we will introduce a new measure to track the exposure of endothelial cells to WSS fixed points.
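A WSS fixed point is a wall location where the shear vector vanishes; its local type follows from the eigenvalues of the Jacobian of the WSS field there. A generic classification sketch (standard dynamical-systems terminology, not code from the talk; degenerate zero-real-part cases are lumped with saddles here):

```python
import numpy as np

def classify_fixed_point(J):
    """Classify a 2-D vector-field fixed point from its Jacobian:
    all eigenvalue real parts negative -> attracting ('sink'),
    all positive -> repelling ('source'), mixed -> 'saddle'."""
    ev = np.linalg.eigvals(J).real
    if np.all(ev < 0):
        return "sink"       # e.g. flow impinging on the wall
    if np.all(ev > 0):
        return "source"     # e.g. near-wall separation
    return "saddle"
```

Attracting fixed points are the ones associated with elevated near-wall residence time, which is one mechanism behind the high surface concentrations mentioned above.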
Effect of Nitrogen Source on Growth and Trichloroethylene Degradation by Methane-Oxidizing Bacteria
Chu, Kung-Hui; Alvarez-Cohen, Lisa
1998-01-01
The effects of nitrogen source on methane-oxidizing bacteria, with respect to cellular growth and trichloroethylene (TCE) degradation ability, were examined. One mixed chemostat culture and two pure type II methane-oxidizing strains, Methylosinus trichosporium OB3b and strain CAC-2, which was isolated from the chemostat culture, were used in this study. All cultures were able to grow with each of three different nitrogen sources: ammonia, nitrate, and molecular nitrogen. Both M. trichosporium OB3b and strain CAC-2 showed slightly lower net cellular growth rates and cell yields but exhibited higher methane uptake rates, levels of poly-β-hydroxybutyrate (PHB) production, and naphthalene oxidation rates when grown under nitrogen-fixing conditions. The TCE-degrading ability of each culture was measured in terms of initial TCE oxidation rates and TCE transformation capacities (mass of TCE degraded/biomass inactivated), measured both with and without external energy sources. Higher initial TCE oxidation rates and TCE transformation capacities were observed in nitrogen-fixing mixed, M. trichosporium OB3b, and CAC-2 cultures than in nitrate- or ammonia-supplied cells. TCE transformation capacities were found to correlate with cellular PHB content in all three cultures. The results of this study suggest that the nitrogen-fixing capabilities of methane-oxidizing bacteria can be used to select for high-activity TCE degraders for the enhancement of bioremediation in fixed-nitrogen-limited environments. PMID:9726896
Efficient Inversion of Mult-frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that the scheme should allow even 3D inverse problems to be solved on a common desktop PC, at least for modest (~100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object-oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems before approaching more computationally cumbersome three-dimensional problems.
FUJIFILM X10 white orbs and DeOrbIt
NASA Astrophysics Data System (ADS)
Dietz, Henry Gordon
2013-01-01
The FUJIFILM X10 is a high-end enthusiast compact digital camera using an unusual sensor design. Unfortunately, upon its Fall 2011 release, the camera quickly became infamous for the uniquely disturbing "white orbs" that often appeared in areas where the sensor was saturated. FUJIFILM's first attempt at a fix was firmware released on February 25, 2012, but it had little effect. In April 2012, a sensor replacement essentially solved the problem. This paper explores the "white orb" phenomenon in detail. After FUJIFILM's attempt at a firmware fix failed, the author decided to create a post-processing tool that could automatically repair existing images. DeOrbIt was released as a free tool on March 7, 2012. To better understand the problem and how to fix it, the WWW form version of the tool logs images, processing parameters, and evaluations by users. The current paper describes the technical problem, the novel computational photography methods used by DeOrbIt to repair affected images, and the public perceptions revealed by this experiment.
NASA Astrophysics Data System (ADS)
Ofek, Eran O.; Zackay, Barak
2018-04-01
Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
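The optimal statistic described above can be sketched in a few lines. This is a minimal illustration, assuming the standard Neyman-Pearson log-likelihood-ratio form for Poisson counts with a known template on a flat background; the pixel values, template, and background level below are purely illustrative, not taken from the paper.

```python
import math

def poisson_detection_statistic(counts, template, background, alpha=1.0):
    """Log-likelihood-ratio statistic for detecting a known template of
    amplitude alpha on a flat Poisson background B:
    ln LR = sum_i [ n_i * ln(1 + alpha*T_i/B) - alpha*T_i ]."""
    return sum(n * math.log(1.0 + alpha * t / background) - alpha * t
               for n, t in zip(counts, template))

# Toy 1-D "image": a 3-pixel point-source template on background B = 2.
template = [1.0, 4.0, 1.0]       # hypothetical PSF, arbitrary units
B = 2.0

with_source = [3, 7, 3]          # counts where a source is present
background_only = [2, 1, 2]      # counts from pure background

s1 = poisson_detection_statistic(with_source, template, B)
s0 = poisson_detection_statistic(background_only, template, B)
```

Thresholding this statistic at a value set by the desired false-alarm probability then yields the detection decision.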
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2017-05-01
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can make an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm that can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.
Volcano fixes nitrogen into plant-available forms
Huebert, B.; Vitousek, P.; Sutton, J.; Elias, T.; Heath, J.; Coeppicus, S.; Howell, S.; Blomquist, B.
1999-01-01
Hawaiian montane ecosystems developing on recent tephra deposits contain more fixed nitrogen than conventional sources can explain. Heath and Huebert (1999) demonstrated that cloud water interception is the mechanism by which this extra nitrogen is deposited, but could not identify its source. We show here that atmospheric dinitrogen is fixed at the surface of active lava flows, producing concentrations of NO which are higher than those found in most urban rush hour air pollution. Over a period of hours this NO is blown away from the island and oxidized to nitrate. Interruptions in the trade wind flow can return this nitrate to the island to be deposited in cloud water. Thus, fixation on active lava flows is able to provide nitrogen to developing ecosystems on flows emplaced earlier.
1991-11-19
grew 253 percent, net assets grew 87 percent, fixed assets grew 155 percent, and average ... vigorous debates among economists a few years ago, has been ... although they only account for 2.7 percent of all industrial enterprises, they possess two-thirds of all fixed assets ... If we are to ... large- and medium-sized enterprises do not appear strong ... fiscal problems are handled on an ad-hoc basis. A fixed base number in contracts sets taxes
Digital computing cardiotachometer
NASA Technical Reports Server (NTRS)
Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)
1973-01-01
A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed frequency source occurring between the two beats; and (2) computes heart rate during the interval between the next two beats by counting the number of times that the interval count must be counted down to zero in order to equal a total count of sixty times (to convert to beats per minute) the frequency of the fixed frequency source.
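The rate computation described above amounts to repeated subtraction of the interval count from sixty times the clock frequency. A minimal sketch (the 1 kHz reference clock and the beat-to-beat counts below are hypothetical values, not from the patent):

```python
def heart_rate_bpm(interval_count, clock_hz):
    """Instantaneous heart rate from the number of fixed-frequency clock
    cycles counted between two successive beats.  Mirrors the hardware:
    count the interval down to zero repeatedly until it exhausts a total
    of 60 * clock_hz, which converts the result to beats per minute."""
    total = 60 * clock_hz
    rate = 0
    remaining = total
    while remaining >= interval_count:
        remaining -= interval_count
        rate += 1
    return rate

# With a (hypothetical) 1 kHz reference clock, an 833-cycle beat-to-beat
# interval corresponds to 60 * 1000 / 833, i.e. 72 beats per minute.
print(heart_rate_bpm(833, 1000))
```

The repeated subtraction is, of course, just integer division `60 * clock_hz // interval_count`; the loop form is shown because it matches the counting hardware the abstract describes.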
A USPL functional system with articulated mirror arm for in-vivo applications in dentistry
NASA Astrophysics Data System (ADS)
Schelle, Florian; Meister, Jörg; Dehn, Claudia; Oehme, Bernd; Bourauel, Christoph; Frentzen, Mathias
Ultra-short pulsed laser (USPL) systems for dental application have overcome many of their initial disadvantages. However, a problem that has not yet been addressed and solved is the beam delivery into the oral cavity. The functional system introduced in this study includes an articulated mirror arm, a scanning system, and a handpiece, allowing for freehand preparations with ultra-short laser pulses. The laser source is an Nd:YVO4 laser emitting pulses with a duration of tp < 10 ps at a repetition rate of up to 500 kHz. The centre wavelength is 1064 nm and the average output power can be tuned up to 9 W. The delivery system consists of an articulated mirror arm, to which a scanning system and a custom-made handpiece are connected, including a 75 mm focussing lens. The whole functional system is compact and moveable. General characteristics such as optical losses and ablation rate are determined and compared to results from a fixed setup on an optical table. Furthermore, classical treatment procedures such as cavity preparation are demonstrated on mammoth ivory. This study indicates that freehand preparation employing a USPL system is possible but challenging, and accompanied by a variety of side effects. The ablation rate with the fixed handpiece is about 10 mm3/min. Factors such as defocussing and blinding affect treatment efficiency. Laser sources with higher average output powers might be needed in order to reach sufficient preparation speeds.
Cost of nitrogen use in the US | Science Inventory | US EPA
Growing human demands for food, fuel and fiber have accelerated the human-driven fixation of reactive nitrogen (N) by at least 10-fold over the last century. This acceleration is one of the most dramatic changes to the sustainability of Earth’s systems. Approximately 65% of the N fixed within the US is used in agriculture as synthetic N fertilizers and by N-fixing crops such as alfalfa and soybeans. Leakage of N from human activities to the environment can result in a host of human health and environmental problems (see figure). These costs include effects on human respiratory health via mortality, hospital visits and loss of work days due to the formation of smog; costs associated with treatment and replacement of drinking water contaminated with nitrate; and losses to recreation and fisheries resulting from algal blooms and hypoxia in freshwater and coastal ecosystems. Often these harmful effects are not reflected in the costs of the food, fuel, and fiber that depend upon N use. A recent US EPA study (Sobota et al. 2015) quantified the potential damage costs associated with N leaked from the following sources: synthetic and manure fertilizers, crop N-fixation, wastewater, and fossil fuel combustion. Each source was traced through the nitrogen cascade to the environment (see figure) and connected to existing data on the costs of specific forms of N in specific situations in order to calculate the annual damage cost of anthropogenic N. Estimates of N l
Kalloniati, Chrysanthi; Krompas, Panagiotis; Karalias, Georgios; Udvardi, Michael K; Rennenberg, Heinz; Herschbach, Cornelia; Flemetakis, Emmanouil
2015-09-01
We combined transcriptomic and biochemical approaches to study rhizobial and plant sulfur (S) metabolism in nitrogen (N) fixing nodules (Fix(+)) of Lotus japonicus, as well as the link of S-metabolism to symbiotic nitrogen fixation and the effect of nodules on whole-plant S-partitioning and metabolism. Our data reveal that N-fixing nodules are thiol-rich organs. Their high adenosine 5'-phosphosulfate reductase activity and strong (35)S-flux into cysteine and its metabolites, in combination with the transcriptional upregulation of several rhizobial and plant genes involved in S-assimilation, highlight the function of nodules as an important site of S-assimilation. The higher thiol content observed in nonsymbiotic organs of N-fixing plants in comparison to uninoculated plants could not be attributed to local biosynthesis, indicating that nodules are an important source of reduced S for the plant, which triggers whole-plant reprogramming of S-metabolism. Enhanced thiol biosynthesis in nodules and their impact on the whole-plant S-economy are dampened in plants nodulated by Fix(-) mutant rhizobia, which in most respects metabolically resemble uninoculated plants, indicating a strong interdependency between N-fixation and S-assimilation. © 2015 American Society of Plant Biologists. All rights reserved.
Fixation and chemical analysis of single fog and rain droplets
NASA Astrophysics Data System (ADS)
Kasahara, M.; Akashi, S.; Ma, C.-J.; Tohno, S.
Over the last decade, the importance of global environmental problems has been recognized worldwide. Acid rain is one of the most important global environmental problems, as is global warming. A grasp of the physical and chemical properties of fog and rain droplets is essential to clarify the physical and chemical processes of acid rain and their effects on forests, materials, and ecosystems. We examined the physical and chemical properties of single fog and rain droplets by applying a fixation technique. The sampling method and treatment procedure to fix the liquid droplets as solid particles were investigated. Small liquid particles such as fog droplets could easily be fixed within a few minutes by exposure to cyanoacrylate vapor. Large liquid particles such as raindrops were also fixed successfully, but some of them were not perfect. A freezing method was applied to fix the large raindrops. Frozen liquid particles remained stable on exposure to cyanoacrylate vapor after freezing. Particle size measurement and elemental analysis of the fixed particles were performed on an individual basis using a microscope and SEM-EDX, particle-induced X-ray emission (PIXE), and micro-PIXE analyses, respectively. The concentration in raindrops depended upon the droplet size and the elapsed time from the beginning of rainfall.
Path planning and Ground Control Station simulator for UAV
NASA Astrophysics Data System (ADS)
Ajami, A.; Balmat, J.; Gauthier, J.-P.; Maillot, T.
In this paper we present a Universal and Interoperable Ground Control Station (UIGCS) simulator for fixed and rotary wing Unmanned Aerial Vehicles (UAVs), and all types of payloads. One of the major constraints is to operate and manage multiple legacy and future UAVs, taking into account compliance with the NATO Combined/Joint Services Operational Environment (STANAG 4586). Another purpose of the station is to assign the UAV a certain degree of autonomy via autonomous planning/replanning strategies. The paper is organized as follows. In Section 2, we describe the non-linear models of the fixed and rotary wing UAVs that we use in the simulator. In Section 3, we describe the simulator architecture, which is based upon interacting modules programmed independently. This simulator is linked with an open source flight simulator to simulate the video flow and the moving target in 3D. To conclude this part, we briefly tackle the problem of connecting the Matlab/Simulink software (used to model the UAV's dynamics) with the simulation of the virtual environment. Section 5 deals with the control module of a flight path of the UAV. The control system is divided into four distinct hierarchical layers: flight path, navigation controller, autopilot, and flight control surfaces controller. In Section 6, we focus on the trajectory planning/replanning question for fixed wing UAVs. Indeed, one of the goals of this work is to increase the autonomy of the UAV. We propose two types of algorithms, based upon 1) the method of the tangent and 2) an original Lyapunov-type method. These algorithms allow the UAV either to join a fixed pattern or to track a moving target. Finally, Section 7 presents simulation results obtained on our simulator, concerning a rather complicated mission scenario.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
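The flavor of a grid-based location problem with fixed costs can be conveyed with a toy instance. This is an illustrative brute-force sketch only; the paper's ILP model and decomposition heuristic are not reproduced, and the grid cells, fixed costs, and demand points below are invented for the example.

```python
from itertools import combinations

# Toy GBLP: choose facility cells to serve demand cells, minimizing
# fixed opening costs plus Manhattan-distance assignment costs.
candidates = [(0, 0), (2, 2), (4, 0)]            # candidate facility cells
fixed_cost = {(0, 0): 5, (2, 2): 4, (4, 0): 5}   # hypothetical opening costs
demands = [(0, 1), (1, 2), (3, 1), (4, 1)]       # demand cells

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def total_cost(open_sites):
    # Each demand is served by its nearest open facility.
    assign = sum(min(manhattan(d, s) for s in open_sites) for d in demands)
    return sum(fixed_cost[s] for s in open_sites) + assign

# Exhaustive search over all nonempty subsets of candidate sites.
best = min((subset
            for r in range(1, len(candidates) + 1)
            for subset in combinations(candidates, r)),
           key=total_cost)
```

Enumeration is exponential in the number of candidate cells, which is exactly why the ILP becomes intractable at scale and a decomposition heuristic is needed.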
47 CFR 27.70 - Information exchange.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES MISCELLANEOUS WIRELESS... activated or an existing base or fixed station is modified: (1) Location; (2) Effective radiated power; (3... identify the source if interference is encountered when the base or fixed station is activated. [72 FR...
Medical Malpractice Reform: A Fix for a Problem Long out of Fashion.
Kirkner, Richard Mark
2017-10-01
State tort reforms have all but relegated the malpractice crisis to the history books. But there's good news for those of you into all things retro: The House of Representatives just voted to fix the malpractice crisis by a 222-197 margin.
Fu, Xiaoran; Apgar, James R.; Keating, Amy E.
2007-01-01
Computational protein design can be used to select sequences that are compatible with a fixed-backbone template. This strategy has been used in numerous instances to engineer novel proteins. However, the fixed-backbone assumption severely restricts the sequence space that is accessible via design. For challenging problems, such as the design of functional proteins, this may not be acceptable. In this paper, we present a method for introducing backbone flexibility into protein design calculations and apply it to the design of diverse helical BH3 ligands that bind to the anti-apoptotic protein Bcl-xL, a member of the Bcl-2 protein family. We demonstrate how normal mode analysis can be used to sample different BH3 backbones, and show that this leads to a larger and more diverse set of low-energy solutions than can be achieved using a native high-resolution Bcl-xL complex crystal structure as a template. We tested several of the designed solutions experimentally and found that this approach worked well when normal mode calculations were used to deform a native BH3 helix structure, but less well when they were used to deform an idealized helix. A subsequent round of design and testing identified a likely source of the problem as inadequate sampling of the helix pitch. In all, we tested seventeen designed BH3 peptide sequences, including several point mutants. Of these, eight bound well to Bcl-xL and four others showed weak but detectable binding. The successful designs showed a diversity of sequences that would have been difficult or impossible to achieve using only a fixed backbone. Thus, introducing backbone flexibility via normal mode analysis effectively broadened the set of sequences identified by computational design, and provided insight into positions important for binding Bcl-xL. PMID:17597151
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving network lifetime. We proposed an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths to solve link disjoint paths in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
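The PSO step at the heart of schemes like this can be sketched generically. This is a minimal, self-contained PSO minimizer under illustrative assumptions; it is not the EMPSO/CTRNN scheme itself, and the quadratic "path cost" surrogate below stands in for a real routing cost function.

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "path cost" surrogate: a smooth function minimized at (1, 2).
best, best_val = pso_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
```

In a routing context the fitness function would instead score a candidate path by transmission cost, residual energy, and traffic ratio, as the abstract describes.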
Clark, William John
2011-01-01
During the 20th century functional appliances evolved from night time wear to more flexible appliances for increased day time wear to full time wear with Twin Block appliances. The current trend is towards fixed functional appliances and this paper introduces the Fixed Twin Block, bonded to the teeth to eliminate problems of compliance in functional therapy. TransForce lingual appliances are pre-activated and may be used in first phase treatment for sagittal and transverse arch development. Alternatively they may be integrated with fixed appliances at any stage of treatment.
Wei, Ruoyu; Cao, Jinde; Alsaedi, Ahmed
2018-02-01
This paper investigates the finite-time synchronization and fixed-time synchronization problems of inertial memristive neural networks with time-varying delays. By utilizing the Filippov discontinuous theory and Lyapunov stability theory, several sufficient conditions are derived to ensure finite-time synchronization of inertial memristive neural networks. Then, for the purpose of making the setting time independent of initial condition, we consider the fixed-time synchronization. A novel criterion guaranteeing the fixed-time synchronization of inertial memristive neural networks is derived. Finally, three examples are provided to demonstrate the effectiveness of our main results.
A numerical solution method for acoustic radiation from axisymmetric bodies
NASA Technical Reports Server (NTRS)
Caruthers, John E.; Raviprakash, G. K.
1995-01-01
A new and very efficient numerical method for solving equations of the Helmholtz type is specialized for problems having axisymmetric geometry. It is then demonstrated by application to the classical problem of acoustic radiation from a vibrating piston set in a stationary infinite plane. The method utilizes 'Green's Function Discretization' to obtain an accurate resolution of the waves using only 2-3 points per wave. Locally valid free space Green's functions, used in the discretization step, are obtained by quadrature. Results are computed for a range of grid spacing/piston radius ratios at a frequency parameter, omega R/c(sub 0), of 2 pi. In this case, the minimum required grid resolution appears to be fixed by the need to resolve a step boundary condition at the piston edge rather than by the length scale imposed by the wavelength of the acoustic radiation. It is also demonstrated that a local near-field radiation boundary procedure allows the domain to be truncated very near the radiating source with little effect on the solution.
Improving the environment in urban areas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adamkus, V.V.
1994-12-31
The author discusses the need for improvements to the environment in urban areas, and efforts being made under the direction of the Environmental Protection Agency (EPA) to address these problems. The impact of the new Clean Air Act on emissions from gasoline powered autos, diesel burning trucks, fixed emission sources ranging from utilities to chemical plants, and consumer products like hair sprays and charcoal starters will work to improve air quality in urban areas. The author also discusses Brownfields Economic Redevelopment Plan efforts being supported by the EPA in a coordinated plan to get municipalities involved in cleaning up areas with pollution, to remove the blight on the urban areas, provide new land for development, and promote additional jobs.
Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses us...
Nonlinear Resonance and Duffing's Spring Equation
ERIC Educational Resources Information Center
Fay, Temple H.
2006-01-01
This note discusses the boundary in the frequency-amplitude plane for boundedness of solutions to the forced Duffing spring equation. For fixed initial conditions and fixed parameter [epsilon], results are reported of a systematic numerical investigation on the global stability of solutions to the initial value problem as the parameters F and…
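The kind of numerical investigation described can be reproduced with a basic integrator. A minimal sketch, assuming the common undamped forced form x'' + x + εx³ = F cos(ωt) (the specific parameter values below are illustrative, not from the note):

```python
import math

def duffing_rk4(eps, F, omega, x0=0.0, v0=0.0, dt=0.01, t_end=50.0):
    """Integrate x'' + x + eps*x^3 = F*cos(omega*t) with classical
    fourth-order Runge-Kutta; returns the trajectory of x."""
    def accel(t, x):
        return F * math.cos(omega * t) - x - eps * x ** 3
    x, v, t = x0, v0, 0.0
    xs = [x]
    while t < t_end:
        k1x, k1v = v, accel(t, x)
        k2x, k2v = v + 0.5 * dt * k1v, accel(t + 0.5 * dt, x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, accel(t + 0.5 * dt, x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, accel(t + dt, x + dt * k3x)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
        xs.append(x)
    return xs

# Off-resonance forcing with a small nonlinearity stays bounded.
xs = duffing_rk4(eps=0.1, F=0.5, omega=2.0)
```

Sweeping F and ω over a grid and recording where max|x| diverges traces out the boundedness boundary in the frequency-amplitude plane that the note studies.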
Aircraft Pitch Control With Fixed Order LQ Compensators
NASA Technical Reports Server (NTRS)
Green, James; Ashokkumar, C. R.; Homaifar, Abdollah
1997-01-01
This paper considers a given set of fixed order compensators for the aircraft pitch control problem. By augmenting compensator variables to the original state equations of the aircraft, a new dynamic model is considered to seek an LQ controller. While the fixed order compensators can achieve a set of desired poles in a specified region, the LQ formulation provides inherent robustness properties. The time response for ride quality is significantly improved with a set of dynamic compensators.
On the complexity and approximability of some Euclidean optimal summing problems
NASA Astrophysics Data System (ADS)
Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.
2016-10-01
The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
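For intuition, the maximum-norm variant on the plane can be solved for tiny inputs by plain enumeration. This is an illustrative brute-force sketch only (exponential time, consistent with the hardness results above); the point set is invented for the example.

```python
from itertools import combinations
import math

def max_norm_subset(points):
    """Exhaustively find the nonempty subset of 2-D points whose vector
    sum has the largest Euclidean norm.  Feasible only for tiny inputs."""
    best, best_norm = None, -1.0
    for r in range(1, len(points) + 1):
        for subset in combinations(points, r):
            sx = sum(p[0] for p in subset)
            sy = sum(p[1] for p in subset)
            norm = math.hypot(sx, sy)
            if norm > best_norm:
                best, best_norm = subset, norm
    return best, best_norm

points = [(1, 0), (2, 0), (-1, 1), (0, -1)]
subset, norm = max_norm_subset(points)
# Optimum here: {(1,0), (2,0), (0,-1)}, giving sum (3,-1), norm sqrt(10).
```

The pseudopolynomial algorithms mentioned for fixed dimension and integer coordinates replace this enumeration with dynamic programming over the bounded set of reachable sum vectors.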
2016-07-22
their corresponding transmission powers. At first glance, one may wonder whether the thinnest path problem is simply a shortest path problem with the... nature of the shortest path problem. Another aspect that complicates the problem is the choice of the transmission power at each node (within a maximum... fixed transmission power at each node (in this case, the resulting hypergraph degenerates to a standard graph), the thinnest path problem is NP
NASA Technical Reports Server (NTRS)
Fieno, D.
1972-01-01
Perturbation theory formulas were derived and applied to determine changes in neutron and gamma-ray doses due to changes in various radiation shield layers for fixed sources. For a given source and detector position, the perturbation method enables dose derivatives with respect to density, or equivalently thickness, for every layer to be determined from one forward and one inhomogeneous adjoint calculation. A direct determination without the perturbation approach would require two forward calculations to evaluate the dose derivative due to a change in a single layer. Hence, the perturbation method for obtaining dose derivatives requires fewer computations for design studies of multilayer shields. For an illustrative problem, a comparison was made of the fractional change in the dose per unit change in the thickness of each shield layer in a two-layer spherical configuration as calculated by perturbation theory and by successive direct calculations; excellent agreement was obtained between the two methods.
An Energy Balance Model to Predict Chemical Partitioning in a Photosynthetic Microbial Mat
NASA Technical Reports Server (NTRS)
Hoehler, Tori M.; Albert, Daniel B.; DesMarais, David J.
2006-01-01
Studies of biosignature formation in photosynthetic microbial mat communities offer potentially useful insights with regards to both solar and extrasolar astrobiology. Biosignature formation in such systems results from the chemical transformation of photosynthetically fixed carbon by accessory microorganisms. This fixed carbon represents a source not only of reducing power, but also energy, to these organisms, so that chemical and energy budgets should be coupled. We tested this hypothesis by applying an energy balance model to predict the fate of photosynthetic productivity under dark, anoxic conditions. Fermentation of photosynthetically fixed carbon is taken to be the only source of energy available to cyanobacteria in the absence of light and oxygen, and nitrogen fixation is the principal energy demand. The alternate fate for fixed carbon is to build cyanobacterial biomass with Redfield C:N ratio. The model predicts that, under completely nitrogen-limited conditions, growth is optimized when 78% of fixed carbon stores are directed into fermentative energy generation, with the remainder allocated to growth. These predictions were compared to measurements made on microbial mats that are known to be both nitrogen-limited and populated by actively nitrogen-fixing cyanobacteria. In these mats, under dark, anoxic conditions, 82% of fixed carbon stores were diverted into fermentation. The close agreement between these independent approaches suggests that energy balance models may provide a quantitative means of predicting chemical partitioning within such systems - an important step towards understanding how biological productivity is ultimately partitioned into biosignature compounds.
Adaptive fixed-time trajectory tracking control of a stratospheric airship.
Zheng, Zewei; Feroskhan, Mir; Sun, Liang
2018-05-01
This paper addresses the fixed-time trajectory tracking control problem of a stratospheric airship. By extending the method of adding a power integrator to a novel adaptive fixed-time control method, the convergence of a stratospheric airship to its reference trajectory is guaranteed to be achieved within a fixed time. The control algorithm is firstly formulated without the consideration of external disturbances to establish the stability of the closed-loop system in fixed-time and demonstrate that the convergence time of the airship is essentially independent of its initial conditions. Subsequently, a smooth adaptive law is incorporated into the proposed fixed-time control framework to provide the system with robustness to external disturbances. Theoretical analyses demonstrate that under the adaptive fixed-time controller, the tracking errors will converge towards a residual set in fixed-time. The results of a comparative simulation study with other recent methods illustrate the remarkable performance and superiority of the proposed control method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Kalloniati, Chrysanthi; Krompas, Panagiotis; Udvardi, Michael K.; Flemetakis, Emmanouil
2015-01-01
We combined transcriptomic and biochemical approaches to study rhizobial and plant sulfur (S) metabolism in nitrogen (N) fixing nodules (Fix+) of Lotus japonicus, as well as the link of S-metabolism to symbiotic nitrogen fixation and the effect of nodules on whole-plant S-partitioning and metabolism. Our data reveal that N-fixing nodules are thiol-rich organs. Their high adenosine 5′-phosphosulfate reductase activity and strong (35)S-flux into cysteine and its metabolites, in combination with the transcriptional upregulation of several rhizobial and plant genes involved in S-assimilation, highlight the function of nodules as an important site of S-assimilation. The higher thiol content observed in nonsymbiotic organs of N-fixing plants in comparison to uninoculated plants could not be attributed to local biosynthesis, indicating that nodules are an important source of reduced S for the plant, which triggers whole-plant reprogramming of S-metabolism. Enhanced thiol biosynthesis in nodules and their impact on the whole-plant S-economy are dampened in plants nodulated by Fix− mutant rhizobia, which in most respects metabolically resemble uninoculated plants, indicating a strong interdependency between N-fixation and S-assimilation. PMID:26296963
Application of Decomposition to Transportation Network Analysis
DOT National Transportation Integrated Search
1976-10-01
This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...
47 CFR 24.5 - Terms and definitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... in the National Geodetic Survey (NGS) data base. (Source: National Geodetic Survey, U.S. Department... antenna site. Base Station. A land station in the land mobile service. Broadband PCS. PCS services.... Fixed Station. A station in the fixed service. Land Mobile Service. A mobile service between base...
47 CFR 24.5 - Terms and definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... in the National Geodetic Survey (NGS) data base. (Source: National Geodetic Survey, U.S. Department... antenna site. Base Station. A land station in the land mobile service. Broadband PCS. PCS services.... Fixed Station. A station in the fixed service. Land Mobile Service. A mobile service between base...
47 CFR 24.5 - Terms and definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... in the National Geodetic Survey (NGS) data base. (Source: National Geodetic Survey, U.S. Department... antenna site. Base Station. A land station in the land mobile service. Broadband PCS. PCS services.... Fixed Station. A station in the fixed service. Land Mobile Service. A mobile service between base...
47 CFR 24.5 - Terms and definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... in the National Geodetic Survey (NGS) data base. (Source: National Geodetic Survey, U.S. Department... antenna site. Base Station. A land station in the land mobile service. Broadband PCS. PCS services.... Fixed Station. A station in the fixed service. Land Mobile Service. A mobile service between base...
NASA Astrophysics Data System (ADS)
Zamanov, A. D.
2002-01-01
Based on the exact three-dimensional equations of continuum mechanics and the Akbarov-Guz' continuum theory, the problem of forced vibrations of a rectangular plate made of a composite material with a periodically curved structure is formulated. The plate is rigidly fixed along the Ox 1 axis. Using the semi-analytic method of finite elements, a numerical procedure is elaborated for investigating this problem. The numerical results on the effect of structural curvings on the stress distribution in the plate under forced vibrations are analyzed. It is shown that the disturbances of the stress σ22 in a hinge-supported plate are greater than in a rigidly fixed one. Also, it is found that the structural curvings considerably affect the stress distribution in plates under both static and dynamic loading.
Social interaction as a heuristic for combinatorial optimization problems
NASA Astrophysics Data System (ADS)
Fontanari, José F.
2010-11-01
We investigate the performance of a variant of Axelrod’s model for dissemination of culture—the Adaptive Culture Heuristic (ACH)—on solving an NP-Complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean Binary Perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents’ strings (or cultures) become more similar to the low-cost strings of their neighbors resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N ∝ F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
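The imitation dynamics described above can be sketched in a few lines. The toy below (all names are ours) replaces the perceptron classification cost with a simple Hamming distance to a hidden target string, keeping the copy-a-differing-bit update intact while remaining self-contained:

```python
import random

def hamming(a, b):
    """Toy cost: Hamming distance to a hidden optimal string."""
    return sum(x != y for x, y in zip(a, b))

def ach(L=4, F=8, steps=20000, seed=1):
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(F)]   # stands in for the perceptron task
    agents = [[rng.randint(0, 1) for _ in range(F)] for _ in range(L * L)]

    def neighbors(i):                                # 4-neighborhood on an L x L torus
        r, c = divmod(i, L)
        return [((r - 1) % L) * L + c, ((r + 1) % L) * L + c,
                r * L + (c - 1) % L, r * L + (c + 1) % L]

    start_best = min(hamming(a, target) for a in agents)
    for _ in range(steps):
        i = rng.randrange(L * L)
        j = rng.choice(neighbors(i))
        if hamming(agents[j], target) < hamming(agents[i], target):
            # imitate the lower-cost neighbor: copy one bit where the strings differ
            k = rng.choice([k for k in range(F) if agents[i][k] != agents[j][k]])
            agents[i][k] = agents[j][k]
    return start_best, min(hamming(a, target) for a in agents)
```

Because an agent copies only from a strictly lower-cost neighbor, the population's best cost can never increase, consistent with the freezing into an absorbing configuration described in the abstract.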
Slot angle detecting method for fiber fixed chip
NASA Astrophysics Data System (ADS)
Zhang, Jiaquan; Wang, Jiliang; Zhou, Chaochao
2018-04-01
The slot angle of a fiber fixed chip has a significant impact on the performance of photoelectric devices. To solve this practical engineering problem, this paper puts forward a detection method based on image processing. Because the images have very low contrast and are hard to segment, an image segmentation method based on edge characteristics is proposed. The edge-line slope k2 of the fixed chip and the slope k1 of the fiber fixing slot line are then extracted and used to calculate the slot angle. Finally, tests of the repeatability and accuracy of the system show that the method is fast, robust, and satisfies the practical demands of fiber fixed chip slot angle detection.
Coordinated perception by teams of aerial and ground robots
NASA Astrophysics Data System (ADS)
Grocholsky, Benjamin P.; Swaminathan, Rahul; Kumar, Vijay; Taylor, Camillo J.; Pappas, George J.
2004-12-01
Air and ground vehicles exhibit complementary capabilities and characteristics as robotic sensor platforms. Fixed wing aircraft offer broad field of view and rapid coverage of search areas. However, minimum operating airspeed and altitude limits, combined with attitude uncertainty, place a lower limit on their ability to detect and localize ground features. Ground vehicles, on the other hand, offer high resolution sensing over relatively short ranges with the disadvantage of slow coverage. This paper presents a decentralized architecture and solution methodology for seamlessly realizing the collaborative potential of air and ground robotic sensor platforms. We provide a framework based on an established approach to the underlying sensor fusion problem. This provides transparent integration of information from heterogeneous sources. An information-theoretic utility measure captures the task objective and robot inter-dependencies. A simple distributed solution mechanism is employed to determine team member sensing trajectories subject to the constraints of individual vehicle and sensor sub-systems. The architecture is applied to a mission involving searching for and localizing an unknown number of targets in a user-specified search area. Results for a team of two fixed wing UAVs and two all terrain UGVs equipped with vision sensors are presented.
Experimental studies of the rotor flow downwash on the Stability of multi-rotor crafts in descent
NASA Astrophysics Data System (ADS)
Veismann, Marcel; Dougherty, Christopher; Gharib, Morteza
2017-11-01
All rotorcraft, including helicopters and multicopters, have the inherent problem of entering rotor downwash during vertical descent. As a result, the craft is subject to highly unsteady flow, called vortex ring state (VRS), which leads to a loss of lift and reduced stability. To date, experimental efforts to investigate this phenomenon have been largely limited to analysis of a single, fixed rotor mounted in a horizontal wind tunnel. Our current work aims to understand the interaction of multiple rotors in vertical descent by mounting a multi-rotor craft in a low-speed vertical wind tunnel. Experiments were performed with a fixed and a rotationally free mounting; the latter allows us to better capture the dynamics of a free-flying drone. The effect of rotor separation on stability, generated thrust, and rotor wake interaction was characterized using force gauge data and PIV analysis for various descent velocities. The results obtained help us better understand fluid-craft interactions of drones in vertical descent and identify possible sources of instability. The presented material is based upon work supported by the Center for Autonomous Systems and Technologies (CAST) at the Graduate Aerospace Laboratories of the California Institute of Technology (GALCIT).
3D shape recovery of smooth surfaces: dropping the fixed-viewpoint assumption.
Moses, Yael; Shimshoni, Ilan
2009-07-01
We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of them are independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and unconstrained illumination directions. The correspondence between such images is hard to compute and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately computes the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Nevertheless, the introduction of the multiview setup, self-occlusions, and regions close to the occluding boundaries are better handled, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.
Employment insecurity and employees' health in Denmark.
Cottini, Elena; Ghinetti, Paolo
2018-02-01
We use register data for Denmark (IDA) merged with the Danish Work Environment Cohort Survey (1995, 2000, and 2005) to estimate the effect of perceived employment insecurity on perceived health for a sample of Danish employees. We consider two health measures from the SF-36 Health Survey Instrument: a vitality scale for general well-being and a mental health scale. We first analyse a summary measure of employment insecurity. Instrumental variables-fixed effects estimates that use firm workforce changes as a source of exogenous variation show that 1 additional dimension of insecurity causes a shift from the median to the 25th percentile in the mental health scale and to the 30th in that of energy/vitality. It also increases by about 6 percentage points the probability to develop severe mental health problems. Looking at single insecurity dimensions by naïve fixed effects, uncertainty associated with the current job is important for mental health. Employability has a sizeable relationship with health and is the only insecurity dimension that matters for the energy and vitality scale. Danish employees who fear involuntary firm internal mobility experience worse mental health. Copyright © 2017 John Wiley & Sons, Ltd.
Global nitrogen overload problem grows critical
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moffat, A.S.
1998-02-13
This article discusses a global problem due to man's intervention in the biosphere resulting from an increased production and usage of products producing nitrogen compounds which can be fixed in ecosystems. This problem was recognized on small scales even in the 1960s, but recent studies on a more global scale show that the amount of nitrogen compounds in river runoff is strongly related to the use of synthetic fertilizers, fossil-fuel power plants, and automobile emissions. The increased fixed nitrogen load is exceeding the ability of some ecosystems to use or break the compounds down, resulting in a change in the types of flora and fauna which are found to inhabit the ecosystems, and leading to decreased biodiversity.
Does food insecurity affect parental characteristics and child behavior? Testing mediation effects.
Huang, Jin; Oshima, Karen M Matta; Kim, Youngmi
2010-01-01
Using two waves of data from the Child Development Supplement in the Panel Study of Income Dynamics, this study investigates whether parental characteristics (parenting stress, parental warmth, psychological distress, and parent's self-esteem) mediate household food insecurity's relations with child behavior problems. Fixed-effects analyses examine data from a low-income sample of 416 children from 249 households. This study finds that parenting stress mediates the effects of food insecurity on child behavior problems. However, two robustness tests produce different results from those of the fixed-effects models. This inconsistency suggests that household food insecurity's relations to the two types of child behavior problems need to be investigated further with a different methodology and other measures.
NASA Technical Reports Server (NTRS)
Miele, A.; Zhao, Z. G.; Lee, W. Y.
1989-01-01
The determination of optimal trajectories for the aeroassisted flight experiment (AFE) is discussed. The AFE refers to the study of the free flight of an autonomous spacecraft, shuttle-launched and shuttle-recovered. Its purpose is to gather atmospheric entry environmental data for use in designing aeroassisted orbital transfer vehicles (AOTV). It is assumed that: (1) the spacecraft is a particle of constant mass; (2) the Earth is rotating with constant angular velocity; (3) the Earth is an oblate planet, and the gravitational potential depends on both the radial distance and the latitude (harmonics of order higher than four are ignored); and (4) the atmosphere is at rest with respect to the Earth. Under these assumptions, the equations of motion for hypervelocity atmospheric flight (which can be used not only for AFE problems, but also for AOT problems and space shuttle problems) are derived in an Earth-fixed system. Transformation relations are supplied which allow one to pass from quantities computed in an Earth-fixed system to quantities computed in an inertial system, and vice versa.
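The Earth-fixed/inertial transformation relations mentioned above reduce, in the simplest sketch of uniform rotation about the polar axis, to a rotation matrix plus a transport-velocity term. The minimal illustration below uses our own function names and ignores precession, nutation, and polar motion:

```python
import math

OMEGA = 7.2921159e-5  # Earth's sidereal rotation rate, rad/s

def rz(theta):
    """Rotation matrix about the z (polar) axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def earth_fixed_to_inertial(r_e, v_e, t):
    """Map position/velocity from a uniformly rotating Earth-fixed frame
    to the inertial frame: r_I = R r_E, v_I = R (v_E + omega x r_E)."""
    R = rz(OMEGA * t)
    w_cross_r = [-OMEGA * r_e[1], OMEGA * r_e[0], 0.0]   # omega x r, omega = (0,0,OMEGA)
    v_rel = [v_e[k] + w_cross_r[k] for k in range(3)]
    return matvec(R, r_e), matvec(R, v_rel)

# a point at rest on the equator moves eastward at about 465 m/s inertially
r_i, v_i = earth_fixed_to_inertial([6378137.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.0)
```

The inverse transformation just applies the transposed rotation and subtracts the transport term, which is the "vice versa" direction mentioned in the abstract.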
NASA Astrophysics Data System (ADS)
Griffin, Christopher; Belmonte, Andrew
2017-05-01
We study the problem of stabilized coexistence in a three-species public goods game in which each species simultaneously contributes to one public good while freeloading off another public good ("cheating"). The proportional population growth is governed by an appropriately modified replicator equation, depending on the returns from the public goods and the cost. We show that the replicator dynamic has at most one interior unstable fixed point and that the population becomes dominated by a single species. We then show that by applying an externally imposed penalty, or "tax" on success can stabilize the interior fixed point, allowing for the symbiotic coexistence of all species. We show that the interior fixed point is the point of globally minimal total population growth in both the taxed and untaxed cases. We then formulate an optimal taxation problem and show that it admits a quasilinearization, resulting in novel necessary conditions for the optimal control. In particular, the optimal control problem governing the tax rate must solve a certain second-order ordinary differential equation.
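A generic replicator step of the kind underlying this model is easy to state; the payoff matrix and tax rule below are placeholders of our own, not the paper's public-goods returns:

```python
def replicator_step(x, fitness, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i (f_i - fbar)."""
    f = [fitness(i, x) for i in range(len(x))]
    fbar = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + dt * xi * (fi - fbar) for xi, fi in zip(x, f)]

def make_fitness(tax):
    # placeholder cyclic payoff for three species; NOT the paper's public-goods returns
    payoff = [[0.0, 1.0, -0.5], [-0.5, 0.0, 1.0], [1.0, -0.5, 0.0]]
    def fitness(i, x):
        raw = sum(payoff[i][j] * x[j] for j in range(3))
        return raw - tax * max(raw, 0.0)   # an externally imposed "tax" on success
    return fitness

x = [0.5, 0.3, 0.2]
for _ in range(1000):
    x = replicator_step(x, make_fitness(0.3))
# the Euler step preserves the simplex: proportions stay positive and sum to 1
```

Note that the step preserves the simplex exactly in exact arithmetic, since the average-fitness term removes the net growth; studying which interior fixed points become stable under the tax is the paper's actual contribution.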
Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.
Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian
2017-01-01
Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
Introduction to the IWA task group on biofilm modeling.
Noguera, D R; Morgenroth, E
2004-01-01
An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
Evaluating Quality of Aged Archival Formalin-Fixed Paraffin-Embedded Samples for RNA-Sequencing
Archival formalin-fixed paraffin-embedded (FFPE) samples offer a vast, untapped source of genomic data for biomarker discovery. However, the quality of FFPE samples is often highly variable, and conventional methods to assess RNA quality for RNA-sequencing (RNA-seq) are not infor...
Radon gas, useful for medical purposes, safely fixed in quartz
NASA Technical Reports Server (NTRS)
Fields, P. R.; Stein, L.; Zirin, M. H.
1966-01-01
Radon gas is enclosed in quartz or glass ampules by subjecting the gas sealed at a low pressure in the ampules to an ionization process. This process is useful for preparing fixed radon sources for radiological treatment of malignancies, without the danger of releasing radioactive gases.
Nonparametric estimation and testing of fixed effects panel data models
Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi
2009-01-01
In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335
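For a single regressor, the within (fixed-effects) transformation behind such estimators amounts to demeaning each individual's series before running least squares. A compact sketch of the parametric case only, with hypothetical data:

```python
def within_beta(y, x):
    """One-regressor fixed-effects (within) estimator.
    y, x: dicts mapping individual id -> list of observations over time."""
    num = den = 0.0
    for i in y:
        ybar = sum(y[i]) / len(y[i])
        xbar = sum(x[i]) / len(x[i])
        # demeaning sweeps out the time-invariant individual effect a_i
        for yt, xt in zip(y[i], x[i]):
            num += (xt - xbar) * (yt - ybar)
            den += (xt - xbar) ** 2
    return num / den

# hypothetical two-individual panel: y_it = a_i + 2 * x_it with a_1 = 10, a_2 = -5
y = {"i1": [12.0, 14.0, 16.0], "i2": [3.0, 5.0, 7.0]}
x = {"i1": [1.0, 2.0, 3.0], "i2": [4.0, 5.0, 6.0]}
beta = within_beta(y, x)   # recovers the slope 2.0 despite the differing intercepts
```

The nonparametric kernel estimator of the paper generalizes exactly this idea: the fixed effects must be removed without assuming the linear form used in the sketch.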
NASA Astrophysics Data System (ADS)
Jiang, Shengqin; Lu, Xiaobo; Cai, Guoliang; Cai, Shuiming
2017-12-01
This paper focuses on the cluster synchronisation problem of coupled complex networks with uncertain disturbances under an adaptive fixed-time control strategy. To begin with, complex dynamical networks with community structure which are subject to uncertain disturbances are taken into account. Then, a novel adaptive control strategy combined with fixed-time techniques is proposed to guarantee that the nodes in the communities converge to the desired states within a settling time. In addition, the stability of the complex error systems is theoretically proved based on the Lyapunov stability theorem. Finally, two examples are presented to verify the effectiveness of the proposed adaptive fixed-time control.
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems with non-identical machines, low-utilization characteristics, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear programming model and solved with a branch and bound algorithm. Fixed delivery times are used as the main constraint, and jobs have different processing times. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are imposed as a constraint.
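To make the tardiness objective concrete, a brute-force single-machine version (a drastic simplification of the paper's ILP, with made-up data) can be written as:

```python
from itertools import permutations

def min_total_tardiness(proc, due):
    """Exhaustive search for single-machine total tardiness with fixed
    delivery (due) times; only viable for toy instance sizes."""
    best = None
    for order in permutations(range(len(proc))):
        t = tardy = 0
        for j in order:
            t += proc[j]                    # completion time of job j
            tardy += max(0, t - due[j])     # tardiness past its fixed delivery time
        if best is None or tardy < best:
            best = tardy
    return best

# three hypothetical jobs: processing times 3, 1, 2 and delivery times 3, 2, 6
best = min_total_tardiness([3, 1, 2], [3, 2, 6])   # → 1 (sequence: job 2, job 1, job 3)
```

A branch and bound solver replaces the full enumeration above with a search tree pruned by lower bounds on tardiness, which is what makes realistic instance sizes tractable.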
New, Novice or Nervous? The "Quick" Guide to the "No-Quick-Fix"
ERIC Educational Resources Information Center
Teaching History, 2016
2016-01-01
"Teaching History" presents "New, Novice or Nervous (NNN)" for those new to the published writings of history teachers. Each problem newcomers wrestle with is one other teachers have wrestled with too. Quick fixes do not exist. But in others' writing, there is something better: "conversations in which other history…
Fixed and equilibrium endpoint problems in uneven-aged stand management
Robert G. Haight; Wayne M. Getz
1987-01-01
Studies in uneven-aged management have concentrated on the determination of optimal steady-state diameter distribution harvest policies for single and mixed species stands. To find optimal transition harvests for irregular stands, either fixed endpoint or equilibrium endpoint constraints can be imposed after finite transition periods. Penalty function and gradient...
Performance Problems in Service Contracting
1988-01-01
...in a National Forest to producing a technical manual for the U.S. Army. Contract types have run the gamut from firm fixed price to various forms of cost plus arrangements, and award has been...
NASA Technical Reports Server (NTRS)
Varaiya, P. P.
1972-01-01
General discussion of the theory of differential games with two players and zero sum. Games starting at a fixed initial state and ending at a fixed final time are analyzed. Strategies for the games are defined. The existence of saddle values and saddle points is considered. A stochastic version of a differential game is used to examine the synthesis problem.
Can evolutionary constraints explain the rarity of nitrogen-fixing trees in high-latitude forests?
Menge, Duncan N L; Crews, Timothy E
2016-09-01
SUMMARY: The rarity of symbiotic nitrogen (N)-fixing trees in temperate and boreal ('high-latitude') forests is curious. One explanation - the evolutionary constraints hypothesis - posits that high-latitude N-fixing trees are rare because few have evolved. Here, we consider traits necessary for high-latitude N-fixing trees. We then use recent developments in trait evolution to estimate that > 2000 and > 500 species could have evolved from low-latitude N-fixing trees and high-latitude N-fixing herbs, respectively. Evolution of N-fixing from nonfixing trees is an unlikely source of diversity. Dispersal limitation seems unlikely to limit high-latitude N-fixer diversity. The greater number of N-fixing species predicted to evolve than currently inhabit high-latitude forests suggests a greater role for ecological than evolutionary constraints. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Chaos in a restricted problem of rotation of a rigid body with a fixed point
NASA Astrophysics Data System (ADS)
Borisov, A. V.; Kilin, A. A.; Mamaev, I. S.
2008-06-01
In this paper, we consider the transition to chaos in the phase portrait of a restricted problem of rotation of a rigid body with a fixed point. Two interrelated mechanisms responsible for chaotization are indicated: (1) the growth of the homoclinic structure and (2) the development of cascades of period doubling bifurcations. On the zero level of the area integral, an adiabatic behavior of the system (as the energy tends to zero) is noted. Meander tori induced by the break of the torsion property of the mapping are found.
Unequal-area, fixed-shape facility layout problems using the firefly algorithm
NASA Astrophysics Data System (ADS)
Ingole, Supriya; Singh, Dinesh
2017-07-01
In manufacturing industries, the facility layout design is a very important task, as it is concerned with the overall manufacturing cost and profit of the industry. The facility layout problem (FLP) is solved by arranging the departments or facilities of known dimensions on the available floor space. The objective of this article is to implement the firefly algorithm (FA) for solving unequal-area, fixed-shape FLPs and optimizing the costs of total material handling and transportation between the facilities. The FA is a nature-inspired algorithm and can be used for combinatorial optimization problems. Benchmark problems from the previous literature are solved using the FA. To check its effectiveness, it is implemented to solve large-sized FLPs. Computational results obtained using the FA show that the algorithm is less time consuming and the total layout costs for FLPs are better than the best results achieved so far.
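The firefly update rule used in such studies moves each candidate solution toward every brighter (lower-cost) one, with attraction decaying in distance. The minimal continuous-optimization sketch below uses generic parameters of our own choosing, not the article's FLP encoding (where each firefly would store facility coordinates):

```python
import math, random

def firefly_min(f, dim=2, n=15, iters=60, beta0=1.0, gamma=0.01, alpha=0.05, seed=0):
    """Minimal firefly algorithm for continuous minimisation (a sketch)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best0 = min(f(x) for x in X)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(X[j]) < f(X[i]):          # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attraction decays with distance
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
    return best0, min(f(x) for x in X)

# toy objective: the sphere function, in place of a layout-cost evaluation
initial_best, final_best = firefly_min(lambda p: sum(v * v for v in p))
```

Since a firefly moves only when a strictly brighter one exists, the population's best objective value never worsens during the run.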
Extension of the SIESTA MHD equilibrium code to free-plasma-boundary problems
Peraza-Rodriguez, Hugo; Reynolds-Barredo, J. M.; Sanchez, Raul; ...
2017-08-28
Here, SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for three-dimensional magnetic configurations. Since SIESTA does not assume closed magnetic surfaces, the solution can exhibit magnetic islands and stochastic regions. In its original implementation SIESTA addressed only fixed-boundary problems. That is, the shape of the plasma edge, assumed to be a magnetic surface, was kept fixed as the solution iteratively converges to equilibrium. This condition somewhat restricts the possible applications of SIESTA. In this paper we discuss an extension that will enable SIESTA to address free-plasma-boundary problems, opening up the possibility of investigating problems in which the plasma boundary is perturbed either externally or internally. As an illustration, SIESTA is applied to a configuration of the W7-X stellarator.
Behaviorism: part of the problem or part of the solution.
Holland, J G
1978-01-01
The form frequently taken by behavior-modification programs is analyzed in terms of the parent science, Behaviorism. Whereas Behaviorism assumes that behavior is the result of contingencies, and that lasting behavior change involves changing the contingencies that give rise to and support the behavior, most behavior-modification programs merely arrange special contingencies in a special environment to eliminate the "problem" behavior. Even when the problem behavior is as widespread as alcoholism and crime, behavior modifiers focus on "fixing" the alcoholic and the criminal, not on changing the societal contingencies that prevail outside the therapeutic environment and continue to produce alcoholics and criminals. The contingencies that shape this method of dealing with behavioral problems are also analyzed, and this analysis leads to a criticism of the current social structure as a behavior control system. Although applied behaviorists have frequently focused on fixing individuals, the science of Behaviorism provides the means to analyze the structures, the system, and the forms of societal control that produce the "problems". PMID:649524
An approximation algorithm for the Noah's Ark problem with random feature loss.
Hickey, Glenn; Blanchette, Mathieu; Carmi, Paz; Maheshwari, Anil; Zeh, Norbert
2011-01-01
The phylogenetic diversity (PD) of a set of species is a measure of their evolutionary distinctness based on a phylogenetic tree. PD is increasingly being adopted as an index of biodiversity in ecological conservation projects. The Noah's Ark Problem (NAP) is an NP-Hard optimization problem that abstracts a fundamental conservation challenge in asking to maximize the expected PD of a set of taxa given a fixed budget, where each taxon is associated with a cost of conservation and a probability of extinction. Only simplified instances of the problem, where one or more parameters are fixed as constants, have as of yet been addressed in the literature. Furthermore, it has been argued that PD is not an appropriate metric for models that allow information to be lost along paths in the tree. We therefore generalize the NAP to incorporate a proposed model of feature loss according to an exponential distribution and term this problem NAP with Loss (NAPL). In this paper, we present a pseudopolynomial time approximation scheme for NAPL.
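The expected-PD objective at the heart of the NAP can be computed on a fixed tree by noting that a branch contributes to the spanned subtree exactly when at least one descendant taxon survives. A toy computation under independent extinctions (this is the plain expected-PD quantity, not the NAPL feature-loss model or the approximation scheme):

```python
def expected_pd(edges, p_survive):
    """Expected phylogenetic diversity under independent leaf survival:
    each branch counts its length times the probability that at least one
    leaf below it survives. edges: (child, parent, length) triples."""
    children = {}
    for child, parent, _ in edges:
        children.setdefault(parent, []).append(child)

    def q_extinct(node):
        # probability that NO descendant leaf of `node` survives
        if node not in children:                 # node is a leaf
            return 1.0 - p_survive[node]
        prod = 1.0
        for c in children[node]:
            prod *= q_extinct(c)
        return prod

    return sum(length * (1.0 - q_extinct(child))
               for child, _, length in edges)

# hypothetical cherry: leaves A, B (survival 0.5 each) joined at C, root R
edges = [("A", "C", 1.0), ("B", "C", 1.0), ("C", "R", 2.0)]
value = expected_pd(edges, {"A": 0.5, "B": 0.5})   # → 2.5
```

The optimization problem then asks which survival probabilities to raise, at known costs, so that this expectation is maximized within the budget.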
NASA Astrophysics Data System (ADS)
Mehic, M.; Fazio, P.; Voznak, M.; Partila, P.; Komosny, D.; Tovarek, J.; Chmelikova, Z.
2016-05-01
A mobile ad hoc network is a collection of mobile nodes which communicate without a fixed backbone or centralized infrastructure. Due to the frequent mobility of nodes, routes connecting two distant nodes may change. Therefore, it is not possible to establish a priori fixed paths for message delivery through the network. Because of its importance, routing is the most studied problem in mobile ad hoc networks. In addition, if Quality of Service (QoS) is demanded, one must guarantee the QoS not only over a single hop but over an entire wireless multi-hop path, which may not be a trivial task. In turn, this requires the propagation of QoS information within the network. The key to the support of QoS reporting is QoS routing, which provides path QoS information at each source. To support QoS for real-time traffic one needs to know not only the minimum delay on the path to the destination but also the bandwidth available on it. Therefore, throughput, end-to-end delay, and routing overhead are the traditional performance metrics used to evaluate the performance of a routing protocol. To obtain additional information about the link, most quality-link metrics are based on calculating the loss probabilities of links by broadcasting probe packets. In this paper, we address the problem of including multiple routing metrics in existing routing packets that are broadcast through the network. We evaluate the efficiency of such an approach with a modified version of the DSDV routing protocol in the ns-3 simulator.
Levas, Stephen J; Grottoli, Andréa G; Hughes, Adam; Osburn, Christopher L; Matsui, Yohei
2013-01-01
Mounding corals survive bleaching events in greater numbers than branching corals. However, no study to date has determined the underlying physiological and biogeochemical trait(s) responsible for mounding coral holobiont resilience to bleaching. Furthermore, the potential of dissolved organic carbon (DOC) as a source of fixed carbon to bleached corals has never been determined. Here, Porites lobata corals were experimentally bleached for 23 days and then allowed to recover for 0, 1, 5, and 11 months. At each recovery interval a suite of analyses was performed to assess recovery (photosynthesis, respiration, chlorophyll a, energy reserves, tissue biomass, calcification, δ13C of the skeleton, and δ13C and δ15N of the animal host and endosymbiont fractions). Furthermore, at 0 months of recovery, the assimilation of photosynthetically acquired and zooplankton-feeding acquired carbon into the animal host, endosymbiont, skeleton, and coral-mediated DOC was measured via 13C-pulse-chase labeling. During the first month of recovery, energy reserves and tissue biomass in bleached corals were maintained despite reductions in chlorophyll a, photosynthesis, and the assimilation of photosynthetically fixed carbon. At the same time, P. lobata corals catabolized carbon acquired from zooplankton and seemed to take up DOC as a source of fixed carbon. All variables that were negatively affected by bleaching recovered within 5 to 11 months. Thus, bleaching resilience in the mounding coral P. lobata is driven by its ability to actively catabolize zooplankton-acquired carbon and seemingly utilize DOC as a significant fixed carbon source, facilitating the maintenance of energy reserves and tissue biomass. With the frequency and intensity of bleaching events expected to increase over the next century, coral diversity on future reefs may favor not only mounding morphologies but species like P. lobata, which have the ability to utilize heterotrophic sources of fixed carbon that minimize the impact of bleaching and promote fast recovery.
PMID:23658817
Variational algorithms for nonlinear smoothing applications
NASA Technical Reports Server (NTRS)
Bach, R. E., Jr.
1977-01-01
A variational approach is presented for solving a nonlinear, fixed-interval smoothing problem with application to offline processing of noisy data for trajectory reconstruction and parameter estimation. The nonlinear problem is solved as a sequence of linear two-point boundary value problems. Second-order convergence properties are demonstrated. Algorithms for both continuous and discrete versions of the problem are given, and example solutions are provided.
A slotted access control protocol for metropolitan WDM ring networks
NASA Astrophysics Data System (ADS)
Baziana, P. A.; Pountourakis, I. E.
2009-03-01
In this study we focus on the serious scalability problems that many access protocols for WDM ring networks introduce due to the use of a dedicated wavelength per access node for either transmission or reception. We propose an efficient slotted MAC protocol suitable for WDM ring metropolitan area networks. The proposed network architecture employs a separate wavelength for control information exchange prior to the data packet transmission. Each access node is equipped with a pair of tunable transceivers for data communication and a pair of fixed tuned transceivers for control information exchange. Also, each access node includes a set of fixed delay lines for synchronization, holding the data packets while the control information is processed. An efficient access algorithm is applied to avoid both data-wavelength and receiver collisions. In our protocol, each access node is capable of transmitting and receiving over any of the data wavelengths, addressing the scalability issues. Two different slot reuse schemes are assumed: the source and the destination stripping schemes. For both schemes, performance measures are evaluated via an analytic model. The analytical results are validated by a discrete event simulation model that uses Poisson traffic sources. Simulation results show that the proposed protocol achieves efficient bandwidth utilization, especially under high load. Also, comparative simulation results show that our protocol achieves significant performance improvement compared with other WDMA protocols which restrict transmission to a dedicated data wavelength. Finally, performance is evaluated for various buffer sizes and numbers of access nodes and data wavelengths.
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
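The half-normal detection model described above is straightforward to simulate. The sketch below (illustrative names, not the authors' code) estimates the average per-occasion detection probability for birds placed uniformly within the fixed radius w:

```python
import math, random

def mean_detection_prob(sigma, w, n=200_000, seed=1):
    """Monte Carlo estimate of the average single-occasion detection
    probability for birds uniform in a disc of radius w, with half-normal
    detection g(r) = exp(-r^2 / (2 sigma^2)) and intercept g(0) = 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # uniform point in the disc: radius r has density 2r / w^2
        r = w * math.sqrt(rng.random())
        total += math.exp(-r * r / (2.0 * sigma * sigma))
    return total / n
```

For sigma = 50 m and w = 100 m this gives about 0.43, agreeing with the closed form (2σ²/w²)(1 − e^{−w²/(2σ²)}); distant birds drag the average well below g(0) = 1, which is the distance-related heterogeneity at issue.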
Inorganic, fixed nitrogen from agricultural settings often is introduced to first-order streams via surface runoff and shallow ground-water flow. Best management practices for limiting the flux of fixed N to surface waters often include buffers such as wetlands. However, the eff...
Evaluation of Fixed Momentary DRO Schedules under Signaled and Unsignaled Arrangements
ERIC Educational Resources Information Center
Hammond, Jennifer L.; Iwata, Brian A.; Fritz, Jennifer N.; Dempsey, Carrie M.
2011-01-01
Fixed momentary schedules of differential reinforcement of other behavior (FM DRO) generally have been ineffective as treatment for problem behavior. Because most early research on FM DRO included presentation of a signal at the end of the DRO interval, it is unclear whether the limited effects of FM DRO were due to (a) the momentary response…
ERIC Educational Resources Information Center
Tomlin, Michelle; Reed, Phil
2012-01-01
The effects of fixed-time (FT) reinforcement schedules on the disruptive behavior of 4 students in special education classrooms were studied. Attention provided on FT schedules in the context of a multiple-baseline design across participants substantially decreased all students' challenging behavior. Disruptive behavior was maintained at levels…
An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals
ERIC Educational Resources Information Center
Verhelst, Norman D.
2008-01-01
Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
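For context, the basic MCMC move whose slow mixing motivates this work is the classic 2×2 "checkerboard" swap, which leaves all row and column sums of a 0/1 matrix unchanged. A minimal sketch of that baseline chain (not Verhelst's improved algorithm):

```python
import random

def checkerboard_step(m):
    """One MCMC step preserving row and column sums of a 0/1 matrix:
    pick two rows and two columns; if the 2x2 submatrix is a checkerboard
    ([[1,0],[0,1]] or [[0,1],[1,0]]), flip it to the other pattern."""
    rows, cols = len(m), len(m[0])
    r1, r2 = random.sample(range(rows), 2)
    c1, c2 = random.sample(range(cols), 2)
    a, b = m[r1][c1], m[r1][c2]
    c, d = m[r2][c1], m[r2][c2]
    if a == d and b == c and a != b:  # checkerboard pattern found
        m[r1][c1], m[r1][c2] = b, a
        m[r2][c1], m[r2][c2] = d, c
    return m
```

Because many proposed pairs contain no checkerboard, the chain often stays put, which is one reason such samplers converge slowly and need long burn-in periods.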
Fixed-Tuition Pricing: A Solution that May Be Worse than the Problem
ERIC Educational Resources Information Center
Morphew, Christopher C.
2007-01-01
Fixed-tuition plans, which vary in specifics from institution to institution, rely on a common principle: Students pay the same annual tuition costs over a pre-determined length of time, ostensibly the time required to earn an undergraduate degree. Students, parents, and policymakers are demonstrating growing interest in such plans. At face value,…
Commissioning of two RF operation modes for RF negative ion source experimental setup at HUST
NASA Astrophysics Data System (ADS)
Li, D.; Chen, D.; Liu, K.; Zhao, P.; Zuo, C.; Wang, X.; Wang, H.; Zhang, L.
2017-08-01
An RF-driven negative ion source experimental setup, without a cesium oven and an extraction system, has been built at Huazhong University of Science and Technology (HUST). The working gas is hydrogen, and the typical operational gas pressure is 0.3 Pa. The RF generator is capable of delivering up to 20 kW at 0.9-1.1 MHz and has two operation modes, the fixed-frequency mode and the auto-tuning mode. In the fixed-frequency mode, it outputs a steady RF forward power (Pf) at a fixed frequency. In the auto-tuning mode, it adjusts the operating frequency to seek and track the minimum standing wave ratio (SWR) during plasma discharge. To achieve fast frequency tuning, the RF signal source adopts a direct digital synthesizer (DDS). To withstand high SWR during the discharge, a tetrode amplifier is chosen as the final stage amplifier. The trend of the maximum power reflection coefficient |ρ|² at plasma ignition is presented at the fixed frequency of 1.02 MHz with Pf increasing from 5 kW to 20 kW, which shows the maximum |ρ|² tends to be "steady" under high RF power. The experiments in the auto-tuning mode failed due to over-current protection of the screen grid. The possible reason is the relatively large equivalent anode impedance caused by the frequency tuning. The corresponding analysis and a possible solution are presented.
ERIC Educational Resources Information Center
Visich, Marian, Jr.
1984-01-01
Discusses strategies used in a course for nonengineering students which consists of case studies of such sociotechnological problems as automobile safety, water pollution, and energy. Solutions to the problems are classified according to three approaches: education, government regulation, and technological fix. (BC)
Higher-Order Thinking Development through Adaptive Problem-Based Learning
ERIC Educational Resources Information Center
Raiyn, Jamal; Tilchin, Oleg
2015-01-01
In this paper we propose an approach to organizing Adaptive Problem-Based Learning (PBL) leading to the development of Higher-Order Thinking (HOT) skills and collaborative skills in students. Adaptability of PBL is expressed by changes in fixed instructor assessments caused by the dynamics of developing HOT skills needed for problem solving,…
Optimization in First Semester Calculus: A Look at a Classic Problem
ERIC Educational Resources Information Center
LaRue, Renee; Infante, Nicole Engelke
2015-01-01
Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…
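As a concrete instance (the rectangular variant, for illustration only): enclosing a fixed area A with a rectangle of width x requires perimeter P(x) = 2x + 2A/x, and setting P'(x) = 2 − 2A/x² = 0 gives x = √A, i.e. a square.

```python
import math

def min_fencing(area):
    """Minimum-perimeter rectangle enclosing a fixed area: with width x,
    P(x) = 2x + 2*area/x is minimized at x = sqrt(area), a square.
    Returns (optimal side length, minimal perimeter)."""
    x = math.sqrt(area)
    return x, 2.0 * x + 2.0 * area / x
```

For example, `min_fencing(100.0)` returns `(10.0, 40.0)`: a 10-by-10 square beats any other rectangle of area 100.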
NASA Technical Reports Server (NTRS)
Yahsi, O. S.; Erdogan, F.
1983-01-01
A cylindrical shell having a very stiff end plate or flange is considered. It is assumed that near the end the cylinder contains an axial flaw which may be modeled as a part-through surface crack or a through crack. The effect of the end constraint on the stress intensity factor, which is the main fracture mechanics parameter, is studied. The applied loads acting on the cylinder are assumed to be axisymmetric. Thus the crack problem under consideration is symmetric with respect to the plane of the crack, and consequently only the Mode I stress intensity factors are nonzero. With this limitation, the general perturbation problem for a cylinder with a built-in end containing an axial crack is considered. Reissner's shell theory is used to formulate the problem. The part-through crack problem is treated by using a line spring model. In the case of a crack tip terminating at the fixed end, it is shown that the integral equations of the shell problem have the same generalized Cauchy kernel as the corresponding plane stress elasticity problem.
NASA Astrophysics Data System (ADS)
Sakakibara, Kazutoshi; Tian, Yajie; Nishikawa, Ikuko
We discuss the planning of transportation by trucks over a multi-day period. Each truck collects loads from suppliers and delivers them to assembly plants or a truck terminal. By exploiting the truck terminal as temporary storage, we aim to increase the load ratio of each truck and to minimize the lead time for transportation. In this paper, we present a mixed integer programming model which represents each product explicitly, and discuss the decomposition of the problem into a problem of delivery and storage and a problem of vehicle routing. Based on this model, we propose a relax-and-fix type heuristic in which decision variables are fixed one by one by mathematical programming techniques such as branch-and-bound methods.
Diameter-Constrained Steiner Tree
NASA Astrophysics Data System (ADS)
Ding, Wei; Lin, Guohui; Xue, Guoliang
Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem. It is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.
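The diameter used in the constraint (maximum total weight of a leaf-to-leaf path) can be computed for any positive-weight tree with the standard double-DFS trick; a sketch for intuition, not part of the paper's FPTAS:

```python
from collections import defaultdict

def tree_diameter(edges):
    """Maximum total weight of a path between two leaves of a tree with
    positive edge weights. edges is a list of (u, v, weight) tuples.
    Double DFS: the farthest node from any start is one endpoint of a
    diameter; the farthest node from that endpoint gives its length."""
    g = defaultdict(list)
    for u, v, w in edges:
        g[u].append((v, w))
        g[v].append((u, w))

    def farthest(src):
        best = (0.0, src)
        stack = [(src, None, 0.0)]
        while stack:
            node, parent, d = stack.pop()
            if d > best[0]:
                best = (d, node)
            for nxt, w in g[node]:
                if nxt != parent:
                    stack.append((nxt, node, d + w))
        return best

    _, a = farthest(next(iter(g)))  # endpoint of some diameter
    d, _ = farthest(a)              # its length
    return d
```

With positive weights both endpoints of a diameter are necessarily leaves, so this matches the definition in the abstract.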
Affirmative action policies promote women and do not harm efficiency in the laboratory.
Balafoutas, Loukas; Sutter, Matthias
2012-02-03
Gender differences in choosing to enter competitions are one source of unequal labor market outcomes concerning wages and promotions. Given that studying the effects of policy interventions to support women is difficult with field data because of measurement problems and potential lack of control, we evaluated, in a set of controlled laboratory experiments, four interventions: quotas, where one of two winners of a competition must be female; two variants of preferential treatment, where a fixed increment is added to women's performance; and repetition of the competition, where a second competition takes place if no woman is among the winners. Compared with no intervention, all interventions encourage women to enter competitions more often, and performance is at least equally good, both during and after the competition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenwald, P.T.
In the post Cold War era, the East-West conflict may be succeeded by a new confrontation which pits an industrialized North against a developing South. In June 1992, world attention was fixed on the Earth Summit in Rio de Janeiro. This event marked a milestone in global environmental awareness; but just as the end of the Cold War has provided new opportunities for the US, the world is now faced with new sources of conflict which have advanced to the forefront of the national security debate. Among the new sources of conflict, environmental problems are rapidly becoming preeminent. Within national security debates, those environmental problems which respect no international boundary are of particular concern. Worldwide deforestation, and the related issues of global warming and the loss of biodiversity, represent a clear threat to national security. Two percent of the Earth's rainforests are lost each year; one 'football field' is lost each second. Deforestation has already led to conflict and instability within several regions of the world including Southeast Asia. The United States must recognize the character and dynamics of these new sources of conflict in order to successfully realize its policy aims in national security. The US should preempt conflict through cooperation and develop a shared concern for the environment throughout the world. The US military may play a key role in this effort. Rainforest, Deforestation, Tropical timber, Logging, Southeast Asia, Philippines, Malaysia, Indonesia, Thailand, Burma, Laos, Japan, Cambodia, Vietnam, Human rights, Plywood, Pulp, Paper, World Bank, U.S. Agency for International Development.
Cheng, Wan-Ju; Cheng, Yawen
2017-07-01
Shift work is associated with adverse physical and psychological health outcomes. However, the independent health effects of night work and rotating shift on workers' sleep and mental health risks and the potential gender differences have not been fully evaluated. We used data from a nationwide survey of representative employees of Taiwan in 2013, consisting of 16 440 employees. Participants reported their work shift patterns 1 week prior to the survey, which were classified into the four following shift types: fixed day, rotating day, fixed night and rotating night shifts. Also obtained were self-reported sleep duration, presence of insomnia, burnout and mental disorder assessed by the Brief Symptom Rating Scale. Among all shift types, workers with fixed night shifts were found to have the shortest duration of sleep, highest level of burnout score, and highest prevalence of insomnia and minor mental disorders. Gender-stratified regression analyses with adjustment of age, education and psychosocial work conditions showed that both in male and female workers, fixed night shifts were associated with greater risks for short sleep duration (<7 hours per day) and insomnia. In female workers, fixed night shifts were also associated with increased risks for burnout and mental disorders, but after adjusting for insomnia, the associations between fixed night shifts and poor mental health were no longer significant. The findings of this study suggested that a fixed night shift was associated with greater risks for sleep and mental health problems, and the associations might be mediated by sleep disturbance. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
End of Life Decisions for Sealed Radioactive Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, Kathryn H.
Sealed radioactive sources are encountered in a wide variety of settings – from household smoke detectors and instrument check sources, through fixed industrial gauges, industrial radiography and well logging sources, to irradiators and medical teletherapy devices. In general, the higher the level of activity in the sealed source, the stricter the regulatory control that is applied to its use, control and ultimate disposition. Lower levels of attention and oversight can and do lead to sources ending up in the wrong place – as orphan sources in uncontrolled storage, disposed in a sanitary landfill, melted down in metal recycling operations and incorporated into consumer products, or handled by an unsuspecting member of the public. There are a range of issues that contribute to the problem of improper disposal of sealed sources, and, in particular, to disused source disposal. General licensed sources and devices are particularly at risk of being disposed of incorrectly. Higher activity general licensed sources, although required to be registered with the Nuclear Regulatory Commission (NRC) or Agreement State, receive limited regulatory oversight and are not tracked on a national scale. Users frequently do not consider the full life-cycle costs when procuring sources or devices and discover that they cannot afford to package, transport and dispose of their sources properly. The NRC requirements for decommissioning funding plans and financial assurance are not adequate to cover sealed source transport and disposal costs fully. While there are regulatory limits for storage of disused sources, enforcement is limited and there is no financial incentive for owners to dispose of the sources. In some cases, the lack of availability of approved Department of Transportation (DOT) Type B shipping casks also presents a barrier to sealed source disposal. The report of the Disused Sources Working Group does an excellent job of framing these issues. This article reviews both the issues and the report's recommendations, which are designed to improve sealed source control and encourage proper disposal of disused sources.
Well-fixed acetabular component retention or replacement: the whys and the wherefores.
Blaha, J David
2002-06-01
Occasionally the adult reconstructive surgeon is faced with a well-fixed acetabular component that is associated with an arthroplasty problem that ordinarily would require removal and replacement of the cup. Removal of a well-fixed cup is associated with considerable morbidity in bone loss, particularly in the medial wall of the acetabulum. In such a situation, retention of the cup with exchange only of the polyethylene liner may be possible. As preparation for a prospective study, I informally reviewed my experience of cup retention or replacement in revision total hip arthroplasty. An algorithm for retaining or revising a well-fixed acetabular component is presented here. Copyright 2002, Elsevier Science (USA).
Gauge fixing and BFV quantization
NASA Astrophysics Data System (ADS)
Rogers, Alice
2000-01-01
Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.
Analysis of Phoenix Anomalies and IV and V Findings Applied to the GRAIL Mission
NASA Technical Reports Server (NTRS)
Larson, Steve
2012-01-01
Analysis of patterns in IV&V findings and their correlation with post-launch anomalies allowed GRAIL to make more efficient use of IV&V services: fewer issues, a higher fix rate, better communication, and an increased volume of potential issues vetted at lower cost. It remains hard to make predictions of post-launch performance based on IV&V findings. Phoenix made sound fix/use-as-is decisions; the fixes eliminated some problems, though the effect is hard to quantify. There was broad predictive success in one area, but an inverse relationship in others.
Steep radio spectra in high-redshift radio galaxies
NASA Technical Reports Server (NTRS)
Krolik, Julian H.; Chen, Wan
1991-01-01
The generic spectrum of an optically thin synchrotron source steepens by 0.5 in spectral index from low frequencies to high whenever the source lifetime is greater than the energy-loss timescale for at least some of the radiating electrons. Three effects tend to decrease the frequency nu_b of this spectral bend as the source redshift increases: (1) for fixed bend frequency nu* in the rest frame, nu_b = nu*/(1 + z); (2) losses due to inverse Compton scattering off the microwave background rise with redshift as (1 + z)^4, so that, for fixed residence time in the radiating region, the energy of the lowest energy electron that can cool falls rapidly with increasing redshift; and (3) if the magnetic field is proportional to the equipartition field and the emitting volume is fixed or slowly varying, flux-limited samples induce a selection effect favoring low nu* at high z, because higher redshift sources require higher emissivity to be included in the sample, and hence have stronger implied fields and more rapid synchrotron losses. A combination of these effects may explain the trend observed in the 3CR sample for higher redshift radio galaxies to have steeper spectra, and the successful use of ultrasteep spectrum surveys to locate high-redshift galaxies.
Wong, P P; Stenberg, N E; Edgar, L
1980-03-01
A bacterium with the taxonomic characteristics of the genus Azospirillum was isolated from cellulolytic N2-fixing mixed cultures. Its characteristics fit the descriptions of both Azospirillum lipoferum (Beijerinck) comb. nov. and Azospirillum brasilense sp. nov. It may be a variant strain of A. lipoferum. In mixed cultures with cellulolytic organisms, the bacterium grew and fixed N2 with cellulose as the sole source of energy and carbon. The mixed cultures used cellulose from leaves of wheat (Triticum aestivum L.), corn (Zea mays L.), and big bluestem grass (Andropogon gerardii Vitm). Microaerophilic N2-fixing bacteria of the genus Azospirillum, such as the bacterium we isolated, may be important contributors of fixed N2 in soil with partial anaerobiosis and cellulose decomposition.
Optimum rocket propulsion for energy-limited transfer
NASA Technical Reports Server (NTRS)
Zuppero, Anthony; Landis, Geoffrey A.
1991-01-01
In order to effect large-scale return of extraterrestrial resources to Earth orbit, it is desirable to optimize the propulsion system to maximize the mass of payload returned per unit energy expended. This optimization problem is different from conventional rocket propulsion optimization. A rocket propulsion system consists of an energy source plus reaction mass. In a conventional chemical rocket, the energy source and the reaction mass are the same. For the transportation system required here, however, the best system performance is achieved if the reaction mass is drawn from a locally available source. In general, the energy source and the reaction mass will be separate. One such rocket system is the nuclear thermal rocket, in which the energy source is a reactor and the reaction mass a fluid which is heated by the reactor and exhausted. Another energy-limited rocket system is the hydrogen/oxygen rocket, where H2/O2 fuel is produced by electrolysis of water using a solar array or a nuclear reactor. The problem is to choose the optimum specific impulse (or equivalently, exhaust velocity) to minimize the amount of energy required to produce a given mission delta-v in the payload. The somewhat surprising result is that the optimum specific impulse is not the maximum possible value, but is proportional to the mission delta-v. In general terms, at the beginning of the mission it is optimum to use a very low specific impulse and expend a lot of reaction mass, since this is the most energy-efficient way to transfer momentum. However, as the mission progresses, it becomes important to minimize the amount of reaction mass expelled, since energy is wasted moving the reaction mass. Thus, the optimum specific impulse will increase with the mission delta-v. The optimum I_sp is derived for maximum payload return per energy expended for both fixed and variable I_sp engines.
Sample missions analyzed include return of water payloads from the moons of Mars and of Saturn.
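The stated proportionality between optimum exhaust velocity and mission delta-v can be reproduced under textbook assumptions (energy spent equals the kinetic energy of the expelled reaction mass, ideal rocket equation); this is an illustrative sketch, not the paper's derivation. Minimizing E ∝ v_e²(e^{Δv/v_e} − 1) over the exhaust velocity v_e gives the stationarity condition x·e^x = 2(e^x − 1) with x = Δv/v_e:

```python
import math

def optimal_exhaust_velocity(delta_v):
    """Exhaust velocity minimizing energy per payload mass, assuming
    E(v_e) ~ v_e^2 * (exp(delta_v / v_e) - 1). Setting dE/dv_e = 0
    yields x * e^x = 2 * (e^x - 1) with x = delta_v / v_e; the root
    x ~= 1.5936 is found here by bisection."""
    f = lambda x: x * math.exp(x) - 2.0 * (math.exp(x) - 1.0)
    lo, hi = 1.0, 2.0  # f(1) < 0 < f(2), so the root is bracketed
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return delta_v / (0.5 * (lo + hi))
```

The optimum is v_e ≈ 0.627·Δv, a fixed fraction of the mission delta-v, consistent with the abstract's claim that optimum specific impulse grows with delta-v rather than being maximized.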
Not Just Hats Anymore: Binomial Inversion and the Problem of Multiple Coincidences
ERIC Educational Resources Information Center
Hathout, Leith
2007-01-01
The well-known "hats" problem, in which a number of people enter a restaurant and check their hats, and then receive them back at random, is often used to illustrate the concept of derangements, that is, permutations with no fixed points. In this paper, the problem is extended to multiple items of clothing, and a general solution to the problem of…
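The base case of the hats problem is the derangement count, which follows from inclusion-exclusion. A minimal sketch (the paper's multi-item generalization is not reproduced here):

```python
from math import factorial

def derangements(n):
    """Number of permutations of n items with no fixed point, by
    inclusion-exclusion: D(n) = n! * sum_{k=0}^{n} (-1)^k / k!."""
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

# The probability that no one gets their own hat back tends to 1/e:
p = derangements(10) / factorial(10)   # ~0.3679
```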
Baran, Timothy M; Foster, Thomas H
2014-02-01
For interstitial photodynamic therapy (iPDT) of bulky tumors, careful treatment planning is required in order to ensure that a therapeutic dose is delivered to the tumor, while minimizing damage to surrounding normal tissue. In clinical contexts, iPDT has typically been performed with either flat cleaved or cylindrical diffusing optical fibers as light sources. Here, the authors directly compare these two source geometries in terms of the number of fibers and duration of treatment required to deliver a prescribed light dose to a tumor volume. Treatment planning software for iPDT was developed based on graphics processing unit enhanced Monte Carlo simulations. This software was used to optimize the number of fibers, total energy delivered by each fiber, and the position of individual fibers in order to deliver a target light dose (D90) to 90% of the tumor volume. Treatment plans were developed using both flat cleaved and cylindrical diffusing fibers, based on tissue volumes derived from CT data from a head and neck cancer patient. Plans were created for four cases: fixed energy per fiber, fixed number of fibers, both fixed, and neither fixed. When the number of source fibers was fixed at eight, treatment plans based on flat cleaved fibers required each to deliver 7180-8080 J in order to deposit 90 J/cm(2) in 90% of the tumor volume. For diffusers, each fiber was required to deliver 2270-2350 J (333-1178 J/cm) in order to achieve this same result. For the case of fibers delivering a fixed 900 J, 13 diffusers or 19 flat cleaved fibers at a spacing of 1 cm were required to deliver the desired dose. With energy per fiber fixed at 2400 J and the number of fibers fixed at eight, diffuser fibers delivered the desired dose to 93% of the tumor volume, while flat cleaved fibers delivered this dose to 79%. 
With both energy and number of fibers allowed to vary, six diffusers delivering 3485-3600 J were required, compared to ten flat cleaved fibers delivering 2780-3600 J. For the same number of fibers, cylindrical diffusers allow for a shorter treatment duration compared to flat cleaved fibers. For the same energy delivered per fiber, diffusers allow for the insertion of fewer fibers in order to deliver the same light dose to a target volume.
Some estimation formulae for continuous time-invariant linear systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Sidhu, G. S.
1975-01-01
In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
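The super-linearly convergent ARE iteration mentioned above can be illustrated in the scalar case. This is a generic Newton-Kleinman-style sketch, not the authors' square-root algorithm: for the scalar ARE 2aP − (b²/r)P² + q = 0, each step solves the closed-loop Lyapunov equation, and the iterates converge quadratically from any stabilizing initial gain.

```python
import math

def solve_scalar_are(a, b, q, r, k0, tol=1e-12):
    """Newton-Kleinman iteration for the scalar algebraic Riccati equation
        2*a*p - (b**2 / r) * p**2 + q = 0.
    Each step solves the scalar Lyapunov equation for the closed loop
    a - b*k; convergence is quadratic for a stabilizing k0 (a - b*k0 < 0)."""
    k, p = k0, float("inf")
    while True:
        p_next = (q + r * k * k) / (2.0 * (b * k - a))   # Lyapunov solve
        if abs(p_next - p) < tol:
            return p_next
        p, k = p_next, b * p_next / r

# For a = b = q = r = 1 the exact positive solution is 1 + sqrt(2):
p = solve_scalar_are(a=1.0, b=1.0, q=1.0, r=1.0, k0=3.0)
```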
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order, finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed, finite-order controller.
NASA Astrophysics Data System (ADS)
González Cornejo, Felipe A.; Cruchaga, Marcela A.; Celentano, Diego J.
2017-11-01
The present work reports a fluid-rigid solid interaction formulation described within the framework of a fixed-mesh technique. The numerical analysis is focussed on the study of a vortex-induced vibration (VIV) of a circular cylinder at low Reynolds number. The proposed numerical scheme encompasses the fluid dynamics computation in an Eulerian domain where the body is embedded using a collection of markers to describe its shape, and the rigid solid's motion is obtained with the well-known Newton's law. The body's velocity is imposed on the fluid domain through a penalty technique on the embedded fluid-solid interface. The fluid tractions acting on the solid are computed from the fluid dynamic solution of the flow around the body. The resulting forces are considered to solve the solid motion. The numerical code is validated by contrasting the obtained results with those reported in the literature using different approaches for simulating the flow past a fixed circular cylinder as a benchmark problem. Moreover, a mesh convergence analysis is also done providing a satisfactory response. In particular, a VIV problem is analyzed, emphasizing the description of the synchronization phenomenon.
A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.
1998-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
Filter design for the detection of compact sources based on the Neyman-Pearson detector
NASA Astrophysics Data System (ADS)
López-Caniego, M.; Herranz, D.; Barreiro, R. B.; Sanz, J. L.
2005-05-01
This paper considers the problem of compact source detection on a Gaussian background. We present a one-dimensional treatment (though a generalization to two or more dimensions is possible). Two relevant aspects of this problem are considered: the design of the detector and the filtering of the data. Our detection scheme is based on local maxima and it takes into account not only the amplitude but also the curvature of the maxima. A Neyman-Pearson test is used to define the region of acceptance, which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources. We study how detection can be enhanced by means of linear filters with a scaling parameter, and compare some filters that have been proposed in the literature [the Mexican hat wavelet, the matched filter (MF) and the scale-adaptive filter (SAF)]. We also introduce a new filter, which depends on two free parameters (the biparametric scale-adaptive filter, BSAF). The value of these two parameters can be determined, given the a priori probability density function of the amplitudes of the sources, such that the filter optimizes the performance of the detector in the sense that it gives the maximum number of real detections once it has fixed the number density of spurious sources. The new filter includes as particular cases the standard MF and the SAF. As a result of its design, the BSAF outperforms these filters. The combination of a detection scheme that includes information on the curvature and a flexible filter that incorporates two free parameters (one of them a scaling parameter) improves significantly the number of detections in some interesting cases. In particular, for the case of weak sources embedded in white noise, the improvement with respect to the standard MF is of the order of 40 per cent. 
Finally, an estimation of the amplitude of the source (most probable value) is introduced and it is proven that such an estimator is unbiased and has maximum efficiency. We perform numerical simulations to test these theoretical ideas in a practical example and conclude that the results of the simulations agree with the analytical results.
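The simplest of the filters compared above, the matched filter for white noise, can be demonstrated in one dimension. This is a generic sketch with synthetic data (the BSAF and curvature-based detector of the paper are not reproduced): correlating the data with the source profile concentrates the signal at the source position.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data: a Gaussian-profile compact source in white noise.
n, sigma_pix, amp, true_pos = 512, 4.0, 5.0, 300
x = np.arange(n)
data = amp * np.exp(-0.5 * ((x - true_pos) / sigma_pix) ** 2) \
       + rng.normal(0.0, 1.0, n)

# Matched filter for white noise: correlate with the normalized source
# template.  The filtered field peaks at the source position.
template = np.exp(-0.5 * ((x - n // 2) / sigma_pix) ** 2)
template /= template.sum()
filtered = np.convolve(data, template[::-1], mode="same")

detected = int(np.argmax(filtered))   # close to true_pos
```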
Fixed Point Problems for Linear Transformations on Pythagorean Triples
ERIC Educational Resources Information Center
Zhan, M.-Q.; Tong, J.-C.; Braza, P.
2006-01-01
In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z]^T with y being even) into a standard Pythagorean triple, which have [3 4 5]^T as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…
A MAP fixed-point, packing-unpacking routine for the IBM 7094 computer
Robert S. Helfman
1966-01-01
Two MAP (Macro Assembly Program) computer routines for packing and unpacking fixed point data are described. Use of these routines with Fortran IV Programs provides speedy access to quantities of data which far exceed the normal storage capacity of IBM 7000-series computers. Many problems that could not be attempted because of the slow access-speed of tape...
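The pack/unpack idea behind those 7094 routines carries over directly to any word-oriented machine. A modern sketch (hypothetical field widths, not the original MAP layout): several small unsigned integers are packed into one word with shifts and masks, then recovered.

```python
def pack(values, bits):
    """Pack small unsigned integers into one word, low field first."""
    word, shift = 0, 0
    for v, b in zip(values, bits):
        if v >> b:
            raise ValueError("value %d does not fit in %d bits" % (v, b))
        word |= v << shift
        shift += b
    return word

def unpack(word, bits):
    """Recover the packed fields by masking and shifting."""
    out = []
    for b in bits:
        out.append(word & ((1 << b) - 1))
        word >>= b
    return out

w = pack([5, 300, 2], [4, 12, 3])   # three fields in one 19-bit word
```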
Optimal observables for multiparameter seismic tomography
NASA Astrophysics Data System (ADS)
Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner
2014-08-01
We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potentials and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. 
While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winters, J.M.
Some background is given on the field of human factors. The nature of problems with current human/computer interfaces is discussed, some costs are identified, ideal attributes of graceful system interfaces are outlined, and some reasons are indicated why it's not easy to fix the problems.
Solving free-plasma-boundary problems with the SIESTA MHD code
NASA Astrophysics Data System (ADS)
Sanchez, R.; Peraza-Rodriguez, H.; Reynolds-Barredo, J. M.; Tribaldos, V.; Geiger, J.; Hirshman, S. P.; Cianciosa, M.
2017-10-01
SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for 3D magnetic configurations. It is an iterative code that uses the solution obtained by the VMEC code to provide a background coordinate system and an initial guess of the solution. The final solution that SIESTA finds can exhibit magnetic islands and stochastic regions. In its original implementation, SIESTA addressed only fixed-boundary problems. This fixed boundary condition somewhat restricts its possible applications. In this contribution we describe a recent extension of SIESTA that enables it to address free-plasma-boundary situations, opening up the possibility of investigating problems with SIESTA in which the plasma boundary is perturbed either externally or internally. As an illustration, the extended version of SIESTA is applied to a configuration of the W7-X stellarator.
Jeribi, Aref; Krichen, Bilel; Mefteh, Bilel
2013-01-01
In the paper [A. Ben Amar, A. Jeribi, and B. Krichen, Fixed point theorems for block operator matrix and an application to a structured problem under boundary conditions of Rotenberg's model type, to appear in Math. Slovaca. (2014)], the existence of solutions of the two-dimensional boundary value problem (1) and (2) was discussed in the product Banach space L_p×L_p for p∈(1, ∞). Due to the lack of compactness on L_1 spaces, the analysis did not cover the case p=1. The purpose of this work is to extend the results of Ben Amar et al. to the case p=1 by establishing new variants of fixed-point theorems for a 2×2 operator matrix, involving weakly compact operators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
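The histogram-versus-KDE distinction can be made concrete with a generic 1-D Gaussian-kernel estimator (a sketch of the general technique, not the paper's MFP KDE): every sample contributes a smooth kernel to each evaluation point, rather than scoring in exactly one bin.

```python
import math, random

random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(5000)]

def kde(x, samples, h):
    """Gaussian kernel density estimate with bandwidth h: each sample
    contributes a smooth kernel at x, so nearby evaluation points share
    information -- unlike a histogram, where a sample scores in one bin."""
    c = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

# For standard-normal samples the estimate at 0 approaches
# 1/sqrt(2*pi) ~ 0.3989 as the sample size grows and h shrinks:
density_at_0 = kde(0.0, samples, h=0.2)
```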
Defining Geodetic Reference Frame using Matlab®: PlatEMotion 2.0
NASA Astrophysics Data System (ADS)
Cannavò, Flavio; Palano, Mimmo
2016-03-01
We describe the main features of the developed software tool, namely PlatE-Motion 2.0 (PEM2), which allows inferring the Euler pole parameters by inverting the observed velocities at a set of sites located on a rigid block (inverse problem). PEM2 allows also calculating the expected velocity value for any point located on the Earth providing an Euler pole (direct problem). PEM2 is the updated version of a previous software tool initially developed for easy-to-use file exchange with the GAMIT/GLOBK software package. The software tool is developed in Matlab® framework and, as the previous version, includes a set of MATLAB functions (m-files), GUIs (fig-files), map data files (mat-files) and user's manual as well as some example input files. New changes in PEM2 include (1) some bugs fixed, (2) improvements in the code, (3) improvements in statistical analysis, (4) new input/output file formats. In addition, PEM2 can be now run under the majority of operating systems. The tool is open source and freely available for the scientific community.
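The direct problem PEM2 solves is the standard rigid-plate relation v = ω × r. A spherical-Earth sketch (this is the textbook formula, not PEM2's Matlab implementation; the function names are illustrative):

```python
import math

R_EARTH = 6371.0e3  # mean Earth radius in m (spherical approximation)

def _unit(lat_deg, lon_deg):
    la, lo = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(la) * math.cos(lo),
            math.cos(la) * math.sin(lo),
            math.sin(la))

def euler_velocity_mm_yr(pole_lat, pole_lon, rate_deg_myr, lat, lon):
    """Direct problem: v = omega x r for a site on a rigid plate rotating
    about an Euler pole at (pole_lat, pole_lon) at rate_deg_myr deg/Myr.
    Returns the geocentric Cartesian velocity in mm/yr."""
    w = math.radians(rate_deg_myr)                     # rad/Myr
    wx, wy, wz = (w * c for c in _unit(pole_lat, pole_lon))
    rx, ry, rz = (R_EARTH * c for c in _unit(lat, lon))
    v = (wy * rz - wz * ry, wz * rx - wx * rz, wx * ry - wy * rx)
    return tuple(1.0e-3 * c for c in v)                # m/Myr -> mm/yr

# A pole at the geographic north pole rotating at 1 deg/Myr moves an
# equatorial site eastward at about 111 mm/yr:
vx, vy, vz = euler_velocity_mm_yr(90.0, 0.0, 1.0, 0.0, 0.0)
```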
Prospective motion correction of high-resolution magnetic resonance imaging data in children.
Brown, Timothy T; Kuperman, Joshua M; Erhart, Matthew; White, Nathan S; Roddey, J Cooper; Shankaranarayanan, Ajit; Han, Eric T; Rettmann, Dan; Dale, Anders M
2010-10-15
Motion artifacts pose significant problems for the acquisition and analysis of high-resolution magnetic resonance imaging data. These artifacts can be particularly severe when studying pediatric populations, where greater patient movement reduces the ability to clearly view and reliably measure anatomy. In this study, we tested the effectiveness of a new prospective motion correction technique, called PROMO, as applied to making neuroanatomical measures in typically developing school-age children. This method attempts to address the problem of motion at its source by keeping the measurement coordinate system fixed with respect to the subject throughout image acquisition. The technique also performs automatic rescanning of images that were acquired during intervals of particularly severe motion. Unlike many previous techniques, this approach adjusts for both in-plane and through-plane movement, greatly reducing image artifacts without the need for additional equipment. Results show that the use of PROMO notably enhances subjective image quality, reduces errors in Freesurfer cortical surface reconstructions, and significantly improves the subcortical volumetric segmentation of brain structures. Further applications of PROMO for clinical and cognitive neuroscience are discussed. Copyright 2010 Elsevier Inc. All rights reserved.
Automated bond order assignment as an optimization problem.
Dehof, Anna Katharina; Rurainski, Alexander; Bui, Quang Bao Anh; Böcker, Sebastian; Lenhof, Hans-Peter; Hildebrandt, Andreas
2011-03-01
Numerous applications in Computational Biology process molecular structures and hence strongly rely not only on correct atomic coordinates but also on correct bond order information. For proteins and nucleic acids, bond orders can be easily deduced but this does not hold for other types of molecules like ligands. For ligands, bond order information is not always provided in molecular databases and thus a variety of approaches tackling this problem have been developed. In this work, we extend an ansatz proposed by Wang et al. that assigns connectivity-based penalty scores and tries to heuristically approximate its optimum. Here, we present three efficient and exact solvers for the problem replacing the heuristic approximation scheme of the original approach: an A*, an ILP, and a fixed-parameter (FPT) approach. We implemented and evaluated the original implementation, our A*, ILP and FPT formulations on the MMFF94 validation suite and the KEGG Drug database. We show the benefit of computing exact solutions of the penalty minimization problem and the additional gain when computing all optimal (or even suboptimal) solutions. We close with a detailed comparison of our methods. The A* and ILP solutions are integrated into the open-source C++ LGPL library BALL and the molecular visualization and modelling tool BALLView and can be downloaded from our homepage www.ball-project.org. The FPT implementation can be downloaded from http://bio.informatik.uni-jena.de/software/.
Behavioral pattern identification for structural health monitoring in complex systems
NASA Astrophysics Data System (ADS)
Gupta, Shalabh
Estimation of structural damage and quantification of structural integrity are critical for safe and reliable operation of human-engineered complex systems, such as electromechanical, thermofluid, and petrochemical systems. Damage due to fatigue crack is one of the most commonly encountered sources of structural degradation in mechanical systems. Early detection of fatigue damage is essential because the resulting structural degradation could potentially cause catastrophic failures, leading to loss of expensive equipment and human life. Therefore, for reliable operation and enhanced availability, it is necessary to develop capabilities for prognosis and estimation of impending failures, such as the onset of wide-spread fatigue crack damage in mechanical structures. This dissertation presents information-based online sensing of fatigue damage using the analytical tools of symbolic time series analysis (STSA). Anomaly detection using STSA is a pattern recognition method that has been recently developed based upon a fixed-structure, fixed-order Markov chain. The analysis procedure is built upon the principles of Symbolic Dynamics, Information Theory and Statistical Pattern Recognition. The dissertation demonstrates real-time fatigue damage monitoring based on time series data of ultrasonic signals. Statistical pattern changes are measured using STSA to monitor the evolution of fatigue damage. Real-time anomaly detection is presented as a solution to the forward (analysis) problem and the inverse (synthesis) problem. (1) The forward problem: the primary objective of the forward problem is identification of the statistical changes in the time series data of ultrasonic signals due to gradual evolution of fatigue damage. (2) The inverse problem: the objective of the inverse problem is to infer the anomalies from the observed time series data in real time based on the statistical information generated during the forward problem. 
A computer-controlled special-purpose fatigue test apparatus, equipped with multiple sensing devices (e.g., ultrasonics and optical microscope) for damage analysis, has been used to experimentally validate the STSA method for early detection of anomalous behavior. The sensor information is integrated with a software module consisting of the STSA algorithm for real-time monitoring of fatigue damage. Experiments have been conducted under different loading conditions on specimens constructed from the ductile aluminium alloy 7075-T6. The dissertation has also investigated the application of the STSA method for early detection of anomalies in other engineering disciplines. Two primary applications include combustion instability in a generic thermal pulse combustor model and whirling phenomenon in a typical misaligned shaft.
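The core STSA pipeline can be caricatured in a few lines. This is a deliberately simplified sketch with synthetic data: the signal is symbolized by a fixed partition and the anomaly measure is a distance between symbol-occurrence probability vectors (the full method uses the state-probability vector of a fixed-order Markov chain over the symbols).

```python
import math, random

def symbolize(signal, edges):
    """Map each sample to a symbol index via a fixed range partition."""
    return [sum(x > e for e in edges) for x in signal]

def state_probs(symbols, n_states):
    """Empirical symbol-occurrence probability vector."""
    counts = [0] * n_states
    for s in symbols:
        counts[s] += 1
    return [c / len(symbols) for c in counts]

def anomaly_measure(p_nominal, p_current):
    """Euclidean distance between probability vectors; grows as the
    signal's statistics drift from the nominal condition."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_nominal, p_current)))

random.seed(2)
edges = [-0.5, 0.0, 0.5]                                   # 4 symbols
nominal = [random.gauss(0.0, 1.0) for _ in range(20000)]   # healthy signal
drifted = [random.gauss(0.6, 1.0) for _ in range(20000)]   # "damaged" signal
p0 = state_probs(symbolize(nominal, edges), 4)
p1 = state_probs(symbolize(drifted, edges), 4)
m = anomaly_measure(p0, p1)   # clearly nonzero for the drifted signal
```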
On Profit-Maximizing Pricing for the Highway and Tollbooth Problems
NASA Astrophysics Data System (ADS)
Elbassioni, Khaled; Raman, Rajiv; Ray, Saurabh; Sitters, René
In the tollbooth problem on trees, we are given a tree T = (V,E) with n edges, and a set of m customers, each of whom is interested in purchasing a path on the graph. Each customer has a fixed budget, and the objective is to price the edges of T such that the total revenue made by selling the paths to the customers that can afford them is maximized. An important special case of this problem, known as the highway problem, is when T is restricted to be a line. For the tollbooth problem, we present an O(log n)-approximation, improving on the current best O(log m)-approximation. We also study a special case of the tollbooth problem, when all the paths that customers are interested in purchasing go towards a fixed root of T. In this case, we present an algorithm that returns a (1 - ɛ)-approximation, for any ɛ > 0, and runs in quasi-polynomial time. On the other hand, we rule out the existence of an FPTAS by showing that even for the line case, the problem is strongly NP-hard. Finally, we show that in the discount model, when we allow some items to be priced below zero to improve the overall profit, the problem becomes even APX-hard.
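The highway (line) case of the problem statement can be made concrete with a toy brute force. This is purely illustrative: the instance data are hypothetical, and exhaustive search over a coarse price grid is not one of the paper's algorithms, only a direct encoding of the objective.

```python
from itertools import product

# Toy highway instance: 3 edges on a line; each customer wants an
# inclusive range of edges and has a fixed budget.  (Hypothetical data.)
n_edges = 3
customers = [((0, 1), 5.0),   # wants edges 0..1, budget 5
             ((1, 2), 4.0),
             ((0, 0), 2.0),
             ((2, 2), 3.0)]

def revenue(prices):
    """A customer buys its path iff the total edge price fits its budget."""
    total = 0.0
    for (lo, hi), budget in customers:
        cost = sum(prices[lo:hi + 1])
        if cost <= budget:
            total += cost
    return total

candidates = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # coarse price grid
best = max(product(candidates, repeat=n_edges), key=revenue)
```

For this instance the grid optimum (e.g., prices 2, 3, 1) sells to all four customers for a revenue of 12, short of the budget sum of 14, which illustrates the tension the approximation algorithms address.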
Parameter-space metric of semicoherent searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Pletsch, Holger J.
2010-08-01
Continuous gravitational-wave (CW) signals such as emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for prior unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical “semicoherent” search strategies divide the data into segments much shorter than one year, which are analyzed coherently; then detection statistics from different segments are combined incoherently. To optimally perform the incoherent combination, understanding of the underlying parameter-space structure is requisite. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. From the search parameters (sky position, frequency, and frequency derivatives), solely the metric resolution in the frequency derivatives is found to significantly increase with the number of segments.
Biological sources and sinks of nitrous oxide and strategies to mitigate emissions
Thomson, Andrew J.; Giannopoulos, Georgios; Pretty, Jules; Baggs, Elizabeth M.; Richardson, David J.
2012-01-01
Nitrous oxide (N2O) is a powerful atmospheric greenhouse gas and cause of ozone layer depletion. Global emissions continue to rise. More than two-thirds of these emissions arise from bacterial and fungal denitrification and nitrification processes in soils, largely as a result of the application of nitrogenous fertilizers. This article summarizes the outcomes of an interdisciplinary meeting, ‘Nitrous oxide (N2O) the forgotten greenhouse gas’, held at the Kavli Royal Society International Centre, from 23 to 24 May 2011. It provides an introduction and background to the nature of the problem, and summarizes the conclusions reached regarding the biological sources and sinks of N2O in oceans, soils and wastewaters, and discusses the genetic regulation and molecular details of the enzymes responsible. Techniques for providing global and local N2O budgets are discussed. The findings of the meeting are drawn together in a review of strategies for mitigating N2O emissions, under three headings, namely: (i) managing soil chemistry and microbiology, (ii) engineering crop plants to fix nitrogen, and (iii) sustainable agricultural intensification. PMID:22451101
Primary Beam Air Kerma Dependence on Distance from Cargo and People Scanners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strom, Daniel J.; Cerra, Frank
The distance dependence of air kerma or dose rate of the primary radiation beam is not obvious for security scanners of cargo and people in which there is relative motion between a collimated source and the person or object being imaged. To study this problem, one fixed line source and three moving-source scan-geometry cases are considered, each characterized by radiation emanating perpendicular to an axis. The cases are 1) a stationary line source of radioactive material, e.g., contaminated solution in a pipe; 2) a moving, uncollimated point source of radiation that is shuttered or off when it is stationary; 3) a moving, collimated point source of radiation that is shuttered or off when it is stationary; and 4) a translating, narrow “pencil” beam emanating in a flying-spot, raster pattern. Each case is considered for short and long distances compared to the line source length or path traversed by a moving source. The short distance model pertains mostly to dose to objects being scanned and personnel associated with the screening operation. The long distance model pertains mostly to potential dose to bystanders. For radionuclide sources, the number of nuclear transitions that occur a) per unit length of a line source, or b) during the traversal of a point source, is a unifying concept. The “universal source strength” of air kerma rate at a meter from the source can be used to describe x-ray machine or radionuclide sources. For many cargo and people scanners with highly collimated fan or pencil beams, dose varies as the inverse of the distance from the source in the near field and with the inverse square of the distance beyond a critical radius. Ignoring the inverse square dependence and using inverse distance dependence is conservative in the sense of tending to overestimate dose.
Primary Beam Air Kerma Dependence on Distance from Cargo and People Scanners.
Strom, Daniel J; Cerra, Frank
2016-06-01
The distance dependence of air kerma or dose rate of the primary radiation beam is not obvious for security scanners of cargo and people in which there is relative motion between a collimated source and the person or object being imaged. To study this problem, one fixed line source and three moving-source scan-geometry cases are considered, each characterized by radiation emanating perpendicular to an axis. The cases are 1) a stationary line source of radioactive material, e.g., contaminated solution in a pipe; 2) a moving, uncollimated point source of radiation that is shuttered or off when it is stationary; 3) a moving, collimated point source of radiation that is shuttered or off when it is stationary; and 4) a translating, narrow "pencil" beam emanating in a flying-spot, raster pattern. Each case is considered for short and long distances compared to the line source length or path traversed by a moving source. The short distance model pertains mostly to dose to objects being scanned and personnel associated with the screening operation. The long distance model pertains mostly to potential dose to bystanders. For radionuclide sources, the number of nuclear transitions that occur a) per unit length of a line source or b) during the traversal of a point source is a unifying concept. The "universal source strength" of air kerma rate at 1 m from the source can be used to describe x-ray machine or radionuclide sources. For many cargo and people scanners with highly collimated fan or pencil beams, dose varies as the inverse of the distance from the source in the near field and with the inverse square of the distance beyond a critical radius. Ignoring the inverse square dependence and using inverse distance dependence is conservative in the sense of tending to overestimate dose.
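The 1/d near-field and 1/d² far-field behaviour described above for a fixed line source can be checked numerically. The sketch below is a hedged illustration (arbitrary units, uniform source strength, midpoint geometry assumed), not the paper's model: it integrates 1/r² along an idealized line source, which gives a relative kerma proportional to atan(L/d)/(L·d).

```python
import math

def line_source_kerma(d, half_length=10.0):
    """Relative air kerma at perpendicular distance d (m) from the midpoint
    of a uniform line source of half-length half_length (m).

    Integrating 1/r**2 along the line gives K(d) proportional to
    atan(L/d) / (L*d); the constant source-strength factor cancels in ratios.
    """
    return math.atan(half_length / d) / (half_length * d)

# Near field (d << L): halving the distance roughly doubles the kerma (1/d).
near_ratio = line_source_kerma(0.1) / line_source_kerma(0.2)

# Far field (d >> L): halving the distance roughly quadruples it (1/d**2).
far_ratio = line_source_kerma(100.0) / line_source_kerma(200.0)
```

Here `near_ratio` comes out close to 2 and `far_ratio` close to 4, matching the two regimes described in the abstract.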
NASA Technical Reports Server (NTRS)
Salmon, R. F.; Imbrogno, S.
1976-01-01
The importance of measuring accurate air and fuel flows as well as the importance of obtaining accurate exhaust pollutant measurements were emphasized. Some of the problems and the corrective actions taken to incorporate fixes and/or modifications were identified.
Determination of the expansion of the potential of the earth's normal gravitational field
NASA Astrophysics Data System (ADS)
Kochiev, A. A.
The potential of the generalized problem of 2N fixed centers is expanded in a polynomial and Legendre function series. Formulas are derived for the expansion coefficients, and the disturbing function of the problem is constructed in an explicit form.
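The idea of expanding a potential in a Legendre series can be illustrated with a toy fit; the example below is a generic sketch using NumPy's Legendre tools, not the paper's 2N-fixed-centers derivation, and the sample potential is invented for illustration.

```python
import numpy as np

# Sample a toy axially symmetric potential V(mu), mu = cos(theta), chosen by
# hand as a known combination of Legendre polynomials P0, P1, P2.
mu = np.linspace(-1.0, 1.0, 201)
V = 1.0 + 0.5 * mu + 0.25 * 0.5 * (3.0 * mu**2 - 1.0)  # 1*P0 + 0.5*P1 + 0.25*P2

# Least-squares fit of the Legendre series V(mu) ~ sum_n c_n P_n(mu);
# the expansion coefficients c_n are recovered directly.
coeffs = np.polynomial.legendre.legfit(mu, V, deg=4)
```

Because the sample potential is exactly a degree-2 Legendre series, the fitted coefficients recover (1, 0.5, 0.25, 0, 0) to numerical precision.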
Graphite distortion "C" Reactor. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, N.H.
1962-02-08
This report covers the efforts of the Laboratory in an investigation of the graphite distortion in the "C" reactor at Hanford. The particular aspects of the problem to be covered by the Laboratory were possible "fixes" to the control rod sticking problem caused by VSR channel distortion.
Adjoint-based optimization of PDEs in moving domains
NASA Astrophysics Data System (ADS)
Protas, Bartosz; Liao, Wenyuan
2008-02-01
In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.
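The first approach mentioned above, mapping the moving domain to a fixed one, can be sketched for the forward problem. The snippet below is a minimal illustration only (explicit finite differences, zero boundary temperatures, invented domain motion), not the authors' adjoint derivation: substituting xi = x/s(t) turns the heat equation on [0, s(t)] into a fixed-domain equation with an extra advection term.

```python
import numpy as np

def heat_moving_domain(s, s_dot, T=0.1, nx=51, dt=2e-5):
    """Explicit FD solve of u_t = u_xx on the moving domain [0, s(t)].
    After the change of variables xi = x / s(t) the PDE becomes
        u_t = u_xixi / s^2 + (xi * s_dot / s) * u_xi
    on the fixed domain [0, 1], here with u = 0 at both ends."""
    xi = np.linspace(0.0, 1.0, nx)
    dxi = xi[1] - xi[0]
    u = np.sin(np.pi * xi)                      # initial condition
    for n in range(int(round(T / dt))):
        t = n * dt
        st, sd = s(t), s_dot(t)
        uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dxi**2
        ux = (u[2:] - u[:-2]) / (2.0 * dxi)
        u[1:-1] += dt * (uxx / st**2 + xi[1:-1] * sd / st * ux)
    return u

# Sanity check: with s constant the advection term vanishes and the sine mode
# decays as exp(-pi^2 t), recovering the standard fixed-domain solution.
u_fixed = heat_moving_domain(lambda t: 1.0, lambda t: 0.0)
peak = u_fixed[51 // 2]                          # value at xi = 0.5

# An (invented) expanding domain, s(t) = 1 + 0.5 t:
u_moving = heat_moving_domain(lambda t: 1.0 + 0.5 * t, lambda t: 0.5)
```

With s ≡ 1 the mid-point value after t = 0.1 agrees with the analytic decay factor exp(-π²·0.1) ≈ 0.373; for the expanding domain the same fixed-grid solver applies unchanged, which is the practical appeal of the mapping.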
NASA Astrophysics Data System (ADS)
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of the calculation process and the increase in calculation time caused by the growing size of continuous optimization problems remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting close to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatching in electric power supply scheduling) are also described as a practical industrial application.
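The fix-and-release idea for variables near their bounds can be shown on a toy bound-constrained quadratic program. The sketch below is not the paper's interior-point algorithm; it is a projected-gradient caricature in which the fixing and releasing of variables at their limits is made explicit.

```python
import numpy as np

def box_qp(Q, b, lo, hi, steps=200, lr=0.1):
    """Minimize 0.5 x'Qx - b'x subject to lo <= x <= hi by projected
    gradient descent.  Variables sitting on a bound with the gradient
    pushing outward are temporarily 'fixed' (masked out of the update);
    they are released automatically as soon as the gradient flips sign."""
    x = 0.5 * (lo + hi)
    for _ in range(steps):
        g = Q @ x - b
        at_lo = (x <= lo + 1e-12) & (g > 0)    # fixed at lower bound
        at_hi = (x >= hi - 1e-12) & (g < 0)    # fixed at upper bound
        free = ~(at_lo | at_hi)                # everything else stays active
        x = x - lr * g * free
        np.clip(x, lo, hi, out=x)
    return x

# Toy problem: unconstrained minimizer (2, -1) lies outside the box [0, 1]^2,
# so both variables end up fixed on bounds, at (1, 0).
sol = box_qp(np.diag([2.0, 2.0]), np.array([4.0, -2.0]),
             np.zeros(2), np.ones(2))
```

After a few iterations both coordinates hit their bounds and stay fixed there, which is the behaviour the dynamic active-constraint estimation exploits.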
Mikš, Antonín; Novák, Pavel
2018-05-10
In this article, we analyze the problem of the paraxial design of an active optical element with variable focal length, which maintains the positions of its principal planes fixed during the change of its optical power. Such optical elements are important in the process of design of complex optical systems (e.g., zoom systems), where the fixed position of principal planes during the change of optical power is essential for the design process. The proposed solution is based on the generalized membrane tunable-focus fluidic lens with several membrane surfaces.
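The role of principal planes in paraxial design can be sketched with standard ray-transfer matrices. The example below is generic textbook optics, not the authors' membrane-lens model; the lens focal lengths and spacing are invented, and the principal-plane sign convention is one of several in use.

```python
import numpy as np

def thin_lens(f):
    """Ray transfer matrix of a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def gap(d):
    """Ray transfer matrix of free propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def system_props(M):
    """Effective focal length and principal-plane offsets of a paraxial
    system matrix M = [[A, B], [C, D]] (one common sign convention)."""
    A, B = M[0]
    C, D = M[1]
    f_eff = -1.0 / C
    h_front = (D - 1.0) / C   # front principal plane, from the input plane
    h_back = (1.0 - A) / C    # back principal plane, from the output plane
    return f_eff, h_front, h_back

# Two thin lenses, f1 = f2 = 100 mm, separated by d = 20 mm:
M = thin_lens(100.0) @ gap(20.0) @ thin_lens(100.0)
f_eff, h_front, h_back = system_props(M)
```

For a single thin lens both principal planes coincide with the lens; for the two-lens system the effective focal length follows 1/f = 1/f1 + 1/f2 − d/(f1·f2), and changing d moves the principal planes, which is exactly the coupling a fixed-principal-plane zoom element must compensate.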
48 CFR 908.7106 - Security cabinets.
Code of Federal Regulations, 2010 CFR
2010-10-01
....7106 Section 908.7106 Federal Acquisition Regulations System DEPARTMENT OF ENERGY COMPETITION ACQUISITION PLANNING REQUIRED SOURCES OF SUPPLIES AND SERVICES Acquisition of Special Items 908.7106 Security...) Fixed-price prime contractors and lower tier subcontractors may use GSA acquisition sources for security...
Pyrotechnic device provides one-shot heat source
NASA Technical Reports Server (NTRS)
Haller, H. C.; Lalli, V. R.
1968-01-01
Pyrotechnic heater provides a one-shot heat source capable of creating a predetermined temperature around sealed packages. It is composed of a blend of an active chemical element and another compound which reacts exothermically when ignited and produces fixed quantities of heat.
Mie Scattering of Growing Molecular Contaminants
NASA Technical Reports Server (NTRS)
Herren, Kenneth A.; Gregory, Don A.
2007-01-01
Molecular contamination of optical surfaces from outgassed material has been shown in many cases to proceed from nucleation centers and to produce many roughly hemispherical "islands" of contamination on the surface. The mathematics of the hemispherical scattering is simplified by introducing a virtual source below the plane of the optic, in this case a mirror, allowing the use of Mie theory to produce a solution for the resulting sphere in transmission. Experimentally, a fixed wavelength in the vacuum ultraviolet was used as the illumination source, and scattered light from the polished and coated glass mirrors was detected at a fixed angle as the contamination islands grew in time.
Crustal dynamics project data analysis, 1991: VLBI geodetic results, 1979 - 1990
NASA Technical Reports Server (NTRS)
Ma, C.; Ryan, J. W.; Caprette, D. S.
1992-01-01
The Goddard VLBI group reports the results of analyzing 1412 Mark II data sets acquired from fixed and mobile observing sites through the end of 1990 and available to the Crustal Dynamics Project. Three large solutions were used to obtain Earth rotation parameters, nutation offsets, global source positions, site velocities, and baseline evolution. Site positions are tabulated on a yearly basis from 1979 through 1992. Site velocities are presented in both geocentric Cartesian coordinates and topocentric coordinates. Baseline evolution is plotted for 175 baselines. Rates are computed for earth rotation and nutation parameters. Included are 104 sources, 88 fixed stations and mobile sites, and 688 baselines.
ERIC Educational Resources Information Center
Hoff, David J.
2006-01-01
Kathy Christie, senior vice president at the Education Commission of the States (ECS), resigned on May 1, 2006, saying that the Denver-based group faces a financial crisis, and that she doubts the current ECS president can fix it. By the end of the week, the accounting manager had also resigned, expressing similar concerns, and two policy analysts…
Mimicking Nonequilibrium Steady States with Time-Periodic Driving (Open Source)
2016-05-18
nonequilibrium steady states, and vice versa, within the theoretical framework of discrete-state stochastic thermodynamics. ... nonequilibrium steady states ... equilibrium [2], spontaneous relaxation towards equilibrium [3], nonequilibrium steady states generated by fixed thermodynamic forces [4], and stochastic pumps ... In this paradigm, a system driven by fixed thermodynamic forces, such as temperature gradients or chemical potential differences, reaches a steady state in
Studies in integrated line-and packet-switched computer communication systems
NASA Astrophysics Data System (ADS)
Maglaris, B. S.
1980-06-01
The problem of efficiently allocating the bandwidth of a trunk to both types of traffic is handled for various system and traffic models. A performance analysis is carried out both for variable and fixed frame schemes. It is shown that variable frame schemes, adjusting the frame length according to the traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study on the fixed frame schemes follows. The problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markovian Decision process. It is shown that the movable boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near optimal performance and simplicity of implementation. Finally, the behavior of the movable boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.
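The movable-boundary scheme described above can be caricatured with a deterministic slot-allocation toy. The model below is invented for illustration (fixed frame of 8 slots, 5 reserved for voice, arbitrary voice load, ample data backlog) and is not the Markovian decision formulation analyzed in the work; it only shows why letting data reclaim idle voice slots raises trunk utilization.

```python
def serve(frames_voice, frame_slots=8, voice_slots=5, movable=True,
          data_backlog=100):
    """Toy frame model: each frame has frame_slots slots, voice_slots of
    which are reserved for voice.  With a movable boundary, data traffic
    may also use reserved slots left idle by voice; with a fixed boundary
    it may not.  Returns the number of data packets served."""
    served = 0
    for active_voice in frames_voice:          # voice calls active per frame
        used_voice = min(active_voice, voice_slots)
        capacity = frame_slots - voice_slots   # dedicated data partition
        if movable:
            capacity += voice_slots - used_voice  # reclaim idle voice slots
        served += min(capacity, data_backlog - served)
    return served

voice_load = [5, 3, 0, 4, 1, 5, 2, 0]          # active voice calls per frame
fixed_served = serve(voice_load, movable=False)
movable_served = serve(voice_load, movable=True)
```

With this load the fixed partition serves 3 data slots per frame (24 total), while the movable boundary additionally absorbs every idle voice slot (44 total), mirroring the utilization advantage reported for the movable-boundary scheme.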
A simple technique to increase profits in wood products marketing
George B. Harpole
1971-01-01
Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...
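A fixed-capacity product-mix problem of the kind mentioned above is a small linear program. The numbers below (profits, capacity, demand limit) are invented for illustration and this is a generic LP solve, not the note's pencil-and-paper algorithm; `linprog` minimizes, so profits are negated.

```python
from scipy.optimize import linprog

# Hypothetical mill: profit 3 per unit of product A, 2 per unit of product B;
# at most 10 units of total mill capacity per period, at most 6 units of A
# sellable at forecast prices.
res = linprog(c=[-3, -2],                    # negate to maximize profit
              A_ub=[[1, 1],                  # A + B <= 10 (capacity)
                    [1, 0]],                 # A <= 6 (market limit)
              b_ub=[10, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
best_mix, max_profit = res.x, -res.fun
```

The optimum produces 6 units of A and 4 of B for a profit of 26, i.e., the binding constraints are exactly the capacity and the market limit.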
On Making a Distinguished Vertex Minimum Degree by Vertex Deletion
NASA Astrophysics Data System (ADS)
Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes
For directed and undirected graphs, we study the problem of making a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". In contrast, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding non-existence results when replacing vertex cover number by treewidth or feedback vertex set number.
Isolating intrinsic noise sources in a stochastic genetic switch.
Newby, Jay M
2012-01-01
The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promoter states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promoter of the other gene. When the repressor is bound to a promoter, the corresponding gene is not transcribed and no protein is produced. Either one of the promoters can be repressed at any given time or both can be unrepressed, leaving three possible promoter states. This model is analysed in its bistable regime in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small timescales, the stochastic process fluctuates near one of the stable fixed points, and on large timescales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promoter state as the only source of noise in the system. The process without protein noise is then compared to the process with weak protein noise using perturbation methods and Monte Carlo simulations. It is found that some significant differences in the random process emerge when the intrinsic noise source is removed.
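The promoter-only noise source can be simulated with a standard Gillespie algorithm. The sketch below is a drastic toy reduction (repressor binding at a fixed rate, no protein dynamics at all), not the paper's model; it only illustrates sampling the three promoter states U, R1, R2 as a continuous-time Markov chain.

```python
import random

def promoter_occupancy(k_bind=1.0, k_unbind=1.0, t_end=5000.0, seed=1):
    """Gillespie simulation of a three-state promoter: U = unrepressed,
    R1/R2 = repressed by one of the two repressors, with binding treated
    at a fixed rate.  Returns the fraction of time spent unrepressed.
    Expected value: (1/2) / (1/2 + 1) = 1/3, since the total binding rate
    out of U is 2*k_bind = 2 while the unbinding rate out of R is 1."""
    random.seed(seed)
    state, t, time_in_u = "U", 0.0, 0.0
    while t < t_end:
        rate = 2.0 * k_bind if state == "U" else k_unbind
        dt = random.expovariate(rate)          # exponential holding time
        if state == "U":
            time_in_u += min(dt, t_end - t)
            state = random.choice(["R1", "R2"])  # either repressor binds
        else:
            state = "U"                          # repressor unbinds
        t += dt
    return time_in_u / t_end

frac_u = promoter_occupancy()
```

Over a long run the empirical unrepressed fraction converges to the stationary value 1/3 computed from the rates, the kind of statistic one would compare against the perturbation results.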
NASA Technical Reports Server (NTRS)
Sackett, L. L.; Edelbaum, T. N.; Malchow, H. L.
1974-01-01
This manual is a guide for using a computer program which calculates time-optimal trajectories for high- and low-thrust geocentric transfers. Either SEP or NEP may be assumed, and a one- or two-impulse, fixed total delta-V, initial high-thrust phase may be included. Also, a single impulse of specified delta-V may be included after the low-thrust phase. The low-thrust phase utilizes equinoctial orbital elements to avoid the classical singularities and Kryloff-Boguliuboff averaging to help ensure more rapid computation time. The program is written in FORTRAN 4 in double precision for use on an IBM 360 computer. The manual includes a description of the problem treated, input/output information, examples of runs, and source code listings.
Gu, Herong; Guan, Yajuan; Wang, Huaibao; Wei, Baoze; Guo, Xiaoqiang
2014-01-01
Microgrid is an effective way to integrate the distributed energy resources into the utility networks. One of the most important issues is the power flow control of grid-connected voltage-source inverter in microgrid. In this paper, the small-signal model of the power flow control for the grid-connected inverter is established, from which it can be observed that the conventional power flow control may suffer from the poor damping and slow transient response. While the new power flow control can mitigate these problems without affecting the steady-state power flow regulation. Results of continuous-domain simulations in MATLAB and digital control experiments based on a 32-bit fixed-point TMS320F2812 DSP are in good agreement, which verify the small signal model analysis and effectiveness of the proposed method.
Long Term Measurement of the Vapor Pressure of Gold in the Au-C System
NASA Technical Reports Server (NTRS)
Copland, Evan H.
2009-01-01
Incorporating the {Au(s,l) + graphite} reference in component activity measurements made with the multiple effusion-cell vapor source mass spectrometry (multicell KEMS) technique provides a fixed temperature defining ITS-90 (T_mp(Au) = 1337.33 K) and a systematic method to check accuracy. Over a 2-year period, ΔH_298(Au) was determined by the 2nd- and 3rd-law methods in 25 separate experiments, giving values in the ranges 362.2 ± 3.3 kJ·mol⁻¹ and 367.8 ± 1.1 kJ·mol⁻¹, respectively. This 5 kJ·mol⁻¹ discrepancy is transferred directly to the measured activities. This is unacceptable, and the source of the discrepancy needs to be understood and corrected. Accepting the 2nd-law value increases p(Au) by about 50 percent, brings the 2nd- and 3rd-law values into agreement, and removes the T dependence in the 3rd-law values. While compelling, there is no way to independently determine the instrument sensitivity, S_Au, as a function of T in a single experiment with KEMS, and this lack of capability is preventing a deeper understanding of the problem. In addition, the Au-C phase diagram suggests a eutectic invariant reaction, L-Au(4.7 at.% C) = FCC-Au(0.08 at.% C) + C(graphite), at T_e ≈ 1323 K. This high C concentration in Au(l) must reduce p(Au) in equilibrium with {Au(s,l) + graphite} and raises some critical questions about the Gibbs free energy functions of Au(s,l) and the Au fixed point (T_mp(Au) = 1337.33 K), which is always measured in graphite.
Transmission imaging for integrated PET-MR systems.
Bowen, Spencer L; Fuin, Niccolò; Levine, Michael A; Catana, Ciprian
2016-08-07
Attenuation correction for PET-MR systems continues to be a challenging problem, particularly for body regions outside the head. The simultaneous acquisition of transmission scan based μ-maps and MR images on integrated PET-MR systems may significantly increase the performance of and offer validation for new MR-based μ-map algorithms. For the Biograph mMR (Siemens Healthcare), however, use of conventional transmission schemes is not practical as the patient table and relatively small diameter scanner bore significantly restrict radioactive source motion and limit source placement. We propose a method for emission-free coincidence transmission imaging on the Biograph mMR. The intended application is not for routine subject imaging, but rather to improve and validate MR-based μ-map algorithms; particularly for patient implant and scanner hardware attenuation correction. In this study we optimized source geometry and assessed the method's performance with Monte Carlo simulations and phantom scans. We utilized a Bayesian reconstruction algorithm, which directly generates μ-map estimates from multiple bed positions, combined with a robust scatter correction method. For simulations with a pelvis phantom a single torus produced peak noise equivalent count rates (34.8 kcps) dramatically larger than a full axial length ring (11.32 kcps) and conventional rotating source configurations. Bias in reconstructed μ-maps for head and pelvis simulations was ⩽4% for soft tissue and ⩽11% for bone ROIs. An implementation of the single torus source was filled with (18)F-fluorodeoxyglucose and the proposed method quantified for several test cases alone or in comparison with CT-derived μ-maps. A volume average of 0.095 cm(-1) was recorded for an experimental uniform cylinder phantom scan, while a bias of <2% was measured for the cortical bone equivalent insert of the multi-compartment phantom. 
Single torus μ-maps of a hip implant phantom showed significantly fewer artifacts and improved dynamic range, and differed greatly from CT results for highly attenuating materials in the case of the patient table. Use of a fixed torus geometry, in combination with translation of the patient table to perform complete tomographic sampling, generated highly quantitative measured μ-maps and is expected to produce images with significantly higher SNR than competing fixed geometries at matched total acquisition time.
Transmission imaging for integrated PET-MR systems
NASA Astrophysics Data System (ADS)
Bowen, Spencer L.; Fuin, Niccolò; Levine, Michael A.; Catana, Ciprian
2016-08-01
Attenuation correction for PET-MR systems continues to be a challenging problem, particularly for body regions outside the head. The simultaneous acquisition of transmission scan based μ-maps and MR images on integrated PET-MR systems may significantly increase the performance of and offer validation for new MR-based μ-map algorithms. For the Biograph mMR (Siemens Healthcare), however, use of conventional transmission schemes is not practical as the patient table and relatively small diameter scanner bore significantly restrict radioactive source motion and limit source placement. We propose a method for emission-free coincidence transmission imaging on the Biograph mMR. The intended application is not for routine subject imaging, but rather to improve and validate MR-based μ-map algorithms; particularly for patient implant and scanner hardware attenuation correction. In this study we optimized source geometry and assessed the method’s performance with Monte Carlo simulations and phantom scans. We utilized a Bayesian reconstruction algorithm, which directly generates μ-map estimates from multiple bed positions, combined with a robust scatter correction method. For simulations with a pelvis phantom a single torus produced peak noise equivalent count rates (34.8 kcps) dramatically larger than a full axial length ring (11.32 kcps) and conventional rotating source configurations. Bias in reconstructed μ-maps for head and pelvis simulations was ⩽4% for soft tissue and ⩽11% for bone ROIs. An implementation of the single torus source was filled with 18F-fluorodeoxyglucose and the proposed method quantified for several test cases alone or in comparison with CT-derived μ-maps. A volume average of 0.095 cm-1 was recorded for an experimental uniform cylinder phantom scan, while a bias of <2% was measured for the cortical bone equivalent insert of the multi-compartment phantom. 
Single torus μ-maps of a hip implant phantom showed significantly fewer artifacts and improved dynamic range, and differed greatly from CT results for highly attenuating materials in the case of the patient table. Use of a fixed torus geometry, in combination with translation of the patient table to perform complete tomographic sampling, generated highly quantitative measured μ-maps and is expected to produce images with significantly higher SNR than competing fixed geometries at matched total acquisition time.
Optimal Portfolio Selection Under Concave Price Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn
2013-06-15
In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.
NASA Astrophysics Data System (ADS)
McDonald, Geoff L.; Zhao, Qing
2017-01-01
Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output goal, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra according to the period between the impulses can be used to detect faults and study the health of rotating machine elements effectively.
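The direct, non-iterative solve for a filter targeting an impulse train can be sketched in a few lines. The code below is an illustrative least-squares variant (the paper's actual objective is a normalized multipoint kurtosis), with an invented minimum-phase transfer path and fault period; it recovers the periodic impulses from the blurred signal by solving one linear system.

```python
import numpy as np

def impulse_train_filter(x, period, L=10):
    """Solve directly (no iteration) for the FIR filter f whose output
    conv(f, x) best matches, in least squares, an impulse train with the
    given period -- the closed-form spirit of MOMEDA-style deconvolution."""
    N = len(x)
    xp = np.r_[np.zeros(L - 1), x]
    # Row k of A is [x[k], x[k-1], ..., x[k-L+1]], so A @ f is the causal
    # convolution of the filter f with the signal x.
    A = np.array([xp[k:k + L][::-1] for k in range(N)])
    target = np.zeros(N)
    target[::period] = 1.0                 # goal: periodic impulse train
    f, *_ = np.linalg.lstsq(A, target, rcond=None)
    return f, A @ f

# Fault-like signal: impulses every 20 samples, blurred by a short
# minimum-phase path h (so a causal inverse filter exists), plus weak noise.
rng = np.random.default_rng(0)
s = np.zeros(200)
s[::20] = 1.0
h = np.array([1.0, -0.6, 0.3])
x = np.convolve(s, h)[:200] + 0.01 * rng.standard_normal(200)

f, y = impulse_train_filter(x, period=20)
```

The deconvolved output `y` closely matches the underlying impulse train `s`, with strong peaks once per assumed period, which is the structure the deconvolution spectrum then scans over candidate periods.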
Timing, Magnitude and Sources of Ecosystem Respiration in High Arctic Tundra of NW Greenland
NASA Astrophysics Data System (ADS)
Lupascu, M.; Xu, X.; Lett, C.; Maseyk, K. S.; Lindsey, D. S.; Thomas, J. S.; Welker, J. M.; Czimczik, C. I.
2011-12-01
High arctic ecosystems with low vegetation density contain significant stocks of organic carbon (C) in the form of soil organic matter that range in age from modern to ancient. How rapidly these C pools can be mineralized and lost to the atmosphere as CO2 (ecosystem respiration, ER) as a consequence of warming and/or changes in precipitation is a major uncertainty in our understanding of current and future arctic biogeochemistry and for predicting future levels of atmospheric CO2. In a 2-year study (2010-2011), we monitored seasonal changes in the magnitude, timing and sources of ER and soil pore space CO2 in the High Arctic of NW Greenland under current and simulated future climate conditions. Measurements were taken from May to August at a multi-factorial, long-term climate change experiment in prostrate dwarf-shrub tundra on patterned ground with 5 treatments: (T1) +2 °C warming, (T2) +4 °C warming, (W) +50% summer precipitation, (T2W) +4 °C + 50% summer precipitation, and (C) control. ER (using opaque chambers) and soil CO2 concentrations (wells) were monitored daily via infrared spectroscopy (LI-COR 800 & 840). The source of CO2 was inferred from its radiocarbon (¹⁴C) content analyzed at the AMS facility in UCI. CO2 was sampled monthly using molecular sieve traps (chambers) or evacuated canisters (wells). Highest rates of ER are observed on vegetated ground with a maximum in midsummer, reflecting a peak in plant productivity and soil temperature. Respiration rates from bare ground remain similar throughout the summer. Additional soil moisture, administered or due to precipitation events, strongly enhances ER from both vegetated and bare ground. The daily ER budget for the sampling period was 53.1 mmol C m⁻² day⁻¹ for the (C) vegetated areas, compared to 60.0 for the (T2), 68.1 for the (T2W), and 79.9 for the (W) treatments. ER was highly correlated with temperature (e.g., C = 0.8; T2W = 0.8) until the middle of July, when heavy precipitation started to occur.
In vegetated areas, ER is dominated by recently fixed C, but older C sources contribute during snow melt. Bare areas can be sources of old C throughout the summer. Under ambient climate conditions, pore space CO2 is produced from recently fixed C near the surface and older C sources at depth. When summer rainfall is increased, recently fixed C is the dominant source of CO2 at all soil depths as recently fixed C is relocated deeper into the soil. Future conditions in NW Greenland will likely result in greater rates of ER, especially dramatic if summer rainfall increases coincidentally with warming. Our findings show that the sources of C efflux will still be dominated mostly by recently fixed C, due to the strong response of plants to water addition.
ERIC Educational Resources Information Center
McMahon, Dennis O.
This report describes a problem-solving approach to grievance settling and negotiations developed in the Brighton, Michigan, school district and inspired by the book, "Getting To Yes," by Roger Fisher and William Ury. In this approach teachers and administrators come to the table not with fixed positions but with problems both sides want…
Radon Q & A. What You Need to Know.
ERIC Educational Resources Information Center
Bayham, Chris
1994-01-01
Because radon is the second leading cause of lung cancer in this country, the article presents a question and answer sheet on where radon comes from, which buildings are most likely to have radon, how to tell whether there is a problem, and expenses involved in testing and fixing problems. (SM)
ERIC Educational Resources Information Center
Kroeker, Leonard P.
The problem of blocking on a status variable was investigated. The one-way fixed-effects analysis of variance, analysis of covariance, and generalized randomized block designs each treat the blocking problem in a different way. In order to compare these designs, it is necessary to restrict attention to experimental situations in which observations…
Y2K for Librarians: Exactly What You Need To Do.
ERIC Educational Resources Information Center
Doering, William
1999-01-01
Addresses how libraries can prepare for Y2K problems. Discusses technology that might be affected and equipment that should be examined, difficulty of fixing noncompliant hardware and software, identifying problem areas and developing solutions, and dealing with vendors. Includes a checklist of necessary preparations. (AEF)
A Microcomputer-Based Network Optimization Package.
1981-09-01
from either cases a or c as Truncated-Newton directions. It can be shown [Ref. 27] that the TNCG algorithm is globally convergent and capable of ... nonzero values of LGB indicate bounds at which arcs are fixed or reversed. Fixed arcs have negative T( ) while free arcs have positive T( ) values ... "Solution of Generalized Network Problems," Working Paper, Department of Finance and Business Economics, School of Business, University of Southern
Identifying Acquisition Patterns of Failure Using Systems Archetypes
2008-04-02
For any but the smallest programs, complete path coverage for defect detection is impractical. Adapted from Pressman, R.S., Software Engineering: A... The "Firefighting" concept from "Past the Tipping Point"; the "Fixes That Fail" systems archetype (problem symptom, fix, and unintended consequences).
NASA Technical Reports Server (NTRS)
Lomax, Harvard
1957-01-01
Several variational problems involving optimum wing and body combinations having minimum wave drag for different kinds of geometrical restraints are analyzed. Particular attention is paid to the effect on the wave drag of shortening the fuselage and, for slender axially symmetric bodies, the effect of fixing the fuselage diameter at several points or even of fixing whole portions of its shape.
The Imperatives of Tactical Level Maintenance,
1986-11-24
consist of the imperatives of tactical level maintenance. These imperatives are: fix forward; provision of repair parts supply in a responsive manner ... Fix forward; provide responsive repair parts supply support; conduct responsive recovery and evacuation operations; establish and maintain ... The primary battlefield source of supply becomes the maintenance system. The role of maintenance, therefore, is to assist in the ...
The trigger mechanism of low-frequency earthquakes on Montserrat
NASA Astrophysics Data System (ADS)
Neuberg, J. W.; Tuffen, H.; Collier, L.; Green, D.; Powell, T.; Dingwell, D.
2006-05-01
A careful analysis of low-frequency seismic events on Soufrière Hills volcano, Montserrat, points to a source mechanism that is non-destructive, repetitive, and has a stationary source location. By combining these seismological clues with new field evidence and numerical magma flow modelling, we propose a seismic trigger model which is based on brittle failure of magma in the glass transition. Loss of heat and gas from the magma results in a strong viscosity gradient across a dyke or conduit. This leads to a build-up of shear stress near the conduit wall where magma can rupture in a brittle manner, as field evidence from a rhyolitic dyke demonstrates. This brittle failure provides seismic energy, the majority of which is trapped in the conduit or dyke forming the low-frequency coda of the observed seismic signal. The trigger source location marks the transition from ductile conduit flow to friction-controlled magma ascent. As the trigger mechanism is governed by the depth-dependent magma parameters, the source location remains fixed at a depth where the conditions allow brittle failure. This is reflected in the fixed seismic source locations.
DEVELOPMENT OF A MODEL FOR REAL TIME CO CONCENTRATIONS NEAR ROADWAYS
Although emission standards for mobile sources continue to be tightened, tailpipe emissions in urban areas continue to be a major source of human exposure to air toxics. Current human exposure models using simplified assumptions based on fixed air monitoring stations and region...
Longitudinal flight control of a civil aircraft with satisfaction of handling qualities [Controle du vol longitudinal d'un avion civil avec satisfaction de qualités de manoeuvrabilité]
NASA Astrophysics Data System (ADS)
Saussie, David Alexandre
2010-03-01
Fulfilling handling qualities still remains a challenging problem in flight control design. These criteria of different natures are derived from a wide experience based upon flight tests and data analysis, and they have to be considered if one expects good behaviour of the aircraft. The goal of this thesis is to develop synthesis methods able to satisfy these criteria with fixed classical architectures imposed by the manufacturer or with a new flight control architecture. This is applied to the longitudinal flight model of a Bombardier Inc. business jet aircraft, namely the Challenger 604. The first step of our work consists in compiling the most commonly used handling qualities in order to compare them. Special attention is devoted to the dropback criterion, for which theoretical analysis leads us to establish a practical formulation for synthesis purposes. Moreover, the comparison of the criteria through a reference model highlighted dominant criteria that, once satisfied, ensure that the others are satisfied too. Consequently, we are able to consider the fulfillment of these criteria in the fixed control architecture framework. Guardian maps (Saydy et al., 1990) are then considered to handle the problem. Originally intended for robustness analysis, they are integrated in various algorithms for controller synthesis. Incidentally, this fixed architecture problem is similar to the static output feedback stabilization problem and reduced-order controller synthesis. Algorithms performing stabilization and pole assignment in a specific region of the complex plane are then proposed. Afterwards, they are extended to handle the gain-scheduling problem. The controller is then scheduled through the entire flight envelope with respect to scheduling parameters. Thereafter, the fixed architecture is put aside, conserving only the same output signals.
The main idea is to use H-infinity synthesis to obtain an initial controller satisfying handling qualities, thanks to reference model pairing, that is robust against mass and center-of-gravity variations. Using robust modal control (Magni, 2002), we are able to substantially reduce the controller order and to structure it so as to come close to a classical architecture. An auto-scheduling method finally allows us to schedule the controller with respect to the scheduling parameters. Two different paths are used to solve the same problem; each exhibits its own advantages and disadvantages.
Optical fiber sensors embedded in flexible polymer foils
NASA Astrophysics Data System (ADS)
van Hoe, Bram; van Steenberge, Geert; Bosman, Erwin; Missinne, Jeroen; Geernaert, Thomas; Berghmans, Francis; Webb, David; van Daele, Peter
2010-04-01
In traditional electrical sensing applications, multiplexing and interconnecting the different sensing elements is a major challenge. Recently, many optical alternatives have been investigated, including optical fiber sensors whose sensing elements consist of fiber Bragg gratings. Different sensing points can be integrated in one optical fiber, solving the interconnection problem and avoiding any electromagnetic interference (EMI). Many new sensing applications also require flexible or stretchable sensing foils which can be attached to or wrapped around irregularly shaped objects such as robot fingers and car bumpers, or which can even be applied in biomedical applications where a sensor is fixed on a human body. The use of these optical sensors, however, always implies a light source, detectors, and electronic circuitry to be coupled and integrated with the sensors. The coupling of the fibers with these light sources and detectors is a critical packaging problem, and, as is well known, the costs for packaging, especially with optoelectronic components and fiber alignment issues, are huge. The end goal of this embedded sensor is to create a flexible optical sensor integrated with (opto)electronic modules and control circuitry. To obtain this flexibility, one can embed the optical sensors and the driving optoelectronics in a stretchable polymer host material. In this article different embedding techniques for optical fiber sensors are described and characterized. Initial tests based on standard manufacturing processes such as molding and laser structuring are reported, as well as a more advanced embedding technique based on soft lithography processing.
Biochemical transport modeling, estimation, and detection in realistic environments
NASA Astrophysics Data System (ADS)
Ortner, Mathias; Nehorai, Arye
2006-05-01
Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion, using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and for estimating the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small numbers of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detects whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g., release time, intensity, and location). We compute a bound on the expected delay before false detection in order to set the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
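The sequential-detection idea above can be illustrated with a minimal CUSUM-style detector for a mean shift. This is a hedged sketch only: the paper's detector handles unknown release time, intensity, and location, which this toy ignores, and the background level, post-release level, and threshold below are invented for illustration.

```python
def cusum_alarm(xs, mu0, mu1, h):
    """Return the index at which a CUSUM statistic for a shift from
    mean mu0 to mean mu1 first exceeds threshold h, or None."""
    k = (mu0 + mu1) / 2.0            # reference value halfway between the means
    s = 0.0
    for i, x in enumerate(xs):
        s = max(0.0, s + x - k)      # accumulate evidence for the shift
        if s > h:                    # larger h: fewer false alarms, longer delay
            return i
    return None

# Noiseless toy data: background level 0.0, a release raises it to 1.0 at t = 50.
data = [0.0] * 50 + [1.0] * 20
alarm = cusum_alarm(data, mu0=0.0, mu1=1.0, h=3.0)
print(alarm)  # alarm fires a few samples after the change point
```

Raising `h` lowers the false-alarm rate at the cost of a longer detection delay, which is exactly the trade-off the bound on expected delay before false detection is used to balance.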
Cyanobacteria: A Precious Bio-resource in Agriculture, Ecosystem, and Environmental Sustainability.
Singh, Jay Shankar; Kumar, Arun; Rai, Amar N; Singh, Devendra P
2016-01-01
Keeping in view the challenges concerning agro-ecosystems and the environment, recent developments in biotechnology offer a more reliable approach to addressing food security for future generations and also to resolving complex environmental problems. Several unique features of cyanobacteria, such as oxygenic photosynthesis, high biomass yield, growth on non-arable lands and on a wide variety of water sources (contaminated and polluted waters), generation of useful by-products and bio-fuels, enhancement of soil fertility, and reduction of greenhouse gas emissions, collectively offer these bio-agents as a precious bio-resource for sustainable development. Cyanobacterial biomass is an effective bio-fertilizer source for improving soil physico-chemical characteristics such as the water-holding capacity and mineral nutrient status of degraded lands. The unique characteristics of cyanobacteria include their ubiquitous presence, short generation time, and capability to fix atmospheric N2. Like other prokaryotic bacteria, cyanobacteria are increasingly applied as bio-inoculants for improving soil fertility and environmental quality. Genetically engineered cyanobacteria have been devised with novel genes for the production of a number of bio-fuels such as bio-diesel, bio-hydrogen, bio-methane, and syngas, and therefore open new avenues for the generation of bio-fuels in an economically sustainable manner. This review is an effort to enlist the valuable information about the qualities of cyanobacteria and their potential role in solving agricultural and environmental problems for the future welfare of the planet.
Parallel Fixed Point Implementation of a Radial Basis Function Network in an FPGA
de Souza, Alisson C. D.; Fernandes, Marcelo A. C.
2014-01-01
This paper proposes a parallel fixed point radial basis function (RBF) artificial neural network (ANN), implemented in a field programmable gate array (FPGA) trained online with a least mean square (LMS) algorithm. The processing time and occupied area were analyzed for various fixed point formats. The problems of precision of the ANN response for nonlinear classification using the XOR gate and interpolation using the sine function were also analyzed in a hardware implementation. The entire project was developed using the System Generator platform (Xilinx), with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA. PMID:25268918
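To illustrate the precision trade-off the study analyzes, the toy below quantizes a Gaussian RBF response to signed fixed-point formats with varying numbers of fractional bits. The formats and numbers here are illustrative assumptions, not the word lengths actually used in the FPGA design.

```python
import math

def to_fixed(x, frac_bits):
    # Round x to the nearest value representable with 2**-frac_bits resolution.
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def gaussian_rbf(x, center, width):
    # Standard Gaussian radial basis function response.
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

exact = gaussian_rbf(0.3, 0.0, 1.0)
errors = {f: abs(to_fixed(exact, f) - exact) for f in (4, 8, 16)}
# Fewer fractional bits -> coarser resolution -> larger quantization error,
# mirroring the precision/area trade-off analyzed for the hardware ANN.
```

Sweeping the fractional word length like this is one way to pick the smallest format whose response error is still acceptable for the classification or interpolation task.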
The importance of fixed costs in animal health systems.
Tisdell, C A; Adamson, D
2017-04-01
In this paper, the authors detail the structure and optimal management of health systems as influenced by the presence and level of fixed costs. Unlike variable costs, fixed costs cannot be altered, and are thus independent of the level of veterinary activity in the short run. Their importance is illustrated by using both single-period and multi-period models. It is shown that multi-stage veterinary decision-making can often be envisaged as a sequence of fixed-cost problems. In general, it becomes clear that, the higher the fixed costs, the greater the net benefit of veterinary activity must be, if such activity is to be economic. The authors also assess the extent to which it pays to reduce fixed costs and to try to compensate for this by increasing variable costs. Fixed costs have major implications for the industrial structure of the animal health products industry and for the structure of the private veterinary services industry. In the former, they favour market concentration and specialisation in the supply of products. In the latter, they foster increased specialisation. While cooperation by individual farmers may help to reduce their individual fixed costs, the organisational difficulties and costs involved in achieving this cooperation can be formidable. In such cases, the only solution is government provision of veterinary services. Moreover, international cooperation may be called for. Fixed costs also influence the nature of the provision of veterinary education.
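A minimal sketch of the single-period logic above, with hypothetical numbers: an activity is economic only when its benefit net of variable costs also covers the fixed cost, so a higher fixed cost demands a higher net benefit.

```python
def is_economic(benefit, variable_cost, fixed_cost):
    # Single-period rule: undertake the veterinary activity only if the
    # benefit net of variable costs exceeds the (unavoidable) fixed cost.
    return benefit - variable_cost > fixed_cost

# Hypothetical disease-control programme: benefit 1000, variable cost 300.
print(is_economic(1000, 300, 500))  # economic at a moderate fixed cost
print(is_economic(1000, 300, 900))  # uneconomic once the fixed cost rises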
Cryogenic Fluid Film Bearing Tester Development Study
NASA Technical Reports Server (NTRS)
Scharrer, Joseph K. (Editor); Murphy, Brian T.; Hawkins, Lawrence A.
1993-01-01
Conceptual designs were developed for the determination of rotordynamic coefficients of cryogenic fluid film bearings. The designs encompassed the use of magnetic and conventional excitation sources as well as the use of magnetic bearings as support bearings. Test article configurations reviewed included overhung, floating housing, and fixed housing. Uncertainty and forced response analyses were performed to assess quality of data and suitability of each for testing a variety of fluid film bearing designs. Development cost and schedule estimates were developed for each design. Facility requirements were reviewed and compared with existing MSFC capability. The recommended configuration consisted of a fixed test article housing centrally located between two magnetic bearings. The magnetic bearings would also serve as the excitation source.
NASA Space Geodesy Program: GSFC data analysis, 1993. VLBI geodetic results 1979 - 1992
NASA Technical Reports Server (NTRS)
Ma, Chopo; Ryan, James W.; Caprette, Douglas S.
1994-01-01
The Goddard VLBI group reports the results of analyzing Mark 3 data sets acquired from 110 fixed and mobile observing sites through the end of 1992 and available to the Space Geodesy Program. Two large solutions were used to obtain site positions, site velocities, baseline evolution for 474 baselines, earth rotation parameters, nutation offsets, and radio source positions. Site velocities are presented in both geocentric Cartesian and topocentric coordinates. Baseline evolution is plotted for the 89 baselines that were observed in 1992 and positions at 1988.0 are presented for all fixed stations and mobile sites. Positions are also presented for quasar radio sources used in the solutions.
CS Informativeness Governs CS-US Associability
Ward, Ryan D.; Gallistel, C. R.; Jensen, Greg; Richards, Vanessa L.; Fairhurst, Stephen; Balsam, Peter D
2012-01-01
In a conditioning protocol, the onset of the conditioned stimulus (CS) provides information about when to expect reinforcement (the US). There are two sources of information from the CS in a delay conditioning paradigm in which the CS-US interval is fixed. The first depends on the informativeness, the degree to which CS onset reduces the average expected time to onset of the next US. The second depends only on how precisely a subject can represent a fixed-duration interval (the temporal Weber fraction). In three experiments with mice, we tested the differential impact of these two sources of information on rate of acquisition of conditioned responding (CS-US associability). In Experiment 1, we show that associability (the inverse of trials to acquisition) increases in proportion to informativeness. In Experiment 2, we show that fixing the duration of the US-US interval or the CS-US interval or both has no effect on associability. In Experiment 3, we equated the increase in information produced by varying the C̅/T̅ ratio with the increase produced by fixing the duration of the CS-US interval. Associability increased with increased informativeness, but, as in Experiment 2, fixing the CS-US duration had no effect on associability. These results are consistent with the view that CS-US associability depends on the increased rate of reward signaled by CS onset. The results also provide further evidence that conditioned responding is temporally controlled when it emerges. PMID:22468633
Practicality of electronic beam steering for MST/ST radars, part 6.2A
NASA Technical Reports Server (NTRS)
Clark, W. L.; Green, J. L.
1984-01-01
Electronic beam steering is described as complex and expensive. The Sunset implementation of electronic steering is described, and it is demonstrated that such systems are cost effective, versatile, and no more complex than fixed beam alternatives, provided three or more beams are needed. The problem of determining accurate meteorological wind components in the presence of spatial variation is considered. A cost comparison of steerable and fixed systems allowing solution of this problem is given. The concepts and relations involved in phase steering are given, followed by the description of the Sunset ST radar steering system. The implications are discussed, references to the competing SAD method are provided, and a recommendation concerning the design of the future Doppler ST/MST systems is made.
Enhancing GADRAS Source Term Inputs for Creation of Synthetic Spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, Steven M.; Harding, Lee
The Gamma Detector Response and Analysis Software (GADRAS) team has enhanced the source term input for the creation of synthetic spectra. These enhancements include the following: allowing users to programmatically provide source information to GADRAS through memory, rather than through a string limited to 256 characters; allowing users to provide their own source decay database information; and updating the default GADRAS decay database to fix errors and include coincident gamma information.
NASA Astrophysics Data System (ADS)
Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter
2018-03-01
The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's method and self-consistent iteration) are (i) it allows the flexibility to choose a large variety of initial guesses without diverging, (ii) it is easy to implement, especially in higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is demonstrated on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT-symmetric models.
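As a rough, hedged sketch of steps (i)-(iv) on a small Hermitian example: recasting A v = λ v as a fixed-point iteration with a Rayleigh-quotient renormalization factor and a Gram-Schmidt projection against already-converged modes reduces, for a real symmetric matrix, to deflated power iteration. The matrix and starting vectors below are invented for illustration; this is not the authors' full OSR scheme for nonlinear or PT-symmetric problems.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def normalize(v):
    n = dot(v, v) ** 0.5
    return [x / n for x in v]

def osr_eigenpair(A, found, v0, iters=500):
    """Fixed-point iteration (i) with a renormalization factor (ii) and
    Gram-Schmidt projection (iii) against the modes already in `found`."""
    v = normalize(v0)
    lam = 0.0
    for _ in range(iters):
        w = matvec(A, v)
        for u in found:                      # (iii) deflate converged modes
            c = dot(w, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        lam = dot(v, w)                      # (ii) Rayleigh-quotient factor
        v = normalize(w)                     # (iv) fixed-point update
    return lam, v

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]                        # eigenvalues 4, 2, 1
lam0, m0 = osr_eigenpair(A, [], [1.0, 0.3, -0.2])    # ground (dominant) mode
lam1, m1 = osr_eigenpair(A, [m0], [0.5, -1.0, 0.4])  # excited, orthogonal mode
```

Without the projection step the second call would collapse back onto the dominant mode, which is exactly the failure mode the Gram-Schmidt step of the OSR method is designed to prevent.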
Dynamic simulation solves process control problem in Oman
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-11-16
A dynamic simulation study solved the process control problems for a Saih Rawl, Oman, gas compressor station operated by Petroleum Development of Oman (PDO). PDO encountered persistent compressor failure that caused frequent facility shutdowns, oil production deferment, and gas flaring. It commissioned MSE (Consultants) Ltd., U.K., to find a solution for the problem. Saih Rawl, about 40 km from Qarn Alam, produces oil and associated gas from a large number of low- and high-pressure wells. Oil and gas are separated in three separators. The oil is pumped to Qarn Alam for treatment and export. Associated gas is compressed in two parallel trains. Train K-1115 is a 350,000 standard cu m/day, four-stage reciprocating compressor driven by a fixed-speed electric motor. Train K-1120 is a 1 million standard cu m/day, four-stage centrifugal compressor driven by a variable-speed motor. The paper describes tripping and surging problems with the gas compressor and the control simplifications that solved the problem.
A systematic approach to the control of esthetic form.
Preston, J D
1976-04-01
A systematic, orderly approach to the problem of establishing harmonious phonetics, esthetics, and function in fixed restorations has been described. The system requires an initial investment of time in performing an adequate diagnostic waxing, but recoups that time in many clinical and laboratory procedures. The method has proved a valuable asset in fixed prosthodontic care. The technique can be expanded and combined with other techniques with a little imagination and artistic bent.
NASA Technical Reports Server (NTRS)
Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce
1985-01-01
An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.
NASA Astrophysics Data System (ADS)
Rocco, Emr; Prado, Afbap; Souza, Mlos
In this work, the problem of bi-impulsive orbital transfers between coplanar elliptical orbits with minimum fuel consumption, but with a time limit for the transfer, is studied. As a first method, the equations presented by Lawden (1993) were used. Those equations furnish the optimal transfer orbit with fixed transfer time between two elliptical coplanar orbits, considering fixed terminal points. The method was adapted to cases with free terminal points, and the equations were solved to develop a software package for orbital maneuvers. As a second method, the equations presented by Eckel and Vinh (1984) were used; those equations provide the transfer orbit between non-coplanar elliptical orbits with minimum fuel and fixed transfer time, or minimum transfer time for a prescribed fuel consumption, considering free terminal points. In this work only the problem with fixed transfer time was considered; the case of minimum time for a prescribed fuel consumption was already studied in Rocco et al. (2000). The method was then modified to consider cases of coplanar orbital transfer, and a second software package for orbital maneuvers was developed. Therefore, two software packages that solve the same problem using different methods were developed. The first method, presented by Lawden, uses primer vector theory. The second method, presented by Eckel and Vinh, uses the ordinary theory of maxima and minima. To test the methods we chose the same terminal orbits and the same transfer time as input, and we verified that the two methods do not produce exactly the same result. In this work, which is an extension of Rocco et al. (2002), these differences in the results are explored with the objective of determining the reason for their occurrence and which modifications should be made to eliminate them.
Information transmission on hybrid networks
NASA Astrophysics Data System (ADS)
Chen, Rongbin; Cui, Wei; Pu, Cunlai; Li, Jie; Ji, Bo; Gakis, Konstantinos; Pardalos, Panos M.
2018-01-01
Many real-world communication networks have a hybrid nature with both fixed and mobile nodes, such as mobile phone networks composed mainly of fixed base stations and mobile phones. In this paper, we discuss the information transmission process on hybrid networks with both fixed and mobile nodes. The fixed nodes (base stations) are connected as a spatial lattice on the plane, forming the information-carrying backbone, while the mobile nodes (users), which are the sources and destinations of information packets, each connect to their current nearest fixed node to deliver and receive information packets. We observe a phase transition of the traffic load in the hybrid network as the packet generation rate rises from below to above a critical value, which measures the network's packet-delivery capacity. We obtain the optimal speed of the moving nodes that leads to the maximum network capacity. We further improve the network capacity by rewiring the fixed nodes and by considering the current load of the fixed nodes during packet transmission. Our purpose is to optimize the capacity of hybrid networks from the perspective of network science, and to provide some insights for the construction of future communication infrastructures.
NASA Astrophysics Data System (ADS)
Battye, William; Aneja, Viney P.; Schlesinger, William H.
2017-09-01
Just as carbon fueled the Industrial Revolution, nitrogen has fueled an Agricultural Revolution. The use of synthetic nitrogen fertilizers and the cultivation of nitrogen-fixing crops both expanded exponentially during the last century, with most of the increase occurring after 1960. As a result, the current flux of reactive, or fixed, nitrogen compounds to the biosphere due to human activities is roughly equivalent to the total flux of fixed nitrogen from all natural sources, both on land masses and in the world's oceans. Natural fluxes of fixed nitrogen are subject to very large uncertainties, but anthropogenic production of reactive nitrogen has increased almost fivefold in the last 60 years, and this rapid increase in anthropogenic fixed nitrogen has removed any uncertainty on the relative importance of anthropogenic fluxes to the natural budget. The increased use of nitrogen has been critical for increased crop yields and protein production needed to keep pace with the growing world population. However, similar to carbon, the release of fixed nitrogen into the natural environment is linked to adverse consequences at local, regional, and global scales. Anthropogenic contributions of fixed nitrogen continue to grow relative to the natural budget, with uncertain consequences.
Beyond Deficit: Graduate Student Research-Writing Pedagogies
ERIC Educational Resources Information Center
Badenhorst, Cecile; Moloney, Cecilia; Rosales, Janna; Dyer, Jennifer; Ru, Lina
2015-01-01
Graduate writing is receiving increasing attention, particularly in contexts of diverse student bodies and widening access to universities. In many of these contexts, writing is seen as "a problem" in need of fixing. Often, the problem and the solution are perceived as being solely located in notions of deficit in individuals and not in…
Examining the Impact of Adaptively Faded Worked Examples on Student Learning Outcomes
ERIC Educational Resources Information Center
Flores, Raymond; Inan, Fethi
2014-01-01
The purpose of this study was to explore effective ways to design guided practices within a web-based mathematics problem solving tutorial. Specifically, this study examined student learning outcome differences between two support designs (e.g. adaptively faded and fixed). In the adaptively faded design, students were presented with problems in…
Bending Back on High School Programs for Youth with Learning Disabilities
ERIC Educational Resources Information Center
Edgar, Eugene
2005-01-01
In this opinion piece, the author reviews several major problems facing those who care about students labeled as having learning disabilities (LD). He believes that while there are technical problems that educators should be able to fix (definition of LD, best instructional practices for students so identified, powerful secondary programs that…
Earthquakes Threaten Many American Schools
ERIC Educational Resources Information Center
Bailey, Nancy E.
2010-01-01
Millions of U.S. children attend schools that are not safe from earthquakes, even though they are in earthquake-prone zones. Several cities and states have worked to identify and repair unsafe buildings, but many others have done little or nothing to fix the problem. The reasons for ignoring the problem include political and financial ones, but…
NASA Astrophysics Data System (ADS)
Ning, Boda; Jin, Jiong; Zheng, Jinchuan; Man, Zhihong
2018-06-01
This paper is concerned with finite-time and fixed-time consensus of multi-agent systems in a leader-following framework. Different from conventional leader-following tracking approaches where inherent dynamics satisfying the Lipschitz continuous condition is required, a more generalised case is investigated: discontinuous inherent dynamics. By nonsmooth techniques, a nonlinear protocol is first proposed to achieve the finite-time leader-following consensus. Then, based on fixed-time stability strategies, the fixed-time leader-following consensus problem is solved. An upper bound of settling time is obtained by using a new protocol, and such a bound is independent of initial states, thereby providing additional options for designers in practical scenarios where initial conditions are unavailable. Finally, numerical simulations are provided to demonstrate the effectiveness of the theoretical results.
NASA Astrophysics Data System (ADS)
Trautmann, L.; Petrausch, S.; Bauer, M.
2005-09-01
The functional transformation method (FTM) is an established mathematical method for accurate simulation of multidimensional physical systems from various fields of science, including optics, heat and mass transfer, electrical engineering, and acoustics. It is a frequency-domain method based on the decomposition into eigenvectors and eigenfrequencies of the underlying physical problem. In this article, the FTM is applied to real-time simulations of vibrating strings that are ideally fixed at one end, while the termination at the other end is modeled by a frequency-dependent input impedance. Thus, boundary conditions of the third kind are applied at the impedance-terminated end. It is shown that accurate and stable simulations are achieved with nearly the same computational cost as with strings ideally fixed at both ends.
Robust Control Design via Linear Programming
NASA Technical Reports Server (NTRS)
Keel, L. H.; Bhattacharyya, S. P.
1998-01-01
This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed loop specifications considered here are given in terms of a target performance vector representing a desired set of closed loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed order controller. By enlarging the target from a fixed point set to an interval set, the solvability conditions with a fixed order controller are relaxed and a solution becomes easier to obtain. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed loop specifications.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging from 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
Customer-centered problem solving.
Samelson, Q B
1999-11-01
If there is no single best way to attract new customers and retain current customers, there is surely an easy way to lose them: fail to solve the problems that arise in nearly every buyer-supplier relationship, or solve them in an unsatisfactory manner. Yet, all too frequently, companies do just that. Either we deny that a problem exists, we exert all our efforts to pin the blame elsewhere, or we "Band-Aid" the problem instead of fixing it, almost guaranteeing that we will face it again and again.
NASA Astrophysics Data System (ADS)
Larson, T.
2010-12-01
Measuring air pollution concentrations from a moving platform is not a new idea. Historically, however, most information on the spatial variability of air pollutants has been derived from fixed site networks operating simultaneously over space. While this approach has obvious advantages from a regulatory perspective, with the increasing need to understand ever finer scales of spatial variability in urban pollution levels, the use of mobile monitoring to supplement fixed site networks has received increasing attention. Here we present examples of the use of this approach: 1) to assess existing fixed-site fine particle networks in Seattle, WA, including the establishment of new fixed-site monitoring locations; 2) to assess the effectiveness of a regulatory intervention, a wood stove burning ban, on the reduction of fine particle levels in the greater Puget Sound region; and 3) to assess spatial variability of both wood smoke and mobile source impacts in both Vancouver, B.C. and Tacoma, WA. Deducing spatial information from the inherently spatio-temporal measurements taken from a mobile platform is an area that deserves further attention. We discuss the use of “fuzzy” points to address the fine-scale spatio-temporal variability in the concentration of mobile source pollutants, specifically to deduce the broader distribution and sources of fine particle soot in the summer in Vancouver, B.C. We also discuss the use of principal component analysis to assess the spatial variability in multivariate, source-related features deduced from simultaneous measurements of light scattering, light absorption and particle-bound PAHs in Tacoma, WA. With increasing miniaturization and decreasing power requirements of air monitoring instruments, the number of simultaneous measurements that can easily be made from a mobile platform is rapidly increasing.
Hopefully the methods used to design mobile monitoring experiments for differing purposes, and the methods used to interpret those measurements will keep pace.
NASA Astrophysics Data System (ADS)
Zhang, X.; PAN, X.; MA, M.; Li, W.; Cui, L.
2016-12-01
N-fixing cyanobacteria can create extra nitrogen for aquatic ecosystems. Previous studies reported inconsistent patterns in the contribution of biological nitrogen fixation to the nitrogen pools in aquatic ecosystems. However, there were few studies concerning the effect of the nitrogen fixed by cyanobacteria on the nitrogen removal efficiency in constructed wetlands. This study was performed at the Beijing Wildlife Rescue and Rehabilitation Centre, where a constructed lake for the habitation of waterfowls and a constructed wetland for purifying sewage from the lake are located. The composition of phytoplankton communities, the concentrations of particulate organic nitrogen (PON) and nitrogen fixation rates (Rn) in the constructed lake and the constructed wetland were compared throughout a growing season. We counted the densities of cells of the genera Anabaena and Microcystis, and explored their relationships with PON and Rn in water. The proportions of PON from various sources, including the ambient N2, waterfowl faeces, wetland sediments and nitrates, were calculated from the natural abundance of 15N with the IsoSource software. The results revealed that the constructed lake was alternately dominated by Anabaena and Microcystis throughout the growing season, and that Rn was positively correlated with PON and the cell density of Anabaena (P < 0.05). This implied that the nitrogen fixed by N-fixing Anabaena might be utilized by non-N-fixing Microcystis, maintaining the fixed nitrogen in PON form. The ambient N2 contributed 0.5–82% and 50.0–84.7% of the PON in the constructed lake and wetland, respectively, during the growing season. The proportions of PON from N2 increased to more than 80% when Rn peaked in September. The results demonstrated that the nitrogen fixed by Anabaena might be utilized by non-N-fixing Microcystis, which formed water blooms in summer. Therefore, the decline in the removal efficiency of PON in the constructed wetland in summer might indirectly result from nitrogen fixation, since the proliferating algae were difficult to settle in surface-flow wetlands.
Homogenization of Winkler-Steklov spectral conditions in three-dimensional linear elasticity
NASA Astrophysics Data System (ADS)
Gómez, D.; Nazarov, S. A.; Pérez, M. E.
2018-04-01
We consider a homogenization Winkler-Steklov spectral problem that consists of the elasticity equations for a three-dimensional homogeneous anisotropic elastic body which has a plane part of the surface subject to alternating boundary conditions on small regions periodically placed along the plane. These conditions are of the Dirichlet type and of the Winkler-Steklov type, the latter containing the spectral parameter. The rest of the boundary of the body is fixed, and the period and size of the regions, where the spectral parameter arises, are of order ɛ. For fixed ɛ, the problem has a discrete spectrum, and we address the asymptotic behavior of the eigenvalues {β_k^ɛ}_{k=1}^∞ as ɛ → 0. We show that β_k^ɛ = O(ɛ^{-1}) for each fixed k, and we observe a common limit point for all the rescaled eigenvalues ɛβ_k^ɛ, while we make it evident that, although the periodicity of the structure only affects the boundary conditions, a band-gap structure of the spectrum is inherited asymptotically. Also, we provide the asymptotic behavior for certain "groups" of eigenmodes.
Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.
Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J
2017-12-01
Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
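As a concrete reference point for the framework the authors argue should be revisited, the classical inverse-variance fixed-effect estimator can be sketched in a few lines; the study effects and variances below are illustrative, not taken from the paper:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted fixed-effect pooled estimate.

    effects:   per-study effect sizes (e.g. log odds ratios)
    variances: per-study sampling variances
    Returns (pooled_effect, pooled_standard_error).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Three hypothetical studies (made-up log odds ratios and variances).
effect, se = fixed_effect_meta([0.2, 0.35, 0.1], [0.04, 0.09, 0.02])
```

The random-effects alternative inflates each variance by an estimated between-study component before weighting; the authors' argument is that better error estimation is achievable while staying within the fixed-effect weighting framework.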
Fixed-Order Mixed Norm Designs for Building Vibration Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
2000-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
Goal-based h-adaptivity of the 1-D diamond difference discrete ordinate method
NASA Astrophysics Data System (ADS)
Jeffers, R. S.; Kópházi, J.; Eaton, M. D.; Févotte, F.; Hülsemann, F.; Ragusa, J.
2017-04-01
The quantity of interest (QoI) associated with a solution of a partial differential equation (PDE) is not, in general, the solution itself, but a functional of the solution. Dual weighted residual (DWR) error estimators are one way of providing an estimate of the error in the QoI resulting from the discretisation of the PDE. This paper aims to provide an estimate of the error in the QoI due to the spatial discretisation, where the discretisation scheme being used is the diamond difference (DD) method in space and discrete ordinate (SN) method in angle. The QoI are reaction rates in detectors and the value of the eigenvalue (Keff) for 1-D fixed-source and eigenvalue (Keff criticality) neutron transport problems, respectively. Local values of the DWR over individual cells are used as error indicators for goal-based mesh refinement, which aims to give an optimal mesh for a given QoI.
Spatial Indexing for Data Searching in Mobile Sensing Environments.
Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus; Palaniswami, Marimuthu S
2017-06-18
Data searching and retrieval is one of the fundamental functionalities in many Web of Things applications, which need to collect, process and analyze huge amounts of sensor stream data. The problem in fact has been well studied for data generated by sensors that are installed at fixed locations; however, challenges emerge along with the popularity of opportunistic sensing applications in which mobile sensors keep reporting observation and measurement data at variable intervals and changing geographical locations. To address these challenges, we develop the Geohash-Grid Tree, a spatial indexing technique specially designed for searching data integrated from heterogeneous sources in a mobile sensing environment. Results of the experiments on a real-world dataset collected from the SmartSantander smart city testbed show that the index structure allows efficient search based on spatial distance, range and time windows in a large time series database.
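The grid keys underlying such an index are geohash prefixes. A minimal sketch of the standard base-32 geohash encoding follows; this is the generic algorithm, not the Geohash-Grid Tree's actual code, and the precision value is illustrative:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=6):
    """Encode a (lat, lon) pair as a base-32 geohash string by
    interleaving longitude and latitude interval-halving bits."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    result, bit, ch, even = [], 0, 0, True
    while len(result) < precision:
        if even:  # longitude bit
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                ch, lon_lo = (ch << 1) | 1, mid
            else:
                ch, lon_hi = ch << 1, mid
        else:     # latitude bit
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                ch, lat_lo = (ch << 1) | 1, mid
            else:
                ch, lat_hi = ch << 1, mid
        even = not even
        bit += 1
        if bit == 5:  # five bits per base-32 character
            result.append(BASE32[ch])
            bit, ch = 0, 0
    return "".join(result)
```

Points that share a key prefix fall in the same grid cell, so a tree keyed on these strings supports the spatial range and distance searches the paper evaluates.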
Stability of an optically contacted etalon to cosmic radiation. [aboard Dynamics Explorer satellite
NASA Technical Reports Server (NTRS)
Killeen, T. L.; Dettman, D. L.; Hays, P. B.
1980-01-01
An investigation has been completed to determine the effects of prolonged exposure to cosmic radiation on Zerodur spacing elements used between two dielectric reflectors on silica substrates in the plane Fabry-Perot etalon selected for flight in the Dynamics Explorer satellite. The measured radiation expansion coefficient for Zerodur is approximately -4.0 × 10^-12 per rad. In addition to the overall change in gap dimension, test data indicate a degradation in etalon parallelism, which is ascribed to the different doses received by the three spacers due to their differing distances from a Co-60 source. The effect is considered to be of practical use in the tuning and parallelism adjustment of fixed gap etalons. The variation is small enough not to pose a problem for the satellite instrument where expected radiation doses are less than 10,000 rads.
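A back-of-envelope check of the scale of the effect, using only numbers stated in the abstract (the 10 cm spacer length is a hypothetical figure for illustration):

```python
# Fractional gap change = radiation expansion coefficient x accumulated dose.
coefficient = -4.0e-12   # per rad, measured for Zerodur (from the abstract)
max_dose = 1.0e4         # rad, the stated upper bound for the mission
fractional_change = coefficient * max_dose

# For a hypothetical 10 cm spacer, the gap shrinks by only a few nanometres
# over the mission, consistent with the conclusion that the variation
# poses no problem for the instrument.
gap_change_m = fractional_change * 0.10
```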
Gu, Herong; Guan, Yajuan; Wang, Huaibao; Wei, Baoze; Guo, Xiaoqiang
2014-01-01
Microgrid is an effective way to integrate the distributed energy resources into the utility networks. One of the most important issues is the power flow control of grid-connected voltage-source inverter in microgrid. In this paper, the small-signal model of the power flow control for the grid-connected inverter is established, from which it can be observed that the conventional power flow control may suffer from poor damping and slow transient response, while the new power flow control can mitigate these problems without affecting the steady-state power flow regulation. Results of continuous-domain simulations in MATLAB and digital control experiments based on a 32-bit fixed-point TMS320F2812 DSP are in good agreement, verifying the small-signal model analysis and the effectiveness of the proposed method. PMID:24672304
NASA Astrophysics Data System (ADS)
Hafner, D.
2015-09-01
The application of ground-based boresight sources for calibration and testing of tracking antennas usually entails various difficulties, mostly due to unwanted ground effects. To avoid this problem, DLR MORABA developed a small, lightweight, frequency-adjustable S-band boresight source, mounted on a small remote-controlled multirotor aircraft. Highly accurate GPS-supported position and altitude control functions allow both very steady positioning of the aircraft in mid-air and precise waypoint-based, semi-autonomous flights. In contrast to fixed near-ground boresight sources, this flying setup avoids obstructions in the Fresnel zone between source and antenna. Further, it minimizes ground reflections and other multipath effects which can affect antenna calibration. In addition, the large operating range of a flying boresight simplifies measurements in the far field of the antenna and permits undisturbed antenna pattern tests. A unique application is the realistic simulation of sophisticated flight paths, including overhead tracking and demanding trajectories of fast objects such as sounding rockets. Likewise, dynamic tracking tests are feasible which provide crucial information about the antenna pedestal performance, particularly at high elevations, and reveal weaknesses in the autotrack control loop of tracking antenna systems. During acceptance tests of MORABA's new tracking antennas, a manned aircraft was never used, since the Flying Boresight surpassed all expectations regarding usability, efficiency, and precision. Hence, it became an integral part of MORABA's standard antenna setup and calibration procedures.
Spatial Rule-Based Modeling: A Method and Its Application to the Human Mitotic Kinetochore
Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter
2013-01-01
A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796
Biomechanical considerations on tooth-implant supported fixed partial dentures
Calvani, Pasquale; Hirayama, Hiroshi
2012-01-01
This article discusses the connection of teeth to implants in order to restore partial edentulism. The main problem arising from this connection is tooth intrusion, which can occur in up to 7.3% of cases. This complication is examined from the perspective of the biomechanics of the anatomical structures involved, that is, the periodontal ligament and the bone, as well as that of the tooth- and implant-supported fixed partial dentures. PMID:23255882
NASA Astrophysics Data System (ADS)
Becker, P.; Idelsohn, S. R.; Oñate, E.
2015-06-01
This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.
NASA Astrophysics Data System (ADS)
Zhang, Wei-Guo; Li, Zhe; Liu, Yong-Jun
2018-01-01
In this paper, we study the pricing problem of the continuously monitored fixed and floating strike geometric Asian power options in a mixed fractional Brownian motion environment. First, we derive both closed-form solutions and mixed fractional partial differential equations for fixed and floating strike geometric Asian power options based on delta-hedging strategy and partial differential equation method. Second, we present the lower and upper bounds of the prices of fixed and floating strike geometric Asian power options under the assumption that both risk-free interest rate and volatility are interval numbers. Finally, numerical studies are performed to illustrate the performance of our proposed pricing model.
Lossless Compression of Data into Fixed-Length Packets
NASA Technical Reports Server (NTRS)
Kiely, Aaron B.; Klimesh, Matthew A.
2009-01-01
A computer program effects lossless compression of data samples from a one-dimensional source into fixed-length data packets. The software makes use of adaptive prediction: it exploits the data structure in such a way as to increase the efficiency of compression beyond that otherwise achievable. Adaptive linear filtering is used to predict each sample value based on past sample values. The difference between predicted and actual sample values is encoded using a Golomb code.
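The abstract does not give the program's internals, but the general scheme it names (predict each sample, then entropy-code the residuals with a Golomb code) can be sketched. The order-1 "previous sample" predictor and the fixed Rice parameter k below are deliberate simplifications of the adaptive linear filtering described:

```python
def zigzag(n):
    """Map a signed residual to an unsigned integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(value, k):
    """Golomb-Rice code: unary-coded quotient, a stop bit, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

def compress(samples, k=2):
    """Predict each sample as the previous one and Rice-code the residuals."""
    out, prev = [], 0
    for s in samples:
        out.append(rice_encode(zigzag(s - prev), k))
        prev = s
    return "".join(out)
```

A real fixed-length packetizer would additionally adapt the predictor and k to the data statistics and flush the bitstream at packet boundaries.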
Environmental Fate and Transport of a New Energetic Material CL-20
2006-02-01
the study suggest indirectly that availability of their respective food sources, bacteria and fungi, were also unaffected, or increased in soil CL-20...was placed inside each pot at the bottom in order to prevent soil loss during testing. Alfalfa seeds were inoculated with nitrogen-fixing bacteria ...prior to sowing (Southern States Alfalfa-Clover Nitrogen Fixing Bacteria , lot no. 3092002, expiration date 07/2004 [Alfalfa toxicity tests were
Morton, Siyuan C; Zhang, Yan; Edwards, Marc A
2005-08-01
Control of microbial regrowth in iron pipes is a major challenge for water utilities. This work examines the inter-relationship between iron corrosion and bacterial regrowth, with a special focus on the potential of iron pipe to serve as a source of phosphorus. Under some circumstances, corroding iron and steel may serve as a source for all macronutrients necessary for bacterial regrowth including fixed carbon, fixed nitrogen and phosphorus. Conceptual models and experimental data illustrate that levels of phosphorus released from corroding iron are significant relative to that necessary to sustain high levels of biofilm bacteria. Consequently, it may not be possible to control regrowth on iron surfaces by limiting phosphorus in the bulk water.
NASA Technical Reports Server (NTRS)
Ryan, J. W.; Ma, C.; Caprette, D. S.
1993-01-01
The Goddard VLBI group reports the results of analyzing 1648 Mark 3 data sets acquired from fixed and mobile observing sites through the end of 1991, and available to the Crustal Dynamics Project. Two large solutions were used to obtain Earth rotation parameters, nutation offsets, radio source positions, site positions, site velocities, and baseline evolution. Site positions are tabulated on a yearly basis for 1979 to 1995, inclusive. Site velocities are presented in both geocentric Cartesian and topocentric coordinates. Baseline evolution is plotted for 200 baselines, and individual length determinations are presented for an additional 356 baselines. This report includes 155 quasar radio sources, 96 fixed stations and mobile sites, and 556 baselines.
Topological analysis of the motion of an ellipsoid on a smooth plane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivochkin, M Yu
2008-06-30
The problem of the motion of a dynamically and geometrically symmetric heavy ellipsoid on a smooth horizontal plane is investigated. The problem is integrable and can be considered a generalization of the problem of motion of a heavy rigid body with a fixed point in the Lagrangian case. The Smale bifurcation diagrams are constructed. Surgeries of tori are investigated using methods developed by Fomenko and his students. Bibliography: 9 titles.
Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M
2014-01-01
Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique in identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject “at rest”). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing “signal” (brain activity) can be distinguished from the “noise” components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour-intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX (“FMRIB’s ICA-based X-noiseifier”), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component, FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets.
The noise components can then be subtracted from (or regressed out of) the original data, to provide automated cleanup. On conventional resting-state fMRI (rfMRI) single-run datasets, FIX achieved about 95% overall accuracy. On high-quality rfMRI data from the Human Connectome Project, FIX achieves over 99% classification accuracy, and as a result is being used in the default rfMRI processing pipeline for generating HCP connectomes. FIX is publicly available as a plugin for FSL. PMID:24389422
Milner, Allison; Krnjack, Lauren; LaMontagne, Anthony D
2017-01-01
Objectives: Entry into employment may be a time when a young person's well-being and mental health are challenged. Specifically, we examined the difference in mental health when a young person was "not in the labor force" (NILF) (ie, engaged in non-working activity such as participating in education) compared to being in a job with varying levels of psychosocial quality. Method: The data source for this study was the Household Income and Labor Dynamics in Australia (HILDA) study, and the sample included 10 534 young people (aged ≤30 years). We used longitudinal fixed-effects regression to investigate within-person changes in mental health, comparing circumstances where individuals were NILF to when they were employed in jobs of varying psychosocial quality. Results: Compared to when individuals were not in the labor force, results suggest a statistically significant decline in mental health when young people were employed in jobs with poor psychosocial working conditions and an improvement in mental health when they were employed in jobs with optimal psychosocial working conditions. Our results were robust to various sensitivity tests, including adjustment for life events and the lagged effects of mental health and job stressors. Conclusions: If causal, the results suggest that improving the psychosocial quality of work for younger workers will protect and promote their well-being, and may reduce the likelihood of mental health problems later on.
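The "within" transformation behind longitudinal fixed-effects regression is simple to state: demean outcome and exposure within each person, then regress one on the other, so that stable person-level differences cancel out. A minimal sketch with made-up data (not HILDA, and omitting the covariate adjustment and robust errors a real analysis needs):

```python
from collections import defaultdict

def within_demean(values, ids):
    """Subtract each person's own mean from that person's observations."""
    sums, counts = defaultdict(float), defaultdict(int)
    for v, i in zip(values, ids):
        sums[i] += v
        counts[i] += 1
    return [v - sums[i] / counts[i] for v, i in zip(values, ids)]

def fe_slope(y, x, ids):
    """Fixed-effects slope: OLS through the origin on within-demeaned data."""
    yd, xd = within_demean(y, ids), within_demean(x, ids)
    return sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)

# Made-up example: each person's outcome is 2*x plus a person-specific level,
# so the within-person slope should recover 2 exactly.
slope = fe_slope([10.0, 12.0, 5.0, 9.0], [0.0, 1.0, 0.0, 2.0], [1, 1, 2, 2])
```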
Evaluation of the Community Multi-scale Air Quality (CMAQ) ...
The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In the fall of 2015, CMAQ version 5.1 was released. This new version of CMAQ contains important bug fixes to several issues that were identified in CMAQv5.0.2 and additionally includes updates to other portions of the code. Several annual, and numerous episodic, CMAQv5.1 simulations were performed to assess the impact of these improvements on the model results. These results will be presented, along with a base evaluation of the performance of the CMAQv5.1 modeling system against available surface and upper-air measurements during the time period simulated. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, proces
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.
2013-12-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. 
We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
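The adaptive bandwidth rule described above (each event's smoothing distance taken from the distance to its nth nearest neighbour, following Helmstetter et al.) can be sketched in a few lines. This is a minimal illustration, not the USGS implementation; the Gaussian kernel form, the function names, and the clipping limits are assumptions:

```python
import numpy as np

def smoothing_distances(epicenters, n=5, d_min=10.0, d_max=100.0):
    """For each epicenter, use the distance to its n-th nearest neighbour
    as the kernel bandwidth, clipped to an assumed [d_min, d_max] km range."""
    epicenters = np.asarray(epicenters, dtype=float)
    # Pairwise distances (fine for small catalogues; use a KD-tree for large ones).
    diff = epicenters[:, None, :] - epicenters[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Sort each row; column 0 is the point itself (distance 0), column n is
    # the n-th nearest neighbour.
    d_n = np.sort(dist, axis=1)[:, n]
    return np.clip(d_n, d_min, d_max)

def smoothed_rate(grid_points, epicenters, bandwidths):
    """Gaussian-kernel seismicity rate at each grid point; each event
    contributes with its own adaptive bandwidth."""
    grid = np.asarray(grid_points, dtype=float)
    epi = np.asarray(epicenters, dtype=float)
    r2 = ((grid[:, None, :] - epi[None, :, :]) ** 2).sum(axis=-1)
    kern = np.exp(-r2 / (2.0 * bandwidths ** 2)) / (2.0 * np.pi * bandwidths ** 2)
    return kern.sum(axis=1)
```

An isolated event receives a large bandwidth (spreading its rate widely), while events inside a cluster receive small bandwidths, concentrating the rate near the epicenters, which is the behaviour the abstract describes.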
Kellermeier, Markus; Bert, Christoph; Müller, Reinhold G
2015-07-01
Focussing primarily on thermal load capacity, we describe the performance of a novel fixed anode CT (FACT) compared with a 100 kW reference CT. Being a fixed system, FACT has no focal spot blurring of the X-ray source during projection. Monte Carlo and finite element methods were used to determine the fluence proportional to thermal capacity. Studies of repeated short-time exposures showed that FACT could operate in pulsed mode for an unlimited period. A virtual model for FACT was constructed to analyse various temporal sequences for the X-ray source ring, representing a circular array of 1160 fixed anodes in the gantry. Assuming similar detector properties at a very small integration time, image quality was investigated using an image reconstruction library. Our model showed that approximately 60 gantry rounds per second, i.e. 60 sequential targetings of the 1160 anodes per second, were required to achieve a performance level equivalent to that of the reference CT (relative performance, RP = 1) at equivalent image quality. The optimal projection duration in each direction was about 10 μs. With a beam pause of 1 μs between projections, 78.4 gantry rounds per second with consecutive source activity were thermally possible at a given thermal focal spot. The settings allowed for a 1.3-fold (RP = 1.3) shorter scan time than conventional CT while maintaining radiation exposure and image quality. Based on the high number of rounds, FACT supports a high image frame rate at low doses, which would be beneficial in a wide range of diagnostic and technical applications. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Shine, R. A.
1975-01-01
The problem of LTE and non-LTE line formation in the presence of nonthermal velocity fields with geometric scales between the microscopic and macroscopic limits is investigated in the cases of periodic sinusoidal and sawtooth waves. For a fixed source function (the LTE case), it is shown that time-averaged line profiles progress smoothly from the microscopic to the macroscopic limits as the geometric scale of the motions increases, that the sinusoidal motions produce symmetric time-averaged profiles, and that the sawtooth motions cause a redshift. In several idealized non-LTE cases, it is found that intermediate-scale velocity fields can significantly increase the surface source functions and line-core intensities. Calculations are made for a two-level atom in an isothermal atmosphere for a range of velocity scales and non-LTE coupling parameters and also for a two-level atom and a four-level representation of Na I line formation in the Harvard-Smithsonian Reference Atmosphere (1971) solar model. It is found that intermediate-scale velocity fields in the solar atmosphere could explain the central intensities of the Na I D lines and other strong absorption lines without invoking previously suggested high electron densities.
Nitrogen in the environment: sources, problems, and management.
Follett, R F; Hatfield, J L
2001-10-30
Nitrogen (N) is applied worldwide to produce food. It is in the atmosphere, soil, and water and is essential to all life. N for agriculture includes fertilizer, biologically fixed, manure, recycled crop residue, and soil-mineralized N. Presently, fertilizer N is a major source of N, and animal manure N is inefficiently used. Potential environmental impacts of N excreted by humans are increasing rapidly with increasing world populations. Where needed, N must be efficiently used because N can be transported immense distances and transformed into soluble and/or gaseous forms that pollute water resources and cause greenhouse effects. Unfortunately, increased amounts of gaseous N enter the environment as N2O to cause greenhouse warming and as NH3 to shift ecological balances of natural ecosystems. Large amounts of N are displaced with eroding sediments in surface waters. Soluble N in runoff or leachate water enters streams, rivers, and groundwater. High-nitrate drinking water can cause methemoglobinemia, while nitrosamines are associated with various human cancers. We describe the benefits, but also how N in the wrong form or place results in harmful effects on humans and animals, as well as to ecological and environmental systems.
NASA Astrophysics Data System (ADS)
Lee, Sang-Young
2017-05-01
Forthcoming wearable/flexible electronics with compelling shape diversity and mobile usability have garnered significant attention as a kind of disruptive technology that could drastically change our daily lives. From a power source point of view, conventional rechargeable batteries (represented by lithium-ion batteries) with fixed shapes and dimensions are generally fabricated by winding (or stacking) cell components (such as anodes, cathodes and separator membranes) and then packaging them with (cylindrical-/rectangular-shaped) metallic canisters or pouch films, finally followed by injection of liquid electrolytes. In particular, the use of liquid electrolytes gives rise to serious concerns in cell assembly, because these electrolytes require strict packaging materials to avoid leakage problems and also separator membranes to prevent electrical contact between electrodes. For these reasons, the conventional cell assembly and materials have left batteries with little variety in form factors, thus imposing formidable challenges on their integration into versatile-shaped electronic devices. Here, as a facile and efficient strategy to address this longstanding challenge, we demonstrate a new class of printed solid-state Li-ion batteries and also all-inkjet-printed solid-state supercapacitors with exceptional shape conformability and aesthetic versatility which lie far beyond those achievable with conventional battery technologies.
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
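The selection-and-weighted-average strategy based on local area energy, used above for the high-frequency IMF component, can be sketched as follows. The window size, the energy-similarity threshold, and the function names are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def local_energy(img, win=3):
    """Sum of squared intensities over a win x win neighbourhood (edge-padded)."""
    pad = win // 2
    padded = np.pad(np.asarray(img, dtype=float) ** 2, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def fuse_imf(imf_a, imf_b, win=3, threshold=0.8):
    """Selection-and-weighted-average rule: take the coefficient with the
    larger local energy; blend by energy weights when the energies are
    comparable (ratio above an assumed similarity threshold)."""
    ea, eb = local_energy(imf_a, win), local_energy(imf_b, win)
    ratio = np.minimum(ea, eb) / np.maximum(ea, eb).clip(min=1e-12)
    wa = ea / (ea + eb).clip(min=1e-12)
    averaged = wa * imf_a + (1.0 - wa) * imf_b
    selected = np.where(ea >= eb, imf_a, imf_b)
    return np.where(ratio > threshold, averaged, selected)
```

Where one source image clearly dominates the local energy, its coefficient is selected outright; where the two are comparable, an energy-weighted average avoids hard seams in the fused component.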
NASA Technical Reports Server (NTRS)
Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.
1999-01-01
We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they, however, may actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in oddly dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
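The building block named above, a standard consistent and stable explicit finite-difference scheme for the driven wave equation, can be illustrated in one spatial dimension. The paper's algorithms are inherently three-dimensional and rely on lacunae; the sketch below shows only the underlying leapfrog scheme, with hypothetical function and parameter names:

```python
import numpy as np

def wave_leapfrog_1d(f, n_x=201, n_t=400, c=1.0, L=1.0, cfl=0.9):
    """Explicit leapfrog scheme for u_tt = c^2 u_xx + f(x, t) on [0, L]
    with homogeneous Dirichlet boundaries and zero initial data.
    The time step is chosen from the CFL stability condition."""
    dx = L / (n_x - 1)
    dt = cfl * dx / c                       # CFL: c*dt/dx < 1 for stability
    x = np.linspace(0.0, L, n_x)
    u_prev = np.zeros(n_x)                  # u at time level k-1
    u = np.zeros(n_x)                       # u at time level k
    r2 = (c * dt / dx) ** 2
    for k in range(n_t):
        u_next = np.zeros(n_x)
        # Second-order central differences in space and time.
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
                        + dt ** 2 * f(x[1:-1], k * dt))
        u_prev, u = u, u_next
    return x, u
```

A compactly supported right-hand side, as in the abstract, is simply a source function f that vanishes outside a small interval at each time.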
NASA Astrophysics Data System (ADS)
Yang, J.; Lee, H.; Sohn, H.
2012-05-01
This study presents an embedded laser ultrasonic system for pipeline monitoring under high-temperature environments. Recently, laser ultrasonics has become popular because of its advantageous characteristics such as (a) noncontact inspection, (b) immunity against electromagnetic interference (EMI), and (c) applicability under high temperature. However, the performance of conventional laser ultrasonic techniques for pipeline monitoring has been limited because many pipelines are covered by insulating materials and target surfaces are inaccessible. To overcome this problem, this study designs embeddable optical fibers and fixing devices that deliver laser beams from laser sources to a target pipe. For guided wave generation, an optical fiber is furnished with a beam collimator for irradiating a laser beam onto a target structure. The corresponding response is measured based on the principle of laser interferometry. Light from a monochromatic source is collimated and delivered to a target surface by another optical fiber with a focusing module, and reflected light is transmitted back to the interferometer through the same fiber. The feasibility of the proposed system for embedded ultrasonic measurement has been experimentally verified using a pipe specimen under high temperature.
NIST Ionization Chamber "A" Sample-Height Corrections.
Fitzgerald, Ryan
2012-01-01
For over 30 years scientists in the NIST radioactivity group have been using their pressurized ionization chamber "A" (PIC "A") to make measurements of radioactivity and radioactive half-lives. We now have evidence that some of those reported measurements were incorrect due to slippage of the source positioning ring over time. The temporal change in the holder caused an error in the source height within the chamber, which was thought to be invariant. This unaccounted-for height change caused a change in the detector response and thus a relative error in measured activity on the order of 10^-5 to 10^-3 per year, depending on the radionuclide. The drifting detector response affected calibration factors and half-life determinations. After discovering the problem, we carried out historical research and new sensitivity tests. As a result, we have created a quantitative model of the effect and have used that model to estimate corrections to some of the past measurement results from PIC "A". In this paper we report the details and results of that model. Meanwhile, we have fixed the positioning ring and are recalibrating the detector using primary measurement methods and enhanced quality control measures.
Uheda, Eiji; Maejima, Kazuhiro
2009-10-15
In the Azolla-Anabaena association, the host plant Azolla efficiently incorporates and assimilates ammonium ions that are released from the nitrogen-fixing cyanobiont, probably via glutamine synthetase (GS; EC 6.3.1.2) in hair cells, which are specialized cells protruding into the leaf cavity. In order to clarify the regulatory mechanism underlying ammonium assimilation in the Azolla-Anabaena association, Azolla plants were grown under an argon environment (Ar), in which the nitrogen-fixing activity of the cyanobiont was inhibited specifically and completely. The localization of GS in hair cells was determined by immunoelectron microscopy and quantitative analysis of immunogold labeling. Azolla plants grew healthily under Ar when nitrogen sources, such as NO3- and NH4+, were provided in the growth medium. Both the number of cyanobacterial cells per leaf and the heterocyst frequency of the plants under Ar were similar to those of plants in a nitrogen environment (N2). In hair cells of plants grown under Ar, regardless of the type of nitrogen source provided, only weak labeling of GS was observed in the cytoplasm and in chloroplasts. In contrast, in hair cells of plants grown under N2, abundant labeling of GS was observed in both sites. These findings indicate that specific inhibition of the nitrogen-fixing activity of the cyanobiont affects the localization of GS isoenzymes. Ammonium fixed and released by the cyanobiont could stimulate GS synthesis in hair cells. Simultaneously, the abundant GS, probably GS1, in these cells could assimilate ammonium rapidly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, S. R.; Hole, M. J.; Dewar, R. L.
2007-05-15
A generalized energy principle for finite-pressure, toroidal magnetohydrodynamic (MHD) equilibria in general three-dimensional configurations is proposed. The full set of ideal-MHD constraints is applied only on a discrete set of toroidal magnetic surfaces (invariant tori), which act as barriers against leakage of magnetic flux, helicity, and pressure through chaotic field-line transport. It is argued that a necessary condition for such invariant tori to exist is that they have fixed, irrational rotational transforms. In the toroidal domains bounded by these surfaces, full Taylor relaxation is assumed, thus leading to Beltrami fields ∇×B = λB, where λ is constant within each domain. Two distinct eigenvalue problems for λ arise in this formulation, depending on whether fluxes and helicity are fixed, or boundary rotational transforms. These are studied in cylindrical geometry and in a three-dimensional toroidal region of annular cross section. In the latter case, an application of a residue criterion is used to determine the threshold for connected chaos.
NASA Astrophysics Data System (ADS)
van Horssen, Wim T.; Wang, Yandong; Cao, Guohua
2018-06-01
In this paper, it is shown how characteristic coordinates, or equivalently how the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin type of boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first order space-derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to those types of problems. The obtained analytical results by applying the proposed method, are in complete agreement with those obtained by using the numerical, finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those as for instance obtained by the method of separation of variables, or by the finite difference method.
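The finite-difference comparison mentioned above can be sketched for a wave equation with a time-dependent Robin coefficient at one end. This is an illustrative ghost-node discretization under assumed boundary and initial data, not the paper's analytical method based on characteristic coordinates:

```python
import numpy as np

def wave_robin(alpha, n_x=101, n_t=800, c=1.0, L=1.0, cfl=0.9):
    """Leapfrog scheme for u_tt = c^2 u_xx on [0, L] with u(0, t) = 0 at the
    left end and a Robin condition u_x(L, t) + alpha(t) * u(L, t) = 0 at the
    right end, where alpha may depend on time. The Robin condition is imposed
    with a ghost node: u_ghost = u[-2] - 2*dx*alpha(t)*u[-1]."""
    dx = L / (n_x - 1)
    dt = cfl * dx / c                           # CFL condition for stability
    x = np.linspace(0.0, L, n_x)
    u_prev = np.exp(-100.0 * (x - 0.5) ** 2)    # assumed initial bump
    u = u_prev.copy()                           # zero initial velocity
    r2 = (c * dt / dx) ** 2
    for k in range(n_t):
        t = k * dt
        # Ghost value from the centered-difference Robin condition at x = L.
        ghost = u[-2] - 2.0 * dx * alpha(t) * u[-1]
        u_ext = np.concatenate([u, [ghost]])
        u_next = np.empty(n_x)
        u_next[0] = 0.0                         # Dirichlet at x = 0
        u_next[1:] = (2.0 * u[1:] - u_prev[1:]
                      + r2 * (u_ext[2:] - 2.0 * u[1:] + u[:-1]))
        u_prev, u = u, u_next
    return x, u
```

Because alpha enters only through the ghost value at each time step, a time-dependent coefficient costs nothing extra numerically, which is what makes the finite-difference method a convenient check on the analytical results.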
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest-path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial-time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on the one hand, exhibits exponential convergence when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
Pace's Maxims for Homegrown Library Projects. Coming Full Circle
ERIC Educational Resources Information Center
Pace, Andrew K.
2005-01-01
This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…
ERIC Educational Resources Information Center
Santos, Jose Luis; Haycock, Kati
2016-01-01
In response to mounting concerns about the cost of college, lawmakers have proposed major new partnerships between the federal government and states to tackle college affordability. The Education Trust maintains that any new federal-state proposal aimed at making college more affordable must also simultaneously address completion problems by…
Symmetry of the Adiabatic Condition in the Piston Problem
ERIC Educational Resources Information Center
Anacleto, Joaquim; Ferreira, J. M.
2011-01-01
This study addresses a controversial issue in the adiabatic piston problem, namely that of the piston being adiabatic when it is fixed but no longer so when it can move freely. It is shown that this apparent contradiction arises from the usual definition of adiabatic condition. The issue is addressed here by requiring the adiabatic condition to be…
That Was the Crisis: What Is to Be Done to Fix Irish Education Now?
ERIC Educational Resources Information Center
O'Mahony, Fintan
2015-01-01
In 2008 Ireland found itself in the forefront of the Eurozone crisis. The impact on education has been profound. In this article it is suggested that Ireland's education problems long pre-date the economic crisis and current "reforms" are about long-term neoliberal restructuring, not short-term solutions to immediate economic problems.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.
An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code, MPACT, is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels into the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.
Minimax confidence intervals in geomagnetism
NASA Technical Reports Server (NTRS)
Stark, Philip B.
1992-01-01
The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.
Synthesis of a controller for stabilizing the motion of a rigid body about a fixed point
NASA Astrophysics Data System (ADS)
Zabolotnov, Yu. M.; Lobanov, A. A.
2017-05-01
A method for the approximate design of an optimal controller for stabilizing the motion of a rigid body about a fixed point is considered. It is assumed that the rigid body motion is close to the motion in the classical Lagrange case. The method is based on the combined use of the Bellman dynamic programming principle and the averaging method. The latter is used to solve the Hamilton-Jacobi-Bellman equation approximately, which permits synthesizing the controller. The proposed method for controller design can be used in many problems close to the problem of motion of the Lagrange top (the motion of a rigid body in the atmosphere, the motion of a rigid body fastened to a cable in deployment of an orbital cable system, etc.).
Development and operations of the astrophysics data system
NASA Technical Reports Server (NTRS)
Murray, Stephen S.; Oliversen, Ronald (Technical Monitor)
2005-01-01
Abstract service:
- Continued regular updates of abstracts in the databases, both at SAO and at all mirror sites.
- Modified loading scripts to accommodate changes in data format (PhyS).
- Discussed data deliveries with providers to clear up problems with format or other errors (EGU).
- Continued inclusion of large numbers of historical literature volumes and physics conference volumes xeroxed from the library.
- Performed systematic fixes on some data sets in the database to account for changes in article numbering (AGU journals).
- Implemented linking of ADS bibliographic records with multimedia files.
- Debugged and fixed obscure connection problems with the ADS Korean mirror site which were preventing successful updates of the data holdings.
- Wrote procedure to parse citation data and characterize an ADS record based on its citation ratios within each database.
Separability of electrostatic and hydrodynamic forces in particle electrophoresis
NASA Astrophysics Data System (ADS)
Todd, Brian A.; Cohen, Joel A.
2011-09-01
By use of optical tweezers we explicitly measure the electrostatic and hydrodynamic forces that determine the electrophoretic mobility of a charged colloidal particle. We test the ansatz of O'Brien and White [J. Chem. Soc. Faraday Trans. 2 74, 1607 (1978)] that the electrostatically and hydrodynamically coupled electrophoresis problem is separable into two simpler problems: (1) a particle held fixed in an applied electric field with no flow field and (2) a particle held fixed in a flow field with no applied electric field. For a system in the Helmholtz-Smoluchowski and Debye-Hückel regimes, we find that the electrostatic and hydrodynamic forces measured independently accurately predict the electrophoretic mobility within our measurement precision of 7%; the O'Brien and White ansatz holds under the conditions of our experiment.
State estimation for networked control systems using fixed data rates
NASA Astrophysics Data System (ADS)
Liu, Qing-Quan; Jin, Fang
2017-07-01
This paper investigates state estimation for linear time-invariant systems in which sensors and controllers are geographically separated and connected via a bandwidth-limited, errorless communication channel with a fixed data rate. In our quantisation and coding scheme, all plant states are quantised, coded and converted together into a codeword. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. Our results show that there exists a quantisation and coding scheme ensuring observability of the system if the fixed data rate is larger than the given lower bound, which is less conservative than the one in the literature. Furthermore, we examine the role that disturbances play in the state estimation problem under data-rate limitations. Illustrative examples demonstrate the effectiveness of the proposed method.
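A minimal sketch of the classical data-rate bound (stated here as background; the paper's own, less conservative bound is not reproduced): for observability under rate limits, the rate must exceed the sum of base-2 logarithms of the magnitudes of the unstable open-loop eigenvalues. The plant matrix below is hypothetical.

```python
import numpy as np

def min_data_rate(A):
    """Classical lower bound on channel rate (bits per sample):
    sum of log2|lambda_i| over eigenvalues outside the unit circle."""
    eigs = np.linalg.eigvals(A)
    return sum(np.log2(abs(l)) for l in eigs if abs(l) > 1)

# Hypothetical discrete-time plant with one unstable mode at 2.0
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])
R_min = min_data_rate(A)
print(R_min)  # 1.0 bit per sample: log2(2.0)
```

Only the unstable mode contributes: the stable eigenvalue 0.5 needs no information to track asymptotically.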
van Maanen, Leendert; de Jong, Ritske; van Rijn, Hedderik
2014-01-01
When multiple strategies can be used to solve a type of problem, the observed response time distributions are often mixtures of multiple underlying base distributions each representing one of these strategies. For the case of two possible strategies, the observed response time distributions obey the fixed-point property. That is, there exists one reaction time that has the same probability of being observed irrespective of the actual mixture proportion of each strategy. In this paper we discuss how to compute this fixed-point, and how to statistically assess the probability that indeed the observed response times are generated by two competing strategies. Accompanying this paper is a free R package that can be used to compute and test the presence or absence of the fixed-point property in response time data, allowing for easy to use tests of strategic behavior. PMID:25170893
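The fixed-point property can be illustrated with two toy base distributions (chosen purely for illustration; the accompanying R package works on empirical response-time data): at any point where the two base densities are equal, every mixture of them has that same density, regardless of the mixture proportion.

```python
import math

def npdf(x, mu, sigma=1.0):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture(x, p, mu1=0.0, mu2=2.0):
    """Mixture of N(mu1,1) and N(mu2,1) with proportion p on the first."""
    return p * npdf(x, mu1) + (1 - p) * npdf(x, mu2)

# N(0,1) and N(2,1) have equal density at x = 1 (by symmetry),
# so every mixture has the same density there, whatever p is.
x_fixed = 1.0
d1 = mixture(x_fixed, 0.3)
d2 = mixture(x_fixed, 0.7)
print(abs(d1 - d2))  # ~0: the fixed-point property
```

Statistically testing for a shared crossing point in observed response-time densities is exactly the diagnostic for a two-strategy mixture described in the abstract.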
Fixed Point Learning Based Intelligent Traffic Control System
NASA Astrophysics Data System (ADS)
Zongyao, Wang; Cong, Sui; Cheng, Shao
2017-10-01
Fixed point learning has become an important tool for analysing large-scale distributed systems such as urban traffic networks. This paper presents a fixed point learning based intelligent traffic network control system. The system applies the convergence property of the fixed point theorem to optimize traffic flow density, achieving maximum usage of road resources by averaging traffic flow density across the traffic network. The system is built on a decentralized structure and intelligent cooperation; no central control is needed to manage it. The proposed system is simple, effective and feasible for practical use. Its performance is tested via theoretical proof and simulations. The results demonstrate that the system can effectively alleviate traffic congestion and increase the average vehicle speed, and that it is flexible, reliable and feasible for practical use.
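A minimal sketch of the density-averaging idea (an illustrative reading of the abstract, not the paper's algorithm; the road network and densities are hypothetical): each road segment repeatedly replaces its density with the mean over itself and its neighbours, and this decentralized iteration converges to a network-wide consensus, a fixed point of the update.

```python
# Toy road network: adjacency list of a connected graph of road segments.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
density = {0: 0.9, 1: 0.1, 2: 0.5, 3: 0.3}  # hypothetical congestion levels

for _ in range(200):  # synchronous neighbour averaging (including self)
    density = {
        v: (density[v] + sum(density[u] for u in neighbors[v]))
           / (1 + len(neighbors[v]))
        for v in density
    }

print(density)  # all segments converge to a common value
```

No segment needs global information: each update uses only local neighbour state, matching the decentralized, no-central-control design the abstract describes.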
Positive solutions of fractional integral equations by the technique of measure of noncompactness.
Nashine, Hemant Kumar; Arab, Reza; Agarwal, Ravi P; De la Sen, Manuel
2017-01-01
In the present study, we work on the problem of the existence of positive solutions of fractional integral equations by means of measures of noncompactness in association with Darbo's fixed point theorem. To achieve the goal, we first establish new fixed point theorems using a new contractive condition of the measure of noncompactness in Banach spaces. By doing this we generalize Darbo's fixed point theorem along with some recent results of (Aghajani et al. (J. Comput. Appl. Math. 260:67-77, 2014)), (Aghajani et al. (Bull. Belg. Math. Soc. Simon Stevin 20(2):345-358, 2013)), (Arab (Mediterr. J. Math. 13(2):759-773, 2016)), (Banaś et al. (Dyn. Syst. Appl. 18:251-264, 2009)), and (Samadi et al. (Abstr. Appl. Anal. 2014:852324, 2014)). We also derive corresponding coupled fixed point results. Finally, we give an illustrative example to verify the effectiveness and applicability of our results.
Xia, Bin; Ma, Shao-Sai; Chen, Ju-Fa; Zhao, Jun; Chen, Bi-Juan; Wang, Fang
2010-06-01
Based on the analysis of dissolved organic carbon (DOC), particulate organic carbon (POC) and particulate nitrogen (PN) in samples collected from stations in the Enteromorpha prolifera outbreak area of the Western South Yellow Sea during August 9-13, 2008, combined with environmental hydrology data, the horizontal distribution, sources and influencing factors of organic carbon and the carbon fixation strength of phytoplankton were discussed. The results showed that the concentrations of DOC and POC ranged from 1.55 mg/L to 3.22 mg/L and from 0.11 mg/L to 0.68 mg/L, with average values of 2.44 mg/L and 0.27 mg/L, respectively. The horizontal distributions of DOC and POC were similar in the study area: concentrations were higher in the coastal area than in the outer sea, and higher at the surface water layer than at the bottom water layer. There was a positive correlation between POC and TSS, indicating that the concentration and source of TSS were the main factors controlling POC. The concentrations of particulate inorganic nitrogen (PIN) were estimated from a univariate linear regression model between POC and PN. After removing the PIN content of the samples, the average POC/PON values in most coastal waters were less than 8; combined with the POC/chlorophyll a values, this suggests that marine primary production was an important source of POC in most coastal waters and that degraded organic matter derived from decaying Enteromorpha prolifera was present in the later period of the green tide outbreak. Carbon fixation strength evaluated from primary productivity ranged from 167 mg/(m^2 x d) to 2017 mg/(m^2 x d) in the Enteromorpha prolifera outbreak area of the Western South Yellow Sea, with an average of 730 mg/(m^2 x d). The daily carbon fixation of the study area was up to 2.95 x 10^4 t, and that of the Yellow Sea was 28.03 x 10^4 t.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, J. I.; Henry, J.; Ramos, A. M.
We prove the approximate controllability of several nonlinear parabolic boundary-value problems by means of two different methods: the first one can be called a Cancellation method and the second one uses the Kakutani fixed-point theorem.
NASA Astrophysics Data System (ADS)
Okedu, Kenneth Eloghene; Muyeen, S. M.; Takahashi, Rion; Tamura, Junji
Recent wind farm grid codes require wind generators to ride through voltage sags, meaning that normal power production should be re-initiated once the nominal grid voltage is recovered. However, a fixed speed wind turbine generator system using an induction generator (IG) has a stability problem similar to the step-out phenomenon of a synchronous generator. On the other hand, a doubly fed induction generator (DFIG) can control its real and reactive powers independently while operating in variable speed mode. This paper proposes a new control strategy using DFIGs for stabilizing a wind farm composed of DFIGs and IGs, without incorporating additional FACTS devices. A new current controlled voltage source converter (CC-VSC) scheme is proposed to control the converters of the DFIG, and its performance is verified by comparing the results with those of a voltage controlled voltage source converter (VC-VSC) scheme. Another salient feature of this study is the reduction of the number of proportional-integral (PI) controllers used in the rotor side converter without degrading dynamic and transient performance. Moreover, the DC-link protection scheme during grid faults can be omitted in the proposed scheme, which reduces the overall cost of the system. Extensive simulation analyses using PSCAD/EMTDC are carried out to clarify the effectiveness of the proposed CC-VSC based control scheme of DFIGs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroeer, Alexander; Veitch, John
The Laser Interferometer Space Antenna (LISA) places new demands on data analysis efforts in its all-sky gravitational wave survey, recording simultaneously thousands of galactic compact object binary foreground sources and tens to hundreds of background sources such as binary black hole mergers and extreme-mass-ratio inspirals. We approach this problem with an adaptive and fully automatic Reversible Jump Markov Chain Monte Carlo sampler, able to sample from the joint posterior density function (as established by Bayes' theorem) for a given mixture of signals "out of the box", handling the total number of signals as an additional unknown parameter besides the unknown parameters of each individual source and the noise floor. Using examples from the LISA Mock Data Challenge implementing the full response of LISA in its TDI description, we show that this sampler is able to extract monochromatic Double White Dwarf signals out of colored instrumental noise and additional foreground and background noise in a global fitting approach. We introduce two examples with a fixed number of signals (MCMC sampling) and one example with an unknown number of signals (RJ-MCMC), the latter further promoting the idea of an experimental adaptation of the model indicator proposal densities in the main sampling stage. We note that the experienced runtimes and degeneracies in parameter extraction limit the shown examples to the extraction of a low but realistic number of signals.
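A much-reduced sketch of the MCMC ingredient (all data, noise levels, and tuning values are hypothetical, and the sampler below is plain fixed-dimension Metropolis, not the paper's reversible-jump scheme): recover the frequency of a single monochromatic signal buried in white noise.

```python
import math, random

random.seed(0)

# Synthetic data: one monochromatic signal in white noise -- a toy
# stand-in for a Double White Dwarf source (frequency in cycles/sample).
f_true, n, sigma = 0.1, 256, 0.3
t = range(n)
y = [math.sin(2 * math.pi * f_true * k) + random.gauss(0, sigma) for k in t]

def log_like(f):
    """Gaussian log-likelihood of the data given frequency f (up to a constant)."""
    return -0.5 * sum((yk - math.sin(2 * math.pi * f * k)) ** 2
                      for k, yk in zip(t, y)) / sigma ** 2

# Plain Metropolis over the single frequency parameter,
# initialized near a spectral peak (e.g. from an FFT scan).
f = 0.101
ll = log_like(f)
samples = []
for _ in range(3000):
    f_prop = f + random.gauss(0, 3e-4)       # random-walk proposal
    ll_prop = log_like(f_prop)
    if math.log(random.random()) < ll_prop - ll:  # Metropolis accept rule
        f, ll = f_prop, ll_prop
    samples.append(f)

f_hat = sum(samples[1000:]) / len(samples[1000:])  # posterior mean after burn-in
print(f_hat)
```

The reversible-jump extension would additionally propose adding or removing signals, treating the signal count itself as a sampled parameter.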
NASA Astrophysics Data System (ADS)
Taylor, Marika; Woodhead, William
2017-12-01
The F theorem states that, for a unitary three-dimensional quantum field theory, the F quantity defined in terms of the partition function on a three-sphere is positive, stationary at fixed points, and decreases monotonically along a renormalization group flow. We construct holographic renormalization group flows corresponding to relevant deformations of three-dimensional conformal field theories on spheres, working to quadratic order in the source. For these renormalization group flows, the F quantity at the IR fixed point is always less than F at the UV fixed point, but F increases along the RG flow for deformations by operators of dimension between 3/2 and 5/2. Therefore the strongest version of the F theorem is in general violated.
Pictorial Format Display Evaluation
1984-05-01
[Legible fragments only: work performed under Contract No. F33615-81-C-3610, with Dr. John M. Reising as Project Manager, Capt. Carole Jean Kopala and, later, Major James S. ...; threat symbology listing type (SAM, AAA, MIG-21, MIG-23) and mode (search, track, or launch); pilot comments that the display should come up automatically and that an airplane this smart should be able to fix these problems and just announce the fix.]
Refurbishment of durban fixed ukzn lidar for atmospheric studies - current status
NASA Astrophysics Data System (ADS)
Sivakumar, Venkataraman
2018-04-01
The fixed LIDAR system at the University of KwaZulu-Natal (UKZN) in Durban was installed in 1999 and operated until 2004, when the system was relocated and operations ceased due to various technical and instrument problems. Refurbishment of the LIDAR system began in 2013, and it is now used to measure vertical aerosol profiles in the height range 3-25 km. Here, we describe the present system in detail, including technical specifications and results obtained from a recent LIDAR calibration campaign.
NASA Technical Reports Server (NTRS)
Ziff, Howard L; Rathert, George A; Gadeberg, Burnett L
1953-01-01
Standard air-to-air-gunnery tracking runs were conducted with F-51H, F8F-1, F-86A, and F-86E airplanes equipped with fixed gunsights. The tracking performances were documented over the normal operating range of altitude, Mach number, and normal acceleration factor for each airplane. The sources of error were studied by statistical analyses of the aim wander.
NASA Astrophysics Data System (ADS)
Ramadhani, T.; Hertono, G. F.; Handari, B. D.
2017-07-01
The Multiple Traveling Salesman Problem (MTSP) is the extension of the Traveling Salesman Problem (TSP) in which the shortest routes of m salesmen, all of whom start and finish in a single city (depot), are determined. If there is more than one depot and salesmen start from and return to the same depot, the problem is called the Fixed Destination Multi-depot Multiple Traveling Salesman Problem (MMTSP). In this paper, the MMTSP is solved using the Ant Colony Optimization (ACO) algorithm. ACO is a metaheuristic optimization algorithm derived from the behavior of ants in finding the shortest route(s) from the anthill to a source of nourishment. In solving the MMTSP, the algorithm is studied with respect to different choices of cities as depots and to three non-randomly chosen MMTSP parameters: m, K, and L, which represent the number of salesmen, the fewest cities that must be visited by a salesman, and the largest number of cities that a salesman can visit, respectively. The implementation is tested on four datasets from TSPLIB. The results show that the choice of depot cities and the three MMTSP parameters, of which m is the most important, affect the solution.
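A minimal single-salesman, single-depot ACO sketch (illustrative only; the city coordinates and parameter values are hypothetical, and the paper's MMTSP variant adds multiple depots and the m, K, L constraints, which this snippet omits):

```python
import math, random

random.seed(1)

# Hypothetical city coordinates (a real study would load a TSPLIB file).
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (3, 7)]
n = len(cities)
dist = [[math.dist(a, b) for b in cities] for a in cities]
tau = [[1.0] * n for _ in range(n)]          # pheromone trails
alpha, beta, rho, n_ants = 1.0, 2.0, 0.5, 10

def build_tour():
    """One ant builds a tour from the depot (city 0) by weighted choice."""
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i = tour[-1]
        cand = list(unvisited)
        weights = [tau[i][j] ** alpha * (1 / dist[i][j]) ** beta for j in cand]
        j = random.choices(cand, weights)[0]
        tour.append(j)
        unvisited.remove(j)
    return tour

def length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

best = None
for _ in range(50):
    tours = [build_tour() for _ in range(n_ants)]
    for tr in tours:
        if best is None or length(tr) < length(best):
            best = tr
        for k in range(n):                    # deposit pheromone on used edges
            i, j = tr[k], tr[(k + 1) % n]
            tau[i][j] += 1 / length(tr)
    tau = [[(1 - rho) * x for x in row] for row in tau]  # evaporation

print(best, length(best))
```

Shorter tours deposit more pheromone per edge, biasing later ants toward good routes; evaporation keeps the colony from locking onto early choices.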
McGinnis, Molly A; Houchins-Juárez, Nealetta; McDaniel, Jill L; Kennedy, Craig H
2010-01-01
Three participants whose problem behavior was maintained by contingent attention were exposed to 45-min presessions in which attention was withheld, provided on a fixed-time (FT) 15-s schedule, or provided on an FT 120-s schedule. Following each presession, participants were then tested in a 15-min session similar to the social attention condition of an analogue functional analysis. The results showed establishing operation conditions increased problem behavior during tests and that abolishing operation conditions decreased problem behavior during tests. PMID:20808502
Electronic neural network for solving traveling salesman and similar global optimization problems
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Moopenn, Alexander W. (Inventor); Duong, Tuan A. (Inventor); Eberhardt, Silvio P. (Inventor)
1993-01-01
This invention is a novel high-speed neural network based processor for solving the 'traveling salesman' and other global optimization problems. It comprises a novel hybrid architecture employing a binary synaptic array whose embodiment incorporates the fixed rules of the problem, such as the number of cities to be visited. The array is prompted by analog voltages representing variables such as distances. The processor incorporates two interconnected feedback networks, each of which solves part of the problem independently and simultaneously, yet which exchange information dynamically.
Co-occurrence of methanogenesis and N2 fixation in oil sands tailings.
Collins, C E Victoria; Foght, Julia M; Siddique, Tariq
2016-09-15
Oil sands tailings ponds in northern Alberta, Canada have been producing biogenic gases via microbial metabolism of hydrocarbons for decades. Persistent methanogenic activity in tailings ponds without any known replenishment of nutrients such as fixed nitrogen (N) persuaded us to investigate whether N2 fixation or polyacrylamide (PAM; used as a tailings flocculant) could serve as N sources. Cultures comprising mature fine tailings (MFT) plus methanogenic medium supplemented with or deficient in fixed N were incubated under an N2 headspace. Some cultures were further amended with citrate, which is used in oil sands processing, as a relevant carbon source, and/or with PAM. After an initial delay, N-deficient cultures with or without PAM produced methane (CH4) at the same rate as N-containing cultures, indicating a mechanism of overcoming apparent N-deficiency. Acetylene reduction and (15)N2 incorporation in all N-deficient cultures (with or without PAM) suggested active N2 fixation concurrently with methanogenesis but inability to use PAM as a N source. 16S rRNA gene pyrosequencing revealed little difference between archaeal populations regardless of N content. However, bacterial sequences in N-deficient cultures showed enrichment of Hyphomicrobiaceae and Clostridium members that might contain N2-fixing species. The results are important in understanding long-term production of biogenic greenhouse gases in oil sands tailings.
Problems of interaction longitudinal shear waves with V-shape tunnels defect
NASA Astrophysics Data System (ADS)
Popov, V. G.
2018-04-01
The problem of determining the two-dimensional dynamic stress state near a tunnel defect of V-shaped cross-section is solved. The defect is located in an infinite elastic medium, where harmonic longitudinal shear waves are propagating. The initial problem is reduced to a system of two singular integral or integro-differential equations with fixed singularities. A numerical method for solving these systems with regard to the true asymptotics of the unknown functions is developed.
NASA Astrophysics Data System (ADS)
Bai, Yunru; Baleanu, Dumitru; Wu, Guo-Cheng
2018-06-01
We investigate a class of generalized differential optimization problems driven by the Caputo derivative. Existence of a weak Carathéodory solution is proved by using the Weierstrass existence theorem, a fixed point theorem, and the Filippov implicit function lemma. Then a numerical approximation algorithm is introduced, and a convergence theorem is established. Finally, a nonlinear programming problem constrained by the fractional differential equation is illustrated, and the results verify the validity of the algorithm.
The tethered galaxy problem: a possible window to explore cosmological models
NASA Astrophysics Data System (ADS)
Tangmatitham, Matipon; Nemiroff, Robert J.
2017-01-01
In the tethered galaxy problem, a hypothetical galaxy is being held at a fixed proper distance. Contrary to Newtonian intuition, it has been shown that this tethered galaxy can have a nonzero redshift. However, constant proper distance has been suggested as unphysical in a cosmological setting and therefore other definitions have been suggested. The tethered galaxy problem is therefore reviewed in Friedmann cosmology. In this work, different tethers are considered as possible local cosmological discriminators.
Deep-Sea Archaea Fix and Share Nitrogen in Methane-Consuming Microbial Consortia
NASA Astrophysics Data System (ADS)
Dekas, Anne E.; Poretsky, Rachel S.; Orphan, Victoria J.
2009-10-01
Nitrogen-fixing (diazotrophic) microorganisms regulate productivity in diverse ecosystems; however, the identities of diazotrophs are unknown in many oceanic environments. Using single-cell-resolution nanometer secondary ion mass spectrometry images of 15N incorporation, we showed that deep-sea anaerobic methane-oxidizing archaea fix N2, as well as structurally similar CN-, and share the products with sulfate-reducing bacterial symbionts. These archaeal/bacterial consortia are already recognized as the major sink of methane in benthic ecosystems, and we now identify them as a source of bioavailable nitrogen as well. The archaea maintain their methane oxidation rates while fixing N2 but reduce their growth, probably in compensation for the energetic burden of diazotrophy. This finding extends the demonstrated lower limits of respiratory energy capable of fueling N2 fixation and reveals a link between the global carbon, nitrogen, and sulfur cycles.
López, Silvina M Y; Sánchez, Ma Dolores Molina; Pastorino, Graciela N; Franco, Mario E E; García, Nicolás Toro; Balatti, Pedro A
2018-03-15
The purpose of this work was to further study two Bradyrhizobium japonicum strains with high nitrogen-fixing capacity that were identified within a collection of approximately 200 isolates from the soils of Argentina. The nodulation and nitrogen-fixing capacity of the isolates, and the expression levels of regulatory and structural nitrogen-fixation genes and of the 1-aminocyclopropane-1-carboxylate (ACC) deaminase gene, were compared with those of E109-inoculated plants. Both B. japonicum isolates, 163 and 366, were highly efficient at fixing nitrogen compared to the commercial strain E109. Isolate 366 developed a larger number and biomass of nodules and because of this fixed more nitrogen. Isolate 163 developed the same nodule number and biomass as E109; however, its nodules had red interiors for a longer period, had a higher leghemoglobin content, and showed high levels of expression of the acdS gene, which codes for an ACC deaminase. In conclusion, the naturalized rhizobia of the soils of Argentina comprise a diverse population that might be a source of highly active nitrogen-fixing rhizobia, an activity that appears to be based on different strategies.
Rotationally symmetric viscous gas flows
NASA Astrophysics Data System (ADS)
Weigant, W.; Plotnikov, P. I.
2017-03-01
The Dirichlet boundary value problem for the Navier-Stokes equations of a barotropic viscous compressible fluid is considered. The flow region and the data of the problem are assumed to be invariant under rotations about a fixed axis. The existence of rotationally symmetric weak solutions for all adiabatic exponents from the interval (γ*,∞) with a critical exponent γ* < 4/3 is proved.
The Internet as a Means of Information Resources' Integration: The Regional Aspect.
ERIC Educational Resources Information Center
Elepov, Boris S.; Soboleva, Elena B.; Fedotova, Olga P.; Shabanov, Andrei V.
The presence of Siberian and Far Eastern libraries on the Internet has become the reality of today. Joining this community, they solve at least two main problems--those of rational use of World Wide Web resources and those of providing access to their own products. There is a system of fixing documentary streams disclosing regional problems. Each…
The Soda Can Optimization Problem: Getting Close to the Real Thing
ERIC Educational Resources Information Center
Premadasa, Kirthi; Martin, Paul; Sprecher, Bryce; Yang, Lai; Dodge, Noah-Helen
2016-01-01
Optimizing the dimensions of a soda can is a classic problem that is frequently posed to freshman calculus students. However, if we only minimize the surface area subject to a fixed volume, the result is a can with a square edge-on profile, and this differs significantly from actual cans. By considering a more realistic model for the can that…
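For the textbook version of the problem (minimize the surface area of a cylinder at fixed volume), the optimum has height equal to diameter, h = 2r, which is exactly the square edge-on profile the abstract mentions. A quick numerical check (the volume value is illustrative, roughly a 12 oz can):

```python
import math

V = 355.0  # hypothetical can volume in cm^3

def area(r):
    h = V / (math.pi * r * r)          # height forced by the fixed volume
    return 2 * math.pi * r * r + 2 * math.pi * r * h

# Crude grid search over the radius for the minimizing can shape
r_best = min((0.01 * k for k in range(100, 1000)), key=area)
h_best = V / (math.pi * r_best ** 2)
print(r_best, h_best / r_best)  # h/r close to 2: square edge-on profile
```

Real cans deviate from h = 2r precisely because the simple fixed-volume model ignores features like the tapered top and thicker lid material that the more realistic model accounts for.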
ERIC Educational Resources Information Center
Rapp, Doris J.
The Federal Government reports that one-third of the nation's public schools are environmentally unsafe in ways that cause health problems for teachers and students and detract from educational quality. Environmentally induced diseases jeopardize those who already have health problems and deteriorate students' learning ability. This book addresses a…
Evaluation of fixed momentary dro schedules under signaled and unsignaled arrangements.
Hammond, Jennifer L; Iwata, Brian A; Fritz, Jennifer N; Dempsey, Carrie M
2011-01-01
Fixed momentary schedules of differential reinforcement of other behavior (FM DRO) generally have been ineffective as treatment for problem behavior. Because most early research on FM DRO included presentation of a signal at the end of the DRO interval, it is unclear whether the limited effects of FM DRO were due to (a) the momentary response requirement of the schedule per se or (b) discrimination of the contingency made more salient by the signal. To separate these two potential influences, we compared the effects of signaled versus unsignaled FM DRO with 4 individuals with developmental disabilities whose problem behavior was maintained by social-positive reinforcement. During signaled FM DRO, the experimenter presented a visual stimulus 3 s prior to the end of the DRO interval and delivered reinforcement contingent on the absence of problem behavior at the second the interval elapsed. Unsignaled DRO was identical except that interval termination was not signaled. Results indicated that signaled FM DRO was effective in decreasing 2 subjects' problem behavior, whereas an unsignaled schedule was required for the remaining 2 subjects. These results suggest that the response requirement per se of FM DRO may not be problematic if it is not easily discriminated.
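A toy simulation of the momentary contingency (interval length, session length, and the behavior stream are all hypothetical): reinforcement is delivered only if the target behavior is absent at the exact instant each fixed interval elapses, contrasted with a whole-interval DRO that requires absence throughout the interval.

```python
import random

random.seed(42)
INTERVAL, SESSION = 30, 900  # seconds (hypothetical values)

# One boolean per second: is problem behavior occurring at that instant?
behavior = [random.random() < 0.2 for _ in range(SESSION)]

# Momentary DRO: reinforce if behavior is absent at the instant the interval ends.
momentary = sum(1 for t in range(INTERVAL - 1, SESSION, INTERVAL) if not behavior[t])

# Whole-interval DRO (for contrast): reinforce only if absent throughout.
whole = sum(1 for s in range(0, SESSION, INTERVAL)
            if not any(behavior[s:s + INTERVAL]))

print(momentary, whole)  # the momentary schedule reinforces at least as often
```

The leniency is visible in the counts: behavior can occur freely between momentary checks and still be reinforced, which is consistent with the abstract's point that the momentary contingency can fail once it is easily discriminated.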
Naval S&T Strategy: Innovations For The Future Force
2015-01-01
[Legible fragments only: "The naval science and technology community is the pre-eminent source for good ideas and..."; "...maturation or use it as a source for fixing identified funding shortfalls. Resource sponsors question how they will make do with less in the face of..."]
A Formula for Fixing Troubled Projects: The Scientific Method Meets Leadership
NASA Technical Reports Server (NTRS)
Wagner, Sandra
2006-01-01
This presentation focuses on project management, specifically addressing project issues using the scientific method of problem-solving. Two sample projects where this methodology has been applied are provided.
NASA Technical Reports Server (NTRS)
Schuller, F. T.
1973-01-01
This publication is the result of over 260 fractional-frequency-whirl stability tests on a variety of fixed-geometry journal bearings. It is intended principally as a guide in the selection and design of antiwhirl bearings that must operate at high speeds and low loads in low-viscosity fluids such as water or liquid metals. However, the various fixed-geometry configurations can be employed as well in applications where other lubricants, such as oil, are used and fractional-frequency whirl is a problem. The important parameters that affect stability are discussed for each bearing type, and design curves to facilitate the design of optimum-geometry bearings are included. A comparison of the stability of the different bearing configurations tested is also given.
Lee, Ju Han; Chang, You Min; Han, Young-Geun; Lee, Sang Bae; Chung, Hae Yang
2007-08-01
The combined use of a programmable digital micromirror device (DMD) and an ultrabroadband, cw, incoherent supercontinuum (SC) source is experimentally demonstrated to fully explore various aspects of the reconfiguration of a microwave filter transfer function by creating a range of multiwavelength optical filter shapes. Owing to both the unique characteristic of the DMD, namely that an arbitrary optical filter shape can be readily produced, and the ultrabroad bandwidth of the cw SC source, which is 3 times larger than that of Er-amplified spontaneous emission, a multiwavelength optical beam pattern can be generated with a large number of wavelength filter taps apodized by an arbitrary amplitude window. Therefore, various types of high-quality microwave filter can be readily achieved through the spectrum-slicing-based photonic microwave transversal filter scheme. The experimental demonstration addresses three aspects: tuning of the filter resonance bandwidth at a fixed resonance frequency, tuning of the filter resonance frequency at a fixed resonance bandwidth, and flexible microwave filter shape reconstruction.
On the relationship between parallel computation and graph embedding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, A.K.
1989-01-01
The problem of efficiently simulating an algorithm designed for an n-processor parallel machine G on an m-processor parallel machine H with n > m arises when parallel algorithms designed for an ideal size machine are simulated on existing machines which are of a fixed size. The author studies this problem when every processor of H takes over the function of a number of processors in G, and he phrases the simulation problem as a graph embedding problem. New embeddings presented address relevant issues arising from the parallel computation environment. The main focus centers around embedding complete binary trees into smaller-sized binary trees, butterflies, and hypercubes. He also considers simultaneous embeddings of r source machines into a single hypercube. Constant factors play a crucial role in his embeddings since they are not only important in practice but also lead to interesting theoretical problems. All of his embeddings minimize dilation and load, which are the conventional cost measures in graph embeddings and determine the maximum amount of time required to simulate one step of G on H. His embeddings also optimize a new cost measure called (α,β)-utilization, which characterizes how evenly the processors of H are used by the processors of G. Ideally, the utilization should be balanced (i.e., every processor of H simulates at most (n/m) processors of G), and the (α,β)-utilization measures how far off from a balanced utilization the embedding is. He presents embeddings for the situation when some processors of G have different capabilities (e.g. memory or I/O) than others and the processors with different capabilities are to be distributed uniformly among the processors of H. Placing such conditions on an embedding results in an increase in some of the cost measures.
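A small sketch of the two conventional cost measures mentioned (the guest graph, host graph, and mapping below are hypothetical): load is the largest number of guest processors assigned to one host processor, and dilation is the longest host-distance over which any guest edge is stretched.

```python
from collections import deque

def host_dist(host, a, b):
    """BFS distance between host nodes a and b."""
    seen, q = {a}, deque([(a, 0)])
    while q:
        v, d = q.popleft()
        if v == b:
            return d
        for u in host[v]:
            if u not in seen:
                seen.add(u)
                q.append((u, d + 1))
    raise ValueError("disconnected host")

def load_and_dilation(guest_edges, host, phi):
    """phi maps guest nodes to host nodes."""
    load = max(list(phi.values()).count(h) for h in set(phi.values()))
    dilation = max(host_dist(host, phi[u], phi[v]) for u, v in guest_edges)
    return load, dilation

# Guest: complete binary tree on 7 nodes; host: 4-cycle (both hypothetical).
guest_edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
host = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
phi = {0: 0, 1: 1, 2: 3, 3: 1, 4: 2, 5: 3, 6: 0}
print(load_and_dilation(guest_edges, host, phi))  # (2, 1)
```

Together the two numbers bound the slowdown of the simulation: a host step must serve up to `load` guest processors, and each communicated message travels at most `dilation` host hops.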
Code of Federal Regulations, 2010 CFR
2010-01-01
... accounts, CDs, stocks, bonds, or other similar assets. Equity in real estate holdings and other fixed... source) when that owner's liquid assets exceed the amounts specified in paragraphs (a) (1) through (3) of... applicant must inject any personal liquid assets which are in excess of two times the total financing...
Parameterizing by the Number of Numbers
NASA Astrophysics Data System (ADS)
Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.
The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
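For tiny instances, the census-checking problem can be brute-forced directly, which makes the problem statement concrete. The sketch below (the transition encoding and names are assumptions; this exhaustive search says nothing about the W[1]-hardness result) searches the computations of a non-deterministic Mealy machine reading a fixed input word for an output word meeting a census requirement:

```python
from collections import Counter

def meets_census(transitions, start, word, census):
    """Decide whether some computation of a non-deterministic Mealy machine
    reading `word` outputs a word meeting the census requirement `census`.
    `transitions` maps (state, input_letter) -> set of (next_state, output_letter).
    Brute-force depth-first search; illustrative only."""
    def dfs(state, i, counts):
        if i == len(word):
            return counts == census
        for nxt, out in transitions.get((state, word[i]), ()):
            counts[out] += 1
            if dfs(nxt, i + 1, counts):
                return True
            counts[out] -= 1
            if counts[out] == 0:      # keep Counter free of zero entries
                del counts[out]
        return False
    return dfs(start, 0, Counter())

# A machine that, on each 'a', may output either 'x' or 'y'.
T = {('q', 'a'): {('q', 'x'), ('q', 'y')}}
print(meets_census(T, 'q', 'aaa', Counter({'x': 2, 'y': 1})))  # True
```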
Préve, Deison; Saa, Alberto
2015-10-01
Soap bubbles are thin liquid films enclosing a fixed volume of air. Since the surface tension is typically assumed to be the only factor responsible for conforming the soap bubble shape, the realized bubble surfaces are always minimal area ones. Here, we consider the problem of finding the axisymmetric minimal area surface enclosing a fixed volume V and with a fixed equatorial perimeter L. It is well known that the sphere is the solution for V = L³/6π², and this is indeed the case of a free soap bubble, for instance. Surprisingly, we show that for V < αL³/6π², with α ≈ 0.21, such a surface cannot be the usual lens-shaped surface formed by the juxtaposition of two spherical caps, but is rather a toroidal surface. Practically, a doughnut-shaped bubble is known to be ultimately unstable and, hence, it will eventually lose its axisymmetry by breaking apart in smaller bubbles. Indisputably, however, the topological transition from spherical to toroidal surfaces is mandatory here for obtaining the global solution for this axisymmetric isoperimetric problem. Our result suggests that deformed bubbles with V < αL³/6π² cannot be stable and should not exist in foams, for instance.
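The stated threshold is anchored in the sphere identity: a sphere whose equator has perimeter L has radius r = L/2π and volume (4/3)πr³ = L³/6π². A one-line numeric check of that identity:

```python
from math import pi, isclose

L = 1.0                       # fixed equatorial perimeter
r = L / (2 * pi)              # sphere radius with an equator of length L
V_sphere = (4 / 3) * pi * r**3
print(isclose(V_sphere, L**3 / (6 * pi**2)))  # True
```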
Suppression of fixed pattern noise for infrared image system
NASA Astrophysics Data System (ADS)
Park, Changhan; Han, Jungsoo; Bae, Kyung-Hoon
2008-04-01
In this paper, we propose suppression of fixed pattern noise (FPN) and compensation of soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image when non-uniformity compensation (NUC) is applied at a temperature other than the calibration temperature. Soft defects appear as flickering black-and-white points caused by the time-varying non-uniformity characteristics of the IR detector. These problems are important because they seriously degrade object tracking as well as image quality. The signal processing architecture of the cooled staring IRFPA imaging system uses three tables of reference gain and offset values, for low, normal, and high temperature. The proposed method operates two offset tables for each table, covering six temperature ranges in total. The proposed soft-defect compensation consists of three stages: (1) separate the image into sub-images, (2) determine the motion distribution of objects between sub-images, and (3) analyze the statistical characteristics of each stationary fixed pixel. Experimental results show that the proposed method produces an improved image, suppressing FPN under changing temperature distribution in the observed scene in real time.
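The gain and offset tables above implement non-uniformity compensation. As a generic illustration (the standard two-point NUC, not the paper's six-range two-offset-table scheme), per-pixel gain and offset can be derived from two uniform-temperature reference images:

```python
import numpy as np

def two_point_nuc(raw_cold, raw_hot, t_cold, t_hot):
    """Per-pixel gain and offset from two uniform-temperature references
    (standard two-point non-uniformity correction; an illustrative sketch)."""
    gain = (t_hot - t_cold) / (raw_hot - raw_cold)
    offset = t_cold - gain * raw_cold
    return gain, offset

rng = np.random.default_rng(0)
true_gain = rng.uniform(0.8, 1.2, (4, 4))      # per-pixel responsivity
true_off = rng.uniform(-5.0, 5.0, (4, 4))      # per-pixel offset
scene = lambda t: (t - true_off) / true_gain   # raw counts for a uniform scene at t

g, o = two_point_nuc(scene(20.0), scene(40.0), 20.0, 40.0)
corrected = g * scene(30.0) + o                # correct a uniform 30 °C scene
print(np.allclose(corrected, 30.0))            # True: non-uniformity removed
```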
SU-F-T-462: Lessons Learned From a Machine Incident Reporting System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutlief, S; Hoisak, J
Purpose: Linear accelerators must operate with minimal downtime. Machine incident logs are a crucial tool to meet this requirement: they provide a history of service and demonstrate whether a fix is working. This study investigates the information content of a large department's linear accelerator incident log. Methods: Our department uses an electronic reporting system to provide immediate information to both key department staff and the field service department. This study examines reports from five linac logs during 2015. The report attributes for analysis include frequency, level of documentation, who solved the problem, and type of fix used. Results: Of the reports, 36% were documented as resolved. In another 25% the resolution allowed treatment to proceed although the reported problem recurred within days. In 5% only intermediate troubleshooting was documented. The remainder lacked documentation. In 60% of the reports, radiation therapists resolved the problem, often by clearing the appropriate faults or reinitializing a software or hardware service. 22% were resolved by physics and 10% by field service engineers. The remaining 8% were resolved by IT, Facilities, or resolved spontaneously. Typical fixes, in order of scope, included clearing the fault and moving on, closing and re-opening the patient session or software, cycling power to a sub-unit, recalibrating a device (e.g., optical surface imaging), and calling in Field Service (usually resolving the problem through maintenance or component replacement). Conclusion: The reports with undocumented resolution represent a missed opportunity for learning. The frequency with which a given role resolves a problem scales with the proximity of that role (therapist, physicist, or service engineer), which is inversely related to the permanence of the resolution.
Review of lessons learned from machine incident logs can form the basis for guidance to radiation therapists and medical physicists to minimize equipment downtime and ensure safe operation.
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in considerably more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
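Fixed-state character optimization restricts candidate ancestral sequences to a fixed pool of states, typically the observed sequences themselves, which turns the intractable general problem into a tree dynamic programme over that pool. A toy sketch in that spirit (hypothetical tree, and Hamming distance standing in for a full edit cost) is:

```python
def hamming(a, b):
    """Toy pairwise cost; real implementations use edit distance."""
    return sum(x != y for x, y in zip(a, b))

def fixed_state_cost(tree, leaf_seq, states, root='root'):
    """Minimum total edge cost when every internal node must take one of the
    fixed candidate `states` (here: the observed leaf sequences).
    `tree` maps internal node -> (left child, right child); leaves map to
    sequences via `leaf_seq`. Post-order (Sankoff-style) dynamic programme."""
    def solve(node):
        if node in leaf_seq:                   # leaf: cost 0 for its own sequence
            return {leaf_seq[node]: 0}
        left, right = tree[node]
        lt, rt = solve(left), solve(right)
        return {s: (min(c + hamming(s, t) for t, c in lt.items())
                    + min(c + hamming(s, t) for t, c in rt.items()))
                for s in states}
    return min(solve(root).values())

leaves = {'A': 'ACGT', 'B': 'ACGA', 'C': 'TCGA'}
tree = {'root': ('X', 'C'), 'X': ('A', 'B')}
print(fixed_state_cost(tree, leaves, set(leaves.values())))  # 2
```

The search proposed in the paper widens the candidate pool beyond the observed sequences; the fixed-state table above is the evaluation step such a search would repeat.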
Highly eccentric hip-hop solutions of the 2 N-body problem
NASA Astrophysics Data System (ADS)
Barrabés, Esther; Cors, Josep M.; Pinyol, Conxita; Soler, Jaume
2010-02-01
We show the existence of families of hip-hop solutions in the equal-mass 2 N-body problem which are close to highly eccentric planar elliptic homographic motions of 2 N bodies plus small perpendicular non-harmonic oscillations. By introducing a parameter ɛ, the homographic motion and the small amplitude oscillations can be uncoupled into a purely Keplerian homographic motion of fixed period and a vertical oscillation described by a Hill type equation. Small changes in the eccentricity induce large variations in the period of the perpendicular oscillation and give rise, via a Bolzano argument, to resonant periodic solutions of the uncoupled system in a rotating frame. For small ɛ≠0, the topological transversality persists and Brouwer’s fixed point theorem shows the existence of this kind of solutions in the full system.
NASA Technical Reports Server (NTRS)
Yahsi, O. S.; Erdogan, F.
1985-01-01
In this paper a cylindrical shell having a very stiff end plate or a flange is considered. It is assumed that near the end the cylinder contains an axial flaw which may be modeled as a part-through surface crack or a through crack. The primary objective is to study the effect of the end constraint on the stress intensity factor, which is the main fracture mechanics parameter. The applied loads acting on the cylinder are assumed to be axisymmetric. Thus the crack problem under consideration is symmetric with respect to the plane of the crack and consequently only the mode I stress intensity factors are nonzero. With this limitation, the general perturbation problem for a cylinder with a built-in end containing an axial crack is considered. Reissner's shell theory is used to formulate the problem. The part-through crack problem is treated by using a line-spring model. In the case of a crack tip terminating at the fixed end it is shown that the integral equation of the shell problem has the same generalized Cauchy kernel as the corresponding plane stress elasticity problem. Even though the problem is formulated for a general surface crack profile and arbitrary crack surface tractions, the numerical results are obtained only for a semielliptic part-through axial crack located at the inside or outside surface of the cylinder and for internal pressure acting on the cylinder. The stress intensity factors are calculated and presented for a relatively wide range of dimensionless length parameters of the problem.
Synthetic biology approaches to engineering the nitrogen symbiosis in cereals.
Rogers, Christian; Oldroyd, Giles E D
2014-05-01
Nitrogen is abundant in the earth's atmosphere but, unlike carbon, cannot be directly assimilated by plants. The limitation this places on plant productivity has been circumvented in contemporary agriculture through the production and application of chemical fertilizers. The chemical reduction of nitrogen for this purpose consumes large amounts of energy and the reactive nitrogen released into the environment as a result of fertilizer application leads to greenhouse gas emissions, as well as widespread eutrophication of aquatic ecosystems. The environmental impacts are intensified by injudicious use of fertilizers in many parts of the world. Simultaneously, limitations in the production and supply of chemical fertilizers in other regions are leading to low agricultural productivity and malnutrition. Nitrogen can be directly fixed from the atmosphere by some bacteria and Archaea, which possess the enzyme nitrogenase. Some plant species, most notably legumes, have evolved close symbiotic associations with nitrogen-fixing bacteria. Engineering cereal crops with the capability to fix their own nitrogen could one day address the problems created by the over- and under-use of nitrogen fertilizers in agriculture. This could be achieved either by expression of a functional nitrogenase enzyme in the cells of the cereal crop or through transferring the capability to form a symbiotic association with nitrogen-fixing bacteria. While potentially transformative, these biotechnological approaches are challenging; however, with recent advances in synthetic biology they are viable long-term goals. This review discusses the possibility of these biotechnological solutions to the nitrogen problem, focusing on engineering the nitrogen symbiosis in cereals.
Implementation of trinary logic in a polarization encoded optical shadow-casting scheme.
Rizvi, R A; Zaheer, K; Zubairy, M S
1991-03-10
The design of various multioutput trinary combinational logic units by a polarization encoded optical shadow-casting (POSC) technique is presented. The POSC modified algorithm is employed to design and implement these logic elements in a trinary number system with separate and simultaneous generation of outputs. A detailed solution of the POSC logic equations for a fixed source plane and a fixed decoding mask is given to obtain input pixel coding for a trinary half-adder, full adder, and subtractor.
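At the logic level, the trinary half-adder realized by the POSC scheme computes a mod-3 sum digit and a carry digit. A minimal sketch of that truth table in the unbalanced trinary system {0, 1, 2} (logic only; the optical pixel coding is the paper's contribution):

```python
def trinary_half_adder(a, b):
    """Half-adder over trinary digits {0, 1, 2}:
    returns (sum digit, carry digit)."""
    total = a + b
    return total % 3, total // 3

# Full truth table of the nine input combinations.
for a in range(3):
    for b in range(3):
        print(a, b, trinary_half_adder(a, b))
```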
Dackehag, Margareta; Gerdtham, Ulf-G; Nordin, Martin
2015-07-01
This article investigates the excess-weight penalty in income for men and women in the Swedish labor market, using longitudinal data. It compares two identification strategies, OLS and individual fixed effects, and distinguishes between two main sources of excess-weight penalties, lower productivity because of bad health and discrimination. For men, the analysis finds a significant obesity penalty related to discrimination when applying individual fixed effects. We do not find any significant excess-weight penalty for women.
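The individual fixed-effects strategy amounts to the within transformation: demeaning each individual's series removes time-invariant traits (including unobserved ones correlated with weight) before estimation. A simulated sketch (synthetic data, not the Swedish panel) showing how the fixed-effects estimator recovers a coefficient that naive pooled OLS gets wrong:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_t, beta = 200, 8, 0.5                # panel: 200 individuals, 8 periods

alpha = rng.normal(0, 2, n_ind)               # individual fixed effects
x = rng.normal(0, 1, (n_ind, n_t)) + alpha[:, None]  # regressor correlated with FE
y = beta * x + alpha[:, None] + rng.normal(0, 0.1, (n_ind, n_t))

# Within transformation: demean each individual's series, then pooled OLS.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = (xd * yd).sum() / (xd ** 2).sum()

# Naive pooled OLS ignores the fixed effects and is badly biased here.
beta_ols = (x * y).sum() / (x ** 2).sum()
print(beta_fe, beta_ols)   # beta_fe is close to the true 0.5; beta_ols is not
```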
Tympanic thermometer performance validation by use of a body-temperature fixed point blackbody
NASA Astrophysics Data System (ADS)
Machin, Graham; Simpson, Robert
2003-04-01
The use of infrared tympanic thermometers within the medical community (and more generically in the public domain) has recently grown rapidly, displacing more traditional forms of thermometry such as mercury-in-glass. Besides the obvious health concerns over mercury, the increase in the use of tympanic thermometers is related to a number of factors such as their speed and relatively non-invasive method of operation. The calibration and testing of such devices is covered by a number of international standards (ASTM [1], prEN [2], JIS [3]) which specify the design of calibration blackbodies. However, these calibration sources are impractical for day-to-day in-situ validation purposes. In addition, several studies (e.g., Modell et al. [4], Craig et al. [5]) have thrown doubt on the accuracy of tympanic thermometers in clinical use. With this in mind the NPL is developing a practical, portable and robust primary reference fixed point source for tympanic thermometer validation. The aim of this simple device is to give the clinician a rapid way of validating the performance of their tympanic thermometer, enabling the detection of malfunctioning thermometers and giving confidence in the measurement to the clinician (and patient!) at point of use. The reference fixed point operates at a temperature of 36.3 °C (97.3 °F) with a repeatability of approximately ±20 mK. The fixed-point design has taken into consideration the optical characteristics of tympanic thermometers, enabling wide-angled field of view devices to be successfully tested. The overall uncertainty of the device is estimated to be less than 0.1 °C. The paper gives a description of the fixed point, its design and construction, as well as the results to date of validation tests.
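The quoted Fahrenheit equivalent follows directly from the standard conversion formula, a quick consistency check on the stated fixed-point temperature:

```python
t_c = 36.3                    # fixed-point temperature, °C
t_f = t_c * 9 / 5 + 32        # Fahrenheit equivalent
print(round(t_f, 1))          # 97.3
```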
Galois groups of Schubert problems via homotopy computation
NASA Astrophysics Data System (ADS)
Leykin, Anton; Sottile, Frank
2009-09-01
Numerical homotopy continuation of solutions to polynomial equations is the foundation for numerical algebraic geometry, whose development has been driven by applications of mathematics. We use numerical homotopy continuation to investigate the problem in pure mathematics of determining Galois groups in the Schubert calculus. For example, we show by direct computation that the Galois group of the Schubert problem of 3-planes in ℂ^8 meeting 15 fixed 5-planes non-trivially is the full symmetric group S_6006.
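The number 6006 here is the count of solutions to the Schubert problem, which equals the degree of the Grassmannian Gr(3,8) and, by a classical result, the number of standard Young tableaux of a 3×5 rectangle. A sketch computing it with the hook length formula:

```python
from math import factorial

def syt_count(rows, cols):
    """Number of standard Young tableaux of a rows x cols rectangle, via the
    hook length formula. For Gr(k, n) with rows = k, cols = n - k, this is the
    degree of the Grassmannian: the number of k-planes meeting k(n-k) generic
    (n-k)-planes non-trivially."""
    hooks = 1
    for i in range(rows):
        for j in range(cols):
            hooks *= (rows - i) + (cols - j) - 1   # hook length at cell (i, j)
    return factorial(rows * cols) // hooks

print(syt_count(3, 5))  # 6006, matching the Galois group S_6006
```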
Introducing soft systems methodology plus (SSM+): why we need it and what it can contribute.
Braithwaite, Jeffrey; Hindle, Don; Iedema, Rick; Westbrook, Johanna I
2002-01-01
There are many complicated and seemingly intractable problems in the health care sector. Past ways to address them have involved political responses, economic restructuring, biomedical and scientific studies, and managerialist or business-oriented tools. Few methods have enabled us to develop a systematic response to problems. Our version of soft systems methodology, SSM+, seems to improve problem solving processes by providing an iterative, staged framework that emphasises collaborative learning and systems redesign involving both technical and cultural fixes.
Investigation of stress concentration at corner points for orthotropic plate bending problem
NASA Astrophysics Data System (ADS)
Vasilyan, N. G.
2018-04-01
This article deals with the bending problem for an orthotropic semi-infinite plate strip when three edges of the plate are hinged and the fourth edge goes to infinity. The plate is loaded with distributed load of intensity q(y). A. Nadai’s approach is applied, which says that to obtain the solution at a far distance from the edge, it is necessary to solve the problem of cylindrical bending. The generalized shearing forces on the fixed edge are investigated.
New geophysical electromagnetic method of archeological object research in Egypt
NASA Astrophysics Data System (ADS)
Hachay, O. A.; Khachay, O. Yu.; Attia, Magdi.
2009-04-01
Enhanced geophysical techniques and instrumentation, together with precise interpretation of the geophysical data, are demanded for resolving complex geophysical research problems, especially in the absence of a priori information about the studied site. Therefore, an approach using the planshet method of electromagnetic induction in frequency geometry was developed by Hachay et al., 1997a, 1997b, 1999, 2000, 2002, and 2005. The method was adapted to map and monitor highly complicated geological media and to determine the structural factors and criteria of the rock massif in the mine subsurface. The field observation scheme and the method of interpretation distinguish the new technology from other, earlier methods of field raying or tomography (Hachay et al., 1997c, 1999, and 2000). The 3D geoelectrical medium research is based on the concept of three-staged interpretation of the alternating electromagnetic field in the frame of a block-layered isotropic medium with inclusions (Hachay 1997a, and 2002). In the first stage, the geoelectrical parameters of the horizontal block-layered medium, which includes the heterogeneities, are defined. In the second stage, a geometrical model of the different local heterogeneities or groups inside the block-layered medium is constructed, based on the local geoelectrical heterogeneities produced in the first stage after filtering the anomalous fields plunged in the medium. In the third stage, the surfaces of the searched heterogeneities can be calculated, taking into account the physical parameters of the anomalous objects. For the practical realization of that conception, a system of observation for the alternating electromagnetic field with use of a vertical magnetic dipole was elaborated. Such a local source of excitation and a regular net of observations allow overlapping by different angles of observation directions.
As input data for interpretation, the moduli of the three components of the magnetic field are used. For surface observations, the data are measured on the Earth's surface at a set of distances between the source and receiver as a function of frequency. The moduli of three components of the magnetic field are measured: the vertical component and two horizontal components, one directed toward the source and the second perpendicular to that direction. Measurements are provided in the frame of a planshet with a fixed net, a fixed step, and a fixed length of the planshet's side. In the frame of profile observations, the planshet becomes a band or a line, and the length of the band or line is the base of observations, or an array. For the variant of a wide profile (band), the source of excitation is located at the beginning of the array on a profile parallel to the measuring profile; we call that a wide array. It moves systematically with a fixed step of meters. For the variant of an ordinary profile, the source is located on the measuring profile and the movement of the oscillator is similar. For the variant of a planshet survey, the source is located at the center of the planshet using the fixed net of observation. The planshet array then moves systematically with overlapping, usually by half of the planshet. The interpretation is made in the frame of an n-layered model for each array and planshet location. After that, each point of the planshet is associated with one and only one column of layer thicknesses and a corresponding fixed column of resistances of the medium in those layers. Gathering the information of all planshets together, we obtain a many-valued function at each point: a distribution of thicknesses and resistances of the medium layers. We then calculate the average value of these distributions for each point of the observation set.
Thus we obtain the unique distribution of thicknesses of horizontal layers and resistances, which corresponds to a medium model shaped as a cylinder with vertical generatrices, a rectangle at the bottom, and the point of observation located at its center. Thus we change from a layered model to a block-layered model. Then, gathering the values of thicknesses and resistances for all points of observation located on one and the same profile, we obtain the file of an average cross-section along the profile. The next step is combining neighboring blocks with close-range values of resistance into one block. That operation is made according to a fixed scale of resistance. The second stage of interpretation is used to define the geometrical characteristics of conductive inclusions and their equivalent moments, which are proportional to the ratio of the conductivity difference between the host rock and the inclusion to the conductivity of the host rock. Here the approximation principle for alternating electromagnetic fields is used. The initial model of the inclusion is a current line of fixed length. That approximation construction is used for fitting the average parameter of geoelectrical heterogeneity, which is calculated and assigned to each point of the profile (Hachay et al., 2002). The first problem was to find the tomb of Ptolemy in Alexandria. That work was carried out by NRIAG together with the Aphine University. The historical and archeological work had been conducted over a long time. When we were asked to carry out our research on that object, it was necessary to indicate more precisely the place of the tomb on the territory of the ancient royal garden in Alexandria. NRIAG had carried out electroprospecting work using radar and vertical electric soundings. Using our results for the archeological object, a more precise place was chosen for the borehole and for subsequent excavation.
The results of drilling showed, as forecast, that from a depth of 7 m at the indicated picket of the observed profile, stone objects were revealed that differ from the limestones and sandstones. The drilling was completed on 20 April 2008.
Photogrammetric Measurements in Fixed Wing Uav Imagery
NASA Astrophysics Data System (ADS)
Gülch, E.
2012-07-01
Several flights have been undertaken with PAMS (Photogrammetric Aerial Mapping System) by Germap, Germany, which is briefly introduced. This system is based on the SmartPlane fixed-wing UAV and a CANON IXUS camera system. The plane is equipped with GPS and has an infrared sensor system to estimate attitude values. Software has been developed to link the PAMS output to a standard photogrammetric processing chain built on Trimble INPHO. The linking of the image files and image IDs and the handling of different cases with partly corrupted output have to be solved to generate an INPHO project file. Based on this project file, the software packages MATCH-AT, MATCH-T DSM, OrthoMaster and OrthoVista are applied for digital aerial triangulation, DTM/DSM generation and, finally, digital orthomosaic generation. The focus has been on how to adapt the "usual" parameters of the digital aerial triangulation and other software to the UAV flight conditions, which show high overlaps, large kappa angles and a certain image blur in case of turbulence. It was found that the selected parameter setup shows quite stable behaviour and can be applied to other flights. A comparison is made to results from other open-source multi-ray matching software to handle the issue of the described flight conditions. Flights over the same area at different times have been compared to each other. The major objective here was to see how far the results differ relative to each other, without having access to ground control data, an approach with potential for applications with low requirements on absolute accuracy. The results show visible influences of weather and illumination. The "unusual" flight pattern, which shows large time differences between neighbouring strips, has an influence on the AT and DTM/DSM generation. The results obtained so far indicate problems in the stability of the camera calibration.
This clearly requires the use of GCPs for all projects, independent of the application. The effort is estimated to be even higher than expected, as self-calibration will also be needed to handle a possibly unstable camera calibration. To overcome some of the encountered problems with the very specific features of UAV flights, a software package called UAVision was developed based on Open Source libraries to produce input data for bundle adjustment of UAV images by PAMS. The empirical test results show a considerable improvement in the matching of tie points. The results do, however, show that the Open Source bundle adjustment was not applicable to this type of imagery. This still leaves the possibility of using the improved tie point correspondences in the commercial AT package.
Paerl, Hans W; Xu, Hai; Hall, Nathan S; Zhu, Guangwei; Qin, Boqiang; Wu, Yali; Rossignol, Karen L; Dong, Linghan; McCarthy, Mark J; Joyner, Alan R
2014-01-01
Excessive anthropogenic nitrogen (N) and phosphorus (P) inputs have caused an alarming increase in harmful cyanobacterial blooms, threatening sustainability of lakes and reservoirs worldwide. Hypertrophic Lake Taihu, China's third largest freshwater lake, typifies this predicament, with toxic blooms of the non-N2 fixing cyanobacteria Microcystis spp. dominating from spring through fall. Previous studies indicate N and P reductions are needed to reduce bloom magnitude and duration. However, N reductions may encourage replacement of non-N2 fixing with N2 fixing cyanobacteria. This potentially counterproductive scenario was evaluated using replicate, large (1000 L), in-lake mesocosms during summer bloom periods. N+P additions led to maximum phytoplankton production. Phosphorus enrichment, which promoted N limitation, resulted in increases in N2 fixing taxa (Anabaena spp.), but it did not lead to significant replacement of non-N2 fixing with N2 fixing cyanobacteria, and N2 fixation rates remained ecologically insignificant. Furthermore, P enrichment failed to increase phytoplankton production relative to controls, indicating that N was the most limiting nutrient throughout this period. We propose that Microcystis spp. and other non-N2 fixing genera can maintain dominance in this shallow, highly turbid, nutrient-enriched lake by outcompeting N2 fixing taxa for existing sources of N and P stored and cycled in the lake. To bring Taihu and other hypertrophic systems below the bloom threshold, both N and P reductions will be needed until the legacy of high N and P loading and sediment nutrient storage in these systems is depleted. At that point, a more exclusive focus on P reductions may be feasible.
NASA Astrophysics Data System (ADS)
Kelley, C. J.; Keller, C. K.; Smith, J. L.; Evans, R. D.; Harlow, B.
2011-12-01
Buffer strips are commonly used to decrease agricultural runoff with the objective of limiting sediment and agrochemical fluxes to surface waters. The objective of this study was to determine the effects of an alfalfa buffer strip on the magnitude and source(s) of leached nitrate from a dryland agricultural field. Previous research at the Cook Agronomy Farm has inferred two sources of nitrate in tile drain discharge: a high-discharge-season (January through May) synthetic fertilizer source, and a low-discharge-season (June through December) soil organic nitrogen source. This study examines how a change in management strategy and crop species alters the low-discharge-season nitrate source. In the spring of 2006 an alfalfa buffer strip approximately 20 m wide was planted running approximately north-south in the lowland portion of a 12 ha tile-drained field bordering a ditch that drains into Missouri Flat Creek. The three-year (2003 through 2005) average NO3--N flux prior to the planting of the alfalfa buffer strip was ~0.40 kg ha-1 year-1. After planting, the three-year (2006 through 2008) average NO3--N flux was ~0.38 kg ha-1 year-1. The lack of evident buffer-strip influence on the fluxes may be due in part to the large variation in precipitation amounts and timing that control water flows through the system. Three-year average δ15N values of nitrate for the tile drain before and after planting of the alfalfa buffer strip were 6.9 ± 1.1 ‰ and 4.2 ± 0.9 ‰, respectively. We hypothesize that this significant difference indicates that the alfalfa strip affects the source of leached nitrate. Before planting the alfalfa buffer strip, the interpreted source of nitrate was mineralization of soil organic nitrogen from non-N2 fixing crops (spring and summer wheat varieties). After planting the alfalfa buffer strip, the source of nitrate was interpreted to be a mixture of mineralized soil organic nitrogen from N2 fixing alfalfa and non-N2 fixing crops.
Further work is needed to test alternative explanations for the observed isotopic shift. This study suggests that the effects of leguminous buffer strips on nutrient fluxes are not simple, and may depend on combinations of hydrologic and pedo-geologic factors.
Spatial noise in microdisplays for near-to-eye applications
NASA Astrophysics Data System (ADS)
Hastings, Arthur R., Jr.; Draper, Russell S.; Wood, Michael V.; Fellowes, David A.
2011-06-01
Spatial noise in imaging systems has been characterized and its impact on image quality metrics has been addressed primarily with respect to the introduction of this noise at the sensor component. However, sensor fixed pattern noise is not the only source of fixed pattern noise in an imaging system. Display fixed pattern noise cannot be easily mitigated in processing and, therefore, must be addressed. In this paper, a thorough examination of the amount and the effect of display fixed pattern noise is presented. The specific manifestation of display fixed pattern noise is dependent upon the display technology. Utilizing a calibrated camera, US Army RDECOM CERDEC NVESD has developed a microdisplay (μdisplay) spatial noise data collection capability. Noise and signal power spectra were used to characterize the display signal to noise ratio (SNR) as a function of spatial frequency analogous to the minimum resolvable temperature difference (MRTD) of a thermal sensor. The goal of this study is to establish a measurement technique to characterize μdisplay limiting performance to assist in proper imaging system specification.
N2-fixing red alder indirectly accelerates ecosystem nitrogen cycling
Perakis, Steven S.; Matkins, Joselin J.; Hibbs, David E.
2012-01-01
Symbiotic N2-fixing tree species can accelerate ecosystem N dynamics through decomposition via direct pathways by producing readily decomposed leaf litter and increasing N supply to decomposers, as well as via indirect pathways by increasing tissue and detrital N in non-fixing vegetation. To evaluate the relative importance of these pathways, we compared three-year decomposition and N dynamics of N2-fixing red alder leaf litter (2.34 %N) to both low-N (0.68 %N) and high-N (1.21 %N) litter of non-fixing Douglas-fir, and decomposed each litter source in four forests dominated by either red alder or Douglas-fir. We also used experimental N fertilization of decomposition plots to assess elevated N availability as a potential mechanism of N2-fixer effects on litter mass loss and N dynamics. Direct effects of N2-fixing red alder on decomposition occurred primarily as faster N release from red alder than Douglas-fir litter, but direct increases in N supply to decomposers via fertilization did not stimulate decomposition of any litter. Fixed N indirectly influenced detrital dynamics by increasing Douglas-fir tissue and litter N concentrations, which accelerated litter N release without accelerating mass loss. By increasing soil N, tissue N, and the rate of N release from litter of non-fixers, we conclude that N2-fixing vegetation can indirectly foster plant-soil feedbacks that contribute to the persistence of elevated N availability in terrestrial ecosystems.
Tang, Sui-Yan; Hara, Shintaro; Melling, Lulie; Goh, Kah-Joo; Hashidoko, Yasuyuki
2010-01-01
Root-associated bacteria of the nipa palm (Nypa fruticans), which prefers brackish-water-affected mud, were investigated in Sarawak, Malaysia. In a comparison of rhizobacterial microbiota between the nipa and the sago (Metroxylon sagu) palm, it was found that the nipa palm possessed a group of Burkholderia vietnamiensis as its main active nitrogen-fixing endophytic bacterium. Acetylene reduction by the various isolates of B. vietnamiensis was constant (44 to 68 nmol h(-1) in ethylene production rate) in soft gel medium containing 0.2% sucrose as the sole carbon source, and the bacterium also showed motility and biofilm-forming capacity. This is the first report of endophytic nitrogen-fixing bacteria from the nipa palm.
Rg-Lg coupling as a Lg-wave excitation mechanism
NASA Astrophysics Data System (ADS)
Ge, Z.; Xie, X.
2003-12-01
Regional phase Lg is predominantly composed of shear-wave energy trapped in the crust. Explosion sources are expected to be less efficient for excitation of Lg phases than earthquakes to the extent that the source can be approximated as isotropic. Shallow explosions generate relatively large surface wave Rg compared to deeper earthquakes, and Rg is readily disrupted by crustal heterogeneity. Rg energy may thus scatter into trapped crustal S-waves near the source region and contribute to low-frequency Lg waves. In this study, finite-difference modeling combined with slowness analysis is used to investigate the above-mentioned Lg-wave excitation mechanism. The method allows us to investigate near-source energy partitioning in multiple domains including frequency, slowness and time. The main advantage of this method is that it can be applied at close range, before Lg is actually formed, which allows us to use a very fine near-source velocity model to simulate the energy partitioning process. We use a layered velocity structure as the background model and add small near-source random velocity patches to the model to generate the Rg to Lg coupling. Two types of simulations are conducted: (1) a fixed shallow explosion source vs. randomness at different depths and (2) a fixed shallow randomness vs. explosion sources at different depths. The results show apparent coupling between the Rg and Lg waves at lower frequencies (0.3-1.5 Hz). A shallow source combined with shallow randomness generates the maximum Lg-wave, which is consistent with the Rg energy distribution of a shallow explosion source. The Rg energy and excited Lg energy show a near linear relationship. The numerical simulation and slowness analysis suggest that the Rg to Lg coupling is an effective excitation mechanism for low-frequency Lg-waves from a shallow explosion source.
Alternate Sources for Propellant Ingredients.
1976-07-07
Problems exist for a variety of reasons: (3) sole source; (4) medical/OSHA/EPA problems; (5) dependent on foreign imports; and (6) specification problems. … regulations of OSHA or EPA affect production or use of the product; 5. Plant capacity - when demand increases faster than predictions; 6. Supply
NASA Technical Reports Server (NTRS)
Moore, J. E.
1975-01-01
An enumeration algorithm is presented for solving a scheduling problem similar to the single machine job shop problem with sequence dependent setup times. The scheduling problem differs from the job shop problem in two ways. First, its objective is to select an optimum subset of the available tasks to be performed during a fixed period of time. Secondly, each task scheduled is constrained to occur within its particular scheduling window. The algorithm is currently being used to develop typical observational timelines for a telescope that will be operated in earth orbit. Computational times associated with timeline development are presented.
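The abstract does not reproduce the enumeration algorithm itself. As a rough sketch of the underlying problem only, the following brute-force enumeration selects the largest feasible subset and ordering of tasks under a fixed horizon, per-task scheduling windows, and sequence-dependent setup times; all task names, durations, windows, and setup values here are hypothetical, and the actual algorithm prunes this search rather than enumerating exhaustively:

```python
from itertools import permutations

# Hypothetical task set: (name, duration, window_start, window_end)
TASKS = [
    ("A", 2, 0, 6),
    ("B", 3, 2, 10),
    ("C", 1, 0, 4),
]
# Sequence-dependent setup time between consecutive tasks (illustrative values).
SETUP = {("A", "B"): 1, ("B", "A"): 1, ("A", "C"): 2,
         ("C", "A"): 2, ("B", "C"): 1, ("C", "B"): 1}
HORIZON = 10  # fixed period of time available

def schedule_value(order):
    """Return the number of tasks scheduled if the order is feasible, else -1."""
    t = 0
    prev = None
    for name, dur, ws, we in order:
        if prev is not None:
            t += SETUP[(prev, name)]   # sequence-dependent setup
        t = max(t, ws)                 # wait for the window to open
        if t + dur > we or t + dur > HORIZON:
            return -1                  # task would miss its window or the horizon
        t += dur
        prev = name
    return len(order)

def best_subset():
    """Enumerate all subsets and orders; return the largest feasible schedule."""
    best = []
    n = len(TASKS)
    for mask in range(1, 1 << n):
        subset = [TASKS[i] for i in range(n) if mask >> i & 1]
        for order in permutations(subset):
            if schedule_value(order) > len(best):
                best = list(order)
    return [t[0] for t in best]
```

With the data above, all three tasks fit only in the order C, A, B, so `best_subset()` returns that sequence.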
Complete data listings for CSEM soundings on Kilauea Volcano, Hawaii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kauahikaua, J.; Jackson, D.B.; Zablocki, C.J.
1983-01-01
This document contains complete data from a controlled-source electromagnetic (CSEM) sounding/mapping project at Kilauea volcano, Hawaii. The data were obtained at 46 locations about a fixed-location, horizontal, polygonal loop source in the summit area of the volcano. The data consist of magnetic field amplitudes and phases at excitation frequencies between 0.04 and 8 Hz. The vector components were measured in a cylindrical coordinate system centered on the loop source. 5 references.
NASA Astrophysics Data System (ADS)
Barth, Daniel S.; Sutherling, William; Engle, Jerome; Beatty, Jackson
1984-01-01
Neuromagnetic measurements were performed on 17 subjects with focal seizure disorders. In all of the subjects, the interictal spike in the scalp electroencephalogram was associated with an orderly extracranial magnetic field pattern. In eight of these subjects, multiple current sources underlay the magnetic spike complex. The multiple sources within a given subject displayed a fixed chronological sequence of discharge, demonstrating a high degree of spatial and temporal organization within the interictal focus.
Lagrangian modeling of global atmospheric methane (1990-2012)
NASA Astrophysics Data System (ADS)
Arfeuille, Florian; Henne, Stephan; Brunner, Dominik
2016-04-01
In the MAIOLICA-II project, the Lagrangian particle model FLEXPART is used to simulate global atmospheric methane over the 1990-2012 period. In this Lagrangian framework, 3 million particles are permanently transported based on winds from ERA-Interim. The history of individual particles can be followed, allowing for a comprehensive analysis of transport pathways and timescales. The link between sources (emissions) and receptors (measurement stations) is then established in a straightforward manner, a prerequisite for source inversion problems. FLEXPART was extended to incorporate methane loss by reaction with OH, soil uptake, and stratospheric loss reactions with prescribed Cl and O(1D) radicals. Sources are separated into 245 different tracers, depending on source origin (anthropogenic, wetlands, rice, biomass burning, termites, wild animals, oceans, volcanoes), region of emission, and time since emission (5 age classes). The inversion method applied is a fixed-lag Kalman smoother similar to that described in Bruhwiler et al. [2005]. Results from the FLEXPART global methane simulation and from the subsequent inversion will be presented. Results notably suggest:
- A reduction in methane growth rates due to diminished wetland emissions and anthropogenic European emissions in 1990-1993.
- A second decrease in 1995-1996, also mainly attributed to these two emission categories.
- A reduced increase in Chinese anthropogenic emissions after 2003 compared to EDGAR inventories.
- Large South American wetland emissions during the entire period.
Bruhwiler, L. M. P., Michalak, A. M., Peters, W., Baker, D. F. & Tans, P. 2005: An improved Kalman smoother for atmospheric inversions, Atmos. Chem. Phys., 5, 2691-2702.
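The fixed-lag smoother of Bruhwiler et al. operates on a full atmospheric transport model and many tracers; purely as an illustration of the fixed-lag idea, the sketch below applies a forward Kalman filter followed by a Rauch-Tung-Striebel backward pass over a trailing window to a scalar random-walk model. The model, noise variances, and initialization are assumptions for this toy example, not the paper's:

```python
def fixed_lag_smoother(ys, lag, q=0.01, r=0.25):
    """Scalar random-walk model: x_t = x_{t-1} + w (var q), y_t = x_t + v (var r).
    Returns smoothed estimates of x_{t-lag} via an RTS pass over the last `lag` steps."""
    # Forward filter, storing predicted and filtered means/variances.
    m_f, P_f, m_p, P_p = [], [], [], []
    m, P = ys[0], r                      # initialize at first observation (assumption)
    m_f.append(m); P_f.append(P); m_p.append(m); P_p.append(P)
    for y in ys[1:]:
        mp, Pp = m, P + q                # predict
        K = Pp / (Pp + r)                # Kalman gain
        m, P = mp + K * (y - mp), (1 - K) * Pp   # update
        m_p.append(mp); P_p.append(Pp)
        m_f.append(m); P_f.append(P)
    # Fixed-lag smoothing: RTS backward recursion over the trailing window.
    out = []
    for t in range(lag, len(ys)):
        ms, Ps = m_f[t], P_f[t]
        for k in range(t - 1, t - lag - 1, -1):
            C = P_f[k] / P_p[k + 1]      # smoother gain
            ms = m_f[k] + C * (ms - m_p[k + 1])
            Ps = P_f[k] + C * (Ps - P_p[k + 1]) * C
        out.append(ms)                   # smoothed estimate of x_{t-lag}
    return out
```

The inversion in the paper estimates emissions rather than a scalar state, but the windowed forward/backward structure is the same.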
Telomeres and the ethics of human cloning.
Allhoff, Fritz
2004-01-01
In search of a potential problem with cloning, I investigate the phenomenon of telomere shortening which is caused by cell replication; clones created from somatic cells will have shortened telomeres and therefore reach a state of senescence more rapidly. While genetic intervention might fix this problem at some point in the future, I ask whether, absent technological advances, this biological phenomenon undermines the moral permissibility of cloning.
Reproducibility in a multiprocessor system
Bellofatto, Ralph A; Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Gooding, Thomas M; Haring, Rudolf A; Heidelberger, Philip; Kopcsay, Gerard V; Liebsch, Thomas A; Ohmacht, Martin; Reed, Don D; Senger, Robert M; Steinmacher-Burow, Burkhard; Sugawara, Yutaka
2013-11-26
Fixing a problem is usually greatly aided if the problem is reproducible. To ensure reproducibility of a multiprocessor system, the following aspects are proposed: a deterministic system start state, a single system clock, phase alignment of clocks in the system, system-wide synchronization events, reproducible execution of system components, deterministic chip interfaces, zero-impact communication with the system, precise stop of the system, and a scan of the system state.
2016-10-31
statistical physics. Sec. IV includes several examples of the application of the stochastic method, including matching of a shape to a fixed design, and...an important part of any future application of this method. Second, re-initialization of the level set can lead to small but significant movements of...of engineering design problems [6, 17]. However, many of the relevant applications involve non-convex optimisation problems with multiple locally
NASA Astrophysics Data System (ADS)
Kulikova, N. V.; Chepurova, V. M.
2009-10-01
So far we have investigated the unperturbed dynamics of meteoroid complexes. The numerical integration of the differential equations of motion in the N-body problem by the Everhart algorithm (N=2-6) and the introduction of intermediate hyperbolic orbits, built on the basis of the generalized problem of two fixed centers, permit us to take some gravitational perturbations into account.
ERIC Educational Resources Information Center
Calisto, George W.
2013-01-01
This study sought to integrate Dweck and Leggett's (1988) self-theories of intelligence model (i.e., the view that intelligence is either fixed and unalterable or changeable through hard work and effort) with Elliot and Dweck's (1988) achievement goal theory, which explains why some people are oriented towards learning and others toward…
Hip-hop solutions of the 2N-body problem
NASA Astrophysics Data System (ADS)
Barrabés, Esther; Cors, Josep Maria; Pinyol, Conxita; Soler, Jaume
2006-05-01
Hip-hop solutions of the 2N-body problem with equal masses are shown to exist using an analytic continuation argument. These solutions are close to planar regular 2N-gon relative equilibria with small vertical oscillations. For fixed N, an infinity of these solutions are three-dimensional choreographies, with all the bodies moving along the same closed curve in the inertial frame.
Lee, Hae-In; Donati, Andrew J; Hahn, Dittmar; Tisa, Louis S; Chang, Woo-Suk
2013-12-01
We investigated the effect of different nitrogen (N) sources on exopolysaccharide (EPS) production and composition by Frankia strain CcI3, a N2-fixing actinomycete that forms root nodules with Casuarina species. Frankia cells grown in the absence of NH4Cl (i.e., under N2-fixing conditions) produced 1.7-fold more EPS, with lower galactose (45.1 vs. 54.7 mol%) and higher mannose (17.3 vs. 9.7 mol%) contents than those grown in the presence of NH4Cl as a combined N-source. In the absence of the combined N-source, terminally linked and branched residue contents were nearly twice as high with 32.8 vs. 15.1 mol% and 15.1 vs. 8.7 mol%, respectively, than in its presence, while the content of linearly linked residues was lower with 52.1 mol% compared to 76.2 mol%. To find clues to the altered EPS production at the transcriptional level, we performed whole-gene expression profiling using quantitative reverse transcription PCR and microarray technology. The transcription profiles of Frankia strain CcI3 grown in the absence of NH4Cl revealed up to 2 orders of magnitude higher transcription of nitrogen fixation-related genes compared to those of CcI3 cells grown in the presence of NH4Cl. Unexpectedly, microarray data did not provide evidence for transcriptional regulation as a mechanism for differences in EPS production. These findings indicate effects of nitrogen fixation on the production and composition of EPS in Frankia strain CcI3 and suggest posttranscriptional regulation of enhanced EPS production in the absence of the combined N-source.
Precision blackbody sources for radiometric standards.
Sapritsky, V I; Khlevnoy, B B; Khromchenko, V B; Lisiansky, B E; Mekhontsev, S N; Melenevsky, U A; Morozova, S P; Prokhorov, A V; Samoilov, L N; Shapoval, V I; Sudarev, K A; Zelener, M F
1997-08-01
The precision blackbody sources developed at the All-Russian Institute for Optical and Physical Measurements (Moscow, Russia) and their characteristics are analyzed. The precision high-temperature graphite blackbody BB22p, large-area high-temperature pyrolytic graphite blackbody BB3200pg, middle-temperature graphite blackbody BB2000, low-temperature blackbody BB300, and gallium fixed-point blackbody BB29gl and their characteristics are described.
Angular distribution of photoelectrons at 584A using polarized radiation
NASA Technical Reports Server (NTRS)
Hancock, W. H.; Samson, J. A. R.
1975-01-01
Photoelectron angular distributions for Ar, Xe, N2, O2, CO, CO2, and NH3 were obtained at 584 A by observing the photoelectrons at a fixed angle and simply rotating the plane of polarization of a highly polarized photon source. The radiation from a helium dc glow discharge source was polarized (84%) using a reflection type polarizer.
Performance improvement: one model to reduce length of stay.
Chisari, E; Mele, J A
1994-01-01
Dedicated quality professionals are tired of quick fixes, Band-Aids, and other first-aid strategies that offer only temporary relief of nagging problems rather than a long-term cure. Implementing strategies that can produce permanent solutions to crucial problems is a challenge confronted by organizations striving for continuous performance improvement. One vehicle, driven by data and customer requirements, that can help to solve problems and sustain success over time is the storyboard. This article illustrates the use of the storyboard as the framework for reducing length of stay--one of the most important problems facing healthcare organizations today.
The Balloon Popping Problem Revisited: Lower and Upper Bounds
NASA Astrophysics Data System (ADS)
Jung, Hyunwoo; Chwa, Kyung-Yong
We consider the balloon popping problem introduced by Immorlica et al. in 2007 [13]. This problem is directly related to the problem of profit maximization in online auctions, where an auctioneer is selling a collection of identical items to anonymous unit-demand bidders. The auctioneer has the full knowledge of bidders’ private valuations for the items and tries to maximize his profit. Compared with the profit of fixed price schemes, the competitive ratio of Immorlica et al.’s algorithm was in the range [1.64, 4.33]. In this paper, we narrow the gap to [1.659, 2].
A Method for Simulation of Rotorcraft Fly-In Noise for Human Response Studies
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Christian, Andrew
2015-01-01
The low frequency content of rotorcraft noise allows it to be heard over great distances. This factor contributes to the disruption of natural quiet in national parks and wilderness areas, and can lead to annoyance in populated areas. Further, it can result in detection at greater distances compared to higher-altitude fixed-wing aircraft operations. Human response studies conducted in the field are hampered by the difficulty of controlling test conditions. Specifically, compared to fixed-wing aircraft, the source noise itself may significantly vary over time even for nominally steady flight conditions, and the propagation of that noise is more variable due to low altitude meteorological conditions. However, it is possible to create the salient features of rotorcraft fly-in noise in a more controlled laboratory setting through recent advancements made in source noise synthesis, propagation modeling and reproduction. This paper concentrates on the first two of these. In particular, the rotorcraft source noise pressure time history is generated using single blade passage signatures from the main and tail rotors. These may be obtained from either acoustic source noise predictions or back-propagation of ground-based measurements. Propagation effects include atmospheric absorption, spreading loss, Doppler shift, and ground plane reflections.
Analyzing and Predicting Effort Associated with Finding and Fixing Software Faults
NASA Technical Reports Server (NTRS)
Hamill, Maggie; Goseva-Popstojanova, Katerina
2016-01-01
Context: Software developers spend a significant amount of time fixing faults. However, not many papers have addressed the actual effort needed to fix software faults. Objective: The objective of this paper is twofold: (1) analysis of the effort needed to fix software faults and how it was affected by several factors and (2) prediction of the level of fix implementation effort based on the information provided in software change requests. Method: The work is based on data related to 1200 failures, extracted from the change tracking system of a large NASA mission. The analysis includes descriptive and inferential statistics. Predictions are made using three supervised machine learning algorithms and three sampling techniques aimed at addressing the imbalanced data problem. Results: Our results show that (1) 83% of the total fix implementation effort was associated with only 20% of failures. (2) Both safety critical failures and post-release failures required three times more effort to fix compared to non-critical and pre-release counterparts, respectively. (3) Failures with fixes spread across multiple components or across multiple types of software artifacts required more effort. The spread across artifacts was more costly than spread across components. (4) Surprisingly, some types of faults associated with later life-cycle activities did not require significant effort. (5) The level of fix implementation effort was predicted with 73% overall accuracy using the original, imbalanced data. Using oversampling techniques improved the overall accuracy up to 77%. More importantly, oversampling significantly improved the prediction of the high level effort, from 31% to around 85%. 
Conclusions: This paper shows the importance of tying software failures to changes made to fix all associated faults, in one or more software components and/or in one or more software artifacts, and the benefit of studying how the spread of faults and other factors affect the fix implementation effort.
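The paper does not specify which oversampling techniques were used to address the imbalanced-data problem. The simplest member of that family, random oversampling, can be sketched as follows; the class labels and data here are illustrative, not taken from the NASA data set:

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until every class reaches
    the majority-class count. A simple stand-in for the paper's unspecified
    oversampling techniques (which may instead synthesize new samples)."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xo, yo = [], []
    for label, rows in by_class.items():
        # Resample existing rows with replacement to fill the deficit.
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for xi in rows + extra:
            Xo.append(xi)
            yo.append(label)
    return Xo, yo
```

Balancing classes this way gives the learner equal exposure to rare high-effort failures, which is consistent with the paper's observation that oversampling chiefly improved prediction of the high-effort level.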
Almutairy, Meznah; Torng, Eric
2018-01-01
Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
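The two sampling schemes being compared can be sketched minimally as follows, assuming lexicographic ordering for minimizers; real implementations hash k-mers and index all occurrences rather than just positions:

```python
def fixed_sampling(seq, k, step):
    """Fixed sampling: index every `step`-th k-mer starting position."""
    return {i for i in range(0, len(seq) - k + 1, step)}

def minimizer_sampling(seq, k, w):
    """Minimizer sampling: index the position of the smallest k-mer
    (lexicographic order here) in every window of w consecutive k-mers."""
    picked = set()
    for start in range(len(seq) - k - w + 2):
        window = [(seq[i:i + k], i) for i in range(start, start + w)]
        picked.add(min(window)[1])   # ties broken by leftmost position
    return picked
```

The key trade-off the abstract describes is visible even here: minimizer sampling guarantees every window contributes a sampled k-mer (so query k-mers can also be sampled), while fixed sampling at a comparable rate yields fewer indexed positions.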
Fayyoumi, Ebaa; Oommen, B John
2009-10-01
We consider the microaggregation problem (MAP) that involves partitioning a set of individual records in a microdata file into a number of mutually exclusive and exhaustive groups. This problem, which seeks the best partition of the microdata file, is known to be NP-hard and has been tackled using many heuristic solutions. In this paper, we present the first reported fixed-structure-stochastic-automata-based solution to this problem. The newly proposed method leads to a lower value of the information loss (IL), obtains a better tradeoff between the IL and the disclosure risk (DR) when compared with state-of-the-art methods, and leads to a superior value of the scoring index, which is a criterion involving a combination of the IL and the DR. The scheme has been implemented, tested, and evaluated for different real-life and simulated data sets. The results clearly demonstrate the applicability of learning automata to the MAP and its ability to yield a solution that obtains the best tradeoff between IL and DR when compared with the state of the art.
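The learning-automata scheme itself is beyond a short sketch, but the objects it optimizes can be illustrated with the simplest fixed-size microaggregation heuristic on univariate data. The SSE/SST information-loss measure below follows common microaggregation practice; the grouping strategy is a naive baseline, not the paper's method:

```python
def microaggregate(values, k):
    """Partition records (sorted by value) into groups of at least k and
    replace each value by its group mean. Returns (masked_values, IL), where
    IL is the within-group sum of squares over the total sum of squares."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2] += groups.pop()       # merge a too-small tail group
    masked = list(values)
    sse = 0.0
    for g in groups:
        mean = sum(values[i] for i in g) / len(g)
        for i in g:
            sse += (values[i] - mean) ** 2
            masked[i] = mean
    total_mean = sum(values) / len(values)
    sst = sum((v - total_mean) ** 2 for v in values)
    return masked, (sse / sst if sst else 0.0)
```

Because each published value is shared by at least k records, disclosure risk falls as k grows, while the information loss IL rises; the MAP is precisely the search for partitions that trade these off well.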
Many-to-Many Multicast Routing Schemes under a Fixed Topology
Ding, Wei; Wang, Hongfa; Wei, Xuerui
2013-01-01
Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. The paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard. However, we discover that it is a good idea to find a many-to-many multicast tree with QoS optimization under a fixed topology. In this paper, we are concerned with three kinds of QoS optimization objectives for the multicast tree: the minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems comes in two versions, centralized and decentralized. This paper uses dynamic programming to devise an exact algorithm for the centralized and decentralized versions of each optimization problem. PMID:23589706
Prabhu, Radhakrishnan; Prabhu, Geetha; Baskaran, Eswaran; Arumugam, Eswaran M.
2016-01-01
Statement of Problem: In recent years, direct metal laser sintered (DMLS) metal-ceramic-based fixed partial denture prostheses have been used as an alternative to conventional metal-ceramic fixed partial denture prostheses. However, clinical studies evaluating their long-term clinical survivability and acceptability are limited. Aims and Objective: The aim of this study was to assess the efficacy of metal-ceramic fixed dental prostheses fabricated with the DMLS technique, and their clinical acceptance in long-term clinical use. Materials and Methods: The study group consisted of 45 patients who were restored with posterior three-unit fixed partial denture prostheses made using direct laser sintered metal-ceramic restorations. Patient recall and clinical examination of the restorations were done after 6 months and every 12 months thereafter for a period of 60 months. Clinical evaluation of the longevity of the restorations was done using modified Ryge criteria, which included chipping of the veneered ceramic, connector failure occurring in the fixed partial denture prosthesis, discoloration at the marginal areas of the veneered ceramic, and marginal adaptation of the metal and ceramic of the fixed denture prosthesis. Periapical status was assessed using periapical radiographs during the study period. Survival analysis was performed using the Kaplan–Meier method. Results: None of the patients had failure of the connector of the fixed partial denture prostheses during the study period. Two patients exhibited biological changes, which included periapical changes and proximal caries adjacent to the abutments. Conclusion: DMLS metal-ceramic fixed partial denture prostheses had a survival rate of 95.5% and yielded promising results during the 5-year clinical study. PMID:27141171
Lookback Option Pricing with Fixed Proportional Transaction Costs under Fractional Brownian Motion.
Sun, Jiao-Jiao; Zhou, Shengwu; Zhang, Yan; Han, Miao; Wang, Fei
2014-01-01
The pricing problem of a lookback option with a fixed proportion of transaction costs is investigated when the underlying asset price follows a fractional Brownian motion process. Firstly, using Leland's hedging method, a partial differential equation satisfied by the value of the lookback option is derived. Then we obtain its numerical solution by constructing a Crank-Nicolson scheme. Finally, the effectiveness of the proposed scheme is verified through a numerical example. Meanwhile, the impact of transaction cost rate and volatility on lookback option value is discussed.
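The option-pricing PDE itself is not reproduced in the abstract. Purely as a generic illustration of the Crank-Nicolson discretization it mentions, the sketch below applies the scheme to the 1-D heat equation u_t = u_xx with zero Dirichlet boundaries, using the Thomas algorithm for the tridiagonal solve; the grid, boundary conditions, and parameters are placeholders, not the paper's:

```python
def crank_nicolson_heat(u0, dx, dt, steps):
    """Crank-Nicolson time-stepping for u_t = u_xx with zero Dirichlet
    boundaries; the implicit tridiagonal system is solved per step."""
    n = len(u0) - 2                     # number of interior points
    r = dt / (2.0 * dx * dx)
    u = list(u0)
    for _ in range(steps):
        # Right-hand side: explicit half of the averaged scheme.
        d = [r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1]
             for i in range(1, n + 1)]
        # Implicit half: diagonal (1+2r), off-diagonals -r (Thomas algorithm).
        a, b, c = -r, 1 + 2 * r, -r
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c / b, d[0] / b
        for i in range(1, n):
            m = b - a * cp[i - 1]
            cp[i] = c / m
            dp[i] = (d[i] - a * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        u = [0.0] + x + [0.0]           # re-attach boundary values
    return u
```

Averaging the explicit and implicit halves is what makes Crank-Nicolson second-order accurate in time and unconditionally stable, which is why it is a common choice for option-pricing PDEs.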
NASA Technical Reports Server (NTRS)
Bronstein, L. M.
1979-01-01
The use of the 18 and 30 GHz bands for fixed service satellite communications is examined. The cost and performance expected of 18 and 30 GHz hardware is assessed, selected trunking and direct to user concepts are optimized, and the cost of these systems are estimated. The effect of rain attenuation on the technical and economic viability of the system and methods circumventing the problem are discussed. Technology developments are investigated and cost estimates of these developments are presented.
Budgets of fixed nitrogen in the Orinoco Savannah Region: Role of pyrodenitrification
NASA Astrophysics Data System (ADS)
Sanhueza, Eugenio; Crutzen, Paul J.
1998-12-01
Human activities have strongly altered the amount of fixed nitrogen that cycles in many regions of the industrialized world, with serious environmental consequences. Past studies conducted at the Orinoco savannahs of Venezuela offer a unique possibility for reviewing the cycling of nitrogen species in a tropical environment. The available information for the Orinoco savannahs is critically reviewed, and, despite many uncertainties, we present a budget analysis of both the fixed N in the soil-vegetation system and atmospheric NOy. Analysis of the data indicates that nitrogen fixation, especially by legumes, and ammonia emission from vegetation and animal wastes needs considerable attention in future research efforts. In contrast with many regions of the world, in the studied region, nonindustrial sources, foremost biomass burning, dominate the soil-vegetation and atmospheric budgets of fixed N. In general, N cycling is mainly driven by biomass burning. The resulting pyrodenitrification in the soil-vegetation system is the largest single process that, during the following wet season, may promote biological fixation to compensate for the N losses from fires during the burning season. However, a gradual impoverishment of the N status of the savannah ecosystems cannot be excluded. During the dry season, biomass burning is also the main source of atmospheric NOy, which is largely exported, mainly in the direction of the Amazon forest. Together with other nutrients, a "fertilization" of the Amazon forest due to biomass burning in the savannah may be the result. These issues require further scientific analysis.
NASA Astrophysics Data System (ADS)
Aziz, Mohammad Abdul; Al-khulaidi, Rami Ali; Rashid, MM; Islam, M. R.; Rashid, MAN
2017-03-01
In this research, the development and performance testing of a fixed-bed batch-type pyrolysis reactor for pilot-scale pyrolysis oil production was successfully completed. The characteristics of the pyrolysis oil were compared with other experimental results. A solid horizontal condenser, a burner for furnace heating, and a reactor shield were designed. The pilot-scale pyrolytic oil production encountered numerous problems during the plant's operation. This fixed-bed batch-type pyrolysis reactor demonstrates an energy-saving concept for solid waste tires by contributing to energy stability. From this experiment, the product yields (wt.%) were 49% liquid (pyrolytic) oil, 38.3% char, and 12.7% pyrolytic gas, with an operation running time of 185 minutes.
Remote listening and passive acoustic detection in a 3-D environment
NASA Astrophysics Data System (ADS)
Barnhill, Colin
Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the time and expense that travel requires. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate it for a group of people is an unsolved problem. This work will focus on new methods of multiple-source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to examine not only the time and frequency characteristics of an audio signal but also its spatial characteristics. In this way, a spherical harmonic transform is analogous to a Fourier transform: a Fourier transform takes a signal into the frequency domain, and a spherical harmonic transform takes a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations.
Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
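The decomposition described above can be sketched as a least-squares fit of the array's pressure samples to a truncated spherical harmonic basis. The helper names and the sampling grid in this Python sketch are our own illustration, not the system's implementation:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv   # associated Legendre P_n^m (Condon-Shortley phase)

def sph_harm_nm(m, n, azimuth, polar):
    """Complex spherical harmonic Y_n^m at azimuth in [0, 2*pi), polar in [0, pi]."""
    am = abs(m)
    norm = np.sqrt((2 * n + 1) / (4 * np.pi) * factorial(n - am) / factorial(n + am))
    y = norm * lpmv(am, n, np.cos(polar)) * np.exp(1j * am * azimuth)
    return (-1) ** am * np.conj(y) if m < 0 else y

def shd_coefficients(pressure, azimuth, polar, order):
    """Least-squares spherical harmonic decomposition of pressure samples
    taken at the microphone directions, truncated at a fixed order."""
    basis = np.column_stack([
        sph_harm_nm(m, n, azimuth, polar)
        for n in range(order + 1)
        for m in range(-n, n + 1)
    ])
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(pressure, dtype=complex), rcond=None)
    return coeffs   # ordered (n, m) = (0,0), (1,-1), (1,0), (1,1), ...
```

A spatially constant field projects entirely onto Y_0^0, so only the first coefficient is non-zero; the higher-order coefficients carry the spatial structure that beamforming exploits.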
ERIC Educational Resources Information Center
Fitzemeyer, Ted
2000-01-01
Discusses how proper maintenance can help schools eliminate sources contributing to poor air quality. Maintaining heating and air conditioning units, investigating bacterial breeding grounds, fixing leaking boilers, and adhering to ventilation codes and standards are discussed. (GR)
Design and Evaluation of Large-Aperture Gallium Fixed-Point Blackbody
NASA Astrophysics Data System (ADS)
Khromchenko, V. B.; Mekhontsev, S. N.; Hanssen, L. M.
2009-02-01
To complement existing water bath blackbodies that now serve as NIST primary standard sources in the temperature range from 15 °C to 75 °C, a gallium fixed-point blackbody has been recently built. The main objectives of the project included creating an extended-area radiation source with a target emissivity of 0.9999 capable of operating either inside a cryo-vacuum chamber or in a standard laboratory environment. A minimum aperture diameter of 45 mm is necessary for the calibration of radiometers with a collimated input geometry or large spot size. This article describes the design and performance evaluation of the gallium fixed-point blackbody, including the calculation and measurements of directional effective emissivity, estimates of uncertainty due to the temperature drop across the interface between the pure metal and radiating surfaces, as well as the radiometrically obtained spatial uniformity of the radiance temperature and the melting plateau stability. Another important test is the measurement of the cavity reflectance, which was achieved by using total integrated scatter measurements at a laser wavelength of 10.6 μm. The result allows one to predict the performance under the low-background conditions of a cryo-chamber. Finally, results of the spectral radiance comparison with the NIST water-bath blackbody are provided. The experimental results are in good agreement with predicted values and demonstrate the potential of our approach. It is anticipated that, after completion of the characterization, a similar source operating at the water triple point will be constructed.
Full circle to in-house facilities services.
Payne, Trevor
2002-08-01
Careful consideration must be taken prior to in-sourcing in order to ensure that the decision is right for the organisation. There will be pressure to go for the quick fix, or the option that involves the least pain or takes the least time (this may be a knee-jerk reaction to go straight back out to the market). Contractors will be alert to this due to market intelligence, and as a result one problem may be solved but a number of others created as the organisation is put over yet another barrel. Before any decision is taken, an analysis of the circumstances relating to the outsourced services will need to be undertaken. There are several stages to go through when considering in-sourcing, and on the whole the steps will mirror those that need to be considered when outsourcing services in the first instance. It is important to recognise that the change management process associated with in-sourcing services will need to be carefully managed. This point cannot be stressed enough. In-sourcing will require management of the outgoing contractor, the in-house team and the customers during the mobilisation phase. A facilities strategy aligned to the organisation's strategic direction, sharing core values and goals, is essential, alongside a good specification. To ensure services are delivered as specified, a robust and effective monitoring system will need to be developed and put into operation. Just as all organisations are different, the drivers influencing the in-sourcing decision will be different--with factors relevant to the host organisation. If in-sourcing has been thoroughly and carefully considered there is absolutely no reason why it should not be effective (as long as it is specified, resourced, managed and monitored in an appropriate manner). In-sourcing is now being considered as a viable alternative to outsourcing, as a vehicle to add value, a sense of corporatism and team spirit to the organisation.
Dissociation predicts later attention problems in sexually abused children
Kaplow, Julie B.; Hall, Erin; Koenen, Karestan C.; Dodge, Kenneth A.; Amaya-Jackson, Lisa
2008-01-01
Objective The goals of this research are to develop and test a prospective model of attention problems in sexually abused children that includes fixed variables (e.g., gender), trauma, and disclosure-related pathways. Methods At Time 1, fixed variables, trauma variables, and stress reactions upon disclosure were assessed in 156 children aged 8 to 13 years. At the Time 2 follow-up (8 to 36 months following the initial interview), 56 of the children were assessed for attention problems. Results A path analysis involving a series of hierarchically-nested, ordinary least squares multiple regression analyses indicated two direct paths to attention problems including the child’s relationship to the perpetrator (β = .23) and dissociation measured immediately after disclosure (β = .53), while controlling for concurrent externalizing behavior (β = .43). Posttraumatic stress symptoms were only indirectly associated with attention problems via dissociation. Taken together, these pathways accounted for approximately 52% of the variance in attention problems and provided an excellent fit to the data. Conclusions Children who report dissociative symptoms upon disclosure of CSA and/or were sexually abused by someone within their family are at an increased risk of developing attention problems. Practice Implications: Findings from this study indicate that children who experienced sexual abuse at an earlier age, by someone within their family, and/or report symptoms of dissociation during disclosure are especially likely to benefit from intervention. Effective interventions should involve (1) providing emotion regulation and coping skills; and (2) helping children to process traumatic aspects of the abuse to reduce the cyclic nature of traumatic reminders leading to unmanageable stress and dissociation. PMID:18308391
Summary of Optimization Techniques That Can Be Applied to Suspension System Design
DOT National Transportation Integrated Search
1973-03-01
Summaries are presented of the analytic techniques available for three levitated vehicle suspension optimization problems: optimization of passive elements for fixed configuration; optimization of a free passive configuration; optimization of a free ...
Working Safe and Feeling Fine.
ERIC Educational Resources Information Center
Milshtein, Amy
1999-01-01
Discusses the problem of repetitive stress disorders in the administrative workplace and shares some quick fixes to aid ergonomics. Some thoughts on the ergonomics of office chairs are provided as is the use of professional guidance in furniture purchasing. (GR)
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-08-30
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
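The abstract does not give implementation details, but a least-squares estimator that identifies unknown parameters in real time is typically realized as recursive least squares (RLS). The following minimal sketch (the class name and interface are our own) shows the standard update for a linear-in-parameters model:

```python
import numpy as np

class RecursiveLeastSquares:
    """Online estimator for a linear-in-parameters model y = phi . theta,
    the kind of update usable for identifying mass properties in real time."""

    def __init__(self, n_params, forgetting=1.0, p0=1e6):
        self.theta = np.zeros(n_params)      # current parameter estimate
        self.P = np.eye(n_params) * p0       # estimate covariance (large = uninformed prior)
        self.lam = forgetting                # forgetting factor; 1.0 keeps all history

    def update(self, phi, y):
        """Incorporate one measurement y with regressor vector phi."""
        phi = np.asarray(phi, dtype=float)
        P_phi = self.P @ phi
        gain = P_phi / (self.lam + phi @ P_phi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, P_phi)) / self.lam
        return self.theta
```

With noiseless data the estimate converges to the true parameters after a handful of informative measurements; a forgetting factor slightly below 1 lets the estimator track slowly varying properties.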
Of magic wands and kaleidoscopes: fixing problems in the individual market.
Hall, Mark A
2002-01-01
Policy analysts sometimes imagine that problems in the individual market can be fixed by waving a magic wand that makes the individual market function more like the group market. However, prior studies reveal that purchasing cooperatives fail to achieve substantial economies of scale; market reforms that reduce the impact of medical underwriting are difficult to implement in the individual market; and it may not be as easy as imagined to induce people to purchase over the Internet or from new or smaller companies that are at higher risk for exiting the market. The best solution is to limit the use of subsidies to certain purchasing options, such as with purchasing cooperatives that abide by rating, issuance, and renewability rules. What is not acceptable is to hand people subsidies and send them to the unstructured and relatively unregulated individual market, nor will it work to give people unhindered choice between two basically different market segments.
Submillimeter bolt location in car bodywork for production line quality inspection
NASA Astrophysics Data System (ADS)
Altamirano-Robles, Leopoldo; Arias-Estrada, Miguel; Alviso-Quibrera, Samuel; Lopez-Lopez, Aurelio
2000-03-01
In the automotive industry, a vehicle begins with the construction of the vehicle floor. Later on, several robots weld a series of bolts to this floor, which are used to fix other parts. Due to several problems, such as welding-tool wear, robot miscalibration, or a momentary low power supply, among others, some bolts are incorrectly positioned or are not present at all, causing problems and delays in the next work cells. Therefore, it is important to verify the quality of welded parts before the following assembly steps. A computer vision system is proposed in order to verify autonomously the presence and quality of the bolts. The system should carry out the inspection in real time at the car assembly line under the following conditions: without touching the bodywork, with a precision in the submillimeter range, and within a few seconds. In this paper we present a basic computer vision system for bolt location in the submillimeter range. We analyze three arrangements of the system components (camera and illumination sources) that produce different results in the localization. Results are presented and compared for the three approaches obtained under laboratory conditions. The algorithms were also tested on the assembly line. Variations of up to one millimeter in the welded position of the bolts were observed.
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories.
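The pragmatic Signal Processing approach mentioned above (fix the encoder to a linear function, then derive the optimal decoder) can be illustrated with the simplest possible instance. The function below is our own toy example for a scalar Gaussian source over an AWGN channel, not drawn from the surveyed papers:

```python
import numpy as np

def linear_coding_distortion(source_var, power, noise_var):
    """Distortion of amplify-and-forward coding of a zero-mean Gaussian
    source X over an AWGN channel Y = a*X + W, decoded with the optimal
    linear (MMSE) rule x_hat = c*Y.  Closed form:
    D = source_var * noise_var / (power + noise_var)."""
    a = np.sqrt(power / source_var)                        # meets E[(a*X)^2] = power
    c = a * source_var / (a**2 * source_var + noise_var)   # MMSE decoder gain
    # E[(X - c*Y)^2] expanded with Y = a*X + W, W independent of X:
    return (1 - c * a) ** 2 * source_var + c**2 * noise_var
```

For unit source and noise variance, unit transmit power already halves the distortion, and increasing the power budget drives it toward zero, matching the closed form in the docstring.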
NASA Astrophysics Data System (ADS)
Romero, A.; Sol, D.
2017-09-01
Collecting data by crowdsourcing is an established trend to support database population and updating. This kind of data is unstructured and comes from text, in particular text in social networks. A geographic database is a particular case of a database that can be populated by crowdsourcing, which happens when people report an urban event in a social network by writing a short message. An event can describe an accident or a non-functioning device in the urban area. The authorities then need to read and interpret the message to provide help for injured people or to fix a problem with a device installed in the urban area, such as a light, or a problem on the road. Our main interest lies in working with short messages organized in a collection. Most of the messages do not have geographical coordinates. The messages can, however, be classified by text patterns describing a location; in fact, people use text patterns to describe an urban location. Our work tries to identify patterns inside a short text and to indicate when it describes a location. When a pattern is identified, our approach looks to describe the place where the event is located. The source messages used are tweets reporting events from several Mexican cities.
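The pattern-matching idea can be illustrated with a few hand-written regular expressions. The patterns below are hypothetical examples in the spirit of the approach (Spanish street-description phrases, as in tweets from Mexican cities), not the authors' actual pattern set:

```python
import re

# Hypothetical street-description patterns; illustrative only.
LOCATION_PATTERNS = [
    re.compile(r"\ben\s+(?:la\s+|el\s+)?(?:calle|avenida|av\.?|colonia|esquina)\s+([\w\s]+)",
               re.IGNORECASE),
    re.compile(r"\bsobre\s+(?:la\s+|el\s+)?([\w\s]+)", re.IGNORECASE),
]

def extract_location(message):
    """Return the first location-like phrase in a short message, or None."""
    for pattern in LOCATION_PATTERNS:
        match = pattern.search(message)
        if match:
            return match.group(1).strip()
    return None
```

A matched group gives a candidate place description that can then be geocoded; messages with no match stay unclassified.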
2009-11-30
generate exposure-rate contours at the fixed time is not an additional source of uncertainty when relative activities of radionuclides on the ground are...deposition or transit and other target organs or tissues, and calculations of radiation transport between a source and target. These uncertainties are...Beck, H., and de Planque, G., 1968. The Radiation Field in Air Due to Distributed Gamma-Ray Sources in the Ground, HASL-195, Health and Safety
[Implant with a mobile or a fixed bearing in unicompartmental knee joint replacement].
Matziolis, G; Tohtz, S; Gengenbach, B; Perka, C
2007-12-01
Although the goal of anatomical and functional joint reconstruction in unicompartmental knee replacement is well defined, no uniform implant design has become established. In particular, the differential indications for implantation of an implant with a mobile or a fixed bearing are still not clear. The long-term results of mobile and fixed bearings are comparable, but there are significant differences in the resulting knee joint kinematics, tribological properties and implant-associated complications. In unicompartmental knee replacement mobile bearings restore the physiological joint kinematics better than fixed implants, although the differences to total knee arthroplasty seem minor. The decoupling of mobile bearings from the tibia implant allows a high level of congruence with the femoral implant, resulting in larger contact areas than with fixed bearings. This fact in combination with the more physiological joint kinematics leads to less wear and a lower incidence of osteolyses with mobile bearings. Disadvantages of mobile bearings are the higher complication and early revision rates resulting from bearing dislocation and impingement syndromes caused by suboptimal implantation technique or instability. Especially in cases with ligamentous pathology, fixed bearings involve a lower complication rate. It seems their use can also be beneficial in patients with a low level of activity, as problems related to wear are of minor importance for this subgroup. The data currently available allow differentiation between the various indications for implants with mobile or fixed bearings, so that the implants can be matched to the patient and the joint pathology in unicompartmental knee joint replacement.
Vinnakota, Narayana R; Krishna, V; Viswanath, V; Ahmed, Zaheer; Shaik, Kamal S; Boppana, Naveen K
2016-12-01
To assess the knowledge, attitude, and practices of fixed dose combination drugs among postgraduate dental students. A cross-sectional study was carried out among postgraduate dental students of dental colleges in coastal Andhra Pradesh. Three colleges were randomly selected and students of all three years were included. Data was collected from the specialities of oral medicine and radiology, oral surgery, endodontics, pedodontics, periodontics, and public health dentistry. The total sample was 90 postgraduate students; informed consent was obtained from the participants, and a pretested questionnaire was distributed to them. Data was analyzed using the Statistical Package for the Social Sciences version 20 software. Out of 90 postgraduates, 33 were males and 57 were females. Thirty-five percent were aware of the essential medicines list (EML); among them, 11% were from oral medicine and radiology and 6.7% were from pedodontics. However, most of them were unaware of the number of fixed dose combination drugs present in the World Health Organization EML. None of them were able to name even a single banned fixed dose combination drug. Most of them were unaware of the advantages and disadvantages of using fixed dose combination drugs. Amoxicillin with clavulanic acid was the most common drug prescribed by students (73.3%), followed by ofloxacin with ornidazole (54.4%), ibuprofen with paracetamol (53.3%), and sulfamethoxazole with trimethoprim (6%). Most of them were unaware of the rationality of using fixed dose combination drugs. Common sources of information were medical representatives (43; 47.8%) and the internet (39; 43.3%), while 12 (13.3%) reported using the WHO EML. There is an urgent need to improve knowledge of the rational use of fixed dose combinations, the EML, and banned fixed dose combinations in India to promote the rational use of fixed dose combinations.
Besemer, Sytske; Loeber, Rolf; Hinshaw, Stephen P.; Pardini, Dustin A.
2018-01-01
Coercive parent–child interaction models posit that an escalating cycle of negative, bidirectional interchanges influences the development of boys’ externalizing problems and caregivers’ maladaptive parenting over time. However, longitudinal studies examining this hypothesis have been unable to rule out the possibility that between-individual factors account for bidirectional associations between child externalizing problems and maladaptive parenting. Using a longitudinal sample of boys (N = 503) repeatedly assessed eight times across 6-month intervals in childhood (in a range between 6 and 13 years), the current study is the first to use novel within-individual change (fixed effects) models to examine whether parents tend to increase their use of maladaptive parenting strategies following an increase in their son’s externalizing problems, or vice versa. These bidirectional associations were examined using multiple facets of externalizing problems (i.e., interpersonal callousness, conduct and oppositional defiant problems, hyperactivity/impulsivity) and parenting behaviors (i.e., physical punishment, involvement, parent–child communication). Analyses failed to support the notion that when boys increase their typical level of problem behaviors, their parents show an increase in their typical level of maladaptive parenting across the subsequent 6 month period, and vice versa. Instead, across 6-month intervals, within parent-son dyads, changes in maladaptive parenting and child externalizing problems waxed and waned in concert. Fixed effects models to address the topic of bidirectional relations between parent and child behavior are severely underrepresented. We recommend that other researchers who have found significant bidirectional parent–child associations using rank-order change models reexamine their data to determine whether these findings hold when examining changes within parent–child dyads. PMID:26780209
Xie, Jian-Bo; Du, Zhenglin; Bai, Lanqing; Tian, Changfu; Zhang, Yunzhi; Xie, Jiu-Yan; Wang, Tianshu; Liu, Xiaomeng; Chen, Xi; Cheng, Qi; Chen, Sanfeng; Li, Jilun
2014-01-01
We provide here a comparative genome analysis of 31 strains within the genus Paenibacillus including 11 new genomic sequences of N2-fixing strains. The heterogeneity of the 31 genomes (15 N2-fixing and 16 non-N2-fixing Paenibacillus strains) was reflected in the large size of the shell genome, which makes up approximately 65.2% of the genes in the pan genome. Large numbers of transposable elements might be related to the heterogeneity. We discovered that a minimal and compact nif cluster comprising nine genes nifB, nifH, nifD, nifK, nifE, nifN, nifX, hesA and nifV encoding Mo-nitrogenase is conserved in the 15 N2-fixing strains. The nif cluster is under control of a σ70-dependent promoter and possesses a GlnR/TnrA-binding site in the promoter. The suf system encoding the [Fe–S] cluster is highly conserved in N2-fixing and non-N2-fixing strains. Furthermore, we demonstrate that the nif cluster enabled Escherichia coli JM109 to fix nitrogen. Phylogeny of the concatenated NifHDK sequences indicates that Paenibacillus and Frankia are sister groups. Phylogeny of the concatenated 275 single-copy core genes suggests that the ancestral Paenibacillus did not fix nitrogen. The N2-fixing Paenibacillus strains were generated by acquiring the nif cluster via horizontal gene transfer (HGT) from a source related to Frankia. During the history of evolution, the nif cluster was lost, producing some non-N2-fixing strains, and vnf encoding V-nitrogenase or anf encoding Fe-nitrogenase was acquired, causing further diversification of some strains. In addition, some N2-fixing strains have additional nif and nif-like genes which may result from gene duplications. The evolution of nitrogen fixation in Paenibacillus involves a mix of gain, loss, HGT and duplication of nif/anf/vnf genes. This study not only reveals the organization and distribution of nitrogen fixation genes in Paenibacillus, but also provides insight into the complex evolutionary history of nitrogen fixation. PMID:24651173
Discrete square root smoothing.
NASA Technical Reports Server (NTRS)
Kaminski, P. G.; Bryson, A. E., Jr.
1972-01-01
The basic techniques of the square root least squares and square root filtering solutions are applied to the smoothing problem. Both conventional and square root solutions are obtained by computing the filtered solutions and then modifying the results to include the effect of all measurements. A comparison of computation requirements indicates that the square root information smoother (SRIS) is more efficient than conventional solutions in a large class of fixed interval smoothing problems.
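The numerical principle behind "square root" methods, working with a triangular factor R of the information matrix AᵀA = RᵀR rather than with the matrix itself, can be illustrated in miniature by solving a least-squares problem through QR factorization. This is a sketch of the principle only, not of the SRIS algorithm:

```python
import numpy as np

def qr_least_squares(A, b):
    """Solve min ||A x - b||_2 via QR: solve the triangular system R x = Q^T b.
    Working with R (a 'square root' of A^T A, since A^T A = R^T R) avoids
    forming the normal equations, whose condition number is the square of A's."""
    Q, R = np.linalg.qr(A)            # reduced QR factorization, A = Q R
    return np.linalg.solve(R, Q.T @ b)
```

The same idea carries over to filtering and smoothing: propagating a triangular factor of the information (or covariance) matrix keeps the computation better conditioned than propagating the matrix directly.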
Dynamic contact problem with adhesion and damage between thermo-electro-elasto-viscoplastic bodies
NASA Astrophysics Data System (ADS)
Hadj ammar, Tedjani; Saïdi, Abdelkader; Azeb Ahmed, Abdelaziz
2017-05-01
We study a dynamic contact problem between two thermo-electro-elasto-viscoplastic bodies with damage and adhesion. The contact is frictionless and is modeled with a normal compliance condition. We derive a variational formulation for the model and prove an existence and uniqueness result for the weak solution. The proof is based on arguments of evolutionary variational inequalities, parabolic inequalities, differential equations, and the fixed point theorem.
Engaging Deweyan Ethics in Health Care: Leonard Fleck's Rational Democratic Deliberation
ERIC Educational Resources Information Center
Lake, Danielle L.
2013-01-01
While the U.S. health care system is failing to serve many of its citizens, agreeing on what is wrong as well as on how to fix the system seems impossibly optimistic. Leonard Fleck attempts to do just this--to diagnose the problems and to address these problems through dialogue. Dewey's philosophy supports the direction of Fleck's work,…
Optimal trajectories for the aeroassisted flight experiment, 1988-89
NASA Technical Reports Server (NTRS)
Miele, A.
1989-01-01
Research is summarized on optimal trajectories for the aeroassisted flight experiment, performed by the Aero-Astronautics Group of Rice University during the period 1988 through 1989. This research includes the following topics: (1) equations of motion in an Earth-fixed system; (2) equations of motion in an inertial system; (3) formulation of the optimal trajectory problem; (4) results on the optimal trajectory problem; and (5) guidance implications.
NASA Technical Reports Server (NTRS)
Allman, Mark
1997-01-01
This note outlines two bugs found in the BSD 4.4 Lite TCP implementation, as well as the implications of these bugs and possible ways to correct them. The first problem encountered in this particular TCP implementation is the use of a 2-segment initial congestion window, rather than the standard 1-segment initial window. The second problem is that the receiver delays ACKs in violation of the delayed ACK rules.
Physics-Aware Informative Coverage Planning for Autonomous Vehicles
2014-06-01
environment and find the optimal path connecting fixed nodes, which is equivalent to solving the Traveling Salesman Problem (TSP). While TSP is an NP…intended for application to USV harbor patrolling, it is applicable to many different domains. The problem of traveling over an area and gathering…environment. There are many applications that need persistent monitoring of a given area, requiring repeated travel over the area to
NASA Astrophysics Data System (ADS)
Ruskey, Frank; Williams, Aaron
In the classic Josephus problem, elements 1, 2,...,n are placed in order around a circle and a skip value k is chosen. The problem proceeds in n rounds, where each round consists of traveling around the circle from the current position, and selecting the kth remaining element to be eliminated from the circle. After n rounds, every element is eliminated. Special attention is given to the last surviving element, denote it by j. We generalize this popular problem by introducing a uniform number of lives ℓ, so that elements are not eliminated until they have been selected for the ℓth time. We prove two main results: 1) When n and k are fixed, then j is constant for all values of ℓ larger than the nth Fibonacci number. In other words, the last surviving element stabilizes with respect to increasing the number of lives. 2) When n and j are fixed, then there exists a value of k that allows j to be the last survivor simultaneously for all values of ℓ. In other words, certain skip values ensure that a given position is the last survivor, regardless of the number of lives. For the first result we give an algorithm for determining j (and the entire sequence of selections) that uses O(n²) arithmetic operations.
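The generalized process is straightforward to state as a direct simulation. The sketch below is our own illustration of the rules (an element is eliminated only on its ℓ-th selection), not the paper's O(n²) algorithm:

```python
def josephus_with_lives(n, k, lives=1):
    """Last survivor of the Josephus process with n elements, skip value k,
    and a uniform number of lives: an element leaves the circle only after
    it has been selected `lives` times."""
    circle = list(range(1, n + 1))
    hits = [0] * (n + 1)                     # hits[e] = times element e was selected
    pos = 0                                  # counting starts at element 1
    while len(circle) > 1:
        pos = (pos + k - 1) % len(circle)    # advance to the k-th remaining element
        e = circle[pos]
        hits[e] += 1
        if hits[e] == lives:
            circle.pop(pos)                  # eliminated; pos now indexes the next element
            pos %= len(circle)
        else:
            pos = (pos + 1) % len(circle)    # element survives; count on from its neighbor
    return circle[0]
```

For lives = 1 this reproduces the classic problem, e.g. n = 41, k = 3 leaves element 31.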
Video change detection for fixed wing UAVs
NASA Astrophysics Data System (ADS)
Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa
2017-10-01
In this paper we continue the work of Bartelsen et al. [1]. We present the draft of a process chain for image-based change detection designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be enabled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change between "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off the shelf (COTS) system which comprises a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented [2, 3], as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed wing UAV and to synthetic data.
For the image processing and change detection, we use the approach of Muller [4]. Although it was developed for unmanned ground vehicles (UGVs), it enables near real time video change detection for aerial videos. In conclusion, we discuss the demands on sensor systems in the matter of change detection.
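The core differencing step behind such pipelines can be illustrated minimally (my toy sketch; the approach of Muller [4] is far more elaborate, and real aerial pipelines add registration and illumination normalisation first):

```python
import numpy as np

def change_mask(before, after, threshold=30):
    """Toy change detection between two co-registered grayscale frames:
    flag pixels whose absolute intensity difference exceeds `threshold`.
    Assumes the perspective change between frames is already small."""
    diff = np.abs(before.astype(np.int16) - after.astype(np.int16))
    return diff > threshold

before = np.zeros((4, 4), dtype=np.uint8)
after = before.copy()
after[1:3, 1:3] = 200          # simulated new object (e.g. an excavation)
mask = change_mask(before, after)
```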
Acoustic field in unsteady moving media
NASA Technical Reports Server (NTRS)
Bauer, F.; Maestrello, L.; Ting, L.
1995-01-01
In the interaction of an acoustic field with a moving airframe the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution, q(t,x) with q equivalent to 0 for t less than or equal to 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at point P more than once, corresponding to different retarded times tau in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains V_i of D whose union equals D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection. The multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.
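The retarded-time condition can be probed numerically. This is a hypothetical sketch with assumed names, counting roots by sign changes on a grid; it is not the paper's analytic construction:

```python
import numpy as np

def retarded_times(P, S, t, U, c=1.0, n=20000):
    """Approximate retarded times tau in (0, t) at which a signal
    emitted from source point S reaches observer P at time t, in a
    medium drifting with uniform unsteady velocity U(tau) along x
    (coordinates fixed on the airframe). The number of roots is the
    multiplicity of S. Roots of
        f(tau) = c*(t - tau) - |P - S - e_x * integral_tau^t U dt'|
    are located by sign changes of f on a uniform grid."""
    taus = np.linspace(0.0, t, n)
    u = U(taus)
    # cumulative integral of U from 0 to tau (trapezoid rule)
    cum = np.concatenate(([0.0],
                          np.cumsum((u[1:] + u[:-1]) / 2 * np.diff(taus))))
    drift = cum[-1] - cum          # integral of U from tau to t
    dx = P[0] - S[0] - drift
    dy = P[1] - S[1]
    f = c * (t - taus) - np.hypot(dx, dy)
    sign_changes = np.nonzero(np.diff(np.sign(f)) != 0)[0]
    return taus[sign_changes]

# Constant subsonic drift: exactly one arrival, as the abstract states.
P, S = np.array([2.0, 0.0]), np.array([0.0, 0.0])
roots = retarded_times(P, S, 10.0, lambda tau: 0.5 * np.ones_like(tau))
```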
The Mean Curvature of the Influence Surface of Wave Equation With Sources on a Moving Surface
NASA Technical Reports Server (NTRS)
Farassat, F.; Farris, Mark
1999-01-01
The mean curvature of the influence surface of the space-time point (x, t) appears in linear supersonic propeller noise theory and in the Kirchhoff formula for a supersonic surface. Both these problems are governed by the linear wave equation with sources on a moving surface. The influence surface is also called the Sigma-surface in the aeroacoustic literature. This surface is the locus, in a frame fixed to the quiescent medium, of all the points of a radiating surface f(x, t) = 0 whose acoustic signals arrive simultaneously at an observer at position x and at the time t. Mathematically, the Sigma-surface is produced by the intersection of the characteristic conoid of the space-time point (x, t) and the moving surface. In this paper, we derive the expression for the local mean curvature of the Sigma-surface of the space-time point for a moving rigid or deformable surface f(x, t) = 0. This expression is a complicated function of the geometric and kinematic parameters of the surface f(x, t) = 0. Using the results of this paper, the solution of the governing wave equation of high speed propeller noise radiation, as well as the Kirchhoff formula for a supersonic surface, can be written as a very compact analytic expression.
Blanco-Claraco, José Luis; López-Martínez, Javier; Torres-Moreno, José Luis; Giménez-Fernández, Antonio
2015-01-01
Most experimental fields of science and engineering require the use of data acquisition systems (DAQ), devices in charge of sampling and converting electrical signals into digital data and, typically, performing all of the required signal preconditioning. Since commercial DAQ systems are normally focused on specific types of sensors and actuators, systems engineers may need to employ mutually-incompatible hardware from different manufacturers in applications demanding heterogeneous inputs and outputs, such as small-signal analog inputs, differential quadrature rotatory encoders or variable current outputs. A common undesirable side effect of heterogeneous DAQ hardware is the lack of an accurate synchronization between samples captured by each device. To solve such a problem with low-cost hardware, we present a novel modular DAQ architecture comprising a base board and a set of interchangeable modules. Our main design goal is the ability to sample all sources at predictable, fixed sampling frequencies, with a reduced synchronization mismatch (<1 μs) between heterogeneous signal sources. We present experiments in the field of mechanical engineering, illustrating vibration spectrum analyses from piezoelectric accelerometers and, as a novelty in these kinds of experiments, the spectrum of quadrature encoder signals. Part of the design and software will be publicly released online. PMID:26516865
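The synchronization problem the authors address in hardware can be illustrated with a software-side fallback (a toy sketch, not their mechanism): resampling one stream onto the reference clock of another by interpolation.

```python
import numpy as np

def align_streams(t_ref, t_other, x_other):
    """Resample a signal sampled at times t_other onto the reference
    timestamps t_ref by linear interpolation, so that heterogeneous
    DAQ sources can be compared sample-by-sample. Toy illustration;
    it cannot recover sub-sample jitter the way synchronized
    hardware sampling does."""
    return np.interp(t_ref, t_other, x_other)

t_ref = np.arange(0.0, 1.0, 0.1)      # 10 Hz reference clock
t_other = np.arange(0.05, 1.0, 0.1)   # same rate, offset by 50 ms
x_other = 2.0 * t_other               # ramp signal on the second device
aligned = align_streams(t_ref, t_other, x_other)
```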
Pen-chant: Acoustic emissions of handwriting and drawing
NASA Astrophysics Data System (ADS)
Seniuk, Andrew G.
The sounds generated by a writing instrument ('pen-chant') provide a rich and underutilized source of information for pattern recognition. We examine the feasibility of recognition of handwritten cursive text, exclusively through an analysis of acoustic emissions. We design and implement a family of recognizers using a template matching approach, with templates and similarity measures derived variously from: smoothed amplitude signal with fixed resolution, discrete sequence of magnitudes obtained from peaks in the smoothed amplitude signal, and ordered tree obtained from a scale space signal representation. Test results are presented for recognition of isolated lowercase cursive characters and for whole words. We also present qualitative results for recognizing gestures such as circling, scratch-out, check-marks, and hatching. Our first set of results, using samples provided by the author, yield recognition rates of over 70% (alphabet) and 90% (26 words), with a confidence of +/-8%, based solely on acoustic emissions. Our second set of results uses data gathered from nine writers. These results demonstrate that acoustic emissions are a rich source of information, usable---on their own or in conjunction with image-based features---to solve pattern recognition problems. In future work, this approach can be applied to writer identification, handwriting and gesture-based computer input technology, emotion recognition, and temporal analysis of sketches.
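The first of the template families (smoothed amplitude with a similarity measure) can be sketched as follows. The names and the normalized-correlation score are my assumptions; the paper's recognizers also use peak sequences and scale-space trees:

```python
import numpy as np

def smooth(x, w=5):
    """Moving-average smoothing of the rectified amplitude signal."""
    kernel = np.ones(w) / w
    return np.convolve(np.abs(x), kernel, mode="same")

def best_template(signal, templates):
    """Classify a signal by the template whose smoothed amplitude
    envelope has the highest normalized correlation with the
    signal's envelope. Toy sketch of template matching."""
    env = smooth(signal)
    env = (env - env.mean()) / (env.std() + 1e-12)
    best, best_score = None, -np.inf
    for label, tpl in templates.items():
        t = smooth(tpl)
        t = (t - t.mean()) / (t.std() + 1e-12)
        n = min(len(env), len(t))
        score = float(np.dot(env[:n], t[:n]) / n)
        if score > best_score:
            best, best_score = label, score
    return best
```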
The evolving Planck mass in classically scale-invariant theories
NASA Astrophysics Data System (ADS)
Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.
2017-04-01
We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.
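Schematically (my illustration, not an equation from the paper, and with sign conventions that vary in the literature), such theories generate the Planck mass from a non-minimally coupled scalar:

```latex
S=\int d^{4}x\,\sqrt{-g}\left[\frac{\xi}{2}\,\phi^{2}R
-\frac{1}{2}(\partial\phi)^{2}-\frac{\lambda}{4}\,\phi^{4}\right],
\qquad M_{\mathrm{P}}^{2}=\xi\,\langle\phi\rangle^{2}.
```

At the fixed point where scale invariance is spontaneously broken, the scalar expectation value sets the Planck mass; explicit breaking terms such as non-relativistic matter or a cosmological constant then drive this fixed point to evolve, which is the instability the abstract quantifies.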
NASA Technical Reports Server (NTRS)
Stokes, B. O.; Wallace, C. J.
1978-01-01
Ammonia production by Klebsiella pneumoniae is not economical with present strains, and improving nitrogen fixation to its theoretical limits in this organism is not sufficient to achieve economic viability. Because the combined value of the hydrogen produced by this organism and the methane value of the carbon source required greatly exceeds the value of the ammonia formed, ammonia (fixed nitrogen) should be considered the by-product. The production of hydrogen by Klebsiella or other anaerobic nitrogen fixers should receive additional study, because the activity of nitrogenase offers a significant improvement in hydrogen production. The production of fixed nitrogen in the form of cell mass by Azotobacter is also uneconomical, and the methane value of the carbon substrate exceeds the value of the nitrogen fixed. Parametric studies indicate that as efficiencies approach the theoretical limits, the economics may become competitive. The use of nif-derepressed microorganisms, particularly blue-green algae, may have significant potential for in situ fertilization in the environment.