Sample records for time points approximately

  1. Approximation methods for combined thermal/structural design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Shore, C. P.

    1979-01-01

    Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
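
    As a minimal numerical sketch of the expansion family described above (the function family_approx, the 1/x test function, and all values are illustrative assumptions, not taken from the report), the direct and reciprocal expansions correspond to p = 1 and p = -1 in a first-order Taylor expansion with respect to the intermediate variable z = x^p:

    ```python
    import numpy as np

    def family_approx(f, fprime, x0, x, p):
        """First-order Taylor expansion in the intermediate variable z = x**p.
        p = 1 gives the direct expansion, p = -1 the reciprocal expansion;
        other values of p give other members of the family."""
        return f(x0) + fprime(x0) / (p * x0 ** (p - 1)) * (x ** p - x0 ** p)

    # Illustrative stress-like quantity that varies as 1/x
    f = lambda x: 100.0 / x
    fp = lambda x: -100.0 / x ** 2

    x0, x = 2.0, 3.0
    for p in (1.0, -1.0, -0.5):
        print(f"p={p:+.1f}: approx={family_approx(f, fp, x0, x, p):8.3f}, exact={f(x):8.3f}")
    ```

    For this 1/x-type response the reciprocal member (p = -1) is exact, which illustrates why, for some conditions, members other than the direct expansion are more accurate.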

  2. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas-phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
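
    The paper's closed-form PPCA expressions are not reproduced here; purely as an illustration of the underlying idea, the brute-force sketch below places two charges symmetrically on the dipole axis of a small reference distribution and fits their magnitudes by least squares so that the mid-field potential is matched. Every name and parameter (potential, the shell radius, the scan grid) is an assumption of the sketch, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Reference distribution: a few point charges near the origin (illustrative)
    src_pos = rng.normal(scale=0.5, size=(5, 3))
    src_q = rng.normal(size=5)
    src_q -= src_q.mean()            # net-neutral, so the dipole term dominates

    def potential(points, pos, q):
        """Coulomb potential (Gaussian units) of charges q at pos, at points."""
        r = np.linalg.norm(points[:, None, :] - pos[None, :, :], axis=-1)
        return (q / r).sum(axis=1)

    # Mid-field test shell at ~2x the extent of the distribution
    extent = np.linalg.norm(src_pos, axis=1).max()
    th, ph = rng.uniform(0, np.pi, 200), rng.uniform(0, 2 * np.pi, 200)
    shell = 2 * extent * np.c_[np.sin(th) * np.cos(ph),
                               np.sin(th) * np.sin(ph), np.cos(th)]
    v_ref = potential(shell, src_pos, src_q)

    # Dipole axis of the reference distribution
    dip = (src_q[:, None] * src_pos).sum(axis=0)
    axis = dip / np.linalg.norm(dip)

    # Scan symmetric 2-charge placements on the axis; solve for magnitudes
    best = None
    for d in np.linspace(0.05, extent, 60):
        pos2 = np.array([axis * d, -axis * d])
        A = 1.0 / np.linalg.norm(shell[:, None, :] - pos2[None, :, :], axis=-1)
        q2, *_ = np.linalg.lstsq(A, v_ref, rcond=None)
        err = np.sqrt(np.mean((A @ q2 - v_ref) ** 2))
        if best is None or err < best[0]:
            best = (err, d, q2)

    print(f"best 2-charge fit: separation +/-{best[1]:.3f}, "
          f"charges {best[2]}, RMS error {best[0]:.2e}")
    ```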

  3. Time delay of critical images in the vicinity of cusp point of gravitational-lens systems

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Zhdanov, V.

    2016-12-01

    We consider approximate analytical formulas for the time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zero-, first-, and second-order approximations in powers of a parameter that defines the proximity of the source to the cusp. These formulas link the time delay with characteristics of the lens potential. The zero-order formula was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). In the case of a general lens potential, we derive the first-order correction to it. If the potential is symmetric with respect to the cusp axis, this correction is identically zero, and for this case we obtain the second-order correction. The relations found are illustrated with a simple model example.

  4. Many-body perturbation theory using the density-functional concept: beyond the GW approximation.

    PubMed

    Bruneval, Fabien; Sottile, Francesco; Olevano, Valerio; Del Sole, Rodolfo; Reining, Lucia

    2005-05-13

    We propose an alternative formulation of many-body perturbation theory that uses the density-functional concept. Instead of the usual four-point integral equation for the polarizability, we obtain a two-point one, which leads to excellent optical absorption and energy-loss spectra. The corresponding three-point vertex function and self-energy are then simply calculated via an integration, for any level of approximation. Moreover, we show the direct impact of this formulation on the time-dependent density-functional theory. Numerical results for the band gap of bulk silicon and solid argon illustrate corrections beyond the GW approximation for the self-energy.

  5. Queueing analysis of a canonical model of real-time multiprocessors

    NASA Technical Reports Server (NTRS)

    Krishna, C. M.; Shin, K. G.

    1983-01-01

    A logical classification of multiprocessor structures from the point of view of control applications is presented, together with a computation of the response-time distribution for a canonical model of a real-time multiprocessor. The multiprocessor is approximated by a blocking model. Two separate models are derived: one from the system's point of view, and the other from the point of view of an incoming task.

  6. On the Motion of Agents across Terrain with Obstacles

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.

    2018-01-01

    The paper is devoted to finding the time-optimal route of an agent travelling across a region from a given source point to a given target point. At each point of this region, a maximum allowed speed is specified; this speed limit may vary in time. The continuous statement of this problem and the case when the agent travels on a grid with square cells are considered. In the latter case, time is also discrete, and the number of admissible directions of motion at each point in time is eight. The existence of an optimal solution of this problem is proved, and error estimates for the approximate solution obtained on the grid are derived. It is found that decreasing the size of cells below a certain limit does not further improve the approximation. These results can be used to estimate the quasi-optimal trajectory of the agent motion across rugged terrain produced by an algorithm based on a cellular automaton that was earlier developed by the author.

  7. Neutral points of skylight polarization observed during the total eclipse on 11 August 1999.

    PubMed

    Horváth, Gábor; Pomozi, István; Gál, József

    2003-01-20

    We report here on the observation of unpolarized (neutral) points in the sky during the total solar eclipse on 11 August 1999. Near the zenith a neutral point was observed at 450 nm at two different points of time during totality. Around this celestial point the distribution of the angle of polarization was heterogeneous: The electric field vectors on the one side were approximately perpendicular to those on the other side. At another moment of totality, near the zenith a local minimum of the degree of linear polarization occurred at 550 nm. Near the antisolar meridian, at a low elevation another two neutral points occurred at 450 nm at a certain moment during totality. Approximately at the position of these neutral points, at another moment of totality a local minimum of the degree of polarization occurred at 550 nm, whereas at 450 nm a neutral point was observed, around which the angle-of-polarization pattern was homogeneous: The electric field vectors were approximately horizontal on both sides of the neutral point.

  8. Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind.

    PubMed

    Matthaeus, W H; Weygand, J M; Dasso, S

    2016-06-17

    Single-point measurements of turbulence cannot distinguish variations in space and time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both correlation time and correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter, for which the frozen-in flow hypothesis may not be a useful approximation.

  9. Pulmonary and pleural responses in Fischer 344 rats following short-term inhalation of a synthetic vitreous fiber. I. Quantitation of lung and pleural fiber burdens.

    PubMed

    Gelzleichter, T R; Bermudez, E; Mangum, J B; Wong, B A; Everitt, J I; Moss, O R

    1996-03-01

    The pleura is an important target tissue of fiber-induced disease, although it is not known whether fibers must be in direct contact with pleural cells to exert pathologic effects. In the present study, we determined the kinetics of fiber movement into pleural tissues of rats following inhalation of RCF-1, a ceramic fiber previously shown to induce neoplasms in the lung and pleura of rats. Male Fischer 344 rats were exposed by nose-only inhalation to RCF-1 at 89 mg/m^3 (2645 WHO fibers/cc), 6 hr/day for 5 consecutive days. On Days 5 and 32, thoracic tissues were analyzed to determine pulmonary and pleural fiber burdens. Mean fiber counts were 22 × 10^6/lung (25 × 10^3/pleura) at Day 5 and 18 × 10^6/lung (16 × 10^3/pleura) at Day 32. Similar geometric mean lengths (GML) and diameters (GMD) of pulmonary fiber burdens were observed at both time points. Values were 5 microns for GML (geometric standard deviation, GSD, approximately 2.3) and 0.3 micron for GMD (GSD approximately 1.9), with correlations between length and diameter (tau) of 0.2-0.3. Size distributions of pleural fiber burdens at both time points were approximately 1.5 microns GML (GSD approximately 2.0) and 0.09 micron GMD (GSD approximately 1.5; tau approximately 0.2-0.5). Few fibers longer than 5 microns were observed at either time point. These findings demonstrate that fibers can rapidly translocate to pleural tissues. However, only short, thin (< 5 microns in length) fibers could be detected over the 32-day time course of the experiment.

  10. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    NASA Astrophysics Data System (ADS)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
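
    A minimal sketch of the underlying smoothing step (ordinary least-squares cubic splines with equidistant sampling points, not the authors' repeating-spline refinement) can be written with SciPy; the synthetic series and knot count below are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    # Synthetic "background + superimposed fluctuation" series (illustrative)
    t = np.linspace(0, 10, 500)
    background = 0.5 * t + np.sin(0.3 * t)
    series = background + 0.4 * np.sin(8 * t)            # small-scale fluctuation

    # Cubic spline with equidistant interior sampling (knot) points
    n_knots = 12
    knots = np.linspace(t[0], t[-1], n_knots + 2)[1:-1]  # interior knots only
    spline = LSQUnivariateSpline(t, series, knots, k=3)

    residuals = series - spline(t)                       # the fluctuation estimate
    print("RMS deviation of spline from true background:",
          np.sqrt(np.mean((spline(t) - background) ** 2)))
    ```

    Increasing n_knots in this sketch eventually lets the spline chase the superimposed oscillation instead of the background, mirroring the paper's observation that more sampling points do not continuously improve the approximation.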

  11. Editing wild points in isolation - Fast agreement for reliable systems (Preliminary version)

    NASA Technical Reports Server (NTRS)

    Kearns, Phil; Evans, Carol

    1989-01-01

    Consideration is given to the intuitively appealing notion of discarding sensor values which are strongly suspected of being erroneous in a modified approximate agreement protocol. Approximate agreement with editing imposes a time bound upon the convergence of the protocol - no such bound was possible for the original approximate agreement protocol. This new approach is potentially useful in the construction of asynchronous fault tolerant systems. The main result is that a wild-point replacement technique called t-worst editing can be shown to guarantee convergence of the approximate agreement protocol to a valid agreement value. Results are presented for a four-processor synchronous system in which a single processor may be faulty.

  12. A method to approximate a closest loadability limit using multiple load flow solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong

    A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle-node bifurcation point, using a pair of multiple load flow solutions. More strictly, the points obtainable by the method are the stationary points, including not only the CLL but also the farthest and saddle points. An operating solution and a low-voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. They can be used in monitoring the loadability margin, in identification of weak spots in a power system, and in the examination of an optimal control against voltage collapse. Most of the computation time of the proposed method is taken in calculating the load flow solution pair. The remaining computation time is less than that of an ordinary load flow.

  13. Method for discovering relationships in data by dynamic quantum clustering

    DOEpatents

    Weinstein, Marvin; Horn, David

    2017-05-09

    Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.

  14. Method for discovering relationships in data by dynamic quantum clustering

    DOEpatents

    Weinstein, Marvin; Horn, David

    2014-10-28

    Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.

  15. Fourth-order numerical solutions of diffusion equation by using SOR method with Crank-Nicolson approach

    NASA Astrophysics Data System (ADS)

    Muhiddin, F. A.; Sulaiman, J.

    2017-09-01

    The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, it can be shown that a corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. Finally, the numerical results obtained with the fourth-order CN discretization scheme show that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
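
    For readers unfamiliar with SOR, the sketch below shows the iteration on a toy diagonally dominant system standing in for one Crank-Nicolson time step; the dense-matrix form, the relaxation factor, and the tridiagonal test system are illustrative assumptions (a real five-point CN system would be kept in banded or sparse form):

    ```python
    import numpy as np

    def sor(A, b, omega=1.8, tol=1e-10, max_iter=10_000):
        """Successive Over-Relaxation for A x = b (dense teaching sketch)."""
        x = np.zeros_like(b)
        for it in range(max_iter):
            x_old = x.copy()
            for i in range(len(b)):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.max(np.abs(x - x_old)) < tol:
                return x, it + 1
        return x, max_iter

    # Toy tridiagonal, diagonally dominant system
    n = 50
    A = (np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))
    b = np.ones(n)
    x, iters = sor(A, b)
    print("converged in", iters, "iterations; residual", np.linalg.norm(A @ x - b))
    ```

    Setting omega = 1 recovers Gauss-Seidel, the reference method of the paper; a well-chosen omega > 1 is what gives SOR its advantage in iteration count.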

  16. UNAERO: A package of FORTRAN subroutines for approximating unsteady aerodynamics in the time domain

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1985-01-01

    This report serves as an instruction and maintenance manual for a collection of CDC CYBER FORTRAN IV subroutines for approximating the unsteady aerodynamic forces in the time domain. The result is a set of constant-coefficient first-order differential equations that approximate the dynamics of the vehicle. Provisions are included for adjusting the number of modes used for calculating the approximations so that an accurate approximation is generated. The number of data points at different values of reduced frequency can also be varied to adjust the accuracy of the approximation over the reduced-frequency range. The denominator coefficients of the approximation may be calculated by means of a gradient method or a least-squares approximation technique. Both the approximation methods use weights on the residual error. A new set of system equations, at a different dynamic pressure, can be generated without the approximations being recalculated.

  17. Reverse engineering gene regulatory networks from measurement with missing values.

    PubMed

    Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong

    2016-12-01

    Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: either the expression values of some genes at some time points, or the entire set of expression values at a single time point or at several consecutive time points. This significantly affects the performance of many gene expression analysis algorithms that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule, and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces better inference of the model parameters and hence more accurate prediction of the underlying GRN compared with conventional Gaussian approximation (GA) filters that ignore the missing data points.

  18. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor

    PubMed Central

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-01-01

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Features from Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor, supplied with floating-point hardware for accelerating floating-point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square-root computations. Real-time results have been achieved, and pixel streams have been processed on the fly without any need to buffer the input frame for further processing. PMID:27983714

  19. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.

    PubMed

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-12-15

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Features from Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor, supplied with floating-point hardware for accelerating floating-point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square-root computations. Real-time results have been achieved, and pixel streams have been processed on the fly without any need to buffer the input frame for further processing.
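
    The two numerical tricks named in these records are easy to show in miniature. The sketch below uses plain Python floats rather than the paper's FPGA fixed-point arithmetic, and the cubic Lagrange variant is omitted; the range-reduction choice and iteration counts are assumptions of the sketch:

    ```python
    import math

    def sin_taylor(x, terms=5):
        """Truncated Taylor series for sine about 0, with range reduction
        to [-pi, pi] so the truncated series stays accurate."""
        x = math.remainder(x, 2 * math.pi)
        s, term = 0.0, x
        for n in range(terms):
            s += term
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
        return s

    def inv_sqrt(y, iters=6):
        """Approximate 1/sqrt(y) by Newton-Raphson refinement of a crude seed
        (each step improves the root of f(x) = 1/x**2 - y)."""
        x = 1.0 / y if y > 1 else 1.0      # crude initial guess (illustrative)
        for _ in range(iters):
            x *= 1.5 - 0.5 * y * x * x
        return x

    print(sin_taylor(1.0), math.sin(1.0))
    print(inv_sqrt(2.0), 1 / math.sqrt(2.0))
    ```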

  20. Retrospective cost-effectiveness analyses for polio vaccination in the United States.

    PubMed

    Thompson, Kimberly M; Tebbens, Radboud J Duintjer

    2006-12-01

    The history of polio vaccination in the United States spans 50 years and includes different phases of the disease, multiple vaccines, and a sustained significant commitment of resources. We estimated cost-effectiveness ratios and assessed the net benefits of polio vaccination applicable at various points in time from the societal perspective, and we discounted these back to appropriate points in time. We reconstructed vaccine price data from available sources and used these to retrospectively estimate the total costs of the U.S. historical polio vaccination strategies (all costs reported in year 2002 dollars). We estimate that the United States invested approximately US $35 billion (1955 net present value, discount rate of 3%) in polio vaccines between 1955 and 2005 and will invest approximately US $1.4 billion (1955 net present value, or US $6.3 billion in 2006 net present value) between 2006 and 2015, assuming a policy of continued use of inactivated poliovirus vaccine (IPV) for routine vaccination. The historical and future investments translate into over 1.7 billion vaccinations that prevent approximately 1.1 million cases of paralytic polio and over 160,000 deaths (1955 net present values of approximately 480,000 cases and 73,000 deaths). Due to treatment cost savings, the investment implies net benefits of approximately US $180 billion (1955 net present value), even without incorporating the intangible costs of suffering and death and of averted fear. Retrospectively, the U.S. investment in polio vaccination represents a highly valuable, cost-saving public health program. Observed changes in the cost-effectiveness ratio estimates over time suggest the need for living economic models for interventions that appropriately change with time. This article also demonstrates that estimates of cost-effectiveness ratios at any single time point may fail to adequately consider the context of the investment made to date and the importance of population and other dynamics, and shows the importance of dynamic modeling.
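
    The discounting arithmetic in the abstract can be checked directly: a 2006 net present value of $6.3 billion, discounted back the 51 years from 2006 to 1955 at the stated 3% rate, reproduces the quoted 1955 net present value of roughly $1.4 billion.

    ```python
    # Consistency check of the abstract's figures (3% discount rate)
    npv_2006 = 6.3e9
    years = 2006 - 1955
    npv_1955 = npv_2006 / 1.03 ** years
    print(f"1955 NPV: ${npv_1955 / 1e9:.2f} billion")   # ~ $1.40 billion
    ```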

  1. Test particle propagation in magnetostatic turbulence. 2: The local approximation method

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.

    1976-01-01

    An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.

  2. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of the structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.

  3. Effects of directional uncertainty on visually-guided joystick pointing.

    PubMed

    Berryhill, Marian; Kveraga, Kestutis; Hughes, Howard C

    2005-02-01

    Reaction times generally follow the predictions of Hick's law as stimulus-response uncertainty increases, although notable exceptions include the oculomotor system. Saccadic and smooth pursuit eye movement reaction times are independent of stimulus-response uncertainty. Previous research showed that joystick pointing to targets, a motor analog of saccadic eye movements, is only modestly affected by increased stimulus-response uncertainty; however, a no-uncertainty condition (simple reaction time to 1 possible target) was not included. Here, we re-evaluate manual joystick pointing including a no-uncertainty condition. Analysis indicated simple joystick pointing reaction times were significantly faster than choice reaction times. Choice reaction times (2, 4, or 8 possible target locations) only slightly increased as the number of possible targets increased. These data suggest that, as with joystick tracking (a motor analog of smooth pursuit eye movements), joystick pointing is more closely approximated by a simple/choice step function than the log function predicted by Hick's law.
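
    The contrast being tested can be stated as two candidate models of reaction time versus the number n of possible targets; the coefficients below are illustrative placeholders, not the study's fitted values:

    ```python
    import math

    def hicks_law(n, a=0.20, b=0.15):
        """Hick's law: RT grows with the log of the number of alternatives
        (a = base RT in seconds, b = seconds per bit; illustrative values)."""
        return a + b * math.log2(n + 1)

    def step_model(n, simple=0.25, choice=0.35):
        """Simple/choice step: one fast RT for n == 1, one flat RT for n > 1."""
        return simple if n == 1 else choice

    for n in (1, 2, 4, 8):
        print(f"n={n}: Hick {hicks_law(n):.3f} s, step {step_model(n):.3f} s")
    ```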

  4. 1SXPS: A Deep Swift X-Ray Telescope Point Source Catalog with Light Curves and Spectra

    NASA Technical Reports Server (NTRS)

    Evans, P. A.; Osborne, J. P.; Beardmore, A. P.; Page, K. L.; Willingale, R.; Mountford, C. J.; Pagani, C.; Burrows, D. N.; Kennea, J. A.; Perri, M.; et al.

    2013-01-01

    We present the 1SXPS (Swift-XRT point source) catalog of 151,524 X-ray point sources detected by the Swift-XRT in 8 yr of operation. The catalog covers 1905 sq deg distributed approximately uniformly on the sky. We analyze the data in two ways. First we consider all observations individually, for which we have a typical sensitivity of approximately 3 × 10^-13 erg cm^-2 s^-1 (0.3-10 keV). Then we co-add all data covering the same location on the sky: these images have a typical sensitivity of approximately 9 × 10^-14 erg cm^-2 s^-1 (0.3-10 keV). Our sky coverage is nearly 2.5 times that of 3XMM-DR4, although the catalog is a factor of approximately 1.5 less sensitive. The median position error is 5.5 arcsec (90% confidence), including systematics. Our source detection method improves on that used in previous X-ray Telescope (XRT) catalogs, and we report more than 68,000 new X-ray sources. The goals and observing strategy of the Swift satellite allow us to probe source variability on multiple timescales, and we find approximately 30,000 variable objects in our catalog. For every source we give positions, fluxes, time series (in four energy bands and two hardness ratios), estimates of the spectral properties, spectra and spectral fits for the brightest sources, and variability probabilities in multiple energy bands and timescales.

  5. Effects of Point Count Duration, Time-of-Day, and Aural Stimuli on Detectability of Migratory and Resident Bird Species in Quintana Roo, Mexico

    Treesearch

    James F. Lynch

    1995-01-01

    Effects of count duration, time-of-day, and aural stimuli were studied in a series of unlimited-radius point counts conducted during winter in Quintana Roo, Mexico. The rate at which new species were detected was approximately three times higher during the first 5 minutes of each 15-minute count than in the final 5 minutes. The number of individuals and species...

  6. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating-point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating-point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with multiplication by the inverse. Computing the inverse exactly would itself require iteration; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication calculation to be used in place of the traditional floating-point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating-point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating-point algorithm.
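
    The core trick, replacing a division by a multiplication with an approximated inverse, is easy to demonstrate. The sketch below uses floating-point NumPy for clarity rather than the thesis's integer arithmetic, and the working range and fits are illustrative assumptions:

    ```python
    import numpy as np

    d_min, d_max = 1.0, 2.0              # working range of the divisor (illustrative)
    xs = np.linspace(d_min, d_max, 64)

    lin_c = np.polyfit(xs, 1.0 / xs, 1)  # linear approximation of 1/d
    quad_c = np.polyfit(xs, 1.0 / xs, 2) # quadratic approximation of 1/d

    d = np.linspace(d_min, d_max, 5)
    print("exact    ", 1.0 / d)
    print("linear   ", np.polyval(lin_c, d))   # division becomes one multiply-add
    print("quadratic", np.polyval(quad_c, d))  # division becomes two multiply-adds
    ```

    The quadratic fit tracks 1/d markedly better over the interval, which is the effect the dissertation exploits to regain accuracy while keeping integer multiplies.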

  7. Cumulative Effect of Racial Discrimination on the Mental Health of Ethnic Minorities in the United Kingdom.

    PubMed

    Wallace, Stephanie; Nazroo, James; Bécares, Laia

    2016-07-01

    To examine the longitudinal association between cumulative exposure to racial discrimination and changes in the mental health of ethnic minority people. We used data from 4 waves (2009-2013) of the UK Household Longitudinal Study, a longitudinal household panel survey of approximately 40 000 households, including an ethnic minority boost sample of approximately 4000 households. Ethnic minority people who reported exposure to racial discrimination at 1 time point had 12-Item Short Form Health Survey (SF-12) mental component scores 1.93 (95% confidence interval [CI] = -3.31, -0.56) points lower than did those who reported no exposure to racial discrimination, whereas those who had been exposed to 2 or more domains of racial discrimination, at 2 different time points, had SF-12 mental component scores 8.26 (95% CI = -13.33, -3.18) points lower than did those who reported no experiences of racial discrimination. Controlling for racial discrimination and other socioeconomic factors reduced ethnic inequalities in mental health. Cumulative exposure to racial discrimination has incremental negative long-term effects on the mental health of ethnic minority people in the United Kingdom. Studies that examine exposure to racial discrimination at 1 point in time may underestimate the contribution of racism to poor health.

  8. Expansions for infinite or finite plane circular time-reversal mirrors and acoustic curtains for wave-field-synthesis.

    PubMed

    Mellow, Tim; Kärkkäinen, Leo

    2014-03-01

    An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.

  9. Standard of Practice and Flynn Effect Testimony in Death Penalty Cases

    ERIC Educational Resources Information Center

    Gresham, Frank M.; Reschly, Daniel J.

    2011-01-01

    The Flynn Effect is a well-established psychometric fact documenting substantial increases in measured intelligence test performance over time. Flynn's (1984) review of the literature established that Americans gain approximately 0.3 points per year or 3 points per decade in measured intelligence. The accurate assessment and interpretation of…

  10. A very deep IRAS survey at the north ecliptic pole

    NASA Technical Reports Server (NTRS)

    Houck, J. R.; Hacking, P. B.; Condon, J. J.

    1987-01-01

    The data from approximately 20 hours of observation of the 4- to 6-square-degree field surrounding the north ecliptic pole have been combined to produce a very deep IR survey in the four IRAS bands. Scans from both pointed and survey observations were included in the data analysis. At 12 and 25 microns the deep survey is limited by detector noise and is approximately 50 times deeper than the IRAS Point Source Catalog (PSC). At 60 microns the problems of source confusion and Galactic cirrus combine to limit the deep survey to approximately 12 times deeper than the PSC. These problems are so severe at 100 microns that flux values are only given for locations corresponding to sources selected at 60 microns. In all, 47 sources were detected at 12 microns, 37 at 25 microns, and 99 at 60 microns. The data-analysis procedures and the significance of the 12- and 60-micron source-count results are discussed.

  11. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
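
    The "derivative matrix" idea reduces differentiation to a matrix-vector multiply: D[i, j] = L_j'(x_i) for the Lagrange basis polynomials L_j of the collocation points. A small self-contained sketch follows (the node count and test function are illustrative assumptions):

    ```python
    import numpy as np

    def derivative_matrix(x):
        """Collocation derivative matrix: D[i, j] = L_j'(x_i), so that
        D @ f(x) approximates f'(x) at the collocation points x."""
        n = len(x)
        D = np.empty((n, n))
        for j in range(n):
            roots = np.delete(x, j)
            basis = np.poly(roots) / np.prod(x[j] - roots)  # coefficients of L_j
            D[:, j] = np.polyval(np.polyder(basis), x)
        return D

    # Chebyshev-Gauss-Lobatto points temper the Runge problem of equispaced nodes
    n = 16
    x = np.cos(np.pi * np.arange(n + 1) / n)
    D = derivative_matrix(x)
    print("max error for f = exp:", np.abs(D @ np.exp(x) - np.exp(x)).max())
    ```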

  12. Space Technology 5 Multi-Point Observations of Temporal Variability of Field-Aligned Currents

    NASA Technical Reports Server (NTRS)

    Le, Guan; Wang, Yongli; Slavin, James A.; Strangeway, Robert J.

    2008-01-01

    Space Technology 5 (ST5) is a three-micro-satellite constellation deployed into a 300 × 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validation. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST5. The data demonstrate that meso-scale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic, with highly variable current density and/or polarity on time scales of approximately 10 min, and they exhibit large temporal variations on such time scales during both quiet and disturbed times. On the other hand, the data also show that the time scales over which the currents remain relatively stable are approximately 1 min for meso-scale currents and approximately 10 min for large-scale current sheets. These temporal features are evidently associated with dynamic variations of their particle carriers (mainly electrons) as they respond to variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.

  13. Fully polynomial-time approximation scheme for a special case of a quadratic Euclidean 2-clustering problem

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khandeev, V. I.

    2016-02-01

    The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities) minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers is considered. It is assumed that the center of one of the sought clusters is specified at the desired (arbitrary) point of space (without loss of generality, at the origin), while the center of the other one is unknown and determined as the mean value over all elements of this cluster. It is shown that unless P = NP, there is no fully polynomial-time approximation scheme for this problem, and such a scheme is substantiated in the case of a fixed space dimension.

  14. Polynomial-Time Approximation Algorithm for the Problem of Cardinality-Weighted Variance-Based 2-Clustering with a Given Center

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Motkova, A. V.

    2018-01-01

    A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.

  15. Heat-Transfer Measurements on a 5.5- Inch-Diameter Hemispherical Concave Nose in Free Flight at Mach Numbers up to 6.6

    NASA Technical Reports Server (NTRS)

    Levine, Jack; Rumsey, Charles B.

    1958-01-01

    The aerodynamic heat transfer to a hemispherical concave nose has been measured in free flight at Mach numbers from 3.5 to 6.6 with corresponding Reynolds numbers based on nose diameter from 7.4 × 10^6 to 14 × 10^6. Over the test Mach number range the heating on the cup nose, expressed as a ratio to the theoretical stagnation-point heating on a hemisphere nose of the same diameter, varied from 0.05 to 0.13 at the stagnation point of the cup, was approximately 0.1 at other locations within 40 deg of the stagnation point, and varied from 0.6 to 0.8 just inside the lip where the highest heating rates occurred. At a Mach number of 5 the total heat input integrated over the surface of the cup nose including the lip was 0.55 times the theoretical value for a hemisphere nose with laminar boundary layer and 0.76 times that for a flat face. The heating at the stagnation point was approximately 1/5 as great as steady-flow tunnel results. Extremely high heating rates at the stagnation point (on the order of 30 times the stagnation-point values of the present test), which have occurred in conjunction with unsteady oscillatory flow around cup noses in wind-tunnel tests at Mach and Reynolds numbers within the present test range, were not observed.

  16. More Education = Better Jobs. Data Points: Volume 5, Issue 9

    ERIC Educational Resources Information Center

    American Association of Community Colleges, 2017

    2017-01-01

    The median weekly earnings for individuals age 25 and older who worked full time and had less than a high school diploma was $504 in 2016 (approximately $26,200 per year), compared to $819 (approximately $42,600 per year) for individuals with an associate degree. Data show that more education not only leads to higher earnings but also to lower…

  17. Speed Approach for UAV Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.

    2018-05-01

    The article presents a new approach to detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated by two or three trajectory points obtained from the ADS-B system. In the process of determining the meeting points of the trajectories, two cutoff values of the critical speed range, within which a UAV collision is possible, are calculated. Because the expressions for the meeting points and the critical-speed cutoffs are given in analytical form, the calculation time is far less than the interval between ADS-B data updates, even on an on-board computer system of limited capacity. For this reason, the calculations can be refreshed at each cycle of incoming data, and the trajectory approximation can be bounded by straight lines. This approach allows the development of a compact collision-avoidance algorithm, even for a significant number of UAVs (more than several dozen). To demonstrate the adequacy of the approach, modeling was performed using a software system developed specifically for this purpose.

  18. Toxic Industrial Chemical Removal by Isostructural Metal-Organic Frameworks

    DTIC Science & Technology

    2011-01-01

    similar to H-ZSM-5. Fifteen control runs were performed over 8 months using approximately 35 mg of adsorbent and a bead height of approximately 4 mm...from 1.5° to 60° with an exposure time of 10 s per step. No peaks could be resolved from the baseline for 2θ > 35°; therefore, this region was not...experiment, to evaluate the desorption behavior of the material. The dry air used in these experiments had a dew point of approximately -35 °C. In all

  19. Estimation of Initial and Response Times of Laser Dew-Point Hygrometer by Measurement Simulation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Sigeaki; Toyooka, Satoru

    1995-10-01

    The initial and the response times of the laser dew-point hygrometer were evaluated by measurement simulation. The simulation was based on loop computations of the surface temperature of a plate with dew deposition, the quantity of dew deposited, and the intensity of scattered light from the surface at each short interval of measurement. The initial time was defined as the time necessary for the hygrometer to reach a temperature within ±0.5 °C of the measured dew point from the start time of measurement, and the response time was also defined for stepwise dew-point changes of +5 °C and -5 °C. The simulation results are in approximate agreement with the recorded temperature and intensity of scattered light of the hygrometer. The evaluated initial time ranged from 0.3 min to 5 min in the temperature range from 0 °C to 60 °C, and the response time was also evaluated to be from 0.2 min to 3 min.

  20. Combined VSWIR/TIR Products Overview: Issues and Examples

    NASA Technical Reports Server (NTRS)

    Knox, Robert G.

    2010-01-01

    The presentation provides a summary of VSWIR data collected at 19-day intervals for most areas. TIR data was collected both day and night on a 5-day cycle (more frequently at higher latitudes), the TIR swath is four times as wide as VSWIR, and the 5-day orbit repeat is approximate. Topics include nested swath geometry for reference point design and coverage simulations for sample FLUXNET tower sites. Other points examined include variation in latitude for revisit frequency, overpass times, and TIR overlap geometry and timing between VSWIR data collections.

  1. Progressive Classification Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user can halt this reclassification process at any point, thereby obtaining the best possible result for a given amount of computation time. Alternatively, the results can be displayed as they are generated, providing the user with real-time feedback about the current accuracy of classification.
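
    A compact sketch of the two-model scheme described above, using scikit-learn (an assumption of the sketch, as are the synthetic data and the choice of margin distance as the confidence index):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_tr, X_new, y_tr, _ = train_test_split(X, y, test_size=0.5, random_state=0)

    fast = SVC(kernel="linear").fit(X_tr, y_tr)      # few support vectors, cheap
    slow = SVC(kernel="rbf", C=100).fit(X_tr, y_tr)  # more support vectors, accurate

    # Pass 1: baseline labels from the fast SVM, with a confidence index
    labels = fast.predict(X_new)
    confidence = np.abs(fast.decision_function(X_new))  # distance from the margin

    # Pass 2: progressively re-label, least-confident points first; halting this
    # loop early yields the best available result for the time spent
    for i in np.argsort(confidence):
        labels[i] = slow.predict(X_new[i:i + 1])[0]
    ```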

  2. Defect production in nonlinear quench across a quantum critical point.

    PubMed

    Sen, Diptiman; Sengupta, K; Mondal, Shreyoshi

    2008-07-04

    We show that the defect density $n$, for a slow nonlinear power-law quench with a rate $\tau^{-1}$ and an exponent $\alpha > 0$, which takes the system through a critical point characterized by correlation length and dynamical critical exponents $\nu$ and $z$, scales as $n \sim \tau^{-\alpha\nu d/(\alpha z\nu + 1)}$ [$n \sim (\alpha g^{(\alpha-1)/\alpha}/\tau)^{\nu d/(z\nu + 1)}$] if the quench takes the system across the critical point at time $t = 0$ [$t = t_0 \neq 0$], where $g$ is a nonuniversal constant and $d$ is the system dimension. These scaling laws constitute the first theoretical results for defect production in nonlinear quenches across quantum critical points and reproduce their well-known counterpart for a linear quench ($\alpha = 1$) as a special case. We supplement our results with numerical studies of well-known models and suggest experiments to test our theory.

  3. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To address slow computation and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract the feature points of the two images. Second, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors introduced during approximate matching. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final matching result, realizing fast and accurate image registration. The experimental results show that the proposed algorithm improves the accuracy of image matching while ensuring real-time performance.
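
    A rough OpenCV rendering of the pipeline (Harris corners, NCC-based approximate matching, K-means screening of displacement vectors as a stand-in for the parallax constraint, then RANSAC refinement); the file names, window sizes, and thresholds are placeholders, not the paper's settings:

    ```python
    import cv2
    import numpy as np

    def ncc_matches(img1, img2, pts1, win=10, search=30):
        """For each corner in img1, find the best normalized-cross-correlation
        patch match inside a local search window of img2."""
        pairs, (h, w) = [], img2.shape
        for x, y in pts1:
            x, y = int(x), int(y)
            if not (win <= x < w - win and win <= y < h - win):
                continue
            tmpl = img1[y - win:y + win, x - win:x + win]
            x0, x1 = max(0, x - search), min(w, x + search)
            y0, y1 = max(0, y - search), min(h, y + search)
            res = cv2.matchTemplate(img2[y0:y1, x0:x1], tmpl, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            if score > 0.8:
                pairs.append(((x, y), (x0 + loc[0] + win, y0 + loc[1] + win)))
        return pairs

    img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
    img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    pts1 = cv2.goodFeaturesToTrack(img1, 500, 0.01, 10,
                                   useHarrisDetector=True).reshape(-1, 2)
    pairs = ncc_matches(img1, img2, pts1)
    p1 = np.float32([a for a, _ in pairs])
    p2 = np.float32([b for _, b in pairs])

    # Cluster the displacement (parallax) vectors; keep the dominant cluster
    disp = (p2 - p1).astype(np.float32)
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 0.5)
    _, idx, _ = cv2.kmeans(disp, 2, None, crit, 5, cv2.KMEANS_PP_CENTERS)
    keep = idx.ravel() == np.bincount(idx.ravel()).argmax()

    # Final refinement with RANSAC
    H, mask = cv2.findHomography(p1[keep], p2[keep], cv2.RANSAC, 3.0)
    print("inliers:", int(mask.sum()), "of", int(keep.sum()))
    ```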

  4. Zero-sum two-player game theoretic formulation of affine nonlinear discrete-time systems using neural networks.

    PubMed

    Mehraeen, Shahab; Dierks, Travis; Jagannathan, S; Crow, Mariesa L

    2013-12-01

    In this paper, the nearly optimal solution for discrete-time (DT) affine nonlinear control systems in the presence of partially unknown internal system dynamics and disturbances is considered. The approach is based on successive approximate solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which appears in optimal control. Successive approximation approach for updating control and disturbance inputs for DT nonlinear affine systems are proposed. Moreover, sufficient conditions for the convergence of the approximate HJI solution to the saddle point are derived, and an iterative approach to approximate the HJI equation using a neural network (NN) is presented. Then, the requirement of full knowledge of the internal dynamics of the nonlinear DT system is relaxed by using a second NN online approximator. The result is a closed-loop optimal NN controller via offline learning. A numerical example is provided illustrating the effectiveness of the approach.

  5. A Constant-Factor Approximation Algorithm for the Link Building Problem

    NASA Astrophysics Data System (ADS)

    Olsen, Martin; Viglas, Anastasios; Zvedeniouk, Ilia

    In this work we consider the problem of maximizing the PageRank of a given target node in a graph by adding k new links. We consider the case that the new links must point to the given target node (backlinks). Previous work [7] shows that this problem has no fully polynomial time approximation scheme unless P = NP. We present a polynomial time algorithm yielding a PageRank value within a constant factor of the optimal. We also consider the naive algorithm, in which backlinks are chosen from nodes with high PageRank values relative to their outdegree, and show that the naive algorithm performs much worse on certain graphs than the constant-factor approximation algorithm.
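    A small networkx sketch of the naive heuristic discussed above (the random graph, k, and the +1 outdegree adjustment are illustrative assumptions):

    ```python
    # Naive link building: add k backlinks to the target from nodes with
    # high PageRank relative to their outdegree.
    import networkx as nx

    def naive_backlinks(G, target, k):
        pr = nx.pagerank(G)
        cands = [v for v in G if v != target and not G.has_edge(v, target)]
        # Score by PageRank over (outdegree + 1); the +1 accounts for the
        # new out-link each chosen node is about to gain.
        cands.sort(key=lambda v: pr[v] / (G.out_degree(v) + 1), reverse=True)
        G2 = G.copy()
        G2.add_edges_from((v, target) for v in cands[:k])
        return G2, nx.pagerank(G2)[target]

    G = nx.gnp_random_graph(200, 0.03, directed=True, seed=1)
    _, boosted = naive_backlinks(G, target=0, k=5)
    print(f"target PageRank after 5 backlinks: {boosted:.4f}")
    ```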

  6. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    PubMed

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

    Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from next-generation sequencing-based studies. By extending the theories for the tail probability of the range of sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps), in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach that integrates the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and find interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package, which now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
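    For contrast with the closed-form approximation, a minimal sketch of a permutation-based p-value for a simplified local trend score (the exact eLSA scoring, delays, and normalization are omitted; this is illustrative only):

    ```python
    # Local trend score via Kadane's maximum-subarray on the product of
    # the two series' trend (sign-of-change) sequences, with the slow
    # permutation null that the approximation above replaces.
    import numpy as np

    def trend(x):
        return np.sign(np.diff(x))        # +1 up, -1 down, 0 flat

    def local_trend_score(x, y):
        prod = trend(x) * trend(y)
        best = cur = 0.0
        for p in prod:                    # highest-scoring contiguous run
            cur = max(0.0, cur + p)
            best = max(best, cur)
        return best

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(50), rng.standard_normal(50)
    s = local_trend_score(x, y)

    n_perm = 2000
    null = np.array([local_trend_score(x, rng.permutation(y))
                     for _ in range(n_perm)])
    p_perm = (1 + np.sum(null >= s)) / (n_perm + 1)
    print(f"score = {s:.0f}, permutation p = {p_perm:.4f}")
    ```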

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strubbe, David

    Octopus is a scientific program aimed at the ab initio virtual experimentation on a hopefully ever-increasing range of system types. Electrons are described quantum-mechanically within density-functional theory (DFT), in its time-dependent form (TDDFT) when doing simulations in time. Nuclei are described classically as point particles. Electron-nucleus interaction is described within the pseudopotential approximation.

  8. Numerical modeling of a point-source image under relative motion of radiation receiver and atmosphere

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.

    1994-02-01

    A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.

  9. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, S. W.

    2012-07-01

    In this paper, an efficient global optimization algorithm from the field of artificial intelligence, named Particle Swarm Optimization (PSO), is introduced into close-range photogrammetric data processing. PSO can be applied to obtain the approximate values of exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by the American social psychologist J. Kennedy and the electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking and fish schooling. The strategy for obtaining the approximate values of exterior orientation elements using PSO is as follows: given the observed image coordinates and the space coordinates of a few control points, equations for the image coordinate residual errors can be written, where the residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equation. The sum of the absolute values of these residuals is taken as the objective function to be minimized. First, a gross area for the exterior orientation elements is given, and the algorithm parameters are adjusted so that the particles fly within this gross area. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close-range photogrammetry can be avoided. This method can therefore improve surveying efficiency greatly while decreasing the surveying cost. During the process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of the control points. In order to verify the effectiveness of this algorithm, two experiments were carried out. In the first experiment, images of a standard grid board were taken by multi-intersection photography using a digital camera. Three or six points located on the lower-left corner of the standard grid were used as control points, and the exterior orientation elements of each image were computed through PSO and compared with the elements computed through bundle adjustment. In the second experiment, the exterior orientation elements obtained from the first experiment were used as approximate values in bundle adjustment, and the space coordinates of the other grid points on the board were computed. The differences between these computed space coordinates and the known coordinates of the grid points were used to compute the accuracy. The point accuracies obtained in the two experiments are ±0.76 mm and ±0.43 mm, respectively. These experiments prove the effectiveness of PSO in computing approximate values of exterior orientation elements in close-range photogrammetry, and the algorithm can meet higher accuracy requirements. In short, PSO can produce better results in a faster, cheaper way than other surveying methods in close-range photogrammetry.
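    A generic PSO sketch of the strategy described above: particles are confined to a gross search area and minimize the sum of absolute image-coordinate residuals (the objective below is a stub, since the collinearity-equation residuals depend on the camera model; all parameter values are illustrative):

    ```python
    # Particle Swarm Optimization over six exterior orientation elements.
    import numpy as np

    def pso(objective, lo, hi, n=40, iters=300, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n, len(lo)))   # particle positions
        v = np.zeros_like(x)                    # particle velocities
        pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, *x.shape))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)          # keep particles in the gross area
            f = np.apply_along_axis(objective, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Gross area for X, Y, Z (m) and omega, phi, kappa (rad).
    lo = np.array([-10.0, -10.0, 0.0, -0.5, -0.5, -np.pi])
    hi = np.array([10.0, 10.0, 50.0, 0.5, 0.5, np.pi])

    def residual_l1(params):
        # Stub: in practice, sum |observed - collinearity-projected| image
        # coordinates over all measured points for this camera station.
        return float(np.sum(np.abs(params - 1.0)))   # toy objective

    best, f_best = pso(residual_l1, lo, hi)
    print("approximate exterior orientation:", np.round(best, 3))
    ```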

  10. Computerized optimization of multiple isocentres in stereotactic convergent beam irradiation

    NASA Astrophysics Data System (ADS)

    Treuer, U.; Treuer, H.; Hoevels, M.; Müller, R. P.; Sturm, V.

    1998-01-01

    A method for the fully computerized determination and optimization of positions of target points and collimator sizes in convergent beam irradiation is presented. In conventional interactive trial-and-error methods, which are very time consuming, the treatment parameters are chosen according to the operator's experience and improved successively. This time is reduced significantly by the use of a computerized procedure. After the definition of the target volume and organs at risk in the CT or MR scans, an initial configuration is created automatically. In the next step the target point positions and collimator diameters are optimized by the program. The aim of the optimization is to find a configuration for which a prescribed dose at the target surface is approximated as closely as possible. At the same time, dose peaks inside the target volume are minimized and organs at risk and tissue surrounding the target are spared. To enhance the speed of the optimization, a fast method for approximate dose calculation in convergent beam irradiation is used. A possible application of the method for calculating the leaf positions when irradiating with a micromultileaf collimator is briefly discussed. The success of the procedure has been demonstrated for several clinical cases with up to six target points.

  11. Mean-field approximation for the Sznajd model in complex networks

    NASA Astrophysics Data System (ADS)

    Araújo, Maycon S.; Vannucchi, Fabio S.; Timpanaro, André M.; Prado, Carmen P. C.

    2015-02-01

    This paper studies the Sznajd model for opinion formation in a population connected through a general network. A master equation describing the time evolution of opinions is presented and solved in a mean-field approximation. Although quite simple, this approximation allows us to capture the most important features regarding the steady states of the model. When spontaneous opinion changes are included, a discontinuous transition from consensus to polarization can be found as the rate of spontaneous change is increased. In this case we show that a hybrid mean-field approach including interactions between second nearest neighbors is necessary to estimate correctly the critical point of the transition. The analytical prediction of the critical point is also compared with numerical simulations in a wide variety of networks, in particular Barabási-Albert networks, finding reasonable agreement despite the strong approximations involved. The same hybrid approach that made it possible to deal with second-order neighbors could just as well be adapted to treat other problems such as epidemic spreading or predator-prey systems.

  12. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field, based on Poisson-sampled measurements in time from a laser velocimeter, is described. The device will process a block of data up to 4096 data points in length and return a 512-point covariance function with 48-bit resolution, along with a 512-point histogram of the interarrival times that is used to normalize the covariance function. The device is designed to interface with, and be controlled by, a minicomputer from which the data are received and to which the results are returned. A typical 4096-point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
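    A numpy sketch of the same slotting scheme in software (the bin width, record length, and synthetic signal are illustrative assumptions):

    ```python
    # Slotted auto-covariance for Poisson-sampled data: accumulate products
    # of sample pairs into lag bins and normalize by the interarrival-time
    # histogram, as in the hardware processor described above.
    import numpy as np

    def slotted_autocovariance(t, v, n_lags=512, dt=1e-4):
        v = v - v.mean()
        acc = np.zeros(n_lags)          # accumulated pair products per slot
        cnt = np.zeros(n_lags)          # histogram of pair separations
        for i in range(len(t)):
            lags = t[i:] - t[i]
            bins = (lags / dt).astype(int)
            ok = bins < n_lags
            np.add.at(acc, bins[ok], v[i] * v[i:][ok])
            np.add.at(cnt, bins[ok], 1)
        return np.divide(acc, cnt, out=np.zeros(n_lags), where=cnt > 0)

    rng = np.random.default_rng(0)
    t = np.cumsum(rng.exponential(2e-4, size=4096))   # Poisson sample times
    v = np.sin(2 * np.pi * 100 * t) + 0.3 * rng.standard_normal(t.size)
    R = slotted_autocovariance(t, v)    # R[0] ~ variance; oscillates at 100 Hz
    ```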

  13. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer or Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that computes the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.

  14. Method of forming pointed structures

    NASA Technical Reports Server (NTRS)

    Pugel, Diane E. (Inventor)

    2011-01-01

    A method of forming an array of pointed structures comprises depositing a ferrofluid on a substrate, applying a magnetic field to the ferrofluid to generate an array of surface protrusions, and solidifying the surface protrusions to form the array of pointed structures. The pointed structures may have a tip radius ranging from approximately 10 nm to approximately 25 microns. Solidifying the surface protrusions may be carried out at a temperature ranging from approximately 10 degrees C to approximately 30 degrees C.

  15. Self-Consistent Field Theory of Gaussian Ring Polymers

    NASA Astrophysics Data System (ADS)

    Kim, Jaeup; Yang, Yong-Biao; Lee, Won Bo

    2012-02-01

    Ring polymers, being free from chain ends, have fundamental importance in understanding polymer statics and dynamics, which are strongly influenced by chain-end effects. At a glance, their theoretical treatment may not seem particularly difficult, but the absence of chain ends and the topological constraints make the problem non-trivial, which has resulted in limited success in the analytical or semi-analytical formulation of ring polymer theory. Here, I present a self-consistent field theory (SCFT) formalism for Gaussian (topologically unconstrained) ring polymers for the first time. The resulting static properties of homogeneous and inhomogeneous ring polymers are compared with random phase approximation (RPA) results. The critical point for the ring homopolymer system is exactly the same as in the linear polymer case, χN = 2, since a critical point does not depend on the local structure of the polymers. The critical point for ring diblock copolymer melts is χN ≈ 17.795, approximately 1.7 times that of linear diblock copolymer melts, χN ≈ 10.495. The difference is due to the ring structure constraint.

  16. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
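    The quoted probabilities follow from the Poisson model; for an assumed decadal rate λ of eruptions in a given VEI class:

    ```latex
    % Probability of at least one eruption in a decade:
    P(N \geq 1) = 1 - e^{-\lambda}
    % E.g., the quoted ~49% chance of a VEI >= 5 event corresponds to
    % \lambda \approx -\ln(1 - 0.49) \approx 0.67 events per decade,
    % while \lambda \approx 7 for VEI >= 4 gives P(N \geq 1) > 99\%.
    ```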

  17. On the origins of approximations for stochastic chemical kinetics.

    PubMed

    Haseltine, Eric L; Rawlings, James B

    2005-10-22

    This paper considers the derivation of approximations for stochastic chemical kinetics governed by the discrete master equation. Here, the concepts of (1) partitioning on the basis of fast and slow reactions as opposed to fast and slow species and (2) conditional probability densities are used to derive approximate, partitioned master equations, which are Markovian in nature, from the original master equation. Under different conditions dictated by relaxation time arguments, such approximations give rise to both the equilibrium and hybrid (deterministic or Langevin equations coupled with discrete stochastic simulation) approximations previously reported. In addition, the derivation points out several weaknesses in previous justifications of both the hybrid and equilibrium systems and demonstrates the connection between the original and approximate master equations. Two simple examples illustrate situations in which these two approximate methods are applicable and demonstrate the two methods' efficiencies.

  18. Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites

    DTIC Science & Technology

    2010-01-01

    and inertial measurement unit (IMU) equipment is used to locate the sensor in the air. The time of return of the laser signal allows for the ... approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points ... provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry

  19. A comparison of the reduced and approximate systems for the time dependent computation of the polar wind and multiconstituent stellar winds

    NASA Technical Reports Server (NTRS)

    Browning, G. L.; Holzer, T. E.

    1992-01-01

    The paper derives the 'reduced' system of equations commonly used to describe the time evolution of the polar wind and multiconstituent stellar winds from the equations for a multispecies plasma with known temperature profiles by assuming that the electron thermal speed approaches infinity. The reduced system is proved to have unbounded growth near the sonic point of the protons for many of the standard parameter cases. For the same parameter cases, the unmodified system exhibits growth in some of the Fourier modes, but this growth is bounded. An alternate system (the 'approximate' system) in which the electron thermal speed is slowed down is introduced. The approximate system retains the mathematical behavior of the unmodified system and can be shown to accurately describe the smooth solutions of the unmodified system. Other advantages of the approximate system over the reduced system are discussed.

  20. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities

    PubMed Central

    Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin

    2013-01-01

    Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381

  1. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities.

    PubMed

    Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin

    2013-12-01

    Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematics reasoning that children come to master through instruction.

  2. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.
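    A hedged sketch of the approach with scikit-learn, using an analytic stand-in for the expensive analyzer (FLOPS itself is not reproduced here; the model sizes and training-set design are illustrative):

    ```python
    # Train a neural network and a polynomial regression on input-output
    # pairs from an "analyzer", then use them as cheap reanalysis tools.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 4))     # design variables
    y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + X[:, 2] * X[:, 3]  # stand-in analyzer

    nn = make_pipeline(StandardScaler(),
                       MLPRegressor(hidden_layer_sizes=(32, 32),
                                    max_iter=5000, random_state=0)).fit(X, y)
    reg = make_pipeline(PolynomialFeatures(degree=3),
                        LinearRegression()).fit(X, y)

    # Reanalysis is now a cheap prediction instead of a full code run.
    x_new = rng.uniform(-1, 1, size=(1, 4))
    print("NN:", nn.predict(x_new), "regression:", reg.predict(x_new))
    ```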

  3. Finite-element time evolution operator for the anharmonic oscillator

    NASA Technical Reports Server (NTRS)

    Milton, Kimball A.

    1995-01-01

    The finite-element approach to lattice field theory is both highly accurate (relative errors $\sim 1/N^2$, where $N$ is the number of lattice points) and exactly unitary (in the sense that canonical commutation relations are exactly preserved at the lattice sites). In this talk I construct matrix elements for dynamical variables and for the time evolution operator for the anharmonic oscillator, for which the continuum Hamiltonian is $H = p^2/2 + \lambda q^4/4$. Construction of such matrix elements does not require solving the implicit equations of motion. Low-order approximations turn out to be extremely accurate. For example, the matrix element of the time evolution operator in the harmonic oscillator ground state gives a result for the anharmonic oscillator ground-state energy accurate to better than 1 percent, while a two-state approximation reduces the error to less than 0.1 percent.

  4. Perceptions of Father Involvement Patterns in Teenage-Mother Families: Predictors and Links to Mothers' Psychological Adjustment

    ERIC Educational Resources Information Center

    Kalil, Ariel; Ziol-Guest, Kathleen M.; Coley, Rebekah Levine

    2005-01-01

    Based on adolescent mothers' reports, longitudinal patterns of involvement of young, unmarried biological fathers (n=77) in teenage-mother families using cluster analytic techniques were examined. Approximately one third of fathers maintained high levels of involvement over time, another third demonstrated low involvement at both time points, and…

  5. Nonlinear Dynamics and Nucleation Kinetics in Near-Critical Liquids

    NASA Technical Reports Server (NTRS)

    Patashinski, Alexander Z.; Ratner, Mark A.; Pines, Vladimir

    1996-01-01

    The objective of our study is to model the nonlinear behavior of a near-critical liquid following a rapid change of the temperature and/or other thermodynamic parameters (pressure, external electric or gravitational field). The thermodynamic critical point is manifested by large, strongly correlated fluctuations of the order parameter (particle density in liquid-gas systems, concentration in binary solutions) in the critical range of scales. The largest critical length scale is the correlation radius $r_c$. According to the scaling theory, $r_c$ increases as $r_c = r_0\,\epsilon^{-\alpha}$ when the nondimensional distance $\epsilon = (T - T_c)/T_c$ to the critical point decreases. Normal gravity alters the nature of the correlated long-range fluctuations when one reaches $\epsilon \approx 10^{-5}$, and correspondingly the relaxation time $\tau(r_c) \approx 10^{-3}$ seconds; this time is short compared to the typical experimental time. Close to the critical point, a rapid, relatively small temperature change may perturb the thermodynamic equilibrium on many scales. The critical fluctuations have a hierarchical structure, and the relaxation involves many length and time scales. Above the critical point, in the one-phase region, we consider the relaxation of the liquid following a sudden temperature change that simultaneously violates the equilibrium on many scales. Below $T_c$, a non-equilibrium state may include a distribution of small-scale phase droplets; we consider the relaxation of such a droplet following a temperature change that has made the phase of the matrix stable.

  6. Comparison of Travel-Time and Amplitude Measurements for Deep-Focusing Time-Distance Helioseismology

    NASA Astrophysics Data System (ADS)

    Pourabdian, Majid; Fournier, Damien; Gizon, Laurent

    2018-04-01

    The purpose of deep-focusing time-distance helioseismology is to construct seismic measurements that have a high sensitivity to the physical conditions at a desired target point in the solar interior. With this technique, pairs of points on the solar surface are chosen such that acoustic ray paths intersect at this target (focus) point. Considering acoustic waves in a homogeneous medium, we compare travel-time and amplitude measurements extracted from the deep-focusing cross-covariance functions. Using a single-scattering approximation, we find that the spatial sensitivity of deep-focusing travel times to sound-speed perturbations is zero at the target location and maximum in a surrounding shell. This is unlike the deep-focusing amplitude measurements, which have maximum sensitivity at the target point. We compare the signal-to-noise ratio for travel-time and amplitude measurements for different types of sound-speed perturbations, under the assumption that noise is solely due to the random excitation of the waves. We find that, for highly localized perturbations in sound speed, the signal-to-noise ratio is higher for amplitude measurements than for travel-time measurements. We conclude that amplitude measurements are a useful complement to travel-time measurements in time-distance helioseismology.

  7. Approximations for column effect in airplane wing spars

    NASA Technical Reports Server (NTRS)

    Warner, Edward P; Short, Mac

    1927-01-01

    The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.

  8. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In this method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames", which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
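    A minimal sketch of the linear-velocity part, assuming a third-order-polynomial blend weight over a window around each via frame (the frame timing and window size are illustrative):

    ```python
    # Blend the constant segment velocity before a via frame into the
    # segment velocity after it, using a cubic (smoothstep) weight.
    import numpy as np

    def smoothstep(s):
        return 3 * s**2 - 2 * s**3          # third-order polynomial, s in [0, 1]

    def blended_velocity(t, t_via, v_in, v_out, tb):
        """Linear velocity near a via frame reached at time t_via;
        v_in/v_out are the segment velocities before/after, and tb is
        the half-width of the blend window."""
        if t <= t_via - tb:
            return v_in
        if t >= t_via + tb:
            return v_out
        a = smoothstep((t - (t_via - tb)) / (2 * tb))
        return (1 - a) * v_in + a * v_out

    v_in = np.array([0.20, 0.00, 0.00])     # m/s along the incoming segment
    v_out = np.array([0.00, 0.15, 0.05])    # m/s along the outgoing segment
    for t in np.linspace(0.8, 1.2, 5):      # via frame at t = 1.0 s, tb = 0.2 s
        print(f"t = {t:.1f} s, v = {blended_velocity(t, 1.0, v_in, v_out, 0.2)}")
    ```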

  9. Traveltime and dispersion in the Potomac River, Cumberland, Maryland, to Washington, D.C.

    USGS Publications Warehouse

    Taylor, Kenneth R.; James, Robert W.; Helinsky, Bernard M.

    1985-01-01

    A travel-time and dispersion study using rhodamine dye was conducted on the Potomac River between Cumberland, Maryland, and Washington, D.C., a distance of 189 miles. The flow during the study was at approximately the 90-percent flow-duration level. A similar study was conducted by Wilson and Forrest in 1964 at a flow duration of approximately 60 percent. The two sets of data were used to develop a generalized procedure for predicting travel times and downstream concentrations resulting from spillage of water-soluble substances at any point along the river. The procedure will allow the user to calculate travel-time and concentration data for almost any spillage problem that occurs during periods of relatively steady flow between 50- and 95-percent flow duration. A new procedure for calculating unit peak concentration was derived. The new procedure depends on an analogy between a time-concentration curve and a scalene triangle. As a result of this analogy, the unit peak concentration can be expressed in terms of the length of the dye or contaminant cloud. The new procedure facilitates the calculation of unit peak concentration for long reaches of river. Previously, there was no way to link unit peak concentration curves for studies in which the river was divided into subreaches for study. Variable dispersive characteristics, caused mainly by low-head dams, precluded useful extrapolation of the unit peak-concentration attenuation curves, as has been done in previous studies. The procedure is applied to a hypothetical situation in which 20,000 pounds of contaminant is spilled at a railroad crossing at Magnolia, West Virginia. The times required for the leading edge, the peak concentration, and the trailing edge of the contaminant cloud to reach Point of Rocks, Maryland (110 river miles downstream), are 295, 375, and 540 hours, respectively, during a period when flow is at the 80-percent flow-duration level. The peak conservative concentration would be approximately 340 micrograms per liter at Point of Rocks.

  10. Repeated tracer tests in a karst system with concentrated allogenic recharge (Johnsbachtal, Austria)

    NASA Astrophysics Data System (ADS)

    Birk, Steffen; Wagner, Thomas; Pauritsch, Marcus; Winkler, Gerfried

    2015-04-01

    The Johnsbachtal (Austria) is a high Alpine headwater catchment covering an area of approximately 65 km², which is equipped with a hydrometeorological monitoring network (Strasser et al. 2013). The catchment is composed of carbonate rocks and crystalline rocks belonging to the Northern Calcareous Alps and the Greywacke Zone. The largest spring within the catchment, the Etzbach spring, is bound to karstified carbonate rocks of the Greywacke Zone. A stream sink located at a distance of approximately 1 km from the spring was used as the injection point for repeated tracer tests in the years 2012, 2013, and 2014. In each case the tracer was recovered at the spring, indicating an allogenic recharge component from the crystalline parts of the catchment. The spring discharge at the times of the three tracer tests varied between approximately 0.3 and 0.6 m³/s. Likewise, the tracer travel times and thus the flow velocities were found to be different. Surprisingly, the largest tracer travel time (and thus lowest flow velocity) was obtained in 2013, when the spring discharge was highest (0.6 m³/s). In addition, the flow velocities in 2012 and 2014 were found to be clearly different, although the spring discharge was similar (roughly 0.3 m³/s) in both tests. Thus, the tracer velocity appears not to be correlated with the spring discharge. Field observations indicate that this finding can potentially be attributed to complexities at both the injection location (e.g., plugging of injection points and thus different flow paths) and the sampling point (i.e., the spring, which is composed of several outlet points representing different subcatchments). References: Strasser, U., Marke, T., Sass, O., Birk, S., Winkler, G. (2013): John's creek valley: a mountainous catchment for long-term interdisciplinary human-environment system research in Upper Styria (Austria). Environmental Earth Sciences, doi: 10.1007/s12665-013-2318-y

  11. Research on Modeling of Propeller in a Turboprop Engine

    NASA Astrophysics Data System (ADS)

    Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong

    2015-05-01

    In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and lifting surface theory. The emphasis in modeling by strip theory is focused on three points: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximate method reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory, which has the advantage in both real-time performance and accuracy, can meet the requirement.

  12. Dynamics of the exponential integrate-and-fire model with slow currents and adaptation.

    PubMed

    Barranca, Victor J; Johnson, Daniel C; Moyher, Jennifer L; Sauppe, Joshua P; Shkarayev, Maxim S; Kovačič, Gregor; Cai, David

    2014-08-01

    In order to properly capture spike-frequency adaptation with a simplified point-neuron model, we study approximations of Hodgkin-Huxley (HH) models including slow currents by exponential integrate-and-fire (EIF) models that incorporate the same types of currents. We optimize the parameters of the EIF models under the external drive consisting of AMPA-type conductance pulses using the current-voltage curves and the van Rossum metric to best capture the subthreshold membrane potential, firing rate, and jump size of the slow current at the neuron's spike times. Our numerical simulations demonstrate that, in addition to these quantities, the approximate EIF-type models faithfully reproduce bifurcation properties of the HH neurons with slow currents, which include spike-frequency adaptation, phase-response curves, critical exponents at the transition between a finite and infinite number of spikes with increasing constant external drive, and bifurcation diagrams of interspike intervals in time-periodically forced models. Dynamics of networks of HH neurons with slow currents can also be approximated by corresponding EIF-type networks, with the approximation being at least statistically accurate over a broad range of Poisson rates of the external drive. For the form of external drive resembling realistic, AMPA-like synaptic conductance response to incoming action potentials, the EIF model affords great savings of computation time as compared with the corresponding HH-type model. Our work shows that the EIF model with additional slow currents is well suited for use in large-scale, point-neuron models in which spike-frequency adaptation is important.
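    A minimal adaptive-EIF integrator of the kind discussed above (generic textbook parameters, not the paper's fitted values; constant drive is used here instead of AMPA-type conductance pulses):

    ```python
    # Exponential integrate-and-fire neuron with one slow adaptation
    # current w that jumps by b at each spike time.
    import numpy as np

    def run_eif(I=16.0, T=500.0, dt=0.05, tau_m=10.0, E_L=-65.0,
                V_T=-50.0, dT=2.0, V_th=-30.0, V_reset=-65.0,
                tau_w=200.0, b=0.3):
        V, w, spikes = E_L, 0.0, []
        for k in range(int(T / dt)):
            dV = (-(V - E_L) + dT * np.exp((V - V_T) / dT) - w + I) / tau_m
            V += dt * dV
            w += dt * (-w / tau_w)          # slow current decays between spikes
            if V >= V_th:                   # spike: reset and adapt
                spikes.append(k * dt)
                V = V_reset
                w += b                      # jump of the slow current
        return np.array(spikes)

    isi = np.diff(run_eif())
    # Spike-frequency adaptation: interspike intervals lengthen over time.
    print(f"first ISI {isi[0]:.1f} ms, last ISI {isi[-1]:.1f} ms")
    ```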

  13. Point-ahead limitation on reciprocity tracking. [in earth-space optical link

    NASA Technical Reports Server (NTRS)

    Shapiro, J. H.

    1975-01-01

    The average power received at a spacecraft from a reciprocity-tracking transmitter is shown to be the free-space diffraction-limited result times a gain-reduction factor that is due to the point-ahead requirement. For a constant-power transmitter, the gain-reduction factor is approximately equal to the appropriate spherical-wave mutual-coherence function. For a constant-average-power transmitter, an exact expression is obtained for the gain-reduction factor.

  14. Double Bright Band Observations with High-Resolution Vertically Pointing Radar, Lidar, and Profilers

    NASA Technical Reports Server (NTRS)

    Emory, Amber E.; Demoz, Belay; Vermeesch, Kevin; Hicks, Michael

    2014-01-01

    On 11 May 2010, an elevated temperature inversion associated with an approaching warm front produced two melting layers simultaneously, which resulted in two distinct bright bands as viewed from the ER-2 Doppler radar system, a vertically pointing, coherent X band radar located in Greenbelt, MD. Due to the high temporal resolution of this radar system, an increase in altitude of the melting layer of approximately 1.2 km in the time span of 4 min was captured. The double bright band feature remained evident for approximately 17 min, until the lower atmosphere warmed enough to dissipate the lower melting layer. This case shows the relatively rapid evolution of freezing levels in response to an advancing warm front over a 2 h time period and the descent of an elevated warm air mass with time. Although observations of double bright bands are somewhat rare, the ability to identify this phenomenon is important for rainfall estimation from spaceborne sensors because algorithms employing the restriction of a radar bright band to a constant height, especially when sampling across frontal systems, will limit the ability to accurately estimate rainfall.

  15. Double bright band observations with high-resolution vertically pointing radar, lidar, and profilers

    NASA Astrophysics Data System (ADS)

    Emory, Amber E.; Demoz, Belay; Vermeesch, Kevin; Hicks, Micheal

    2014-07-01

    On 11 May 2010, an elevated temperature inversion associated with an approaching warm front produced two melting layers simultaneously, which resulted in two distinct bright bands as viewed from the ER-2 Doppler radar system, a vertically pointing, coherent X band radar located in Greenbelt, MD. Due to the high temporal resolution of this radar system, an increase in altitude of the melting layer of approximately 1.2 km in the time span of 4 min was captured. The double bright band feature remained evident for approximately 17 min, until the lower atmosphere warmed enough to dissipate the lower melting layer. This case shows the relatively rapid evolution of freezing levels in response to an advancing warm front over a 2 h time period and the descent of an elevated warm air mass with time. Although observations of double bright bands are somewhat rare, the ability to identify this phenomenon is important for rainfall estimation from spaceborne sensors because algorithms employing the restriction of a radar bright band to a constant height, especially when sampling across frontal systems, will limit the ability to accurately estimate rainfall.

  16. ACTS Multibeam Antenna On-Orbit Performance

    NASA Technical Reports Server (NTRS)

    Acosta, R.; Wright, D.; Mitchell, Kenneth

    1996-01-01

    The Advanced Communications Technology Satellite (ACTS), launched in September 1993, introduced several new technologies, including a multibeam antenna (MBA) operating at Ka-band. The MBA, with fixed and rapidly reconfigurable spot beams, serves users equipped with small aperture terminals within the coverage area. The antenna produces spot beams with approximately 0.3 degrees beamwidth and gains of approximately 50 dBi. A number of MBA performance evaluations have been performed since the ACTS launch. These evaluations were designed to assess MBA performance (e.g., beam pointing stability, beam shape, gain, etc.) in the space environment. The on-orbit measurements found systematic environmental perturbations of the MBA beam pointing. These perturbations were found to be imposed by the satellite attitude control system, antenna and spacecraft mechanical alignments, on-orbit thermal effects, etc. As a result, the footprint coverage of the MBA may not exactly cover the intended service area at all times. This report describes the space-environment effects on ACTS MBA performance as a function of time of day and time of year, along with approaches for compensating for these effects.

  17. Effect of area ratio on the performance of a 5.5:1 pressure ratio centrifugal impeller

    NASA Technical Reports Server (NTRS)

    Schumann, L. F.; Clark, D. A.; Wood, J. R.

    1986-01-01

    A centrifugal impeller which was initially designed for a pressure ratio of approximately 5.5 and a mass flow rate of 0.959 kg/sec was tested with a vaneless diffuser for a range of design point impeller area ratios from 2.322 to 2.945. The impeller area ratio was changed by successively cutting back the impeller exit axial width from an initial value of 7.57 mm to a final value of 5.97 mm. In all, four separate area ratios were tested. For each area ratio a series of impeller exit axial clearances was also tested. Test results are based on impeller exit surveys of total pressure, total temperature, and flow angle at a radius 1.115 times the impeller exit radius. Results of the tests at design speed, peak efficiency, and an exit tip clearance of 8 percent of exit blade height show that the impeller equivalent pressure recovery coefficient peaked at a design point area ratio of approximately 2.748 while the impeller aerodynamic efficiency peaked at a lower value of area ratio of approximately 2.55. The variation of impeller efficiency with clearance showed expected trends with a loss of approximately 0.4 points in impeller efficiency for each percent increase in exit axial tip clearance for all impellers tested.

  18. Longitudinal changes in intellectual development in children with Fragile X syndrome.

    PubMed

    Hall, Scott S; Burns, David D; Lightbody, Amy A; Reiss, Allan L

    2008-08-01

    Structural equation modeling (SEM) was used to examine the development of intellectual functioning in 145 school-age pairs of siblings. Each pair included one child with Fragile X syndrome (FXS) and one unaffected sibling. All pairs of children were evaluated on the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) at time 1 and 80 pairs of children received a second evaluation at time 2 approximately 4 years later. Compared to their unaffected siblings, children with FXS obtained significantly lower percentage correct scores on all subtests of the WISC at both time points. During the time between the first and second assessments, the annual rate of intellectual development was approximately 2.2 times faster in the unaffected children compared to the children with FXS. Levels of the fragile X mental retardation protein (FMRP) were highly associated with intellectual ability scores of the children with FXS at both time points (r=0.55 and 0.64 respectively). However, when gender, age, and the time between assessments were included as covariates in the structural equation model, FMRP accounted for only 5% of the variance in intellectual ability scores at time 1 and 13% of the variance at time 2. The results of this study suggest that slower learning contributes to the low and declining standardized IQ scores observed in children with FXS.

  19. On the Limitations of Taylor’s Hypothesis in Parker Solar Probe’s Measurements near the Alfvén Critical Point

    NASA Astrophysics Data System (ADS)

    Bourouaine, Sofiane; Perez, Jean C.

    2018-05-01

    In this Letter, we present an analysis of two-point, two-time correlation functions from high-resolution numerical simulations of reflection-driven Alfvén turbulence near the Alfvén critical point $r_c$. The simulations model the turbulence in a prescribed background solar wind model chosen to match observational constraints. This analysis allows us to investigate the temporal decorrelation of solar wind turbulence and the validity of Taylor's approximation near the heliocentric distance $r_c$, which Parker Solar Probe (PSP) is expected to explore in the coming years. The simulations show that the temporal decay of the Fourier-transformed turbulence decorrelation function is better described by a Gaussian model rather than a pure exponential time decay, and that the decorrelation frequency is almost linear with perpendicular wave number $k_\perp$ (perpendicular with respect to the background magnetic field $\boldsymbol{B}_0$). Based on the simulations, we conclude that Taylor's approximation cannot be used in this instance to provide a connection between the frequency $\omega$ of the time signal (measured in the probe frame) and the wavevector $k_\perp$ of the fluctuations, because the frequency $k_\perp V_{\rm sc}$ ($V_{\rm sc}$ is the spacecraft speed) near $r_c$ is comparable to the estimated decorrelation frequency. However, the use of Taylor's approximation still leads to the correct spectral indices of the power spectra measured at the spacecraft frame. In this Letter, based on a Gaussian model, we suggest a modified relationship between $\omega$ and $k_\perp$, which might be useful in the interpretation of future PSP measurements.

  20. Dynamics of a linear system coupled to a chain of light nonlinear oscillators analyzed through a continuous approximation

    NASA Astrophysics Data System (ADS)

    Charlemagne, S.; Ture Savadkoohi, A.; Lamarque, C.-H.

    2018-07-01

    The continuous approximation is used in this work to describe the dynamics of a nonlinear chain of light oscillators coupled to a linear main system. A general methodology is applied to an example where the chain has local nonlinear restoring forces. The slow invariant manifold is detected at the fast time scale. At the slow time scale, equilibrium and singular points are sought around this manifold in order to predict periodic regimes and strongly modulated responses of the system. Analytical predictions are in good accordance with numerical results and represent a potent tool for designing nonlinear chains for passive control purposes.

  1. The deep fovea, sideways vision and spiral flight paths in raptors.

    PubMed

    Tucker, V A

    2000-12-01

    Raptors - falcons, hawks and eagles in this study - have two regions of the retina in each eye that are specialized for acute vision: the deep fovea and the shallow fovea. The line of sight of the deep fovea points forwards and approximately 45 degrees to the right or left of the head axis, while that of the shallow fovea also points forwards but approximately 15 degrees to the right or left of the head axis. The anatomy of the foveae suggests that the deep fovea has the higher acuity. Several species of raptors in this study repeatedly moved their heads among three positions while looking at an object: straight, with the head axis pointing towards the object; or sideways to the right or left, with the head axis pointing approximately 40 degrees to the side of the object. Since raptors do not rotate their eyes noticeably in the sockets, these movements presumably cause the image of the object to fall on the shallow and deep foveae. The movements occurred approximately every 2 s on average in hawks and falcons, and approximately every 5 s in bald eagles. The proportion of time that the raptors spent looking straight or sideways at an object depended on how far away the object was. At distances closer than 8 m, they spent more time looking at the object straight, but as the distance increased to 21 m, they spent more time looking at it sideways. At distances of 40 m or more, raptors looked sideways at the object 80 % or more of the time. This dependence of head position on distance suggests that raptors use their more acute sideways vision to look at distant objects and sacrifice acuity for stereoscopic binocular vision to look at close objects. Having their most acute vision towards the side causes a conflict in raptors such as falcons, which dive at prey from great distances at high speeds: at a speed of 70 m s⁻¹, turning their head sideways to view the prey straight ahead with high visual acuity may increase aerodynamic drag by a factor of 2 or more and slow the raptor down. Raptors could resolve this conflict by diving along a logarithmic spiral path with their head straight and one eye looking sideways at the prey, rather than following the straight path to the prey with their head turned sideways. Although the spiral path is longer than the straight path, a mathematical model for an 'ideal falcon' shows that the falcon could reach the prey more quickly along the spiral path because the speed advantage of a straight head more than compensates for the longer path.

  2. The Relationship between OCT-measured Central Retinal Thickness and Visual Acuity in Diabetic Macular Edema

    PubMed Central

    2008-01-01

    Objective: To compare optical coherence tomography (OCT)-measured retinal thickness and visual acuity in eyes with diabetic macular edema (DME) both before and after macular laser photocoagulation. Design: Cross-sectional and longitudinal study. Participants: 210 subjects (251 eyes) with DME enrolled in a randomized clinical trial of laser techniques. Methods: Retinal thickness was measured with OCT and visual acuity was measured with the electronic-ETDRS procedure. Main Outcome Measures: OCT-measured center point thickness and visual acuity. Results: The correlation coefficients for visual acuity versus OCT center point thickness were 0.52 at baseline and 0.49, 0.36, and 0.38 at 3.5, 8, and 12 months post-laser photocoagulation. The slope of the best fit line to the baseline data was approximately 4.4 letters (95% C.I.: 3.5, 5.3) better visual acuity for every 100 microns decrease in center point thickness at baseline, with no important difference at follow-up visits. Approximately one-third of the variation in visual acuity could be predicted by a linear regression model that incorporated OCT center point thickness, age, hemoglobin A1C, and severity of fluorescein leakage in the center and inner subfields. The correlation between change in visual acuity and change in OCT center point thickening 3.5 months after laser treatment was 0.44, with no important difference at the other follow-up times. A subset of eyes showed paradoxical improvements in visual acuity with increased center point thickening (7–17% at the three time points) or paradoxical worsening of visual acuity with a decrease in center point thickening (18%–26% at the three time points). Conclusions: There is modest correlation between OCT-measured center point thickness and visual acuity, and modest correlation of changes in retinal thickening and visual acuity following focal laser treatment for DME. However, a wide range of visual acuity may be observed for a given degree of retinal edema, and paradoxical increases in center point thickening with increases in visual acuity as well as paradoxical decreases in center point thickening with decreases in visual acuity were not uncommon. Thus, although OCT measurements of retinal thickness represent an important tool in clinical evaluation, they cannot reliably substitute as a surrogate for visual acuity at a given point in time. This study does not address whether short-term changes on OCT are predictive of long-term effects on visual acuity. PMID:17123615

  3. The comparability of the universalism value over time and across countries in the European Social Survey: exact vs. approximate measurement invariance

    PubMed Central

    Zercher, Florian; Schmidt, Peter; Cieciuch, Jan; Davidov, Eldad

    2015-01-01

    Over the last decades, large international datasets such as the European Social Survey (ESS), the European Value Study (EVS) and the World Value Survey (WVS) have been collected to compare value means over multiple time points and across many countries. Yet analyzing comparative survey data requires the fulfillment of specific assumptions, i.e., that these values are comparable over time and across countries. Given the large number of groups that can be compared in repeated cross-national datasets, establishing measurement invariance has, however, been considered unrealistic. Indeed, studies which did assess it often failed to establish higher levels of invariance such as scalar invariance. In this paper we first introduce the newly developed approximate approach based on Bayesian structural equation modeling (BSEM) to assess cross-group invariance over countries and time points and contrast the findings with the results from the traditional exact measurement invariance test. BSEM examines whether measurement parameters are approximately (rather than exactly) invariant. We apply BSEM to a subset of items measuring the universalism value from the Portrait Values Questionnaire (PVQ) in the ESS. The invariance of this value is tested simultaneously across 15 ESS countries over six ESS rounds, with 173,071 respondents and 90 groups in total. Whereas the use of the traditional approach legitimates the comparison of latent means for only 37 groups, the Bayesian procedure allows the latent mean comparison of 73 groups. Thus, our empirical application demonstrates for the first time the BSEM test procedure on a particularly large set of groups. PMID:26089811

  4. A study of the application of singular perturbation theory. [development of a real time algorithm for optimal three dimensional aircraft maneuvers

    NASA Technical Reports Server (NTRS)

    Mehra, R. K.; Washburn, R. B.; Sajan, S.; Carroll, J. V.

    1979-01-01

    A hierarchical real-time algorithm for optimal three-dimensional control of aircraft is described. Systematic methods are developed for real-time computation of nonlinear feedback controls by means of singular perturbation theory. The results are applied to a six-state, three-control-variable, point-mass model of an F-4 aircraft. Nonlinear feedback laws are presented for computing the optimal control of throttle, bank angle, and angle of attack. Real-time capability is assessed on a TI 9900 microcomputer. The breakdown of the singular perturbation approximation near the terminal point is examined. Continuation methods are examined to obtain exact optimal trajectories starting from the singular perturbation solutions.

  5. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patniak, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers: the CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
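
    To make the surrogate idea above concrete, the sketch below fits a least-squares regression approximator to samples of an expensive analysis and then queries it cheaply. It is a minimal NumPy illustration; expensive_analysis is a hypothetical stand-in for a full analyzer such as a flight-optimization code, not the actual tool.

      import numpy as np

      def expensive_analysis(x):
          # Hypothetical costly merit function (stand-in for a full analyzer).
          return np.sin(x[0]) + x[1]**2 + 0.5 * x[0] * x[1]

      # Step 1: generate training pairs by exercising the analyzer.
      rng = np.random.default_rng(0)
      X = rng.uniform(-2.0, 2.0, size=(200, 2))
      y = np.array([expensive_analysis(x) for x in X])

      # Step 2: fit a quadratic regression surrogate by least squares.
      def features(X):
          x1, x2 = X[:, 0], X[:, 1]
          return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

      coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

      # Step 3: query the surrogate (microseconds instead of a full analysis).
      def surrogate(x):
          return features(np.atleast_2d(x)) @ coef

      print(surrogate([0.3, -0.5]), expensive_analysis(np.array([0.3, -0.5])))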

  6. Diffraction and geometrical optical transfer functions: calculation time comparison

    NASA Astrophysics Data System (ADS)

    Díaz, José Antonio; Mahajan, Virendra N.

    2017-08-01

    In a recent paper, we compared the diffraction and geometrical optical transfer functions (DOTF and GOTF) of an optical imaging system, and showed that the GOTF approximates the DOTF within 10% when a primary aberration is about two waves or larger [Appl. Opt. 55, 3241-3250 (2016)]. In this paper, we determine and compare the times to calculate the DOTF by autocorrelation or digital autocorrelation of the pupil function, and by a Fourier transform (FT) of the point-spread function (PSF); and the GOTF by a FT of the geometrical PSF and of its approximation, the spot diagram. Our starting point for calculating the DOTF is the wave aberrations of the system in its pupil plane, and for the GOTF it is the ray aberrations in the image plane. The numerical results for primary aberrations and a typical imaging system show that the direct integrations are slow, and that the calculation of the DOTF by a FT of the PSF is generally faster than the GOTF calculation by a FT of the spot diagram.
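
    As background to the timing comparison, the fastest DOTF route described above can be sketched in a few lines: form the pupil function with an aberration, take the PSF as the squared magnitude of its Fourier transform, and obtain the OTF as the normalized Fourier transform of the PSF. The sketch below assumes one wave of defocus purely for illustration and is not the authors' code.

      import numpy as np

      N = 256
      x = np.linspace(-1.0, 1.0, N)
      X, Y = np.meshgrid(x, x)
      rho2 = X**2 + Y**2
      aperture = rho2 <= 1.0

      W020 = 1.0   # waves of defocus, an assumed illustrative value
      pupil = np.where(aperture, np.exp(2j * np.pi * W020 * rho2), 0.0)

      # PSF is the squared magnitude of the FT of the pupil; the OTF is
      # then the FT of the PSF, normalized to 1 at zero frequency
      # (equivalently, the autocorrelation of the pupil function).
      psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
      otf = np.fft.fft2(np.fft.ifftshift(psf))
      otf /= otf[0, 0]
      mtf = np.abs(np.fft.fftshift(otf))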

  7. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to those of a theoretical damping kernel but not of a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  8. Guidance of a Solar Sail Spacecraft to the Sun-Earth L(2) Point.

    NASA Astrophysics Data System (ADS)

    Hur, Sun Hae

    The guidance of a solar sail spacecraft along a minimum-time path from an Earth orbit to a region near the Sun-Earth L_2 libration point is investigated. Possible missions to this point include a spacecraft "listening" for possible extra-terrestrial electromagnetic signals and a science payload to study the geomagnetic tail. A key advantage of the solar sail is that it requires no fuel. The control variables are the sail angles relative to the Sun-Earth line. The thrust is very small, on the order of 1 mm/s^2, and its magnitude and direction are highly coupled. Despite this limited controllability, the "free" thrust can be used for a wide variety of terminal conditions including halo orbits. If the Moon's mass is lumped with the Earth, there are quasi-equilibrium points near L_2. However, they are unstable so that some form of station keeping is required, and the sail can provide this without any fuel usage. In the two-dimensional case, regulating about a nominal orbit is shown to require less control and result in smaller amplitude error response than regulating about a quasi-equilibrium point. In the three-dimensional halo orbit case, station keeping using periodically varying gains is demonstrated. To compute the minimum-time path, the trajectory is divided into two segments: the spiral segment and the transition segment. The spiral segment is computed using a control law that maximizes the rate of energy increase at each time. The transition segment is computed as the solution of the time-optimal control problem from the endpoint of the spiral to the terminal point. It is shown that the path resulting from this approximate strategy is very close to the exact optimal path. For the guidance problem, the approximate strategy in the spiral segment already gives a nonlinear full-state feedback law. However, for large perturbations, follower guidance using an auxiliary propulsion is used for part of the spiral. In the transition segment, neighboring extremal feedback guidance using the solar sail, with feedforward control only near the terminal point, is used to correct perturbations in the initial conditions.

  9. Efficient Delaunay Tessellation through K-D Tree Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree compared with a regular grid decomposition. Moreover, for the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
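
    The balancing step can be illustrated with a minimal recursive median split; this is a sketch of the k-d tree decomposition idea only, not the paper's parallel algorithm or its alternative split-point heuristic.

      import numpy as np

      def kdtree_decompose(points, depth, axis=0):
          # Recursively split points by the median along alternating axes,
          # returning a list of 2**depth blocks of (nearly) equal size.
          if depth == 0:
              return [points]
          order = np.argsort(points[:, axis])
          half = len(points) // 2
          lo, hi = points[order[:half]], points[order[half:]]
          nxt = (axis + 1) % points.shape[1]
          return (kdtree_decompose(lo, depth - 1, nxt)
                  + kdtree_decompose(hi, depth - 1, nxt))

      pts = np.random.default_rng(1).normal(size=(10000, 3))
      blocks = kdtree_decompose(pts, depth=3)   # 8 blocks of 1250 points
      print([len(b) for b in blocks])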

  10. Time of travel of solutes in selected reaches of the Sandusky River Basin, Ohio, 1972 and 1973

    USGS Publications Warehouse

    Westfall, Arthur O.

    1976-01-01

    A time of travel study of a 106-mile (171-kilometer) reach of the Sandusky River and a 39-mile (63-kilometer) reach of Tymochtee Creek was made to determine the time required for water released from Killdeer Reservoir on Tymochtee Creek to reach selected downstream points. In general, two dye sample runs were made through each subreach to define the time-discharge relation for approximating travel times at selected discharges within the measured range, and time-discharge graphs are presented for 38 subreaches. Graphs of dye dispersion and variation in relation to time are given for three selected sampling sites. For estimating travel time and velocities between points in the study reach, tables for selected flow durations are given. Duration curves of daily discharge for four index stations are presented to indicate the low-flow characteristics and for use in shaping downward extensions of the time-discharge curves.

  11. Two Point Exponential Approximation Method for structural optimization of problems with frequency constraints

    NASA Technical Reports Server (NTRS)

    Fadel, G. M.

    1991-01-01

    The two point exponential approximation method was introduced by Fadel et al. (1990) and tested on structural optimization problems with stress and displacement constraints. The results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.
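
    For readers unfamiliar with the method, the sketch below reconstructs the two point exponential approximation in the form commonly cited from Fadel, Riley and Barthelemy (1990): an exponent per variable is chosen so that the expansion about the current design also matches the gradient seen at the previous design, with p = 1 recovering the direct (linear) term and p = -1 the reciprocal term. The sign handling and exponent bounds below are simplified assumptions.

      import numpy as np

      def tpea(x, x1, x2, g2, grad1, grad2):
          # Exponent p_i from matching gradients at the two designs x1, x2,
          # clipped to [-1, 1]; the expansion is then built about x2.
          p = 1.0 + np.log(np.abs(grad1 / grad2)) / np.log(x1 / x2)
          p = np.clip(p, -1.0, 1.0)
          p = np.where(np.abs(p) < 1e-6, 1e-6, p)   # avoid division by zero
          term = (x**p - x2**p) * x2**(1.0 - p) / p
          return g2 + np.sum(grad2 * term)

      # Toy check against a hypothetical constraint g(x) = 1/x_1 + x_2^2.
      g = lambda x: 1.0 / x[0] + x[1]**2
      dg = lambda x: np.array([-1.0 / x[0]**2, 2.0 * x[1]])
      xa, xb = np.array([1.0, 1.0]), np.array([1.2, 1.1])
      x_new = np.array([1.3, 1.2])
      print(tpea(x_new, xa, xb, g(xb), dg(xa), dg(xb)), g(x_new))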

  12. Longitudinal Relations between Prosocial Television Content and Adolescents' Prosocial and Aggressive Behavior: The Mediating Role of Empathic Concern and Self-Regulation

    ERIC Educational Resources Information Center

    Padilla-Walker, Laura M.; Coyne, Sarah M.; Collier, Kevin M.; Nielson, Matthew G.

    2015-01-01

    The current study examined longitudinal cross-lagged associations between prosocial TV (content and time) and prosocial and aggressive behavior during adolescence, and explored the mediating role of empathic concern and self-regulation. Participants were 441 adolescents who reported on their 3 favorite TV shows at 2 time points, approximately 2…

  13. An interpretation model of GPR point data in tunnel geological prediction

    NASA Astrophysics Data System (ADS)

    He, Yu-yao; Li, Bao-qi; Guo, Yuan-shu; Wang, Teng-na; Zhu, Ya

    2017-02-01

    GPR (ground penetrating radar) point data plays an essential role in tunnel geological prediction. However, little research has addressed GPR point data, and existing results do not meet the actual requirements of the project. In this paper, a GPR point data interpretation model based on the WD (Wigner distribution) and a deep CNN (convolutional neural network) is proposed. First, the GPR point data is transformed by the WD to obtain a time-frequency joint distribution map; second, the joint distribution maps are classified by the deep CNN, and the approximate location of the geological target is determined by inspecting the time-frequency maps in parallel; finally, the GPR point data is interpreted according to the classification results and the position information from the map. The simulation results show that the classification accuracy on the test dataset (comprising 1200 GPR point records) is 91.83% after 200 iterations. Our model has the advantages of high accuracy and fast training speed, and can provide a scientific basis for the development of tunnel construction and excavation plans.
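
    The first stage of the model, the time-frequency map, can be sketched with a discrete Wigner-Ville distribution; the version below omits windowing, the analytic signal, and the CNN stage, so it illustrates the transform rather than the authors' pipeline.

      import numpy as np

      def wigner_ville(x):
          # Discrete pseudo Wigner-Ville distribution of a 1-D trace:
          # for each time index t, FFT over the lag tau of
          # x(t + tau) * conj(x(t - tau)).
          x = np.asarray(x, dtype=complex)
          n = len(x)
          wv = np.zeros((n, n))
          for t in range(n):
              taumax = min(t, n - 1 - t)
              tau = np.arange(-taumax, taumax + 1)
              kernel = np.zeros(n, dtype=complex)
              kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
              wv[t] = np.fft.fft(kernel).real
          return wv

      t = np.linspace(0.0, 1.0, 256)
      chirp = np.cos(2 * np.pi * (20 * t + 40 * t**2))  # toy GPR-like trace
      tfmap = wigner_ville(chirp)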

  14. Euthanasia Method for Mice in Rapid Time-Course Pulmonary Pharmacokinetic Studies

    PubMed Central

    Schoell, Adam R; Heyde, Bruce R; Weir, Dana E; Chiang, Po-Chang; Hu, Yiding; Tung, David K

    2009-01-01

    To develop a means of euthanasia to support rapid time-course pharmacokinetic studies in mice, we compared retroorbital and intravenous lateral tail vein injection of ketamine–xylazine with regard to preparation time, utility, tissue distribution, and time to onset of euthanasia. Tissue distribution and time to onset of euthanasia did not differ between administration methods. However, retroorbital injection could be performed more rapidly than intravenous injection and was considered to be a technically simple and superior alternative for mouse euthanasia. Retroorbital ketamine–xylazine, CO2 gas, and intraperitoneal pentobarbital then were compared as euthanasia agents in a rapid time-point pharmacokinetic study. Retroorbital ketamine–xylazine was the most efficient and consistent of the 3 methods, with an average time to death of approximately 5 s after injection. In addition, euthanasia by retroorbital ketamine–xylazine enabled accurate sample collection at closely spaced time points and satisfied established criteria for acceptable euthanasia technique. PMID:19807971

  16. On the numerical solution of the dynamically loaded hydrodynamic lubrication of the point contact problem

    NASA Technical Reports Server (NTRS)

    Lim, Sang G.; Brewe, David E.; Prahl, Joseph M.

    1990-01-01

    The transient analysis of hydrodynamic lubrication of a point-contact is presented. A body-fitted coordinate system is introduced to transform the physical domain to a rectangular computational domain, enabling the use of the Newton-Raphson method for determining pressures and locating the cavitation boundary, where the Reynolds boundary condition is specified. In order to obtain the transient solution, an explicit Euler method is used to effect a time march. The transient dynamic load is a sinusoidal function of time with frequency, fractional loading, and mean load as parameters. Results include the variation of the minimum film thickness and phase-lag with time as functions of excitation frequency. The results are compared with the analytic solution to the transient step bearing problem with the same dynamic loading function. The similarities of the results suggest an approximate model of the point contact minimum film thickness solution.

  17. Quantization improves stabilization of dynamical systems with delayed feedback

    NASA Astrophysics Data System (ADS)

    Stepan, Gabor; Milton, John G.; Insperger, Tamas

    2017-11-01

    We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
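
    A minimal simulation conveys the flavor of the result: an Euler discretization of a Hayes-type equation with floor-quantized delayed feedback settles into bounded oscillations whose size tracks the quantization step. The parameter values below are illustrative choices, not those of the paper.

      import numpy as np

      # x'(t) = a*x(t) + b*q(x(t - tau)), with q(x) = h * floor(x / h).
      a, b, tau, h = 0.2, -0.4, 1.0, 0.05
      dt = 0.01
      steps = int(200 / dt)
      d = int(tau / dt)
      x = np.zeros(steps + d)
      x[:d] = 0.3                       # constant initial history
      for k in range(d, steps + d - 1):
          q = h * np.floor(x[k - d] / h)
          x[k + 1] = x[k] + dt * (a * x[k] + b * q)
      # The tail oscillates with an amplitude on the order of the step h.
      print(x[-2000:].min(), x[-2000:].max())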

  18. Zero point energy leakage in condensed phase dynamics: An assessment of quantum simulation methods for liquid water

    NASA Astrophysics Data System (ADS)

    Habershon, Scott; Manolopoulos, David E.

    2009-12-01

    The approximate quantum mechanical ring polymer molecular dynamics (RPMD) and linearized semiclassical initial value representation (LSC-IVR) methods are compared and contrasted in a study of the dynamics of the flexible q-TIP4P/F water model at room temperature. For this water model, a RPMD simulation gives a diffusion coefficient that is only a few percent larger than the classical diffusion coefficient, whereas a LSC-IVR simulation gives a diffusion coefficient that is three times larger. We attribute this discrepancy to the unphysical leakage of initially quantized zero point energy (ZPE) from the intramolecular to the intermolecular modes of the liquid as the LSC-IVR simulation progresses. In spite of this problem, which is avoided by construction in RPMD, the LSC-IVR may still provide a useful approximation to certain short-time dynamical properties which are not so strongly affected by the ZPE leakage. We illustrate this with an application to the liquid water dipole absorption spectrum, for which the RPMD approximation breaks down at frequencies in the O-H stretching region owing to contamination from the internal modes of the ring polymer. The LSC-IVR does not suffer from this difficulty and it appears to provide quite a promising way to calculate condensed phase vibrational spectra.

  20. Fourth-order convergence of a compact scheme for the one-dimensional biharmonic equation

    NASA Astrophysics Data System (ADS)

    Fishelov, D.; Ben-Artzi, M.; Croisille, J.-P.

    2012-09-01

    The convergence of a fourth-order compact scheme for the one-dimensional biharmonic problem is established in the case of general Dirichlet boundary conditions. The compact scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. Using the Padé approximation allows us to approximate the first-order derivative to fourth-order accuracy. However, although the truncation error of the discrete biharmonic scheme is of fourth order at interior points, it drops to first order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. This is done by a careful inspection of the matrix elements of the discrete biharmonic operator. A number of numerical examples corroborate this effect. We also present a study of the eigenvalue problem u_xxxx = νu. We compute and display the eigenvalues and the eigenfunctions related to the continuous and the discrete problems. By the positivity of the eigenvalues, one can deduce the stability of the related time-dependent problem u_t = -u_xxxx. In addition, we study the eigenvalue problem u_xxxx = νu_xx, which is related to the stability of the linear time-dependent equation u_xxt = νu_xxxx. Its continuous and discrete eigenvalues and eigenfunctions (or eigenvectors) are computed and displayed graphically.

  1. The Dipole Segment Model for Axisymmetrical Elongated Asteroids

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong

    2018-02-01

    Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
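
    The model's potential has a simple closed form: two point-mass terms plus the classical potential of a homogeneous straight segment. The sketch below encodes it directly; the masses and length are placeholders, since in the paper they come from the acceleration-error minimization.

      import numpy as np

      G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

      def dipole_segment_potential(r, m1, m2, ms, L):
          # Point masses m1, m2 at the ends of a homogeneous segment of
          # mass ms and length L on the x-axis, centered at the origin.
          # The segment term is the classical closed form
          # (G*ms/L) * ln((r1 + r2 + L) / (r1 + r2 - L)).
          e1 = np.array([-L / 2.0, 0.0, 0.0])
          e2 = np.array([L / 2.0, 0.0, 0.0])
          r1 = np.linalg.norm(r - e1)
          r2 = np.linalg.norm(r - e2)
          u = G * (m1 / r1 + m2 / r2)
          u += (G * ms / L) * np.log((r1 + r2 + L) / (r1 + r2 - L))
          return -u   # attractive potential, negative by convention

      # Placeholder parameters; the paper fits them per asteroid.
      print(dipole_segment_potential(np.array([2.0e3, 1.0e3, 0.0]),
                                     1.0e12, 1.0e12, 5.0e11, 1.0e3))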

  2. Relativistic wide-angle galaxy bispectrum on the light cone

    NASA Astrophysics Data System (ADS)

    Bertacca, Daniele; Raccanelli, Alvise; Bartolo, Nicola; Liguori, Michele; Matarrese, Sabino; Verde, Licia

    2018-01-01

    Given the important role that the galaxy bispectrum has recently acquired in cosmology and the scale and precision of forthcoming galaxy clustering observations, it is timely to derive the full expression of the large-scale bispectrum, going beyond approximate treatments which neglect integrated terms or higher-order bias terms or use the Limber approximation. On cosmological scales, relativistic effects that arise from observing the past light cone alter the observed galaxy number counts, therefore leaving their imprints on N-point correlators at all orders. In this paper we compute for the first time the bispectrum including all general relativistic, local and integrated, effects at second order, the tracers' bias at second order, geometric effects as well as the primordial non-Gaussianity contribution. This is timely considering that future surveys will probe scales comparable to the horizon, where approximations widely used currently may not hold; neglecting these effects may introduce biases in the estimation of cosmological parameters as well as of primordial non-Gaussianity.

  3. Social anxiety and alcohol-related sexual victimization: A longitudinal pilot study of college women.

    PubMed

    Schry, Amie R; Maddox, Brenna B; White, Susan W

    2016-10-01

    We sought to examine social anxiety as a risk factor for alcohol-related sexual victimization among college women. Women (Time 1: n = 574; Time 2: n = 88) who reported consuming alcohol at least once during the assessment timeframe participated. Social anxiety, alcohol use, alcohol-related consequences, and sexual victimization were assessed twice, approximately two months apart. Logistic regressions were used to examine social anxiety as a risk factor for alcohol-related sexual victimization at both time points. Longitudinally, women high in social anxiety were approximately three times more likely to endorse unwanted alcohol-related sexual experiences compared to women with low to moderate social anxiety. This study suggests social anxiety, a modifiable construct, increases risk for alcohol-related sexual victimization among college women. Implications for clinicians and risk-reduction program developers are discussed. Published by Elsevier Ltd.

  4. An Approximate Markov Model for the Wright-Fisher Diffusion and Its Application to Time Series Data.

    PubMed

    Ferrer-Admetlla, Anna; Leuenberger, Christoph; Jensen, Jeffrey D; Wegmann, Daniel

    2016-06-01

    The joint and accurate inference of selection and demography from genetic data is considered a particularly challenging question in population genetics, since both processes may lead to very similar patterns of genetic diversity. However, additional information for disentangling these effects may be obtained by observing changes in allele frequencies over multiple time points. Such data are common in experimental evolution studies, as well as in the comparison of ancient and contemporary samples. Leveraging this information, however, has been computationally challenging, particularly when considering multilocus data sets. To overcome these issues, we introduce a novel, discrete approximation for diffusion processes, termed mean transition time approximation, which preserves the long-term behavior of the underlying continuous diffusion process. We then derive this approximation for the particular case of inferring selection and demography from time series data under the classic Wright-Fisher model and demonstrate that our approximation is well suited to describe allele trajectories through time, even when only a few states are used. We then develop a Bayesian inference approach to jointly infer the population size and locus-specific selection coefficients with high accuracy and further extend this model to also infer the rates of sequencing errors and mutations. We finally apply our approach to recent experimental data on the evolution of drug resistance in influenza virus, identifying likely targets of selection and finding evidence for much larger viral population sizes than previously reported. Copyright © 2016 by the Genetics Society of America.
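
    The exact process being approximated is easy to state: per generation, a deterministic selection update followed by binomial resampling. The sketch below simulates such allele-frequency trajectories at sampled time points; it illustrates the underlying Wright-Fisher model, not the mean transition time approximation itself.

      import numpy as np

      def wright_fisher_trajectory(p0, N, s, generations, rng):
          # Classic Wright-Fisher model with selection coefficient s:
          # deterministic selection step, then binomial resampling of
          # 2N gene copies (genetic drift).
          p = p0
          traj = [p]
          for _ in range(generations):
              p_sel = p * (1 + s) / (1 + s * p)
              p = rng.binomial(2 * N, p_sel) / (2 * N)
              traj.append(p)
          return np.array(traj)

      rng = np.random.default_rng(42)
      traj = wright_fisher_trajectory(p0=0.2, N=500, s=0.05,
                                      generations=100, rng=rng)
      print(traj[::20])   # allele frequency at a few sampled time points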

  5. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.

  6. Semiclassical wave packet treatment of scattering resonances: application to the delta zero-point energy effect in recombination reactions.

    PubMed

    Vetoshkin, Evgeny; Babikov, Dmitri

    2007-09-28

    For the first time Feshbach-type resonances important in recombination reactions are characterized using the semiclassical wave packet method. This approximation allows us to determine the energies, lifetimes, and wave functions of the resonances and also to observe a very interesting correlation between them. Most important is that this approach permits description of a quantum delta-zero-point energy effect in recombination reactions and reproduces the anomalous rates of ozone formation.

  7. Semi-Analytic Reconstruction of Flux in Finite Volume Formulations

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2006-01-01

    Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm, but reconstruction time is approximately a factor of four larger than for Roe's method. Though both are second-order accurate schemes, Roe's method approaches a grid-converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws, including effects of thermochemical nonequilibrium in the Navier-Stokes equations, is developed.

  8. Distribution and mixing of a liquid bolus in pleural space.

    PubMed

    Bodega, Francesca; Tresoldi, Claudio; Porta, Cristina; Zocchi, Luciano; Agostoni, Emilio

    2006-02-28

    Distribution and mixing time of boluses with labeled albumin in pleural space of anesthetized, supine rabbits were determined by sampling pleural liquid at different times in various intercostal spaces (ics), and in cranial and caudal mediastinum. During sampling, lung and chest wall were kept apposed by lung inflation. This was not necessary in costo-phrenic sinus. Here, 10 min after injection, lung inflation increased concentration of labeled albumin by 50%. Lung inflation probably displaces some pleural liquid cranio-caudally, increasing labeled albumin concentration caudally to injection point (6th ics), and decreasing it cranially. Boluses of 0.1-1 ml did not preferentially reach mediastinal regions, as maintained by others. Time for an approximate mixing was approximately 1 h for 0.1 ml, and approximately 30 min for 1 ml. This relatively long mixing time does not substantially affect determination of contribution of lymphatic drainage through stomata to overall removal of labeled albumin from 0.3 ml hydrothoraces lasting 3 h [Bodega, F., Agostoni, E., 2004. Contribution of lymphatic drainage through stomata to albumin removal from pleural space. Respir. Physiol. Neurobiol. 142, 251-263].

  9. Standard deviation of vertical two-point longitudinal velocity differences in the atmospheric boundary layer.

    NASA Technical Reports Server (NTRS)

    Fichtl, G. H.

    1971-01-01

    Statistical estimates of wind shear in the planetary boundary layer are important in the design of V/STOL aircraft, and for the design of the Space Shuttle. The data analyzed in this study consist of eleven sets of longitudinal turbulent velocity fluctuation time histories digitized at 0.2 sec intervals with approximately 18,000 data points per time history. The longitudinal velocity fluctuations were calculated with horizontal wind and direction data collected at the 18-, 30-, 60-, 90-, 120-, and 150-m levels. The data obtained confirm the result that Eulerian time spectra transformed to wave-number spectra with Taylor's frozen eddy hypothesis possess inertial-like behavior at wave-numbers well out of the inertial subrange.

  10. Gradients estimation from random points with volumetric tensor in turbulence

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by a geometric distribution of the points. The coarse grained gradient can be considered as a low pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in incompressible planar jet and mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of the points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse grained gradient changes with the cutoff length, the volumetric tensor approximation yields the coarse grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
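
    The core of the estimator is a small linear system built from the point offsets; the sketch below shows that least-squares step, with the 3 x 3 volumetric tensor on the left-hand side, and omits the paper's cutoff estimation and solenoidal correction.

      import numpy as np

      def gradient_from_points(x0, f0, pts, fvals):
          # Least-squares gradient at x0 from scattered samples: solve
          # (sum dx dx^T) grad = sum df dx, where the left-hand matrix
          # is the volumetric tensor formed by the point offsets.
          dx = pts - x0                 # (n, 3) offsets
          df = fvals - f0               # (n,) value differences
          V = dx.T @ dx                 # volumetric tensor
          c = dx.T @ df
          return np.linalg.solve(V, c)

      rng = np.random.default_rng(3)
      x0 = np.zeros(3)
      pts = rng.normal(scale=0.01, size=(8, 3))
      f = lambda x: 2*x[..., 0] - x[..., 1] + 0.5*x[..., 2]
      grad = gradient_from_points(x0, f(x0), pts, f(pts))
      print(grad)   # close to the true gradient (2, -1, 0.5)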

  11. Variable pulmonary responses from exposure to concentrated ambient air particles in a rat model of bronchitis.

    PubMed

    Kodavanti, U P; Mebane, R; Ledbetter, A; Krantz, T; McGee, J; Jackson, M C; Walsh, L; Hilliard, H; Chen, B Y; Richards, J; Costa, D L

    2000-04-01

    Chronic bronchitis may be considered a risk factor in particulate matter (PM)-induced morbidity. We hypothesized that a rat model of human bronchitis would be more susceptible to the pulmonary effects of concentrated ambient particles (CAPs) from Research Triangle Park, NC. Bronchitis was induced in male Sprague-Dawley rats (90-100 days of age) by exposure to 200 ppm sulfur dioxide (SO2), 6 h/day x 5 days/week x 6 weeks. One day following the last SO2 exposure, both healthy (air-exposed) and bronchitic (SO2-exposed) rats were exposed to filtered air (three healthy; four bronchitic) or CAPs (five healthy; four bronchitic) by whole-body inhalation, 6 h/day x 2 or 3 days. Pulmonary injury was determined either immediately (0h) or 18 h following final CAPs exposure. The study protocol involving 0 h time point was repeated four times (study #A, November, 1997; #B, February, 1998; #C and #D, May, 1998), whereas the study protocol involving 18 h time point was done only once (#F). In an additional study (#E), rats were exposed to residual oil fly ash (ROFA), approximately 1 mg/ m(3)x6 h/day x 3 days to mimic the CAPs protocol (February, 1998). The rats allowed 18 h recovery following CAPs exposure (#F) did not depict any CAPs-related differences in bronchoalveolar lavage fluid (BALF) injury markers. Of the four CAPs studies conducted (0 h time point), the first (#A) study (approximately 650 microg/m3 CAPs) revealed significant changes in the lungs of CAPs-exposed bronchitic rats compared to the clean air controls. These rats had increased BALF protein, albumin, N-acetyl glutaminidase (NAG) activity and neutrophils. The second (#B) study (approximately 475 microg/m3 CAPs) did not reveal any significant effects of CAPs on BALF parameters. Study protocols #C (approximately 869 microg/m3 CAPs) and #D (approximately 907 microg/m3 CAPs) revealed only moderate increases in the above mentioned BALF parameters in bronchitic rats exposed to CAPs. Pulmonary histologic evaluation of studies #A, #C, #D, and #F revealed marginally higher congestion and perivascular cellularity in CAPs-exposed bronchitic rats. Healthy and bronchitic rats exposed to ROFA (approximately 1 mg/m3) did not show significant pulmonary injury (#E). Analysis of leachable elemental components of CAPs revealed the presence of sulfur, zinc, manganese, and iron. There was an apparent lack of association between pulmonary injury and CAPs concentration, or its leachable sulfate or elemental content. In summary, real-time atmospheric PM may result in pulmonary injury, particularly in susceptible models. However, the variability observed in pulmonary responses to CAPs emphasizes the need to conduct repeated studies, perhaps in relation to the season, as composition of CAPs may vary. Additionally, potential variability in pathology of induced bronchitis or other lung disease may decrease the ability to distinguish toxic injury due to PM.

  12. Data-Driven Model Reduction and Transfer Operator Approximation

    NASA Astrophysics Data System (ADS)

    Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank

    2018-06-01

    In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
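
    As one concrete instance of the operator approximations surveyed here, a minimal exact dynamic mode decomposition can be written in a few lines of NumPy; this is a textbook sketch, not code from the review.

      import numpy as np

      def dmd(X, Y, rank):
          # Exact DMD: fit the reduced operator A_tilde = U* Y V S^-1
          # from snapshot pairs Y ~ A X, then recover eigenvalues/modes.
          U, s, Vh = np.linalg.svd(X, full_matrices=False)
          U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
          A_tilde = U.conj().T @ Y @ Vh.conj().T / s
          eigvals, W = np.linalg.eig(A_tilde)
          modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
          return eigvals, modes

      # Toy data: snapshots of a known linear system x_{k+1} = A x_k.
      A = np.array([[0.9, -0.2], [0.1, 0.95]])
      x = np.array([1.0, 0.0])
      snapshots = [x]
      for _ in range(50):
          x = A @ x
          snapshots.append(x)
      S = np.array(snapshots).T
      eigvals, _ = dmd(S[:, :-1], S[:, 1:], rank=2)
      print(np.sort_complex(eigvals), np.sort_complex(np.linalg.eigvals(A)))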

  13. Simulating Nonequilibrium Radiation via Orthogonal Polynomial Refinement

    DTIC Science & Technology

    2015-01-07

    ... measured by the preprocessing time, computer memory space, and average query time. ... an analytic expression for the radiative flux density is possible by the commonly accepted local thermal equilibrium (LTE) approximation.

  14. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  15. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Monitoring of such objects is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced that is embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and by identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant eigenvector based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp poles and four types of road signs were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, such as a lamp pole close to a tree.
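
    The per-voxel building block of the descriptor is ordinary PCA; the sketch below extracts a voxel's most significant eigenvector, leaving out the octree subdivision and the icosahedron mapping described above.

      import numpy as np

      def significant_eigenvector(points):
          # Principal direction of the points in one voxel: the unit
          # eigenvector of the covariance matrix with the largest
          # eigenvalue (np.linalg.eigh returns ascending eigenvalues).
          centered = points - points.mean(axis=0)
          cov = centered.T @ centered / max(len(points) - 1, 1)
          eigvals, eigvecs = np.linalg.eigh(cov)
          return eigvecs[:, -1]

      # Toy voxel: points scattered along a pole-like vertical line.
      rng = np.random.default_rng(7)
      voxel = (np.array([0.0, 0.0, 1.0]) * rng.uniform(0, 3, (100, 1))
               + rng.normal(scale=0.02, size=(100, 3)))
      print(significant_eigenvector(voxel))   # close to (0, 0, +/-1)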

  16. Constrained Chebyshev approximations to some elementary functions suitable for evaluation with floating point arithmetic

    NASA Technical Reports Server (NTRS)

    Manos, P.; Turner, L. R.

    1972-01-01

    Approximations which can be evaluated with precision using floating-point arithmetic are presented. The particular set of approximations thus far developed are for the function TAN and the functions of USASI FORTRAN excepting SQRT and EXPONENTIATION. These approximations are, furthermore, specialized to particular forms which are especially suited to a computer with a small memory, in that all of the approximations can share one general purpose subroutine for the evaluation of a polynomial in the square of the working argument.
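
    The shared-subroutine idea reduces to Horner evaluation of a polynomial in the square of the working argument, with odd functions written as x times such a polynomial. The coefficients below are rough illustrative values for tan near zero, not the report's constrained Chebyshev coefficients.

      import numpy as np

      def poly_in_square(x, coeffs):
          # Evaluate P(x*x) by Horner's rule; this is the one
          # general-purpose subroutine the record describes.
          # coeffs are ordered from highest degree to constant term.
          w = x * x
          acc = 0.0
          for c in coeffs:
              acc = acc * w + c
          return acc

      # Hypothetical Taylor-like coefficients: tan(x) ~ x * P(x^2).
      coeffs = [0.054, 0.1333, 0.3333, 1.0]
      x = 0.3
      print(x * poly_in_square(x, coeffs), np.tan(x))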

  17. Times for interplanetary trips

    NASA Technical Reports Server (NTRS)

    Jones, R. T.

    1976-01-01

    The times required to travel to the various planets at an acceleration of one g are calculated. Surrounding gravitational fields are neglected except for a relatively short distance near take-off or landing. The orbit consists of an essentially straight line with the thrust directed toward the destination up to the halfway point, but in the opposite direction for the remainder so that the velocity is zero on arrival. A table lists the approximate times required, and also the maximum velocities acquired in light units v/c for the various planets.
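
    The kinematics reduce to a simple rule: accelerating at g to the halfway point and decelerating thereafter gives t = 2*sqrt(d/g) and a peak speed v_max = sqrt(g*d). The sketch below tabulates this Newtonian approximation for a few illustrative distances; they are not the paper's exact values.

      import numpy as np

      g = 9.81          # m s^-2
      c = 2.998e8       # m s^-1
      AU = 1.496e11     # m

      # Illustrative closest-approach distances (assumed, not the paper's).
      destinations = {"Mars": 0.52 * AU, "Jupiter": 4.2 * AU,
                      "Neptune": 29.1 * AU}
      for name, d in destinations.items():
          t_days = 2 * np.sqrt(d / g) / 86400
          v_over_c = np.sqrt(g * d) / c
          print(f"{name}: {t_days:.1f} days, v_max/c = {v_over_c:.4f}")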

  18. 40 CFR 81.348 - Washington.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... point at which it is crossed by the existing BPA electrical transmission line; thence southeasterly along the BPA transmission line approximately 8 miles to point of the crossing of the south fork of the... approximately 6 miles to the point the Creek is crossed by the existing BPA electrical transmission line; thence...

  19. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of point clouds and panoramic images based on the sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection with a horizontal plane of the line connecting the global positioning system (GPS) antenna and the panoramic camera, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find its corresponding pixel in the relevant panoramic images via a collinearity function and the position and orientation relationships among the different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (99.7% on average) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization among different sensors, system positioning, and vehicle speed, are discussed.
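
    The projection step can be sketched with the standard equirectangular camera model: transform a point into the panoramic camera frame, then map its spherical angles to pixel coordinates. The rigid transform below is a placeholder; the real one comes from the MMS calibration and trajectory, and the paper's specific collinearity equations are not reproduced here.

      import numpy as np

      def project_to_panorama(p_world, R, t, width, height):
          # Map a 3-D point into equirectangular panoramic pixel
          # coordinates; R, t are a hypothetical world-to-camera
          # rigid transform.
          p = R @ p_world + t
          r = np.linalg.norm(p)
          lon = np.arctan2(p[1], p[0])     # [-pi, pi]
          lat = np.arcsin(p[2] / r)        # [-pi/2, pi/2]
          u = (lon + np.pi) / (2 * np.pi) * width
          v = (np.pi / 2 - lat) / np.pi * height
          return u, v

      R = np.eye(3)          # placeholder calibration values
      t = np.zeros(3)
      print(project_to_panorama(np.array([10.0, 5.0, 2.0]), R, t,
                                8192, 4096))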

  20. Particle dynamics in the original Schwarzschild metric

    NASA Astrophysics Data System (ADS)

    Fimin, N. N.; Chechetkin, V. M.

    2016-04-01

    The properties of the original Schwarzschild metric for a point gravitating mass are considered. The laws of motion in the corresponding space-time are established, and the transition from the Schwarzschild metric to the metric of a "dusty universe" is studied. The dynamics of a system of particles in the post-Newtonian approximation are analyzed.

  1. In and out of Synch: Infant Childcare Teachers' Adaptations to Infants' Developmental Changes

    ERIC Educational Resources Information Center

    Recchia, Susan L.; Shin, Minsun

    2012-01-01

    This qualitative multi-case study explored the social exchanges and responsive connections between infants and their infant childcare teachers within a group care context. Infants' naturally occurring behaviours were videotaped purposefully at two separate time points, near the end of their first year and approximately six months later. Findings…

  2. 78 FR 37968 - Safety Zone; Fireworks Events in Captain of the Port New York Zone

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-25

    ... 165.160 (5.9) position 40[deg]45'56.9'' N, 074[deg]00'25.4'' W (NAD 1983), approximately 380 yards...[deg]47'09.9'' W (NAD 1983), thence to the point of origin. Date: June 28, 2013. Time: 8:50 p.m.-10:10...

  3. An accelerated hologram calculation using the wavefront recording plane method and wavelet transform

    NASA Astrophysics Data System (ADS)

    Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2017-06-01

    Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each can be complementarily resolved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.
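
    The two-step structure is easy to sketch: superpose spherical waves from the object points on a nearby virtual plane, then propagate that plane to the hologram by the standard angular spectrum method. The parameters below are illustrative, and the wavelet (WASABI) acceleration is not included.

      import numpy as np

      wavelength = 532e-9
      k = 2 * np.pi / wavelength
      N, pitch = 512, 8e-6

      xs = (np.arange(N) - N // 2) * pitch
      X, Y = np.meshgrid(xs, xs)

      # Step 1: field on the WRP from a few object points just behind it.
      points = [(0.0, 0.0, 2e-3), (3e-4, -2e-4, 3e-3)]   # (x, y, depth)
      wrp = np.zeros((N, N), dtype=complex)
      for (px, py, pz) in points:
          r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
          wrp += np.exp(1j * k * r) / r

      # Step 2: angular spectrum propagation from the WRP to the hologram.
      z = 0.1
      fx = np.fft.fftfreq(N, d=pitch)
      FX, FY = np.meshgrid(fx, fx)
      arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
      H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0)))
      hologram = np.fft.ifft2(np.fft.fft2(wrp) * H)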

  4. 10 CFR Appendix A to Part 861 - Perimeter Description of DOE's Nevada Test Site

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...°34′20″; Thence easterly approximately 6.73 miles, to a point at latitude 37°20′45″ longitude 116°27′00″; Thence northeasterly approximately 4.94 miles to a point at latitude 37°23′07″, longitude 116°22′30″; Thence easterly approximately 4.81 miles to a point at latitude 37°23′07″, longitude 116°17′15...

  5. 10 CFR Appendix A to Part 861 - Perimeter Description of DOE's Nevada Test Site

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...°34′20″; Thence easterly approximately 6.73 miles, to a point at latitude 37°20′45″ longitude 116°27′00″; Thence northeasterly approximately 4.94 miles to a point at latitude 37°23′07″, longitude 116°22′30″; Thence easterly approximately 4.81 miles to a point at latitude 37°23′07″, longitude 116°17′15...

  6. 10 CFR Appendix A to Part 861 - Perimeter Description of DOE's Nevada Test Site

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...°34′20″; Thence easterly approximately 6.73 miles, to a point at latitude 37°20′45″ longitude 116°27′00″; Thence northeasterly approximately 4.94 miles to a point at latitude 37°23′07″, longitude 116°22′30″; Thence easterly approximately 4.81 miles to a point at latitude 37°23′07″, longitude 116°17′15...

  7. 10 CFR Appendix A to Part 861 - Perimeter Description of DOE's Nevada Test Site

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...°34′20″; Thence easterly approximately 6.73 miles, to a point at latitude 37°20′45″ longitude 116°27′00″; Thence northeasterly approximately 4.94 miles to a point at latitude 37°23′07″, longitude 116°22′30″; Thence easterly approximately 4.81 miles to a point at latitude 37°23′07″, longitude 116°17′15...

  8. 10 CFR Appendix A to Part 861 - Perimeter Description of DOE's Nevada Test Site

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...°34′20″; Thence easterly approximately 6.73 miles, to a point at latitude 37°20′45″ longitude 116°27′00″; Thence northeasterly approximately 4.94 miles to a point at latitude 37°23′07″, longitude 116°22′30″; Thence easterly approximately 4.81 miles to a point at latitude 37°23′07″, longitude 116°17′15...

  9. Theory of two-point correlations of jet noise

    NASA Technical Reports Server (NTRS)

    Ribner, H. S.

    1976-01-01

    A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.

  10. 75 FR 33851 - Florida Power & Light Company; Turkey Point, Units 6 and 7; Combined License Application, Notice...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-15

    ... Company; Turkey Point, Units 6 and 7; Combined License Application, Notice of Intent To Prepare an... application for a combined license (COL) to build Units 6 and 7 at its Turkey Point site, located in Miami... approximately 4.5 miles from the nearest boundary of the Turkey Point site; the site is approximately 25 miles...

  11. Assimilation of elements and digestion in grass shrimp pre-exposed to dietary mercury.

    PubMed

    Seebaugh, David R; Wallace, William G; L'amoreaux, William J; Stewart, Gillian M

    2012-08-01

    Grass shrimp Palaemonetes pugio were fed mercury (Hg)-contaminated oligochaetes for 15 days and analyzed for Hg, cadmium (Cd), and carbon assimilation efficiencies (AE) as well as toxicological end points related to digestion. Disproportionate increases in stable Hg concentrations in shrimp did not appear to be related to partitioning to trophically available Hg in worms. Hg AE by pre-exposed shrimp reached a plateau (approximately 53 %), whereas Cd AE varied (approximately 40-60 %) in a manner that was not dose-dependent. Carbon AE did not differ among treatments (approximately 69 %). Gut residence time was not impacted significantly by Hg pre-exposure (grand median approximately 465 min), however, there was a trend between curves showing percentages of individuals with markers in feces over time versus treatment. Feces-elimination rate did not vary with dietary pre-exposure. Extracellular protease activity varied approximately 1.9-fold but did not exhibit dose-dependency. pH increased over the range of Hg pre-exposures within the anterior (pH approximately 5.33-6.51) and posterior (pH approximately 5.29-6.25) regions of the cardiac proventriculus and Hg assimilation exhibited a negative relationship to hydrogen ion concentrations. The results of this study indicate that previous Hg ingestion can elicit post-assimilatory impacts on grass shrimp digestive physiology, which may, in turn, influence Hg assimilation during subsequent digestive cycles.

  12. Outdoor air pollution in close proximity to a continuous point source

    NASA Astrophysics Data System (ADS)

    Klepeis, Neil E.; Gabel, Etienne B.; Ott, Wayne R.; Switzer, Paul

    Data are lacking on human exposure to air pollutants occurring in ground-level outdoor environments within a few meters of point sources. To better understand outdoor exposure to tobacco smoke from cigarettes or cigars, and exposure to other types of outdoor point sources, we performed more than 100 controlled outdoor monitoring experiments on a backyard residential patio in which we released pure carbon monoxide (CO) as a tracer gas for continuous time periods lasting 0.5-2 h. The CO was emitted from a single outlet at a fixed per-experiment rate of 120-400 cc min^-1 (~140-450 mg min^-1). We measured CO concentrations every 15 s at up to 36 points around the source along orthogonal axes. The CO sensors were positioned at standing or sitting breathing heights of 2-5 ft (up to 1.5 ft above and below the source) and at horizontal distances of 0.25-2 m. We simultaneously measured real-time air speed, wind direction, relative humidity, and temperature at single points on the patio. The ground-level air speeds on the patio were similar to those we measured during a survey of 26 outdoor patio locations in 5 nearby towns. The CO data exhibited a well-defined proximity effect similar to the indoor proximity effect reported in the literature. Average concentrations were approximately inversely proportional to distance. Average CO levels were approximately proportional to source strength, supporting generalization of our results to different source strengths. For example, we predict a cigarette smoker would cause average fine particle levels of approximately 70-110 μg m^-3 at horizontal distances of 0.25-0.5 m. We also found that average CO concentrations rose significantly as average air speed decreased. We fit a multiplicative regression model to the empirical data that predicts outdoor concentrations as a function of source emission rate, source-receptor distance, air speed and wind direction. The model described the data reasonably well, accounting for ~50% of the variability in log-transformed 5-min CO concentrations.

  13. New theory on the reverberation of rooms [considering sound wave travel time]

    NASA Technical Reports Server (NTRS)

    Pujolle, J.

    1974-01-01

    The inadequacy of the various theories which have been proposed for finding the reverberation time of rooms can be explained by an attempt to examine what might occur at a listening point when image sources of determined acoustic power are added to the actual source. The number and locations of the image sources are stipulated. The intensity of sound at the listening point can be calculated by means of approximations whose conditions for validity are given. This leads to the proposal of a new expression for the reverberation time, yielding results which fall between those obtained through use of the Eyring and Millington formulae; these results are made to depend on the shape of the room by means of a new definition of the mean free path.

  14. The prevalence and correlates of self-harm ideation trajectories in Australian women from pregnancy to 4-years postpartum.

    PubMed

    Giallo, Rebecca; Pilkington, Pamela; Borschmann, Rohan; Seymour, Monique; Dunning, Melissa; Brown, Stephanie

    2018-03-15

    Women in the perinatal period are at increased risk of experiencing self-harm ideation. The current study longitudinally examines the prevalence, trajectories, and correlates of self-harm ideation in a population-based sample of Australian women from pregnancy through to the early years of parenting. Data were drawn from 1507 women participating in a prospective pregnancy cohort study, collected during pregnancy; at 3, 6, 12, and 18 months postpartum; and at 4 years postpartum. Longitudinal Latent Class Analysis was conducted to identify groups of women based on their responses to thoughts of self-harm at each time-point. Logistic regression analysis was used to identify factors associated with group membership. Approximately 4-5% of women reported experiencing self-harm ideation at each time-point from pregnancy to 4 years postpartum. Cross-sectional analyses revealed that self-harm ideation was most frequently endorsed in the first 12 months postpartum (4.6%), and approximately 15% of women reported self-harm ideation at least once during the study period. Longitudinally, approximately 7% of women had an enduring pattern of self-harm ideation from pregnancy to 4 years postpartum. Women who had experienced a range of preconception and current social health issues and disadvantage were at increased risk of self-harm ideation over time. Limitations included the use of brief measures and an underrepresentation of participants with particular socio-demographic characteristics. A proportion of women are at increased risk of experiencing self-harm ideation during the perinatal period and in the early years of parenting, underscoring the need for early identification during pregnancy and early postpartum to facilitate timely intervention.

  15. A Monte Carlo Application to Approximate the Integral from a to b of e Raised to the x Squared.

    ERIC Educational Resources Information Center

    Easterday, Kenneth; Smith, Tommy

    1992-01-01

    Proposes an alternative means of approximating the value of complex integrals, the Monte Carlo procedure. Incorporating a discrete approach and probability, an approximation is obtained from the ratio of computer-generated points falling under the curve to the number of points generated in a predetermined rectangle. (MDH)
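
    The described procedure is ordinary hit-or-miss Monte Carlo; a minimal sketch (sample size and interval are arbitrary choices here):

      # Hit-or-miss Monte Carlo estimate of the integral of exp(x^2) on [a, b]:
      # the fraction of random points falling under the curve, times the area
      # of the bounding rectangle.
      import numpy as np

      def mc_exp_x_squared(a, b, n=1_000_000, seed=0):
          rng = np.random.default_rng(seed)
          ymax = np.exp(max(a * a, b * b))    # height of the bounding rectangle
          x = rng.uniform(a, b, n)
          y = rng.uniform(0.0, ymax, n)
          hits = np.count_nonzero(y <= np.exp(x * x))
          return (hits / n) * (b - a) * ymax

      print(mc_exp_x_squared(0.0, 1.0))       # ~1.4627 on [0, 1]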

  16. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantification.
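
    The core SR step lends itself to a compact sketch. The following is an illustrative reimplementation of the idea, not the authors' code; it assumes point correspondences are already established (e.g., by ICP) and uses scikit-learn's Lasso for the sparse fit:

      # Schematic sparse-regression (SR) step: approximate a target point cloud
      # as a sparse linear combination of training clouds whose points are
      # already in correspondence. Synthetic data for illustration only.
      import numpy as np
      from sklearn.linear_model import Lasso

      n_points, n_train = 5000, 40
      rng = np.random.default_rng(1)

      # Training set: each column stacks one cloud's (x, y, z) coordinates.
      A = rng.normal(size=(3 * n_points, n_train))

      # Target cloud generated from a sparse combination plus acquisition noise.
      w_true = np.zeros(n_train)
      w_true[[3, 17]] = [0.6, 0.4]
      target = A @ w_true + rng.normal(scale=0.01, size=3 * n_points)

      w = Lasso(alpha=1e-3, fit_intercept=False).fit(A, target).coef_
      rmse = np.sqrt(np.mean((A @ w - target) ** 2))
      print("nonzero weights:", np.flatnonzero(w), "RMSE:", rmse)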

  17. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing the reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantification.

  18. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantification. PMID:27147347

  19. Not just a drop in the bucket: expanding access to point-of-use water treatment systems.

    PubMed

    Mintz, E; Bartram, J; Lochery, P; Wegelin, M

    2001-10-01

    Since 1990, the number of people without access to safe water sources has remained constant at approximately 1.1 billion, of whom approximately 2.2 million die of waterborne disease each year. In developing countries, population growth and migrations strain existing water and sanitary infrastructure and complicate planning and construction of new infrastructure. Providing safe water for all is a long-term goal; however, relying only on time- and resource-intensive centralized solutions such as piped, treated water will leave hundreds of millions of people without safe water far into the future. Self-sustaining, decentralized approaches to making drinking water safe, including point-of-use chemical and solar disinfection, safe water storage, and behavioral change, have been widely field-tested. These options target the most affected, enhance health, contribute to development and productivity, and merit far greater priority for rapid implementation.

  20. Not Just a Drop in the Bucket: Expanding Access to Point-of-Use Water Treatment Systems

    PubMed Central

    Mintz, Eric; Bartram, Jamie; Lochery, Peter; Wegelin, Martin

    2001-01-01

    Since 1990, the number of people without access to safe water sources has remained constant at approximately 1.1 billion, of whom approximately 2.2 million die of waterborne disease each year. In developing countries, population growth and migrations strain existing water and sanitary infrastructure and complicate planning and construction of new infrastructure. Providing safe water for all is a long-term goal; however, relying only on time- and resource-intensive centralized solutions such as piped, treated water will leave hundreds of millions of people without safe water far into the future. Self-sustaining, decentralized approaches to making drinking water safe, including point-of-use chemical and solar disinfection, safe water storage, and behavioral change, have been widely field-tested. These options target the most affected, enhance health, contribute to development and productivity, and merit far greater priority for rapid implementation. PMID:11574307

  1. FIRST-ORDER COSMOLOGICAL PERTURBATIONS ENGENDERED BY POINT-LIKE MASSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eingorn, Maxim, E-mail: maxim.eingorn@gmail.com

    2016-07-10

    In the framework of the concordance cosmological model, the first-order scalar and vector perturbations of the homogeneous background are derived in the weak gravitational field limit without any supplementary approximations. The sources of these perturbations (inhomogeneities) are presented in the discrete form of a system of separate point-like gravitating masses. The expressions found for the metric corrections are valid at all (sub-horizon and super-horizon) scales and converge at all points except at the locations of the sources. The average values of these metric corrections are zero (thus, first-order backreaction effects are absent). Both the Minkowski background limit and the Newtonian cosmological approximation are reached under certain well-defined conditions. An important feature of the velocity-independent part of the scalar perturbation is revealed: up to an additive constant, this part represents a sum of Yukawa potentials produced by inhomogeneities with the same finite time-dependent Yukawa interaction range. The suggested connection between this range and the homogeneity scale is briefly discussed along with other possible physical implications.
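
    In schematic form (the notation below is mine, not the paper's), the velocity-independent part of the scalar perturbation then reads

      \Phi(\mathbf{r}) \simeq C - \sum_i \frac{G m_i}{c^2 \, |\mathbf{r} - \mathbf{r}_i|} \exp\!\left( -\frac{|\mathbf{r} - \mathbf{r}_i|}{\lambda(t)} \right),

    where C is the additive constant mentioned in the abstract, m_i and r_i are the masses and positions of the point-like sources, and λ(t) is the common, finite, time-dependent Yukawa interaction range.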

  2. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.

  3. [School refusal and dropping out of school: positioning regarding a Swiss perspective].

    PubMed

    Walitza, Susanne; Melfsen, Siebke; Della Casa, André; Schneller, Lena

    2013-01-01

    This article deals with refusal to attend school and dropping out of school from the point of view of child and adolescent psychiatry and psychology, in German-speaking countries and from the perspective of Swiss schools and their administrative bodies. General epidemiological data on refusal to attend school show that approximately 5% of children and adolescents are likely to try to avoid attending school at some point. There is very little data available on the frequency of school drop-out. In the past two years (2011 and 2012), approximately 2% of all patients seen for the first time at the Department of Child and Adolescent Psychiatry, University of Zurich, were referred because of failure to attend school, making this phenomenon one of the most common reasons for referral in child and adolescent psychiatry. After a discussion of the epidemiology, symptomatology, causes, and risk factors, the article presents examples drawn from practice and guidelines for intervention in cases of refusal to attend school, and discusses ways of preventing school drop-out from the point of view of schools, hospitals and bodies such as educational psychology services in Switzerland.

  4. An Indoor Positioning Technique Based on a Feed-Forward Artificial Neural Network Using Levenberg-Marquardt Learning Method

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Gholami, A.; Azimi, S.

    2017-09-01

    This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected at all reference points in four directions and during two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. The RSS readings at all reference points, together with the known positions of those reference points, were prepared for the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network, using 30% check and validation data, was computed to be approximately 2.20 m.
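
    A minimal fingerprinting sketch in the same spirit (the geometry, path-loss model and all values below are hypothetical; scikit-learn offers no Levenberg-Marquardt solver, so L-BFGS is substituted here):

      # Fingerprinting sketch: map RSS vectors to (x, y) positions with a small
      # feed-forward network trained on known reference points.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(2)
      n_refs, n_aps = 200, 6                      # reference points, access points

      pos = rng.uniform(0, 30, size=(n_refs, 2))  # known reference positions (m)
      aps = rng.uniform(0, 30, size=(n_aps, 2))   # hypothetical AP locations

      # Simple log-distance path-loss fingerprints with shadowing noise (dB).
      d = np.linalg.norm(pos[:, None, :] - aps[None, :, :], axis=2)
      rss = -40 - 20 * np.log10(d + 1e-3) + rng.normal(0, 2, size=d.shape)

      model = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=2000)
      model.fit(rss[:150], pos[:150])             # offline (calibration) phase

      err = np.linalg.norm(model.predict(rss[150:]) - pos[150:], axis=1)
      print("mean positioning error (m):", err.mean())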

  5. Thermalization of Wightman functions in AdS/CFT and quasinormal modes

    NASA Astrophysics Data System (ADS)

    Keränen, Ville; Kleinert, Philipp

    2016-07-01

    We study the time evolution of Wightman two-point functions of scalar fields in AdS3-Vaidya, a spacetime undergoing gravitational collapse. In the boundary field theory, the collapse corresponds to a quench process where the dual (1+1)-dimensional CFT is taken out of equilibrium and subsequently thermalizes. From the two-point function, we extract an effective occupation number in the boundary theory and study how it approaches the thermal Bose-Einstein distribution. We find that the Wightman functions, as well as the effective occupation numbers, thermalize with a rate set by the lowest quasinormal mode of the scalar field in the BTZ black hole background. We give a heuristic argument for the quasinormal decay, which is expected to apply to more general Vaidya spacetimes also in higher dimensions. This suggests a unified picture in which thermalization times of one- and two-point functions are determined by the lowest quasinormal mode. Finally, we study how these results compare to previous calculations of two-point functions based on the geodesic approximation.

  6. Deep-Focusing Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Jensen, J. M.; Kosovichev, A. G.; Birch, A. C.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    Much progress has been made by measuring the travel times of solar acoustic waves from a central surface location to points at equal arc distance away. Depth information is obtained from the range of arc distances examined, with the larger distances revealing the deeper layers. This method we will call surface-focusing, as the common point, or focus, is at the surface. To obtain a clearer picture of the subsurface region, it would, no doubt, be better to focus on points below the surface. Our first attempt to do this used the ray theory to pick surface location pairs that would focus on a particular subsurface point. This is not the ideal procedure, as Born approximation kernels suggest that this focus should have zero sensitivity to sound speed inhomogeneities. However, the sensitivity is concentrated below the surface in a much better way than the old surface-focusing method, and so we expect the deep-focusing method to be more sensitive. A large sunspot group was studied by both methods. Inversions based on both methods will be compared.

  7. Solving the chemical master equation using sliding windows

    PubMed Central

    2010-01-01

    Background The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species. Results In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy. Conclusions The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. PMID:20377904
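
    A minimal CME illustration (a one-species birth-death process on a fixed truncation window; a genuine sliding-window scheme would re-center this window as the probability mass moves, and the rate constants here are arbitrary):

      # CME for production/degradation of one species, solved on a truncated
      # window of states with a matrix exponential: dp/dt = Q p.
      import numpy as np
      from scipy.linalg import expm

      k_prod, k_deg = 10.0, 1.0       # illustrative rate constants
      lo, hi = 0, 40                  # current state-space window [lo, hi]
      n = hi - lo + 1

      # Generator matrix Q: Q[i, j] is the rate from state j to state i.
      Q = np.zeros((n, n))
      for j, x in enumerate(range(lo, hi + 1)):
          if x < hi:
              Q[j + 1, j] += k_prod           # birth: x -> x + 1
          if x > lo:
              Q[j - 1, j] += k_deg * x        # death: x -> x - 1
      Q -= np.diag(Q.sum(axis=0))             # outflow on the diagonal

      p0 = np.zeros(n)
      p0[0] = 1.0                             # start in state x = 0
      p_t = expm(Q * 2.0) @ p0                # probability vector at t = 2
      mean = (np.arange(lo, hi + 1) * p_t).sum()
      print("mean copy number:", mean)        # approaches k_prod/k_deg = 10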

  8. Safe landing area determination for a Moon lander by reachability analysis

    NASA Astrophysics Data System (ADS)

    Arslantaş, Yunus Emre; Oehlschlägel, Thimo; Sagliano, Marco

    2016-11-01

    In the last decades developments in space technology paved the way to more challenging missions like asteroid mining, space tourism and human expansion into the Solar System. These missions result in difficult tasks such as guidance schemes for re-entry, landing on celestial bodies and implementation of large angle maneuvers for spacecraft. There is a need for a safety system to increase the robustness and success of these missions. Reachability analysis meets this requirement by obtaining the set of all achievable states for a dynamical system starting from an initial condition with given admissible control inputs of the system. This paper proposes an algorithm for the approximation of nonconvex reachable sets (RS) by using optimal control. Therefore subset of the state space is discretized by equidistant points and for each grid point a distance function is defined. This distance function acts as an objective function for a related optimal control problem (OCP). Each infinite dimensional OCP is transcribed into a finite dimensional Nonlinear Programming Problem (NLP) by using Pseudospectral Methods (PSM). Finally, the NLPs are solved using available tools resulting in approximated reachable sets with information about the states of the dynamical system at these grid points. The algorithm is applied on a generic Moon landing mission. The proposed method computes approximated reachable sets and the attainable safe landing region with information about propellant consumption and time.

  9. An approximation method for improving dynamic network model fitting.

    PubMed

    Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M

    There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
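
    For the simplest homogeneous (Bernoulli) case, the flavor of the duration-based approximation can be sketched as follows (a simplified reading of the idea, with hypothetical inputs; the paper treats more general dependence structures):

      # With mean relational duration D, a tie survives a time step with
      # probability 1 - 1/D, which fixes the dissolution coefficient; the
      # formation coefficient is then backed out of the cross-sectional density.
      import math

      def approx_stergm_coefs(density, mean_duration):
          theta_diss = math.log(mean_duration - 1.0)         # logit(1 - 1/D)
          theta_cross = math.log(density / (1.0 - density))  # cross-sectional logit
          theta_form = theta_cross - theta_diss              # formation offset
          return theta_form, theta_diss

      print(approx_stergm_coefs(density=0.01, mean_duration=20.0))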

  10. "Blog, Blog, Blog": Online Journaling in Graduate Classes

    ERIC Educational Resources Information Center

    Craig, Dorothy Valcarcel; Young, Barbara

    2009-01-01

    In his recent publication, "Grown up Digital" (2009), Tapscott points out that many young adults between the ages of 25-35 are able to interact with various media in addition to multitasking approximately five activities at the same time. The same age group--many of whom are practicing teachers pursuing advanced degrees for a variety of…

  11. 75 FR 65282 - Medicare and Medicaid Programs; Requirements for Long Term Care Facilities; Hospice Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ... to qualify to participate as a skilled nursing facility (SNF) in the Medicare program, or as a nursing facility (NF) in the Medicaid program. We are proposing these requirements to ensure that long... According to CMS data, at any point in time, approximately 1.4 million elderly and disabled nursing home...

  12. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
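
    Schematically (notation mine, not the authors'), the frailty-modulated intensity for subject i takes the form

      \lambda_i(t) = Z_i \exp\{\beta^{\top} x_i(t)\},

    where Z_i is the subject's random frailty (gamma-distributed for the hierarchical-likelihood step), x_i(t) is the vector of partially-observed, time-varying covariates, and the coefficients β are estimated from the design-unbiased estimating equations described above.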

  13. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
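
    A representative one-point exponential form (generic notation; the paper's exact expressions may differ) expands each variable through an intermediate power:

      \tilde{f}(x) = f(x_0) + \sum_i \left. \frac{\partial f}{\partial x_i} \right|_{x_0} \frac{x_{0i}}{p_i} \left[ \left( \frac{x_i}{x_{0i}} \right)^{p_i} - 1 \right].

    Setting p_i = 1 recovers the linear approximation and p_i = -1 the reciprocal approximation; in the two-point variant, the exponents p_i are chosen to match gradient information at a second design point.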

  14. Novel approximation of misalignment fading modeled by Beckmann distribution on free-space optical links.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2016-10-03

    A novel accurate and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios. The first of them is when atmospheric turbulence is the dominant effect in relation to generalized pointing errors, and the second one when generalized pointing error is the dominant effect in relation to atmospheric turbulence. The second FSO scenario has not been studied in-depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.

  15. Political skill: A proactive inhibitor of workplace aggression exposure and an active buffer of the aggression-strain relationship.

    PubMed

    Zhou, Zhiqing E; Yang, Liu-Qin; Spector, Paul E

    2015-10-01

    In the current study we examined the role of 4 dimensions of political skill (social astuteness, interpersonal influence, networking ability, and apparent sincerity) in predicting subsequent workplace aggression exposure based on the proactive coping framework. Further, we investigated their buffering effects on the negative outcomes of experienced workplace aggression based on the transactional stress model. Data were collected from nurses at 3 time points: before graduation (Time 1, n = 346), approximately 6 months after graduation (Time 2, n = 214), and approximately 12 months after graduation (Time 3, n = 161). Results showed that Time 1 interpersonal influence and apparent sincerity predicted subsequent physical aggression exposure. Exposure to physical and/or psychological workplace aggression was related to increased anger and musculoskeletal injury, and decreased job satisfaction and career commitment. Further, all dimensions of political skill but networking ability buffered some negative effects of physical aggression, and all dimensions but social astuteness buffered some negative effects of psychological aggression.

  16. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst.

  17. Universal statistics of terminal dynamics before collapse

    NASA Astrophysics Data System (ADS)

    Lenner, Nicolas; Eule, Stephan; Wolf, Fred

    Recent developments in biology have drastically increased both the precision and the amount of generated data, allowing a shift from pure mean-value characterization of the process under consideration to an analysis of the whole ensemble that exploits the stochastic nature of biology. We focus on the general class of non-equilibrium processes with distinguished terminal points, as found in cell fate decisions, checkpoints, or cognitive neuroscience. Aligning the data to a terminal point (e.g., represented as an absorbing boundary) allows us to devise a general methodology to characterize and reverse engineer the terminating history. Using a small-noise approximation, we derive the mean, variance, and covariance of the aligned data for general finite-time singularities.

  18. Local pulse wave velocity estimated from small vibrations measured ultrasonically at multiple points on the arterial wall

    NASA Astrophysics Data System (ADS)

    Ito, Mika; Arakawa, Mototaka; Kanai, Hiroshi

    2018-07-01

    Pulse wave velocity (PWV) is used as a diagnostic criterion for arteriosclerosis, a major cause of heart disease and cerebrovascular disease. However, there are several problems with conventional PWV measurement techniques. One is that the pulse wave is assumed to have only an incident component propagating at a constant speed from the heart to the femoral artery; another is that PWV is determined only from a characteristic time such as the rise time of the blood pressure waveform. In this study, we noninvasively measured the velocity waveform of small vibrations at multiple points on the carotid arterial wall using ultrasound. Local PWV was determined by analyzing the phase component of the velocity waveform by the least squares method. This method allowed measurement of the time change of the PWV at approximately the arrival time of the pulse wave, which isolates the period not yet contaminated by the reflected component.
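
    The phase-based estimate reduces, in its simplest form, to a least-squares line fit of arrival delay against position along the wall (the numbers below are illustrative, not measured values):

      # Local PWV from delays at multiple wall points: the slope of the fitted
      # line is the slowness (s/m); PWV is its reciprocal.
      import numpy as np

      x = np.linspace(0.0, 0.03, 8)            # measurement points along artery (m)
      true_pwv = 6.0                           # m/s, illustrative
      delay = x / true_pwv + np.random.default_rng(3).normal(0, 2e-5, x.size)

      slope, _ = np.polyfit(x, delay, 1)       # least-squares slowness estimate
      print("estimated PWV (m/s):", 1.0 / slope)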

  19. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
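
    The analytic point location rests on barycentric coordinates: a point lies inside a tetrahedron iff all four coordinates are non-negative, and the same coordinates serve directly as interpolation weights. A minimal sketch:

      # Analytic point-in-tetrahedron test via barycentric coordinates.
      import numpy as np

      def barycentric(p, v0, v1, v2, v3):
          T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
          l123 = np.linalg.solve(T, p - v0)        # coords w.r.t. v1, v2, v3
          return np.concatenate([[1.0 - l123.sum()], l123])

      def in_tetrahedron(p, *verts, eps=1e-12):
          return np.all(barycentric(p, *verts) >= -eps)

      v = [np.array(a, float) for a in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
      print(in_tetrahedron(np.array([0.2, 0.2, 0.2]), *v))   # True
      print(in_tetrahedron(np.array([0.9, 0.9, 0.9]), *v))   # False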

  20. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

    PubMed Central

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-01-01

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
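
    The geometric core of such a point-to-pixel mapping can be illustrated with an idealized spherical panorama (the real system first applies the calibrated offsets between the GPS antenna, scanners and camera; the image size below is arbitrary):

      # Idealized mapping of a 3D point (already in the panoramic camera frame)
      # to equirectangular pixel coordinates.
      import numpy as np

      def project_equirect(p, width=8192, height=4096):
          x, y, z = p
          azimuth = np.arctan2(y, x)                    # [-pi, pi]
          elevation = np.arcsin(z / np.linalg.norm(p))  # [-pi/2, pi/2]
          u = (azimuth + np.pi) / (2 * np.pi) * width
          v = (np.pi / 2 - elevation) / np.pi * height
          return u, v

      print(project_equirect(np.array([5.0, 2.0, -1.0])))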

  1. Quasi-periodic pulsations in solar hard X-ray and microwave flares

    NASA Technical Reports Server (NTRS)

    Kosugi, Takeo; Kiplinger, Alan L.

    1986-01-01

    For more than a decade, various studies have pointed out that hard X-ray and microwave time profiles of some solar flares show quasi-periodic fluctuations or pulsations. Nevertheless, it was not until recently that a flare displaying large amplitude quasi-periodic pulsations in X-rays and microwaves was observed with good spectral coverage and with a sufficient time resolution. The event occurred on June 7, 1980, at approximately 0312 UT, and exhibits seven intense pulses with a quasi-periodicity of approximately 8 seconds in microwaves, hard X-rays, and gamma-ray lines. On May 12, 1983, at approximately 0253 UT, another good example of this type of flare was observed both in hard X-rays and in microwaves. Temporal and spectral characteristics of this flare are compared with the event of June 7, 1980. In order to further explore these observational results and theoretical scenarios, a study of nine additional quasi-periodic events were incorporated with the results from the two flares described. Analysis of these events are briefly summarized.

  2. On the mathematical treatment of the Born-Oppenheimer approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jecko, Thierry, E-mail: thierry.jecko@u-cergy.fr

    2014-05-15

    Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not fit exactly the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in Quantum Chemistry or Physics.

  3. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.

  4. Very early reaction intermediates detected by microsecond time scale kinetics of cytochrome cd1-catalyzed reduction of nitrite.

    PubMed

    Sam, Katharine A; Strampraad, Marc J F; de Vries, Simon; Ferguson, Stuart J

    2008-10-10

    Paracoccus pantotrophus cytochrome cd(1) is a nitrite reductase found in the periplasm of many denitrifying bacteria. It catalyzes the reduction of nitrite to nitric oxide during the denitrification part of the biological nitrogen cycle. Previous studies of early millisecond intermediates in the nitrite reduction reaction have shown, by comparison with pH 7.0, that at the optimum pH, approximately pH 6, the earliest intermediates were lost in the dead time of the instrument. Access to early time points (approximately 100 μs) through use of an ultra-rapid mixing device has identified a spectroscopically novel intermediate, assigned as the Michaelis complex, formed from reaction of fully reduced enzyme with nitrite. Spectroscopic observation of the subsequent transformation of this species has provided data that demand reappraisal of the general belief that the two subunits of the enzyme function independently.

  5. Convergence of discrete Aubry–Mather model in the continuous limit

    NASA Astrophysics Data System (ADS)

    Su, Xifeng; Thieullen, Philippe

    2018-05-01

    We develop two approximation schemes for solving the cell equation and the discounted cell equation using Aubry–Mather–Fathi theory. The Hamiltonian is supposed to be Tonelli, time-independent and periodic in space. By Legendre transform, this is equivalent to finding a fixed point of some nonlinear operator, called the Lax–Oleinik operator, which may be discounted or not. By discretizing in time, we are led to solve an additive eigenvalue problem involving a discrete Lax–Oleinik operator. We show how to approximate the effective Hamiltonian and some weak KAM solutions by letting the time step in the discrete model tend to zero. We also obtain a selected discrete weak KAM solution as in Davini et al (2016 Invent. Math. 206 29–55), and show that it converges to a particular solution of the cell equation. In order to unify the two settings, continuous and discrete, we develop a more general formalism of short-range interactions.
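
    Schematically (a standard time-discretization, written here in my own notation rather than quoted from the paper), the discrete Lax–Oleinik operator with time step \tau acts as

      (T_\tau u)(x) = \min_{y} \left\{ u(y) + \tau \, L\!\left( y, \frac{x - y}{\tau} \right) \right\},

    where L is the Lagrangian obtained from the Tonelli Hamiltonian by Legendre transform; the additive eigenvalue problem then seeks a function u and a constant \bar{H} with T_\tau u = u - \tau \bar{H}.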

  6. Why Nature has made a choice of one time and three space coordinates?

    NASA Astrophysics Data System (ADS)

    Mankoc Borstnik, N.; Nielsen, H. B.

    2002-12-01

    We propose a possible answer to one of the most exciting open questions in physics and cosmology: why do we seem to experience four-dimensional spacetime, with three ordinary dimensions and one time dimension? Making assumptions about the equations of motion (such as particles being, to first approximation, massless), we argue for restrictions on the number of space and time dimensions. Accepting our explanation of the spacetime signature and the number of dimensions would further support the importance of the 'internal space'.

  7. Optimal symmetric flight studies

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.

    1985-01-01

    Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed and calculations for a high-performance aircraft presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored and some quirks in optimal flight characteristics peculiar to the model uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of zeroth-order approximation by special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.

  8. Time-varying Entry Heating Profile Replication with a Rotating Arc Jet Test Article

    NASA Technical Reports Server (NTRS)

    Grinstead, Jay Henderson; Venkatapathy, Ethiraj; Noyes, Eric A.; Mach, Jeffrey J.; Empey, Daniel M.; White, Todd R.

    2014-01-01

    A new approach for arc jet testing of thermal protection materials at conditions approximating the time-varying conditions of atmospheric entry was developed and demonstrated. The approach relies upon the spatial variation of heat flux and pressure over a cylindrical test model. By slowly rotating a cylindrical arc jet test model during exposure to an arc jet stream, each point on the test model will experience constantly changing applied heat flux. The predicted temporal profile of heat flux at a point on a vehicle can be replicated by rotating the cylinder at a prescribed speed and direction. An electromechanical test model mechanism was designed, built, and operated during an arc jet test to demonstrate the technique.

  9. Interpreting change on the neurobehavioral symptom inventory and the PTSD checklist in military personnel.

    PubMed

    Belanger, Heather G; Lange, Rael T; Bailie, Jason; Iverson, Grant L; Arrieux, Jacques P; Ivins, Brian J; Cole, Wesley R

    2016-10-01

    The purpose of this study was to examine the prevalence and stability of symptom reporting in a healthy military sample and to develop reliable change indices for two commonly used self-report measures in the military health care system. Participants were 215 U.S. active duty service members recruited from Fort Bragg, NC as normal controls as part of a larger study. Participants completed the Neurobehavioral Symptom Inventory (NSI) and Posttraumatic Checklist (PCL) twice, separated by approximately 30 days. Depending on the endorsement level used (i.e. ratings of 'mild' or greater vs. ratings of 'moderate' or greater), approximately 2-15% of this sample met DSM-IV symptom criteria for Postconcussional Disorder across time points, while 1-6% met DSM-IV symptom criteria for Posttraumatic Stress Disorder. Effect sizes for change from Time 1 to Time 2 on individual symptoms were small (Cohen's d = .01 to .13). The test-retest reliability for the NSI total score was r = .78 and the PCL score was r = .70. An eight-point change in symptom reporting represented reliable change on the NSI total score, with a seven-point change needed on the PCL. Postconcussion-like symptoms are not unique to mild TBI and are commonly reported in a healthy soldier sample. It is important for clinicians to use normative data when evaluating a service member or veteran and when evaluating the likelihood that a change in symptom reporting is reliable and clinically meaningful.
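
    The reliable-change computation behind such cutoffs follows the classical formula; a sketch (the SD value below is a placeholder, and the paper derives its cutoffs of eight points for the NSI and seven for the PCL from its own normative sample):

      # Reliable change index cutoff from test-retest reliability r and the
      # Time 1 standard deviation SD1.
      import math

      def reliable_change_cutoff(sd1, r, z=1.96):
          se_meas = sd1 * math.sqrt(1.0 - r)       # standard error of measurement
          se_diff = math.sqrt(2.0) * se_meas       # SE of a difference score
          return z * se_diff                       # change needed for p < .05

      print(reliable_change_cutoff(sd1=10.0, r=0.78))  # hypothetical SD1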

  10. Coalescent Inference Using Serially Sampled, High-Throughput Sequencing Data from Intrahost HIV Infection

    PubMed Central

    Dialdestoro, Kevin; Sibbesen, Jonas Andreas; Maretty, Lasse; Raghwani, Jayna; Gall, Astrid; Kellam, Paul; Pybus, Oliver G.; Hein, Jotun; Jenkins, Paul A.

    2016-01-01

    Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available. PMID:26857628

  11. Effect of nose shape on three-dimensional stagnation region streamlines and heating rates

    NASA Technical Reports Server (NTRS)

    Hassan, Basil; Dejarnette, Fred R.; Zoby, E. V.

    1991-01-01

    A new method for calculating the three-dimensional inviscid surface streamlines and streamline metrics using Cartesian coordinates and time as the independent variable of integration has been developed. The technique calculates the streamline from a specified point on the body to a point near the stagnation point by using a prescribed pressure distribution in the Euler equations. The differential equations, which are singular at the stagnation point, are of the two point boundary value problem type. Laminar heating rates are calculated using the axisymmetric analog concept for three-dimensional boundary layers and approximate solutions to the axisymmetric boundary layer equations. Results for elliptic conic forebody geometries show that location of the point of maximum heating depends on the type of conic in the plane of symmetry and the angle of attack, and that this location is in general different from the stagnation point. The new method was found to give smooth predictions of heat transfer in the nose region where previous methods gave oscillatory results.
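
    Using time as the independent variable means each streamline is simply an initial value problem dx/dt = V(x); a minimal sketch with a toy velocity field standing in for the pressure-derived Euler velocities:

      # Integrate a streamline in Cartesian coordinates with time as the
      # independent variable of integration.
      import numpy as np
      from scipy.integrate import solve_ivp

      def velocity(t, x):                  # placeholder swirling velocity field
          return [-x[1], x[0], 0.05]

      sol = solve_ivp(velocity, (0.0, 10.0), [1.0, 0.0, 0.0], max_step=0.01)
      print("end point:", sol.y[:, -1])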

  12. Earth Observations

    NASA Image and Video Library

    2010-07-25

    ISS024-E-009526 (25 July 2010) --- Dominic Point Fire in Montana is featured in this image photographed by an Expedition 24 crew member on the International Space Station. Lightning strikes in the forested mountains of the western United States, and human activities, can spark wild fires during the summer dry season. The Dominic Point Fire was first reported near 3:00 p.m. local time on July 25, 2010. Approximately one hour later, the space station crew photographed the fire's large smoke plume, already extending at least eight kilometers to the east, from orbit as they passed almost directly overhead. Forest Service fire crews, slurry bombers and helicopters were on the scene by that evening. The fire may have been started by a lightning strike, as there are no trails leading into the fire area located approximately 22 kilometers northeast of Hamilton, MT (according to local reports). As of July 26, 2010, the fire had burned approximately 283-405 hectares of the Bitterroot National Forest in western Montana. The fire is thought to have expanded quickly due to high temperatures, low humidity, and favorable winds, with an abundance of deadfall (dead trees and logs that provide readily combustible fuels) in the area.

  13. Near-ultraviolet imaging of Jupiter's satellite Io with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Paresce, F.; Sartoretti, P.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1992-01-01

    The surface of Jupiter's Galilean satellite Io has been resolved for the first time in the near ultraviolet at 2850 A by the Faint Object Camera (FOC) on the Hubble Space Telescope (HST). The restored images reveal significant surface structure down to the resolution limit of the optical system corresponding to approximately 250 km at the sub-earth point.

  14. Hopelessness as a Predictor of Attempted Suicide among First Admission Patients with Psychosis: A 10-Year Cohort Study

    ERIC Educational Resources Information Center

    Klonsky, E. David; Kotov, Roman; Bakst, Shelly; Rabinowitz, Jonathan; Bromet, Evelyn J.

    2012-01-01

    Little is known about the longitudinal relationship of hopelessness to attempted suicide in psychotic disorders. This study addresses this gap by assessing hopelessness and attempted suicide at multiple time-points over 10 years in a first-admission cohort with psychosis (n = 414). Approximately one in five participants attempted suicide during…

  15. Fast generation of computer-generated holograms using wavelet shrinkage.

    PubMed

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
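
    As a rough illustration of the shrinkage idea, the sketch below thresholds a 2-D wavelet decomposition so that only a small fraction of coefficients represents the field. It is a generic wavelet-shrinkage example using PyWavelets, not the authors' CGH-specific algorithm; the wavelet choice and keep ratio are illustrative.

    ```python
    import numpy as np
    import pywt

    # Generic wavelet shrinkage: transform, discard small coefficients,
    # reconstruct from the few that remain.
    def shrink_field(field, wavelet="db4", keep_ratio=0.05):
        coeffs = pywt.wavedec2(field, wavelet, level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        # Keep only the largest coefficients by magnitude.
        thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)
        arr[np.abs(arr) < thresh] = 0.0
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)

    field = np.random.default_rng(0).random((128, 128))
    approx = shrink_field(field)  # approximation from ~5% of coefficients
    ```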

  16. An approximate, maximum terminal velocity descent to a point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisler, G.R.; Hull, D.G.

    1987-01-01

    No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained ground range, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed ground range. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.

  17. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
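
    The Poisson estimate referred to above is the familiar square-root-of-counts rule: for a mean count N, the expected relative uncertainty is 1/sqrt(N). A minimal sketch with synthetic data, with the count rate chosen to match the 4000-counts-per-point threshold mentioned in the abstract:

    ```python
    import numpy as np

    # Compare the Poisson estimate of relative uncertainty, 1/sqrt(N),
    # with the normalized RMS error of repeated count measurements.
    rng = np.random.default_rng(0)
    true_rate = 4000                      # counts per point
    counts = rng.poisson(true_rate, size=1000)

    nrmse = np.sqrt(np.mean((counts - true_rate) ** 2)) / true_rate
    poisson_estimate = 1.0 / np.sqrt(true_rate)
    print(f"measured NRMSE: {nrmse:.4f}, Poisson: {poisson_estimate:.4f}")
    ```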

  18. Publications - GMC 310 | Alaska Division of Geological & Geophysical Surveys

    Science.gov Websites

    UCS approximations of core (4,309.5'-4,409') from the BP Exploration (Alaska) Inc. Milne Point G-1 well.

  19. An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base

    NASA Astrophysics Data System (ADS)

    Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi

    The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximated solutions have been guaranteed by error bounds. However, the numerical computation of these bounds is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner base. The algebraic representation decreases the computational cost of the error bound considerably. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound from their collisions.

  20. Adaptive finite-volume WENO schemes on dynamically redistributed grids for compressible Euler equations

    NASA Astrophysics Data System (ADS)

    Pathak, Harshavardhana S.; Shukla, Ratnesh K.

    2016-08-01

    A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method, including the moving mesh equations and the compressible flow solver, is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce the discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities, such as carbuncles, that arise in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with the fifth and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients, thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive. The high adaptivity combined with the fifth and especially the ninth-order WENO reconstruction allows remarkably sharp capture of discontinuous propagating shocks with simultaneous resolution of smooth yet complex small-scale unsteady flow features in exceptional detail.
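
    As a concrete illustration of the smooth-region reconstruction step, the sketch below implements the classical fifth-order WENO-JS reconstruction of a left-biased cell-interface value in one dimension. This is the generic textbook form, not the paper's generalized-coordinate implementation; the epsilon value and test data are illustrative.

    ```python
    import numpy as np

    # Fifth-order WENO-JS reconstruction of the left-biased interface
    # value f_{i+1/2} from five cell averages on a uniform 1-D grid.
    def weno5_left(f):                      # f: values f[i-2..i+2]
        fm2, fm1, f0, fp1, fp2 = f
        # Candidate third-order reconstructions on three sub-stencils.
        p0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0
        p1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
        p2 = (2*f0 + 5*fp1 - fp2) / 6.0
        # Jiang-Shu smoothness indicators.
        b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
        b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
        b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
        d = np.array([0.1, 0.6, 0.3])       # ideal (linear) weights
        a = d / (1e-6 + np.array([b0, b1, b2]))**2
        w = a / a.sum()                     # nonlinear WENO weights
        return w @ np.array([p0, p1, p2])

    print(weno5_left(np.ones(5)))           # smooth data: returns 1.0
    ```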

  1. Boiling point measurement of a small amount of brake fluid by thermocouple and its application.

    PubMed

    Mogami, Kazunari

    2002-09-01

    This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear shaped flask. The boiling point of brake fluid was directly measured with an accuracy that was within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles dropped to approximately 140 °C from about 230 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.

  2. Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data

    NASA Astrophysics Data System (ADS)

    Bogdanova, Nina; Koleva, Mihaela

    2018-02-01

    Our original Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion approximating the curve on a non-equidistant point grid. The corridors of the given data and the criteria define the optimal behavior of the searched curve. The most important subinterval of the spectral data, where the minimum (surface plasmon resonance absorption) is sought, is investigated. This study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.

  3. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  4. Effects of an exercise and scheduled-toileting intervention on appetite and constipation in nursing home residents.

    PubMed

    Simmons, S F; Schnelle, J F

    2004-01-01

    To evaluate the effects of an exercise and scheduled-toileting intervention on appetite and constipation in nursing home (NH) residents. A controlled, clinical intervention trial with 89 residents in two NHs. Research staff provided exercise and toileting assistance every two hours, four times per day, five days a week for 32 weeks. Oral food and fluid consumption during meals was measured at baseline, eight and 32 weeks. Bowel movement frequency was measured at baseline and 32 weeks. The intervention group showed significant improvements or maintenance across all measures of daily physical activity, functional performance, and strength compared to the control group. Participants in both groups consumed an average of approximately 55% of meals at all three time points (approximately 1100 calories/day) with no change over time in either group. There was also no change in the frequency of bowel movements in either group, which averaged less than one every two days for both groups, and approximately one-half of all participants had no bowel movement in two days. An exercise and scheduled-toileting intervention alone is not sufficient to improve oral food and fluid consumption during meals and bowel movement frequency in NH residents.

  5. Pluto in Hi-Def (Note: There is debate within the science community as to whether Pluto should be classified as a planet.)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image demonstrates the first detection of Pluto using the high-resolution mode on the New Horizons Long-Range Reconnaissance Imager (LORRI). The mode provides a clear separation between Pluto and numerous nearby background stars. When the image was taken on October 6, 2007, Pluto was located in the constellation Serpens, in a region of the sky dense with background stars.

    Typically, LORRI's exposure time in hi-res mode is limited to approximately 0.1 seconds, but by using a special pointing mode that allowed an increase in the exposure time to 0.967 seconds, scientists were able to spot Pluto, which is approximately 15,000 times fainter than human eyes can detect.

    New Horizons was still too far from Pluto (3.6 billion kilometers, or 2.2 billion miles) for LORRI to resolve any details on Pluto's surface; that won't happen until summer 2014, approximately one year before closest approach. For now the entire Pluto system remains a bright dot to the spacecraft's telescopic camera, though LORRI is expected to start resolving Charon from Pluto, seeing them as separate objects, in summer 2010.

  6. Roots of polynomials by ratio of successive derivatives

    NASA Technical Reports Server (NTRS)

    Crouse, J. E.; Putt, C. W.

    1972-01-01

    An order of magnitude study of the ratios of successive polynomial derivatives yields information about the number of roots at an approached root point and the approximate location of a root point from a nearby point. The location approximation improves as a root is approached, so a powerful convergence procedure becomes available. These principles are developed into a computer program which finds the roots of polynomials with real number coefficients.
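
    The core observation can be sketched in a few lines: near a root r of multiplicity m, f behaves like c(x-r)^m, so the ratio f/f' equals (x-r)/m and the ratio f·f''/f'² equals (m-1)/m. The sketch below uses these derivative ratios for a multiplicity-aware Newton iteration; it illustrates the principle rather than reproducing the paper's program.

    ```python
    import numpy as np

    # Root finding from ratios of successive derivatives: estimate the
    # multiplicity m from f*f''/f'^2 = (m-1)/m, then take the
    # accelerated Newton step x -= m * f/f'.
    def deriv_ratio_root(coeffs, x0, tol=1e-12, max_iter=50):
        p = np.polynomial.Polynomial(coeffs)   # lowest-order coeff first
        dp, ddp = p.deriv(), p.deriv(2)
        x = x0
        for _ in range(max_iter):
            f, f1, f2 = p(x), dp(x), ddp(x)
            if abs(f) < tol:
                break
            m = max(round(1.0 / (1.0 - f * f2 / f1**2)), 1)
            x -= m * f / f1                    # multiplicity-aware step
        return x

    # Example: (x-2)^3 has a triple root at 2; converges in one step.
    print(deriv_ratio_root([-8, 12, -6, 1], x0=5.0))
    ```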

  7. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method not infrequently diverges if the starting point is far from the root. However, in these regions the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.

  8. Initial conditions for critical Higgs inflation

    NASA Astrophysics Data System (ADS)

    Salvio, Alberto

    2018-05-01

    It has been pointed out that a large non-minimal coupling ξ between the Higgs and the Ricci scalar can source higher derivative operators, which may change the predictions of Higgs inflation. A variant, called critical Higgs inflation, employs the near-criticality of the top mass to introduce an inflection point in the potential and lower drastically the value of ξ. We here study whether critical Higgs inflation can occur even if the pre-inflationary initial conditions do not satisfy the slow-roll behavior (retaining translation and rotation symmetries). A positive answer is found: inflation turns out to be an attractor and therefore no fine-tuning of the initial conditions is necessary. A very large initial Higgs time-derivative (as compared to the potential energy density) is compensated by a moderate increase in the initial field value. These conclusions are reached by solving the exact Higgs equation without using the slow-roll approximation. This also allows us to consistently treat the inflection point, where the standard slow-roll approximation breaks down. Here we make use of an approach that is independent of the UV completion of gravity, by taking initial conditions that always involve sub-planckian energies.

  9. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model.

    PubMed

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-28

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, accurate and efficient sampling of the time evolution of such hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm based on piecewise constant approximation is also presented.
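
    A minimal sketch of the dwell-time construction such simulations rest on: between jumps, the deterministic state and the log-survival function H(t) are integrated together, and a jump fires when H(t) crosses log(U) for a uniform random draw U. The drift and intensity functions below are illustrative stand-ins, not the Hodgkin-Huxley rates, and a fixed-step Euler integrator replaces the paper's adaptive piecewise-linear interpolation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def drift(v):          # deterministic flow between jumps (toy)
        return -0.5 * v + 1.0

    def intensity(v):      # total jump (channel switching) rate (toy)
        return 1.0 + v**2

    def next_jump(v0, dt=1e-3, t_max=50.0):
        log_u = np.log(rng.uniform())
        v, t, H = v0, 0.0, 0.0
        while t < t_max:
            v += dt * drift(v)        # Euler step for V(t)
            H -= dt * intensity(v)    # log-survival falls at rate lambda
            t += dt
            if H <= log_u:            # survival threshold crossed: jump
                return t, v
        return t_max, v

    print(next_jump(0.0))             # (jump time, state at the jump)
    ```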

  10. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    PubMed

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  11. Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.

    PubMed

    Zhao, Dongfang; Yang, Li

    2009-01-01

    Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step, which defines the neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
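
    For orientation, the sketch below shows the batch Isomap pipeline that the incremental algorithms update piece by piece: a k-nearest-neighbor graph, all-pair shortest-path (geodesic) distances, then classical multidimensional scaling. It assumes SciPy and scikit-learn and a connected neighborhood graph; the paper's incremental updates replace the full recomputation done here.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import shortest_path
    from sklearn.neighbors import kneighbors_graph

    # Batch Isomap: k-NN graph -> geodesic distances -> classical MDS.
    def isomap_embed(X, n_neighbors=8, n_components=2):
        graph = kneighbors_graph(X, n_neighbors, mode="distance")
        D = shortest_path(graph, method="D", directed=False)  # geodesics
        # Classical MDS: double-center the squared distance matrix.
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D**2) @ J
        w, V = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:n_components]
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

    Y = isomap_embed(np.random.default_rng(0).random((200, 5)))
    ```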

  12. Lighting Simulation and Design Program (LSDP)

    NASA Astrophysics Data System (ADS)

    Smith, D. A.

    This computer program simulates a user-defined lighting configuration. It has been developed as a tool to aid in the design of exterior lighting systems. Although this program is used primarily for perimeter security lighting design, it has potential use for any application where the light source can be approximated by a point source. A data base of luminaire photometric information is maintained for use with this program. The user defines the surface area to be illuminated with a rectangular grid and specifies luminaire positions. Illumination values are calculated for regularly spaced points in that area and isolux contour plots are generated. The numerical and graphical output for a particular site model are then available for analysis. The amount of time spent on point-to-point illumination computation with this program is much less than that required for tedious hand calculations. The ease with which various parameters can be interactively modified with the program also reduces the time and labor expended. Consequently, the feasibility of design ideas can be examined, modified, and retested more thoroughly, and overall design costs can be substantially lessened by using this program as an adjunct to the design process.
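
    The point-to-point computation at the heart of such a program is the inverse-square cosine law: illuminance at a grid point is the luminous intensity toward that point, divided by the squared distance and scaled by the cosine of the incidence angle. A minimal sketch, with an isotropic luminaire standing in for the photometric database:

    ```python
    import numpy as np

    # Illuminance on a horizontal grid from a single point source:
    # E = I * cos(theta) / d^2 (lux, for intensity I in candela).
    def illuminance_grid(lum_pos, intensity, xs, ys):
        X, Y = np.meshgrid(xs, ys)
        dx, dy = X - lum_pos[0], Y - lum_pos[1]
        d2 = dx**2 + dy**2 + lum_pos[2]**2
        cos_theta = lum_pos[2] / np.sqrt(d2)   # incidence on the ground
        return intensity * cos_theta / d2

    xs = ys = np.linspace(-10.0, 10.0, 41)
    E = illuminance_grid((0.0, 0.0, 8.0), 10000.0, xs, ys)  # 8 m pole
    print(E.max())  # peak lux directly beneath the luminaire
    ```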

  13. Last millennium Northern Hemisphere summer temperatures from tree rings: Part II, spatially resolved reconstructions

    NASA Astrophysics Data System (ADS)

    Anchukaitis, Kevin J.; Wilson, Rob; Briffa, Keith R.; Büntgen, Ulf; Cook, Edward R.; D'Arrigo, Rosanne; Davi, Nicole; Esper, Jan; Frank, David; Gunnarson, Björn E.; Hegerl, Gabi; Helama, Samuli; Klesse, Stefan; Krusic, Paul J.; Linderholm, Hans W.; Myglan, Vladimir; Osborn, Timothy J.; Zhang, Peng; Rydval, Milos; Schneider, Lea; Schurer, Andrew; Wiles, Greg; Zorita, Eduardo

    2017-05-01

    Climate field reconstructions from networks of tree-ring proxy data can be used to characterize regional-scale climate changes, reveal spatial anomaly patterns associated with atmospheric circulation changes, radiative forcing, and large-scale modes of ocean-atmosphere variability, and provide spatiotemporal targets for climate model comparison and evaluation. Here we use a multiproxy network of tree-ring chronologies to reconstruct spatially resolved warm season (May-August) mean temperatures across the extratropical Northern Hemisphere (40-90°N) using Point-by-Point Regression (PPR). The resulting annual maps of temperature anomalies (750-1988 CE) reveal a consistent imprint of volcanism, with 96% of reconstructed grid points experiencing colder conditions following eruptions. Solar influences are detected at the bicentennial (de Vries) frequency, although at other time scales the influence of insolation variability is weak. Approximately 90% of reconstructed grid points show warmer temperatures during the Medieval Climate Anomaly when compared to the Little Ice Age, although the magnitude varies spatially across the hemisphere. Estimates of field reconstruction skill through time and over space can guide future temporal extension and spatial expansion of the proxy network.

  14. Quench in the 1D Bose-Hubbard model: Topological defects and excitations from the Kosterlitz-Thouless phase transition dynamics

    PubMed Central

    Dziarmaga, Jacek; Zurek, Wojciech H.

    2014-01-01

    Kibble-Zurek mechanism (KZM) uses critical scaling to predict density of topological defects and other excitations created in second order phase transitions. We point out that simply inserting asymptotic critical exponents deduced from the immediate vicinity of the critical point to obtain predictions can lead to results that are inconsistent with a more careful KZM analysis based on causality – on the comparison of the relaxation time of the order parameter with the “time distance” from the critical point. As a result, scaling of quench-generated excitations with quench rates can exhibit behavior that is locally (i.e., in the neighborhood of any given quench rate) well approximated by the power law, but with exponents that depend on that rate, and that are quite different from the naive prediction based on the critical exponents relevant for asymptotically long quench times. Kosterlitz-Thouless scaling (that governs e.g. Mott insulator to superfluid transition in the Bose-Hubbard model in one dimension) is investigated as an example of this phenomenon. PMID:25091996

  15. A Dancing Black Hole

    NASA Astrophysics Data System (ADS)

    Shoemaker, Deirdre; Smith, Kenneth; Schnetter, Erik; Fiske, David; Laguna, Pablo; Pullin, Jorge

    2002-04-01

    Recently, stationary black holes have been successfully simulated for up to times of approximately 600-1000M, where M is the mass of the black hole. Considering that the expected burst of gravitational radiation from a binary black hole merger would last approximately 200-500M, black hole codes are approaching the point where simulations of mergers may be feasible. We will present two types of simulations of single black holes obtained with a code based on the Baumgarte-Shapiro-Shibata-Nakamura formulation of the Einstein evolution equations. One type of simulations addresses the stability properties of stationary black hole evolutions. The second type of simulations demonstrates the ability of our code to move a black hole through the computational domain. This is accomplished by shifting the stationary black hole solution to a coordinate system in which the location of the black hole is time dependent.

  16. Improving Palliative Care Team Meetings: Structure, Inclusion, and "Team Care".

    PubMed

    Brennan, Caitlin W; Kelly, Brittany; Skarf, Lara Michal; Tellem, Rotem; Dunn, Kathleen M; Poswolsky, Sheila

    2016-07-01

    Increasing demands on palliative care teams point to the need for continuous improvement to ensure teams are working collaboratively and efficiently. This quality improvement initiative focused on improving interprofessional team meeting efficiency and subsequently patient care. Meeting start and end times improved from a mean of approximately 9 and 6 minutes late in the baseline period, respectively, to a mean of 4.4 minutes late (start time) and ending early in our sustainability phase. Mean team satisfaction improved from 2.4 to 4.5 on a 5-point Likert-type scale. The improvement initiative clarified communication about patients' plans of care, thus positively impacting team members' ability to articulate goals to other professionals, patients, and families. We propose several recommendations in the form of a team meeting "toolkit."

  17. The relativistic gravity train

    NASA Astrophysics Data System (ADS)

    Seel, Max

    2018-05-01

    The gravity train that takes 42.2 min from any point A to any other point B that is connected by a straight-line tunnel through Earth has captured the imagination more than most other applications in calculus or introductory physics courses. Brachistochrone and, most recently, nonlinear density solutions have been discussed. Here relativistic corrections are presented. It is discussed how the corrections affect the time to fall through Earth, the Sun, a white dwarf, a neutron star, and—the ultimate limit—the difference in time measured by a moving, a stationary and the fiducial observer at infinity if the density of the sphere approaches the density of a black hole. The relativistic gravity train can serve as a problem with approximate and exact analytic solutions and as numerical exercise in any introductory course on relativity.
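
    The non-relativistic 42.2 min figure is a one-line computation: for a uniform-density sphere, gravity inside grows linearly with radius, so motion along any straight chord is simple harmonic with half-period π·sqrt(R³/GM), independent of the chord chosen. A quick numerical check:

    ```python
    import numpy as np

    # Half-period of simple harmonic motion through a uniform Earth.
    G = 6.674e-11        # m^3 kg^-1 s^-2
    M = 5.972e24         # kg, Earth mass
    R = 6.371e6          # m, Earth radius

    half_period = np.pi * np.sqrt(R**3 / (G * M))
    print(half_period / 60.0)   # ~42.2 minutes
    ```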

  18. Traveltime and dispersion in the Shenandoah River and its tributaries, Waynesboro, Virginia, to Harpers Ferry, West Virginia

    USGS Publications Warehouse

    Taylor, K.R.; James, R.W.; Helinsky, B.M.

    1986-01-01

    Two traveltime and dispersion measurements using rhodamine dye were conducted on a 178-mile reach of the Shenandoah River between Waynesboro, Virginia, and Harpers Ferry, West Virginia. The flows during the two measurements were at approximately the 85% and 45% flow durations. The two sets of data were used to develop a generalized procedure for predicting traveltimes and downstream concentrations resulting from spillage of water-soluble substances at any point along the river reach studied. The procedure can be used to calculate traveltime and concentration data for almost any spillage that occurs during relatively steady flow between a 40% and 95% flow duration. Based on an analogy between the general shape of a time-concentration curve and a scalene triangle, the procedures can be used on long river reaches to approximate the conservative time-concentration curve for instantaneous spills of contaminants. The triangular approximation technique can be combined with a superposition technique to predict the approximate, conservative time-concentration curve for constant-rate and variable-rate injections of contaminants. The procedure was applied to a hypothetical situation in which 5,000 pounds of contaminant is spilled instantaneously at Island Ford, Virginia. The times required for the leading edge, the peak concentration, and the trailing edge of the contaminant cloud to reach the water intake at Front Royal, Virginia (85 miles downstream), are 234, 280, and 340 hrs, respectively, for a flow at an 80% flow duration. The conservative peak concentration would be approximately 940 micrograms/L at Front Royal. The procedures developed cannot be depended upon when a significant hydraulic wave or other unsteady flow condition exists in the flow system or when the spilled material floats or is immiscible in water. (Author's abstract)

  19. OPTRAN- OPTIMAL LOW THRUST ORBIT TRANSFERS

    NASA Technical Reports Server (NTRS)

    Breakwell, J. V.

    1994-01-01

    OPTRAN is a collection of programs that solve the problem of optimal low thrust orbit transfers between non-coplanar circular orbits for spacecraft with chemical propulsion systems. The programs are set up to find Hohmann-type solutions, with burns near the perigee and apogee of the transfer orbit. They will solve both fairly long burn-arc transfers and "divided-burn" transfers. Program modeling includes a spherical earth gravity model and propulsion system models for either constant thrust or constant acceleration. The solutions obtained are optimal with respect to fuel use: i.e., the final mass of the spacecraft is maximized with respect to the controls. The controls are the direction of thrust and the thrust on/off times. Two basic types of programs are provided in OPTRAN. The first type is for "exact solutions," which result in complete, exact time-histories. The exact spacecraft position, velocity, and optimal thrust direction are given throughout the maneuver, as are the optimal thrust switch points, the transfer time, and the fuel costs. Exact solution programs are provided in two versions for non-coplanar transfers and in a fast version for coplanar transfers. The second basic type is for "approximate solutions," which result in approximate information on the transfer time and fuel costs. The approximate solution is used to estimate initial conditions for the exact solution. It can be used in divided-burn transfers to find the best number of burns with respect to time. The approximate solution is useful by itself in relatively efficient, short burn-arc transfers. These programs are written in FORTRAN 77 for batch execution and have been implemented on a DEC VAX series computer, with the largest program having a central memory requirement of approximately 54K of 8-bit bytes. The OPTRAN programs were developed in 1983.

  20. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
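
    As a baseline for what the dual-tree algorithm accelerates, the brute-force SDH simply bins all O(n²) pairwise distances, as in the sketch below (bucket width and point count are illustrative):

    ```python
    import numpy as np

    # Brute-force spatial distance histogram: bin every pairwise
    # distance once, using a fixed bucket width.
    def sdh_brute_force(points, bucket_width, n_buckets):
        diff = points[:, None, :] - points[None, :, :]
        d = np.sqrt((diff**2).sum(-1))
        iu = np.triu_indices(len(points), k=1)   # each pair once
        hist, _ = np.histogram(d[iu], bins=n_buckets,
                               range=(0.0, bucket_width * n_buckets))
        return hist

    pts = np.random.default_rng(2).random((500, 3))  # unit cube
    print(sdh_brute_force(pts, 0.1, 18))
    ```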

  1. The gamma ray continuum spectrum from the galactic center disk and point sources

    NASA Technical Reports Server (NTRS)

    Gehrels, Neil; Tueller, Jack

    1992-01-01

    A light curve of gamma-ray continuum emission from point sources in the galactic center region is generated from balloon and satellite observations made over the past 25 years. The emphasis is on the wide field-of-view instruments which measure the combined flux from all sources within approximately 20 degrees of the center. These data have not been previously used for point-source analyses because of the unknown contribution from diffuse disk emission. In this study, the galactic disk component is estimated from observations made by the Gamma Ray Imaging Spectrometer (GRIS) instrument in Oct. 1988. Surprisingly, there are several times during the past 25 years when all gamma-ray sources (at 100 keV) within about 20 degrees of the galactic center are turned off or are in low emission states. This implies that the sources are all variable and few in number. The continuum gamma-ray emission below approximately 150 keV from the black hole candidate 1E1740.7-2942 is seen to turn off in May 1989 on a time scale of less than two weeks, significantly shorter than ever seen before. With the continuum below 150 keV turned off, the spectral shape derived from the HEXAGONE observation on 22 May 1989 is very peculiar with a peak near 200 keV. This source was probably in its normal state for more than half of all observations since the mid-1960's. There are only two observations (in 1977 and 1979) for which the sum flux from the point sources in the region significantly exceeds that from 1E1740.7-2942 in its normal state.

  2. Search for Gamma-Ray Emission from Local Primordial Black Holes with the Fermi Large Area Telescope

    DOE PAGES

    Ackermann, M.; Atwood, W. B.; Baldini, L.; ...

    2018-04-10

    Black holes with masses below approximately 10¹⁵ g are expected to emit gamma-rays with energies above a few tens of MeV, which can be detected by the Fermi Large Area Telescope (LAT). Although black holes with these masses cannot be formed as a result of stellar evolution, they may have formed in the early universe and are therefore called primordial black holes (PBHs). Previous searches for PBHs have focused on either short-timescale bursts or the contribution of PBHs to the isotropic gamma-ray emission. We show that, in the case of individual PBHs, the Fermi-LAT is most sensitive to PBHs with temperatures above approximately 16 GeV and masses below approximately 6 × 10¹¹ g, which it can detect out to a distance of about 0.03 pc. These PBHs have a remaining lifetime of months to years at the start of the Fermi mission. They would appear as potentially moving point sources with gamma-ray emission that becomes spectrally harder and brighter with time until the PBH completely evaporates. In this paper, we develop a new algorithm to detect the proper motion of gamma-ray point sources, and apply it to 318 unassociated point sources at a high galactic latitude in the third Fermi-LAT source catalog. None of the unassociated point sources with spectra consistent with PBH evaporation show significant proper motion. Finally, using the nondetection of PBH candidates, we derive a 99% confidence limit on the PBH evaporation rate in the vicinity of Earth, ρ̇_PBH < 7.2 × 10³ pc⁻³ yr⁻¹. This limit is similar to the limits obtained with ground-based gamma-ray observatories.

  4. Implementation of polyatomic MCTDHF capability

    NASA Astrophysics Data System (ADS)

    Haxton, Daniel; Jones, Jeremiah; Rescigno, Thomas; McCurdy, C. William; Ibrahim, Khaled; Williams, Sam; Vecharynski, Eugene; Rouet, Francois-Henry; Li, Xiaoye; Yang, Chao

    2015-05-01

    The implementation of the Multiconfiguration Time-Dependent Hartree-Fock method for polyatomic molecules using a cartesian product grid of sinc basis functions will be discussed. The focus will be on two key components of the method: first, the use of a resolution-of-the-identity approximation; second, the use of established techniques for triple Toeplitz matrix algebra using fast Fourier transform over distributed memory architectures (MPI 3D FFT). The scaling of two-electron matrix element transformations is converted from O(N⁴) to O(N log N) by including these components. Here N = n³, with n the number of points on a side. We test the preliminary implementation by calculating absorption spectra of small hydrocarbons, using approximately 16-512 points on a side. This work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under the Early Career program, and by the offices of BES and Advanced Scientific Computing Research, under the SciDAC program.

  5. Solving Laplace equation to investigate the volcanic ground deformation pattern

    NASA Astrophysics Data System (ADS)

    Brahmi, Mouna; Castaldo, Raffaele; Barone, Andrea; Fedi, Maurizio; Tizzani, Pietro

    2017-04-01

    Volcanic eruptions are generally preceded by unrest phenomena, which are characterized by variations in the geophysical and geochemical state of the system. The most evident unrest parameters are the spatial and temporal topographic changes, which typically result in uplift or subsidence of the volcano edifice, usually caused by magma accumulation or hot fluid concentration in shallow reservoirs (Denasoquo et al., 2009). If the observed ground deformation phenomenon is very quick and the time evolution of the process shows a linear tendency, we can approximate the problem by using an elastic rheology model of the crust beneath the volcano. In this scenario, by considering the elastic field theory under the Boussinesq (1885) and Love (1892) approximations, we can evaluate the displacement field induced by a generic source in a homogeneous, elastic, half-space at an arbitrary point. To this purpose, we use the depth to extreme points (DEXP) method. By using this approach, we are able to estimate the depth and the geometry of the active source, responsible of the observed ground deformation.

  6. On the complexity and approximability of some Euclidean optimal summing problems

    NASA Astrophysics Data System (ADS)

    Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.

    2016-10-01

    The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.

  7. On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation

    NASA Astrophysics Data System (ADS)

    Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw

    2012-12-01

    We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for their simultaneous calculation, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10⁴ points within minutes on an average notebook computer.
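
    For reference, a compact O(N²) version of sample entropy that, like the algorithm discussed above, uses every point of the series as a template; the tolerance and embedding dimension follow the common defaults (r = 0.2·SD, m = 2):

    ```python
    import numpy as np

    # Sample entropy of a 1-D series: count template matches of length
    # m and m+1 within tolerance r (Chebyshev distance), excluding
    # self-matches; SampEn = -log(A/B).
    def sample_entropy(x, m=2, r_factor=0.2):
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()
        def count_matches(mm):
            templ = np.lib.stride_tricks.sliding_window_view(x, mm)
            count = 0
            for i in range(len(templ) - 1):
                dist = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
                count += int((dist <= r).sum())
            return count
        B, A = count_matches(m), count_matches(m + 1)
        return -np.log(A / B)

    print(sample_entropy(np.random.default_rng(3).standard_normal(1000)))
    ```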

  8. From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation

    NASA Astrophysics Data System (ADS)

    Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj

    2007-09-01

    In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from rare 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performances of the proposed IDCTs along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4 are also demonstrated.
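
    The dyadic lifting step mentioned above can be illustrated on a single plane rotation: the rotation is factored into three shears, and each shear coefficient is rounded to a dyadic fraction k/2^b, which is what makes the transform exactly invertible in integer arithmetic. The angle and bit depth below are illustrative, not the values of the proposed IDCT:

    ```python
    import math

    # A plane rotation factored into three shears (lifting steps), with
    # each shear coefficient quantized to a dyadic fraction k / 2^bits.
    def dyadic(c, bits=12):
        return round(c * (1 << bits)) / (1 << bits)

    def lifted_rotation(x, y, theta, bits=12):
        p = dyadic(-math.tan(theta / 2.0), bits)   # shear coefficient
        u = dyadic(math.sin(theta), bits)
        x += p * y
        y += u * x
        x += p * y
        return x, y

    # Rotating (1, 0) by pi/8 should give (cos pi/8, sin pi/8).
    print(lifted_rotation(1.0, 0.0, math.pi / 8))
    print(math.cos(math.pi / 8), math.sin(math.pi / 8))
    ```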

  9. Real-Time Configuration of Networked Embedded Systems

    DTIC Science & Technology

    2005-05-01

    and inside buildings. Such information is also useful to civilians, as it can be used for personal navigation by campers and hikers, firemen and... traveled, and use direction of movement and distance traveled to generate trajectory points, which are then appropriately displayed. There were... the waist belt is used to detect acceleration of body movement. From the filtered signal, we can approximate the step length by [1] (reference

  10. Interacting charges and the classical electron radius

    NASA Astrophysics Data System (ADS)

    De Luca, Roberto; Di Mauro, Marco; Faella, Orazio; Naddeo, Adele

    2018-03-01

    The equation of the motion of a point charge q repelled by a fixed point-like charge Q is derived and studied. In solving this problem, useful concepts in classical and relativistic kinematics, in Newtonian mechanics and in non-linear ordinary differential equations are reviewed. The validity of the approximations is discussed from the physical point of view. In particular, the classical electron radius emerges naturally from the requirement that the initial distance is large enough for the non-relativistic approximation to be valid. The relevance of this topic for undergraduate physics teaching is pointed out.

  11. Solidification of a binary mixture

    NASA Technical Reports Server (NTRS)

    Antar, B. N.

    1982-01-01

    The time dependent concentration and temperature profiles of a finite layer of a binary mixture are investigated during solidification. The coupled time dependent Stefan problem is solved numerically using an implicit finite differencing algorithm with the method of lines. Specifically, the temporal operator is approximated via an implicit finite difference operator, resulting in a coupled set of ordinary differential equations for the spatial distribution of the temperature and concentration at each time. Since the resulting set of differential equations forms a boundary value problem with matching conditions at an unknown spatial point, the method of invariant imbedding is used for its solution.

  12. GOES-R Dual Isolation

    NASA Technical Reports Server (NTRS)

    Freesland, Doug; Carter, Delano; Chapel, Jim; Clapp, Brian; Howat, John; Krimchansky, Alexander

    2015-01-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the first of the next generation geostationary weather satellites, scheduled for delivery in late 2015. GOES-R represents a quantum increase in Earth and solar weather observation capabilities, with 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands for Earth observations. With the improved resolution comes the instrument suite's increased sensitivity to disturbances over a broad spectrum, 0-512 Hz. Sources of disturbance include reaction wheels, thruster firings for station keeping and momentum management, gimbal motion, and internal instrument disturbances. To minimize the impact of these disturbances, the baseline design includes an Earth Pointed Platform (EPP), a stiff optical bench to which the two nadir-pointed instruments are collocated together with the Guidance Navigation & Control (GN&C) star trackers and Inertial Measurement Units (IMUs). The EPP is passively isolated from the spacecraft bus with Honeywell D-Strut isolators providing attenuation for frequencies above approximately 5 Hz in all six degrees of freedom. A change in Reaction Wheel Assembly (RWA) vendors occurred very late in the program. To reduce the risk of RWA disturbances impacting performance, a secondary passive isolation system manufactured by Moog CSA Engineering was incorporated under each of the six 160 Nms RWAs, tuned to provide attenuation at frequencies above approximately 50 Hz. Integrated wheel and isolator testing was performed on a Kistler table at NASA Goddard Space Flight Center. High fidelity simulations were conducted to evaluate jitter performance for four topologies: 1) hard mounted, no isolation, 2) EPP isolation only, 3) RWA isolation only, and 4) dual isolation. Simulation results demonstrate excellent performance relative to the pointing stability requirements, with dual-isolated Line of Sight (LOS) jitter less than 1 microradian.

  13. Tipping point analysis of ocean acoustic noise

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2018-02-01

    We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.

  14. Critical Slowing Down in Time-to-Extinction: An Example of Critical Phenomena in Ecology

    NASA Technical Reports Server (NTRS)

    Gandhi, Amar; Levin, Simon; Orszag, Steven

    1998-01-01

    We study a model for two competing species that explicitly accounts for effects due to discreteness, stochasticity and spatial extension of populations. The two species are equally preferred by the environment and do better when surrounded by others of the same species. We observe that the final outcome depends on the initial densities (uniformly distributed in space) of the two species. The observed phase transition is a continuous one, and key macroscopic quantities like the correlation length of clusters and the time-to-extinction diverge at a critical point. Away from the critical point, the dynamics can be described by a mean-field approximation. Close to the critical point, however, there is a crossover to power-law behavior because of the gross mismatch between the largest and smallest scales in the system. We have developed a theory based on surface effects, which is in good agreement with the observed behavior. The coarse-grained reaction-diffusion system obtained from the mean-field dynamics agrees well with the particle system.

  15. Point-Focus Concentration Compact Telescoping Array: Extreme Environments Solar Power Base Phase Final Report

    NASA Technical Reports Server (NTRS)

    McEachen, Michael E.; Murphy, Dave; Meinhold, Shen; Spink, Jim; Eskenazi, Mike; O'Neill, Mark

    2017-01-01

    Orbital ATK, in partnership with Mark O'Neill LLC (MOLLC), has developed a novel solar array platform, PFC-CTA, which provides a significant advance in performance and cost reduction compared to all currently available space solar systems. PFC refers to the Point Focus Concentration of light provided by MOLLC's thin, flat Fresnel optics. These lenses focus light to a point at approximately 100 times the intensity of the ambient light, onto a solar cell approximately 1/25th the size of the lens. CTA stands for Compact Telescoping Array, which is the solar array blanket structural platform originally devised by NASA and currently being advanced by Orbital ATK and partners under NASA and AFRL funding to a projected TRL 5+ by late 2018. The NASA Game Changing Development Extreme Environment Solar Power (EESP) Base Phase study has enabled Orbital ATK to refine component designs, perform component-level and system performance analyses, and test prototype hardware of the key elements of PFC-CTA, and has increased the TRL of PFC-specific technology elements to TRL 4. Key performance metrics currently projected are as follows: scalability from 5 kW to 300 kW per wing (AM0); specific power 500 W/kg (AM0); stowage efficiency 100 kW/m³; 5:1 margin on pointing tolerance vs. capability; 50% launched cost savings; wide range of operability between Venus and Saturn by active and/or passive thermal management.

  16. Shock characterization of TOAD pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weirick, L.J.; Navarro, N.J.

    1995-08-01

    The purpose of this program was to characterize Time Of Arrival Detector (TOAD) pins' response to shock loading with respect to risetime, amplitude, repeatability and consistency. TOAD pins were subjected to impacts of 35 to 420 kilobars amplitude and approximately 1 ms pulse width to investigate the timing spread of four pins and the voltage output profile of the individual pins. Sets of pins were also aged at 45°, 60°, and 80°C for approximately nine weeks before shock testing at 315 kilobars impact stress. Four sets of pins were heated to 50.2°C (125°F) for approximately two hours and then impacted at either 50 or 315 kilobars. Also, four sets of pins were aged at 60°C for nine weeks and then heated to 50.2°C before shock testing at 50 and 315 kilobars impact stress, respectively. Particle velocity measurements at the contact point between the stainless steel targets and TOAD pins were made using a Velocity Interferometer System for Any Reflector (VISAR) to monitor both the amplitude and profile of the shock waves.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
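
    The center-selection step described above can be sketched directly: treat (expensive function value, negative minimum distance to other evaluated points) as two objectives to minimize, and keep the non-dominated front. The sketch below is simplified relative to the paper (no tabu tenure, no RBF surrogate, toy objective):

    ```python
    import numpy as np

    # First non-dominated front for a two-objective minimization.
    def nondominated_front(F):
        keep = []
        for i, fi in enumerate(F):
            dominated = any(np.all(fj <= fi) and np.any(fj < fi) for fj in F)
            if not dominated:
                keep.append(i)
        return keep

    X = np.random.default_rng(4).random((30, 2))   # evaluated points
    y = ((X - 0.3) ** 2).sum(axis=1)               # expensive values (toy)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    min_dist = D.min(axis=1)                       # crowding measure
    front = nondominated_front(np.column_stack([y, -min_dist]))
    print(front)  # indices of candidate centers
    ```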

  18. Model-based screening for critical wet-weather discharges related to micropollutants from urban areas.

    PubMed

    Mutzner, Lena; Staufer, Philipp; Ort, Christoph

    2016-11-01

    Wet-weather discharges contribute to the anthropogenic micropollutant loads entering the aquatic environment. Thousands of wet-weather discharges exist in Swiss sewer systems, and we do not have the capacity to monitor them all. We consequently propose a model-based approach designed to identify critical discharge points in order to support effective monitoring. We applied a dynamic substance flow model to four substances representing different entry routes: indoor (Triclosan, Mecoprop, Copper) as well as rainfall-mobilized (Glyphosate, Mecoprop, Copper) inputs. The accumulation on different urban land-use surfaces in dry weather and the subsequent substance-specific wash-off are taken into account. For evaluation, we use a conservative screening approach to detect critical discharge points. This approach considers only the local dilution generated onsite from natural, unpolluted areas, i.e. excluding upstream dilution. Despite our conservative assumptions, we find that the environmental quality standards for Glyphosate and Mecoprop are not exceeded during any 10-min time interval over a representative one-year simulation period for all 2500 Swiss municipalities. In contrast, the environmental quality standard is exceeded during at least 20% of the discharge time at 83% of all modelled discharge points for Copper and at 71% for Triclosan. For Copper, this corresponds to a total median duration of approximately 19 days per year. For Triclosan, discharged only via combined sewer overflows, this means a median duration of approximately 10 days per year. In general, stormwater outlets contribute more to the calculated effect than combined sewer overflows for rainfall-mobilized substances. We further evaluate the Urban Index (A_urban,impervious/A_natural) as a proxy for critical discharge points: catchments where Triclosan and Copper exceed the corresponding environmental quality standard often have an Urban Index >0.03. A dynamic substance flow analysis allows us to identify the most critical discharge points to be prioritized for more detailed analyses and monitoring. This forms a basis for the efficient mitigation of pollution.
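
    Because the Urban Index proxy above is a simple area ratio, the screening rule it supports is a one-liner. The sketch below applies the >0.03 flag from the abstract; the catchment names and areas are invented for illustration.

```python
# Screening with the Urban Index proxy: A_urban,impervious / A_natural.
catchments = {"catchment_A": (12.0, 300.0),   # (impervious urban km2, natural km2)
              "catchment_B": (2.0, 400.0)}    # toy values
for name, (a_imp, a_nat) in catchments.items():
    ui = a_imp / a_nat
    flag = "potentially critical (screen further)" if ui > 0.03 else "likely uncritical"
    print(f"{name}: Urban Index = {ui:.3f} -> {flag}")
```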

  19. Accuracy & Computational Considerations for Wide-Angle One-Way Seismic Propagators and Multiple Scattering by Invariant Embedding

    NASA Astrophysics Data System (ADS)

    Thomson, C. J.

    2004-12-01

    Pseudodifferential operators (PSDOs) yield in principle exact one-way seismic wave equations, which are attractive both conceptually and for their promise of computational efficiency. The one-way operators can be extended to include multiple-scattering effects, again in principle exactly. In practice approximations must be made and, as an example, the variable-wavespeed Helmholtz equation for scalar waves in two space dimensions is here factorized to give the one-way wave equation. This simple case permits clear identification of a sequence of physically reasonable approximations to be used when the mathematically exact PSDO one-way equation is implemented on a computer. As intuition suggests, these approximations hinge on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. Another key consideration stems from the fact that the so-called "standard-ordering" PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. The decision on whether a slow or a fast Fourier transform code should be used rests upon how many lateral model parameters are truly distinct. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator suggests the use of larger discrete step sizes, and it can also be used to approach phase-screen-like approximations (though the latter are not the main interest here). Numerical comparisons with finite-difference solutions will be presented in order to assess the approximations being made and to gain an understanding of computation time differences. The ideas described extend to the three-dimensional, generally anisotropic case and to multiple scattering by invariant embedding.
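
    For orientation, the factorization referred to above can be written schematically as follows; the notation is ours, and this is the textbook square-root form rather than the author's exact operator ordering.

```latex
% 2-D Helmholtz equation, x the main propagation direction (notation assumed):
%   \partial_x^2 u + \partial_z^2 u + k^2(x,z)\,u = 0
% Formal factorization, with B := (\partial_z^2 + k^2)^{1/2}:
\partial_x^2 u + \partial_z^2 u + k^2 u
  \;=\; (\partial_x - iB)(\partial_x + iB)\,u \;-\; i\,[\partial_x, B]\,u ,
\qquad
\partial_x u \;=\; i B u .
```

    The one-way equation on the right is exact when k does not vary along x, so the commutator vanishes; realizing the square-root PSDO B numerically is where the transverse-gradient approximations discussed in the abstract enter.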

  20. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    PubMed

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points and their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  1. Application of the QSPR approach to the boiling points of azeotropes.

    PubMed

    Katritzky, Alan R; Stoyanova-Slavova, Iva B; Tämm, Kaido; Tamm, Tarmo; Karelson, Mati

    2011-04-21

    CODESSA Pro derivative descriptors were calculated for a data set of 426 azeotropic mixtures by the centroid approximation and the weighted-contribution-factor approximation. The two approximations produced almost identical four-descriptor QSPR models relating the structural characteristic of the individual components of azeotropes to the azeotropic boiling points. These models were supported by internal and external validations. The descriptors contributing to the QSPR models are directly related to the three components of the enthalpy (heat) of vaporization.

  2. Analytical approximation of a distorted reflector surface defined by a discrete set of points

    NASA Technical Reports Server (NTRS)

    Acosta, Roberto J.; Zaman, Afroz A.

    1988-01-01

    Reflector antennas on Earth-orbiting spacecraft generally cannot be described analytically. The reflector surface is subjected to large temperature fluctuations and gradients, and is thus warped from its true geometrical shape. Aside from distortion by thermal stresses, reflector surfaces are often purposely shaped to minimize phase aberrations and scanning losses. To analyze distorted reflector antennas defined by discrete surface points, a numerical technique must be applied to compute an interpolatory surface passing through a grid of discrete points. In this paper, the distorted reflector surface points are approximated by two analytical components: an undistorted surface component and a surface error component. The undistorted surface component is a best-fit paraboloid for the given set of points, and the surface error component is a Fourier series expansion of the deviation of the actual surface points from the best-fit paraboloid. By applying the numerical technique to approximate the surface normals of the distorted reflector surface, the induced surface currents can be obtained using the physical optics technique. These surface currents are integrated to find the far-field radiation pattern.
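
    A minimal numerical sketch of the two-component decomposition described above, assuming an axisymmetric paraboloid basis and synthetic surface points; the residual `err` is what would then be expanded in a Fourier series.

```python
import numpy as np

def fit_paraboloid(x, y, z):
    """Least-squares fit z ~ a(x^2 + y^2) + bx + cy + d (axisymmetric form assumed)."""
    A = np.column_stack([x**2 + y**2, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef, z - A @ coef          # best-fit coefficients, surface-error component

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 0.25 * (x**2 + y**2) + 1e-3 * np.sin(3 * x)   # synthetic distorted surface
coef, err = fit_paraboloid(x, y, z)
# `err` is the deviation from the best-fit paraboloid, to be expanded in a series.
```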

  3. 36 CFR 7.10 - Zion National Park.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... unplowed, graded dirt road from the park boundary in the southeast corner of Sec. 13, T. 39 S., R. 11 W... distance of approximately one mile. (4) The unplowed, graded dirt road from the Lava Point Ranger Station... approximately two miles. (5) The unplowed, graded dirt road from the Lava Point Ranger Station, north to the...

  4. Numerical simulation for turbulent heating around the forebody fairing of H-II rocket

    NASA Astrophysics Data System (ADS)

    Nomura, Shigeaki; Yamamoto, Yukimitsu; Fukushima, Yukio

    Concerning the heat transfer distributions around the nose fairing of the Japanese new launch vehicle, the H-II rocket, numerical simulations have been conducted for conditions along its nominal ascent trajectory, and experimental tests have been conducted additionally to confirm the numerical results. The thin-layer approximated Navier-Stokes equations with the Baldwin-Lomax algebraic turbulence model were solved by a time-dependent finite-difference method. Results of the numerical simulations showed that a high peak heating would occur near the stagnation point on the spherical nose portion, due to the transition to turbulent flow, during the period when large stagnation point heating was predicted. The experiments were conducted at M = 5 and Re = 10^6, conditions similar to the flight condition where the maximum stagnation point heating would occur. The experimental results also showed a high peak heating near the stagnation point over the spherical nose portion.

  5. Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl

    2016-09-15

    We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum point search is one of the two key steps in finding the minimax approximation. If pre-tabulation of initial guesses is to be avoided, strategies for a sufficiently robust algorithm have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that three times fewer function evaluations are required overall when applying it to typical non-relativistic and relativistic quantum chemical systems.
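
    For context, the bracketing-and-bisection baseline the authors compare against can be sketched generically as below; the test function is a stand-in, and the paper's point is that its non-heuristic search needs about three times fewer function evaluations than this kind of scheme.

```python
import numpy as np

def extrema_by_bisection(f, a, b, n_grid=200, tol=1e-10, h=1e-6):
    """Bracket sign changes of f' on a grid, then bisect each bracket."""
    df = lambda t: (f(t + h) - f(t - h)) / (2 * h)   # numerical derivative
    ts = np.linspace(a, b, n_grid)
    roots = []
    for lo, hi in zip(ts[:-1], ts[1:]):
        if df(lo) * df(hi) < 0:                      # bracketed extremum
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if df(lo) * df(mid) > 0 else (lo, mid)
            roots.append(0.5 * (lo + hi))
    return roots

print(extrema_by_bisection(np.cos, 0.0, 10.0))       # ~[pi, 2*pi, 3*pi]
```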

  6. Tower Illuminance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Clifford K.; Sims, Cianan

    TIM is a real-time interactive concentrating solar field simulation. TIM models a concentrating tower (receiver), heliostat field, and potential reflected glare based on user-specified parameters such as field capacity, tower height and location. TIM provides a navigable 3D interface, allowing the user to “fly” around the field to determine the potential glare hazard from off-target heliostats. Various heliostat aiming strategies are available for specifying how heliostats behave when in standby mode. Strategies include annulus, point-per-group, up-aiming and single-point-focus. Additionally, TIM includes an avian path feature for approximating the irradiance and feather temperature of a bird flying through the field airspace.

  7. The effect of Fisher information matrix approximation methods in population optimal design calculations.

    PubMed

    Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C

    2016-12-01

    With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs, based on simulations and estimations, was investigated by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full-FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
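
    To make the objects concrete: for a simple fixed-effects nonlinear model the FIM is J^T J / sigma^2, with J the sensitivity matrix at the sampling times, and the D-criterion compares designs via its (log-)determinant. The mono-exponential model and values below are illustrative only; they do not reproduce the population (FO/FOCE) setting of the paper.

```python
import numpy as np

def fim(times, theta, sigma=0.1, h=1e-6):
    """FIM = J'J/sigma^2 for a mono-exponential model with additive error."""
    def model(p, t):
        scale, ke = p
        return scale * np.exp(-ke * t)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):                      # central-difference sensitivities
        dp = np.zeros(len(theta))
        dp[j] = h
        J[:, j] = (model(theta + dp, times) - model(theta - dp, times)) / (2 * h)
    return J.T @ J / sigma**2

theta = np.array([10.0, 0.3])                        # hypothetical parameter values
for design in ([0.5, 1.0, 2.0, 8.0], [0.5, 0.6, 0.7, 0.8]):
    logdet = np.linalg.slogdet(fim(np.array(design), theta))[1]
    print(design, "log det FIM =", round(logdet, 2))  # spread design scores higher
```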

  8. Euler solutions to nonlinear acoustics of non-lifting rotor blades

    NASA Technical Reports Server (NTRS)

    Baeder, J. D.

    1991-01-01

    For the first time a computational fluid dynamics (CFD) method is used to calculate directly the high-speed impulsive (HSI) noise of a non-lifting hovering rotor blade out to a distance of over three rotor radii. In order to accurately propagate the acoustic wave in a stable and efficient manner, an implicit upwind-biased Euler method is solved on a grid with points clustered along the line of propagation. A detailed validation of the code is performed for a rectangular rotor blade at tip Mach numbers ranging from 0.88 to 0.92. The agreement with experiment is excellent at both the sonic cylinder and at 2.18 rotor radii. The agreement at 3.09 rotor radii is still very good, showing improvements over the results from the best previous method. Grid sensitivity studies indicate that with special attention to the location of the boundaries a grid with approximately 60,000 points is adequate. This results in a computational time of approximately 40 minutes on a Cray-XMP. The practicality of the method to calculate HSI noise is demonstrated by expanding the scope of the investigation to examine the rectangular blade as well as a highly swept and tapered blade over a tip Mach number range of 0.80 to 0.95. Comparisons with experimental data are excellent and the advantages of planform modifications are clearly evident. New insight is gained into the mechanisms of nonlinear propagation and the minimum distance at which a valid comparison of different rotors can be made: approximately two rotor radii from the center of rotation.

  9. Euler solutions to nonlinear acoustics of non-lifting hovering rotor blades

    NASA Technical Reports Server (NTRS)

    Baeder, J. D.

    1991-01-01

    For the first time a computational fluid dynamics (CFD) method is used to calculate directly the high-speed impulsive (HSI) noise of a non-lifting hovering rotor blade out to a distance of over three rotor radii. In order to accurately propagate the acoustic wave in a stable and efficient manner, an implicit upwind-biased Euler method is solved on a grid with points clustered along the line of propagation. A detailed validation of the code is performed for a rectangular rotor blade at tip Mach numbers ranging from 0.88 to 0.92. The agreement with experiment is excellent at both the sonic cylinder and at 2.18 rotor radii. The agreement at 3.09 rotor radii is still very good, showing improvements over the results from the best previous method. Grid sensitivity studies indicate that with special attention to the location of the boundaries a grid with approximately 60,000 points is adequate. This results in a computational time of approximately 40 minutes on a Cray-XMP. The practicality of the method to calculate HSI noise is demonstrated by expanding the scope of the investigation to examine the rectangular blade as well as a highly swept and tapered blade over a tip Mach number range of 0.80 to 0.95. Comparisons with experimental data are excellent and the advantages of planform modifications are clearly evident. New insight is gained into the mechanisms of nonlinear propagation and the minimum distance at which a valid comparison of different rotors can be made: approximately two rotor radii from the center of rotation.

  10. Exploiting Fast-Variables to Understand Population Dynamics and Evolution

    NASA Astrophysics Data System (ADS)

    Constable, George W. A.; McKane, Alan J.

    2018-07-01

    We describe a continuous-time modelling framework for biological population dynamics that accounts for demographic noise. In the spirit of the methodology used by statistical physicists, transitions between the states of the system are caused by individual events, while the dynamics are described in terms of the time evolution of a probability density function. In general, the application of the diffusion approximation still leaves a description that is quite complex. However, in many biological applications one or more of the processes happen slowly relative to the system's other processes, and the dynamics can be approximated as occurring within a slow low-dimensional subspace. We review these time-scale separation arguments and analyse the simpler stochastic dynamics that result in a number of cases. We stress that it is important to retain the demographic noise derived in this way, and emphasise this point by showing that it can alter the direction of selection compared to the prediction made from an analysis of the corresponding deterministic model.

  11. Exploiting Fast-Variables to Understand Population Dynamics and Evolution

    NASA Astrophysics Data System (ADS)

    Constable, George W. A.; McKane, Alan J.

    2017-11-01

    We describe a continuous-time modelling framework for biological population dynamics that accounts for demographic noise. In the spirit of the methodology used by statistical physicists, transitions between the states of the system are caused by individual events, while the dynamics are described in terms of the time evolution of a probability density function. In general, the application of the diffusion approximation still leaves a description that is quite complex. However, in many biological applications one or more of the processes happen slowly relative to the system's other processes, and the dynamics can be approximated as occurring within a slow low-dimensional subspace. We review these time-scale separation arguments and analyse the simpler stochastic dynamics that result in a number of cases. We stress that it is important to retain the demographic noise derived in this way, and emphasise this point by showing that it can alter the direction of selection compared to the prediction made from an analysis of the corresponding deterministic model.

  12. Application of Approximate Pattern Matching in Two Dimensional Spaces to Grid Layout for Biochemical Network Maps

    PubMed Central

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    Background For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Conclusions Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486

  13. Application of approximate pattern matching in two dimensional spaces to grid layout for biochemical network maps.

    PubMed

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html.
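
    A toy version of the redistribution step both entries describe, greedily snapping preprocessed continuous coordinates to free square grid points. This is a simplified stand-in for the approximate pattern matching algorithm, included only to show the interface between the non-grid preprocessor output and the grid.

```python
import numpy as np

def snap_to_grid(coords, grid_size):
    """Greedily place each node on the nearest free integer grid point."""
    taken, placed = set(), {}
    order = np.argsort(np.linalg.norm(coords - coords.round(), axis=1))
    for i in order:                                   # easiest snaps first
        gx, gy = map(int, coords[i].round())
        for r in range(grid_size):                    # search outward for a free point
            ring = [(gx + dx, gy + dy) for dx in range(-r, r + 1)
                    for dy in range(-r, r + 1)]
            free = [c for c in ring if c not in taken]
            if free:
                best = min(free, key=lambda c: (c[0] - coords[i][0])**2
                                             + (c[1] - coords[i][1])**2)
                taken.add(best)
                placed[int(i)] = best
                break
    return placed

print(snap_to_grid(np.array([[0.2, 0.1], [0.3, 0.2], [1.6, 0.9]]), grid_size=8))
```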

  14. Branching-ratio approximation for the self-exciting Hawkes process

    NASA Astrophysics Data System (ADS)

    Hardiman, Stephen J.; Bouchaud, Jean-Philippe

    2014-12-01

    We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
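
    The estimator lends itself to a few lines of code. The sketch below uses the stationary Hawkes relations that, for windows much longer than the kernel memory, Var[N]/E[N] tends to 1/(1-n)^2, giving n of roughly 1 - sqrt(mean/variance); treat this closed form as our reconstruction of a mean-and-variance estimator, not a quotation of the paper's exact expression.

```python
import numpy as np

def branching_ratio(event_times, window, t_max):
    """Estimate n from mean and variance of event counts in fixed windows."""
    edges = np.arange(0.0, t_max, window)
    counts = np.histogram(event_times, bins=edges)[0]
    return 1.0 - np.sqrt(counts.mean() / counts.var())

# Sanity check on a plain Poisson stream, for which n should be ~0.
rng = np.random.default_rng(1)
poisson_times = np.cumsum(rng.exponential(1.0, 5000))
print(round(branching_ratio(poisson_times, 50.0, poisson_times[-1]), 2))
```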

  15. Generalization of Wilemski-Fixman-Weiss decoupling approximation to the case involving multiple sinks of different sizes, shapes, and reactivities.

    PubMed

    Uhm, Jesik; Lee, Jinuk; Eun, Changsun; Lee, Sangyoub

    2006-08-07

    We generalize the Wilemski-Fixman-Weiss decoupling approximation to calculate the transient rate of absorption of point particles into multiple sinks of different sizes, shapes, and reactivities. As an application we consider the case involving two spherical sinks. We obtain a Laplace-transform expression for the transient rate that is in excellent agreement with computer simulations. The long-time steady-state rate has a relatively simple expression, which clearly shows the dependence on the diffusion constant of the particles and on the sizes and reactivities of sinks, and its numerical result is in good agreement with the known exact result that is given in terms of recursion relations.

  16. Stochastic optimal control of ultradiffusion processes with application to dynamic portfolio management

    NASA Astrophysics Data System (ADS)

    Marcozzi, Michael D.

    2008-12-01

    We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.

  17. Software algorithm and hardware design for real-time implementation of new spectral estimator

    PubMed Central

    2014-01-01

    Background Real-time spectral analyzers can be difficult to implement for PC computer-based systems because of the potential for high computational cost, and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis, and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement, please see the Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214
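
    The quoted DFT figure can be sanity-checked from first principles: a length-N transform of data sampled at fs resolves fs/N Hz, i.e. one over the record length. The values below are taken from the abstract.

```python
fs = 977.0      # sample rate in Hz (approximately 1 ms per sample)
n = 8192        # radix-2 record covering roughly 8 s
print(fs / n)   # ~0.119 Hz, consistent with the ~0.122 Hz quoted for the DFT
```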

  18. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
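
    A compact sketch of the alternating fixed-point scheme described above, with both the object and the PSF padded to the data size (2-D arrays assumed) and each updated by a few plain least-squares gradient descents under a non-negativity clip. The step size, initialization, and descent rule are simplifying assumptions; the paper's preferred inner solver is the +SOR variant, not this plain descent.

```python
import numpy as np
from scipy.signal import fftconvolve

def descend(data, fixed, n_steps, step=1e-3):
    """A few least-squares descents on one factor, the other held fixed."""
    v = np.zeros_like(data)                       # previous estimate discarded
    for _ in range(n_steps):
        resid = fftconvolve(v, fixed, mode="same") - data
        v -= step * fftconvolve(resid, fixed[::-1, ::-1], mode="same")
    return np.clip(v, 0.0, None)                  # non-negativity constraint

def alternate(data, n_outer=20, k_obj=5, k_psf=5):
    """Alternate object/PSF updates; descent counts act as regularizers."""
    x = np.ones_like(data)                        # object estimate
    y = np.ones_like(data) / data.size            # PSF estimate, padded alike
    for _ in range(n_outer):                      # iterate toward a fixed point
        x = descend(data, y, k_obj)
        y = descend(data, x, k_psf)
    return x, y
```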

  19. A Statistical Guide to the Design of Deep Mutational Scanning Experiments

    PubMed Central

    Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia

    2016-01-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
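
    For concreteness, the per-mutant estimate at the heart of such bulk competitions reduces to a log-linear regression across the sampled time points. The counts below are invented, and this simple least-squares slope ignores the sequencing-noise model the paper analyzes.

```python
import numpy as np

t = np.array([0.0, 8.0, 16.0, 24.0])         # sampling times (generations)
mut = np.array([500, 620, 790, 1005])        # mutant read counts (toy data)
ref = np.array([5000, 4900, 5100, 4950])     # reference (wild-type) counts
s = np.polyfit(t, np.log(mut / ref), 1)[0]   # slope of log relative abundance
print(round(s, 4))                           # ~0.03 per generation
```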

  20. Transmitter pointing loss calculation for free-space optical communications link analyses

    NASA Technical Reports Server (NTRS)

    Marshall, William K.

    1987-01-01

    In calculating the performance of free-space optical communications links, the transmitter pointing loss is one of the two most important factors. It is shown in this paper that the traditional formula for the instantaneous pointing loss (i.e., for the transmitter telescope far-field beam pattern) is quite inaccurate. A more accurate and practical expression is developed in which the pointing loss is calculated using a Taylor series approximation. The four-term series is shown to be accurate to 0.1 dB for pointing angles θ not greater than 0.9 λ/D (wavelength/telescope diameter).
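
    To illustrate the quantities involved (not the paper's four-term series itself): for a uniformly illuminated circular aperture the far-field gain follows the Airy pattern, and a low-order Taylor expansion in the normalized angle u = πDθ/λ tracks it closely at small offsets.

```python
import numpy as np
from scipy.special import j1

u = np.pi * 0.3                               # u = pi*D*theta/lambda at theta = 0.3 lambda/D
gain_airy = (2 * j1(u) / u) ** 2              # uniformly illuminated circular aperture
gain_taylor = 1 - u**2 / 4 + 5 * u**4 / 192   # series of the same gain about u = 0
print(-10 * np.log10(gain_airy), -10 * np.log10(gain_taylor))  # both ~1 dB
```

    More terms are needed to hold 0.1 dB accuracy out to 0.9 λ/D, which is what the paper's four-term series provides.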

  1. Pretty as a Princess: Longitudinal Effects of Engagement with Disney Princesses on Gender Stereotypes, Body Esteem, and Prosocial Behavior in Children

    ERIC Educational Resources Information Center

    Coyne, Sarah M.; Linder, Jennifer Ruh; Rasmussen, Eric E.; Nelson, David A.; Birkbeck, Victoria

    2016-01-01

    This study examined level of engagement with Disney Princess media/products as it relates to gender-stereotypical behavior, body esteem (i.e. body image), and prosocial behavior during early childhood. Participants consisted of 198 children (M[subscript age] = 58 months), who were tested at two time points (approximately 1 year apart). Data…

  2. Progress report on hot particle studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baum, J.W.; Kaurin, D.G.; Waligorski, M.

    1992-02-01

    NCRP Report 106 on the effects of hot particles on the skin of pigs, monkeys, and humans was critically reviewed and reassessed. The analysis of the data of Forbes and Mikhail on the effects from activated UC2 particles, ranging in diameter from 144 µm to 328 µm, led to the formulation of a new model to predict both the threshold for acute ulceration and the ulcer diameter. In this model, a point dose of 27 Gy at a depth of 1.33 mm in tissue will cause an ulcer with a diameter determined by the radius to which this dose extends. Application of the model to the Forbes and Mikhail data obtained with mixed fission product beta particles yielded a "threshold" (5% probability) of 6 × 10^9 beta particles from a point source of high-energy (2.25 MeV maximum) beta particles on skin. The above model was used to predict that approximately 1.2 × 10^10 beta particles from Sr-Y-90 would produce similar effects, since few Sr-90 beta particles reach a depth of 1.33 mm. These emissions correspond to doses at 70 µm depth in tissue of approximately 5.3 to 5.5 Gy averaged over 1 cm², respectively.

  3. Progress report on hot particle studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baum, J.W.; Kaurin, D.G.; Waligorski, M.

    1992-02-01

    NCRP Report 106 on the effects of hot particles on the skin of pigs, monkeys, and humans was critically reviewed and reassessed. The analysis of the data of Forbes and Mikhail on the effects from activated UC2 particles, ranging in diameter from 144 µm to 328 µm, led to the formulation of a new model to predict both the threshold for acute ulceration and the ulcer diameter. In this model, a point dose of 27 Gy at a depth of 1.33 mm in tissue will cause an ulcer with a diameter determined by the radius to which this dose extends. Application of the model to the Forbes and Mikhail data obtained with mixed fission product beta particles yielded a "threshold" (5% probability) of 6 × 10^9 beta particles from a point source of high-energy (2.25 MeV maximum) beta particles on skin. The above model was used to predict that approximately 1.2 × 10^10 beta particles from Sr-Y-90 would produce similar effects, since few Sr-90 beta particles reach a depth of 1.33 mm. These emissions correspond to doses at 70 µm depth in tissue of approximately 5.3 to 5.5 Gy averaged over 1 cm², respectively.

  4. Predicting financial market crashes using ghost singularities.

    PubMed

    Smug, Damian; Ashwin, Peter; Sornette, Didier

    2018-01-01

    We analyse the behaviour of a non-linear model of coupled stock and bond prices exhibiting periodically collapsing bubbles. By using the formalism of dynamical system theory, we explain what drives the bubbles and how foreshocks or aftershocks are generated. A dynamical phase space representation of that system coupled with standard multiplicative noise rationalises the log-periodic power law singularity pattern documented in many historical financial bubbles. The notion of 'ghosts of finite-time singularities' is introduced and used to estimate the end of an evolving bubble, using finite-time singularities of an approximate normal form near the bifurcation point. We test the forecasting skill of this method on different stochastic price realisations and compare with Monte Carlo simulations of the full system. Remarkably, the approximate normal form is significantly more precise and less biased. Moreover, the method of ghosts of singularities is less sensitive to the noise realisation, thus providing more robust forecasts.
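
    As a pointer to the mechanism (in our notation, not the paper's), the saddle-node normal form below captures both behaviours discussed above: past the bifurcation the solution blows up in finite time, and just before it the vanished fixed point leaves a slow "ghost" passage.

```latex
% Saddle-node normal form and its explicit solution for \mu > 0:
\dot{x} = \mu + x^{2},
\qquad
x(t) = \sqrt{\mu}\,\tan\!\bigl(\sqrt{\mu}\,(t - t_{0})\bigr),
```

    so the solution diverges at the finite time t* = t0 + π/(2√μ), and the passage through the region of the former fixed points takes a long but finite time of order π/√μ: the ghost that the forecasting method exploits.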

  5. Predicting financial market crashes using ghost singularities

    PubMed Central

    2018-01-01

    We analyse the behaviour of a non-linear model of coupled stock and bond prices exhibiting periodically collapsing bubbles. By using the formalism of dynamical system theory, we explain what drives the bubbles and how foreshocks or aftershocks are generated. A dynamical phase space representation of that system coupled with standard multiplicative noise rationalises the log-periodic power law singularity pattern documented in many historical financial bubbles. The notion of ‘ghosts of finite-time singularities’ is introduced and used to estimate the end of an evolving bubble, using finite-time singularities of an approximate normal form near the bifurcation point. We test the forecasting skill of this method on different stochastic price realisations and compare with Monte Carlo simulations of the full system. Remarkably, the approximate normal form is significantly more precise and less biased. Moreover, the method of ghosts of singularities is less sensitive to the noise realisation, thus providing more robust forecasts. PMID:29596485

  6. Stability of iterative procedures with errors for approximating common fixed points of a couple of q-contractive-like mappings in Banach spaces

    NASA Astrophysics Data System (ADS)

    Zeng, Lu-Chuan; Yao, Jen-Chih

    2006-09-01

    Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.
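
    For readers outside this literature, the iterative procedures with errors in question are of the following Ishikawa type; the notation is schematic, and the cited papers state the exact scheme and the conditions on the sequences.

```latex
% Generic Ishikawa-type iteration with errors for a couple of mappings S, T:
x_{n+1} = (1 - \alpha_{n})\,x_{n} + \alpha_{n}\,S y_{n} + u_{n},
\qquad
y_{n} = (1 - \beta_{n})\,x_{n} + \beta_{n}\,T x_{n} + v_{n},
```

    with α_n, β_n in [0,1] and {u_n}, {v_n} bounded error sequences; the common fixed point of the couple (S, T) is the target of the iteration.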

  7. Implicit assimilation for marine ecological models

    NASA Astrophysics Data System (ADS)

    Weir, B.; Miller, R.; Spitz, Y. H.

    2012-12-01

    We use a new data assimilation method to estimate the parameters of a marine ecological model. At a given point in the ocean, the estimated values of the parameters determine the behaviors of the modeled planktonic groups, and thus indicate which species are dominant. To begin, we assimilate in situ observations, e.g., the Bermuda Atlantic Time-series Study, the Hawaii Ocean Time-series, and Ocean Weather Station Papa. From there, we estimate the parameters at surrounding points in space based on satellite observations of ocean color. Given the variation of the estimated parameters, we divide the ocean into regions meant to represent distinct ecosystems. An important feature of the data assimilation approach is that it refines the confidence limits of the optimal Gaussian approximation to the distribution of the parameters. This enables us to determine the ecological divisions with greater accuracy.

  8. Brief report: a preliminary study of fetal head circumference growth in autism spectrum disorder.

    PubMed

    Whitehouse, Andrew J O; Hickey, Martha; Stanley, Fiona J; Newnham, John P; Pennell, Craig E

    2011-01-01

    Fetal head circumference (HC) growth was examined prospectively in children with autism spectrum disorder (ASD). ASD participants (N = 14) were each matched with four control participants (N = 56) on a range of parameters known to influence fetal growth. HC was measured using ultrasonography at approximately 18 weeks gestation and again at birth using a paper tape-measure. Overall body size was indexed by fetal femur-length and birth length. There was no between-groups difference in head circumference at either time-point. While a small number of children with ASD had disproportionately large head circumference relative to body size at both time-points, the between-groups difference did not reach statistical significance in this small sample. These preliminary findings suggest that further investigation of fetal growth in ASD is warranted.

  9. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  10. 27 CFR 9.226 - Inwood Valley.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... east-northeasterly in a straight line approximately 4.1 miles, onto the Inwood map, to the 1,786-foot... 2.1 miles to the 2,086-foot elevation point, section 15, T31N/R1W; then (3) Proceed north-northeasterly in a straight line approximately 0.7 mile to the marked 1,648-foot elevation point (which should...

  11. 27 CFR 9.226 - Inwood Valley.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... east-northeasterly in a straight line approximately 4.1 miles, onto the Inwood map, to the 1,786-foot... 2.1 miles to the 2,086-foot elevation point, section 15, T31N/R1W; then (3) Proceed north-northeasterly in a straight line approximately 0.7 mile to the marked 1,648-foot elevation point (which should...

  12. 27 CFR 9.233 - Kelsey Bench-Lake County.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... mile to the point where the road intersects a straight line drawn westward from the marked 2,493-foot..., approximately 0.8 mile to the first intersection of the eastern boundary of section 26 and the 1,720-foot..., a total distance of approximately 3.25 miles, to the marked 1,439-foot elevation point in section 29...

  13. Modeling deep brain stimulation: point source approximation versus realistic representation of the electrode

    NASA Astrophysics Data System (ADS)

    Zhang, Tianhe C.; Grill, Warren M.

    2010-12-01

    Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
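
    The point-source half of the comparison is closed-form: in an infinite homogeneous isotropic volume conductor, the extracellular potential of a monopolar current source is V = I/(4πσr). The sketch below evaluates this standard result for generic DBS-like values; the current and conductivity are illustrative, not those of the study.

```python
import numpy as np

def point_source_potential(i_src, sigma, r):
    """V = I / (4*pi*sigma*r) for a monopole in a homogeneous isotropic medium."""
    return i_src / (4.0 * np.pi * sigma * np.asarray(r, dtype=float))

# -1 mA cathodic source, 0.2 S/m tissue, radii of 1, 2 and 4 mm (toy values)
print(point_source_potential(-1e-3, 0.2, [1e-3, 2e-3, 4e-3]))
```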

  14. Super Clausius-Clapeyron scaling of extreme hourly precipitation and its relation to large-scale atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Barbero, Renaud; Loriaux, Jessica; Fowler, Hayley

    2017-04-01

    Present-day precipitation-temperature scaling relations indicate that hourly precipitation extremes may have a response to warming exceeding the Clausius-Clapeyron (CC) relation; for The Netherlands the dependency on surface dew point temperature follows two times the CC relation, corresponding to 14% per degree. Our hypothesis, supported by a simple physical argument presented here, is that this 2CC behaviour arises from the physics of convective clouds: the response is due to local feedbacks related to the convective activity, while the other large-scale atmospheric forcing conditions remain similar except for the higher temperature (approximately uniform warming with height) and absolute humidity (corresponding to the assumption of unchanged relative humidity). To test this hypothesis, we analysed the large-scale atmospheric conditions accompanying summertime afternoon precipitation events, using surface observations combined with a regional re-analysis for The Netherlands. Events are precipitation measurements clustered in time and space, derived from approximately 30 automatic weather stations. The hourly peak intensities of these events again reveal a 2CC scaling with the surface dew point temperature. The temperature excess of moist updrafts initialized at the surface and the maximum cloud depth are clear functions of surface dew point temperature, confirming the key role of surface humidity in convective activity. Almost no differences in relative humidity and the dry temperature lapse rate were found across the dew point temperature range, supporting our theory that 2CC scaling is mainly due to the response of convection to increases in near-surface humidity, while the other atmospheric conditions remain similar. Additionally, hourly precipitation extremes are on average accompanied by substantial large-scale upward motions and therefore large-scale moisture convergence, which appears to accelerate with surface dew point. This increase in large-scale moisture convergence appears to be a consequence of latent heat release due to the convective activity, as estimated from the quasi-geostrophic omega equation. Consequently, most hourly extremes occur in precipitation events with considerable spatial extent. Importantly, this event size appears to increase rapidly at the highest dew point temperatures, suggesting potentially strong impacts of climatic warming.
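
    The "2CC = 14% per degree" figure is just the single CC rate compounded; a one-line check:

```python
cc = 1.07                    # ~7 %/K single Clausius-Clapeyron rate (approximate)
print((cc**2 - 1) * 100)     # ~14.5 %/K, i.e. the 2CC scaling quoted above
```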

  15. The timing of control signals underlying fast point-to-point arm movements.

    PubMed

    Ghafouri, M; Feldman, A G

    2001-04-01

    It is known that proprioceptive feedback induces muscle activation when the facilitation of appropriate motoneurons exceeds their threshold. In the suprathreshold range, the muscle-reflex system produces torques depending on the position and velocity of the joint segment(s) that the muscle spans. The static component of the torque-position relationship is referred to as the invariant characteristic (IC). According to the equilibrium-point (EP) hypothesis, control systems produce movements by changing the activation thresholds and thus shifting the IC of the appropriate muscles in joint space. This control process upsets the balance between muscle and external torques at the initial limb configuration and, to regain the balance, the limb is forced to establish a new configuration or, if the movement is prevented, a new level of static torques. Taken together, the joint angles and the muscle torques generated at an equilibrium configuration define a single variable called the EP. Thus by shifting the IC, control systems reset the EP. Muscle activation and movement emerge following the EP resetting because of the natural physical tendency of the system to reach equilibrium. Empirical and simulation studies support the notion that the control IC shifts and the resulting EP shifts underlying fast point-to-point arm movements are gradual rather than step-like. However, controversies exist about the duration of these shifts. Some studies suggest that the IC shifts cease with the movement offset. Other studies propose that the IC shifts end early in comparison to the movement duration (approximately, at peak velocity). The purpose of this study was to evaluate the duration of the IC shifts underlying fast point-to-point arm movements. Subjects made fast (hand peak velocity about 1.3 m/s) planar arm movements toward different targets while grasping a handle. Hand forces applied to the handle and shoulder/elbow torques were, respectively, measured from a force sensor placed on the handle, or computed with equations of motion. In some trials, an electromagnetic brake prevented movements. In such movements, the hand force and joint torques reached a steady state after a time that was much smaller than the movement duration in unobstructed movements and was approximately equal to the time to peak velocity (mean difference < 80 ms). In an additional experiment, subjects were instructed to rapidly initiate corrections of the pushing force in response to movement arrest. They were able to initiate such corrections only when the joint torques and the pushing force had practically reached a steady state. The latency of correction onset was, however, smaller than the duration of unobstructed movements. We concluded that during the time at which the steady state torques were reached, the control pattern of IC shifts remained the same despite the movement block. Thereby the duration of these shifts did not exceed the time of reaching the steady state torques. Our findings are consistent with the hypothesis that, in unobstructed movements, the IC shifts and resulting shifts in the EP end approximately at peak velocity. In other words, during the latter part of the movement, the control signals responsible for the equilibrium shift remained constant, and the movement was driven by the arm inertial, viscous and elastic forces produced by the muscle-reflex system. Fast movements may thus be completed without continuous control guidance. 
As a consequence, central corrections and sequential commands may be issued rapidly, without waiting for the end of kinematic responses to each command, which may be important for many motor behaviours including typing, piano playing and speech. Our study also illustrates that the timing of the control signals may be substantially different from that of the resulting motor output and that the same control pattern may produce different motor outputs depending on external conditions.

  16. Spitzer Photometry of Approximately 1 Million Stars in M31 and 15 Other Galaxies

    NASA Technical Reports Server (NTRS)

    Khan, Rubab

    2017-01-01

    We present Spitzer IRAC 3.6-8 micrometer and Multiband Imaging Photometer 24 micrometer point-source catalogs for M31 and 15 other mostly large, star-forming galaxies at distances of approximately 3.5-14 Mpc, including M51, M83, M101, and NGC 6946. These catalogs contain approximately 1 million sources, including approximately 859,000 in M31 and approximately 116,000 in the other galaxies. They were created following the procedures described in Khan et al. through a combination of point-spread function (PSF) fitting and aperture photometry. These data products constitute a resource for improving our understanding of the IR-bright (3.6-24 micrometer) point-source populations in crowded extragalactic stellar fields and for planning observations with the James Webb Space Telescope.

  17. Efficient determination of the uncertainty for the optimization of SPECT system design: a subsampled fisher information matrix.

    PubMed

    Fuin, Niccolo; Pedemonte, Stefano; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F

    2014-03-01

    System designs in single photon emission computed tomography (SPECT) can be evaluated based on the fundamental trade-off between bias and variance that can be achieved in the reconstruction of emission tomograms. This trade-off can be derived analytically using Cramer-Rao-type bounds, which involve the calculation and inversion of the Fisher information matrix (FIM). The inverse of the FIM expresses the uncertainty associated with the tomogram, enabling the comparison of system designs. However, computing, storing and inverting the FIM is not practical with 3-D imaging systems. In order to tackle the computational load of calculating the inverse of the FIM, a method based on the calculation of the local impulse response and the variance, at a single point, from a single row of the FIM, has previously been proposed for system design. However, this approximation (the circulant approximation) does not capture the global interdependence between the variables in shift-variant systems such as SPECT, and cannot account, e.g., for data truncation or missing data. Our new formulation relies on subsampling the FIM: the FIM is calculated over a subset of voxels arranged in a grid that covers the whole volume. Every element of the FIM at the grid points is calculated exactly, accounting for the acquisition geometry and for the object. This new formulation reduces the computational complexity of estimating the uncertainty, but nevertheless accounts for the global interdependence between the variables, enabling the exploration of design spaces hindered by the circulant approximation. The graphics processing unit (GPU) accelerated implementation of the algorithm reduces the computation times further, making the algorithm a good candidate for real-time optimization of adaptive imaging systems. This paper describes the subsampled FIM formulation and implementation details. The advantages and limitations of the new approximation are explored, in comparison with the circulant approximation, in the context of design optimization of a parallel-hole collimator SPECT system and of an adaptive imaging system (similar to the commercially available D-SPECT).
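
    A toy sketch of the subsampling idea: with a linear system matrix A and Poisson data of mean ybar = A x, the FIM is F = A^T diag(1/ybar) A, and the proposal is to evaluate its entries exactly but only for voxels on a coarse grid. The dimensions, sparsity, and grid below are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_vox = 1200, 400
# Sparse random stand-in for the SPECT system matrix and a uniform object.
A = rng.uniform(0.0, 1.0, (n_bins, n_vox)) * (rng.uniform(size=(n_bins, n_vox)) < 0.05)
x = np.ones(n_vox)
ybar = A @ x + 1e-9                        # expected counts (avoid divide-by-zero)

grid = np.arange(0, n_vox, 8)              # subsampled voxel grid
A_g = A[:, grid]
F_sub = A_g.T @ (A_g / ybar[:, None])      # exact FIM entries at the grid points
print(F_sub.shape)                         # (50, 50) instead of (400, 400)
```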

  18. Magnetic Reconnection during Turbulence: Statistics of X-Points and Heating

    NASA Astrophysics Data System (ADS)

    Shay, M. A.; Haggerty, C. C.; Parashar, T.; Matthaeus, W. H.; Phan, T.; Drake, J. F.; Servidio, S.; Wan, M.

    2017-12-01

Magnetic reconnection is a ubiquitous plasma phenomenon that has been observed in turbulent plasma systems. It is an important part of the turbulent dynamics and heating of space, laboratory and astrophysical plasmas. Recent simulation and observational studies have detailed how magnetic reconnection heats plasma, and this work has developed to the point where it can be applied to larger and more complex plasma systems. In this context, we examine the statistics of magnetic reconnection in fully kinetic PIC simulations to quantify the role of magnetic reconnection in energy dissipation and plasma heating. Most notably, we study the time evolution of these x-line statistics in decaying turbulence. First, we examine the distribution of reconnection rates at the x-points found in the simulation and find that their distribution is broader than the MHD counterpart, with an average value of approximately 0.1. Second, we study the time evolution of the x-points to determine when reconnection is most active in the turbulence. Finally, using our findings on these statistics, reconnection heating predictions are applied to the regions surrounding the identified x-points, and this is used to study the role of magnetic reconnection in turbulent heating of plasma. The ratio of ion to electron heating rates is found to be consistent with magnetic reconnection predictions.

  19. A similarity hypothesis for the two-point correlation tensor in a temporally evolving plane wake

    NASA Technical Reports Server (NTRS)

    Ewing, D. W.; George, W. K.; Moser, R. D.; Rogers, M. M.

    1995-01-01

The analysis demonstrated that the governing equations for the two-point velocity correlation tensor in the temporally evolving wake admit similarity solutions, which include the similarity solutions for the single-point moments as a special case. The resulting equations for the similarity solutions include two constants, beta and Re(sub sigma), that are ratios of three characteristic time scales of processes in the flow: a viscous time scale, a time scale characteristic of the spread rate of the flow, and a characteristic time scale of the mean strain rate. The values of these ratios depend on the initial conditions of the flow and are most likely measures of the coherent structures in the initial conditions. The occurrence of these constants in the governing equations for the similarity solutions indicates that these solutions, in general, will only be the same for two flows if the two constants are equal (and hence the coherent structures in the flows are related). The comparisons between the predictions of the similarity hypothesis and the data presented here and elsewhere indicate that the similarity solutions for the two-point correlation tensors provide a good approximation of the measures of those motions that are not significantly affected by the boundary conditions caused by the finite extent of real flows. Thus, the two-point similarity hypothesis provides a useful tool for both numerical and physical experimentalists that can be used to examine how the finite extent of real flows affects the evolution of the different scales of motion in the flow.

  20. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system is proposed in this paper. A new structure for the multi-objective transfer trajectory optimization model is established, which divides the transfer trajectory into several segments and assigns the dominant role to invariant manifolds or to low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

  1. Communication: Analytic continuation of the virial series through the critical point using parametric approximants.

    PubMed

    Barlow, Nathaniel S; Schultz, Andrew J; Weinstein, Steven J; Kofke, David A

    2015-08-21

    The mathematical structure imposed by the thermodynamic critical point motivates an approximant that synthesizes two theoretically sound equations of state: the parametric and the virial. The former is constructed to describe the critical region, incorporating all scaling laws; the latter is an expansion about zero density, developed from molecular considerations. The approximant is shown to yield an equation of state capable of accurately describing properties over a large portion of the thermodynamic parameter space, far greater than that covered by each treatment alone.

  2. Increased horizontal viewing zone angle of a hologram by resolution redistribution of a spatial light modulator.

    PubMed

    Takaki, Yasuhiro; Hayashi, Yuki

    2008-07-01

The narrow viewing zone angle is one of the problems associated with electronic holography. We propose a technique that enables the ratio of horizontal and vertical resolutions of a spatial light modulator (SLM) to be altered. This technique increases the horizontal resolution of an SLM several times, so that the horizontal viewing zone angle is also increased several times. An SLM illuminated by a slanted point light source array is imaged by a 4f imaging system in which a horizontal slit is located on the Fourier plane. We show that the horizontal resolution was increased four times and that the horizontal viewing zone angle was increased approximately four times.

  3. Objective evaluation of female feet and leg joint conformation at time of selection and post first parity in swine.

    PubMed

    Stock, J D; Calderón Díaz, J A; Rothschild, M F; Mote, B E; Stalder, K J

    2018-06-09

Feet and legs of replacement females were objectively evaluated at selection, i.e. approximately 150 days of age (n=319), and post first parity, i.e. any time after weaning of the first litter and before the second parturition (n=277), to 1) compare feet and leg joint angle ranges between selection and post first parity; 2) identify feet and leg joint angle differences between selection and the first three weeks of the second gestation; 3) identify feet and leg joint angle differences between farms and gestation days during the second gestation; and 4) obtain genetic variance components for conformation angles at the two time points measured. Angles for the carpal joint (knee), metacarpophalangeal joint (front pastern), metatarsophalangeal joint (rear pastern), tarsal joint (hock), and rear stance were measured using image analysis software. Between selection and post first parity, significant differences were observed for all joints measured (P < 0.05). Knee, front and rear pastern angles were smaller (more flexion), and hock angles were greater (less flexion), as age progressed (P < 0.05), while the rear stance angle was smaller (feet further under center) at selection than post first parity (only including measures during the first three weeks of the second gestation). Using only post first parity leg conformation information, farm was a significant source of variation for front and rear pastern and rear stance angle measurements (P < 0.05). Knee angle was smaller (more flexion) (P < 0.05) as gestation age progressed. Heritability estimates were low to moderate (0.04-0.35) for all traits measured across time points. Genetic correlations between the same joints at different time points were high (> 0.8) for the front leg joints and low (< 0.2) for the rear leg joints. High genetic correlations between time points indicate that the trait can be considered the same at either time point, while low genetic correlations indicate that the trait at different time points should be considered as two separate traits. Minimal change in the front leg suggests conformation traits that persist between selection and post first parity, while larger changes in the rear leg indicate that rear leg conformation traits should be evaluated at multiple time periods.

  4. Singularities of Floquet scattering and tunneling

    NASA Astrophysics Data System (ADS)

    Landa, H.

    2018-04-01

We study quasibound states and scattering with short-range potentials in three dimensions, subject to an axial periodic driving. We find that poles of the scattering S matrix can cross the real energy axis as a function of the drive amplitude, making the S matrix nonanalytic at a singular point. For the corresponding quasibound states that can tunnel out of (or get captured within) a potential well, this results in a discontinuous jump in both the angular momentum and energy of emitted (absorbed) waves. We also analyze elastic and inelastic scattering of slow particles in the time-dependent potential. For a drive amplitude at the singular point, there is a total absorption of incoming low-energy (s wave) particles and their conversion to high-energy outgoing (mostly p) waves. We examine the relation of such Floquet singularities, lacking in an effective time-independent approximation, with well-known "spectral singularities" (or "exceptional points"). These results are based on an analytic approach for obtaining eigensolutions of time-dependent periodic Hamiltonians with mixed cylindrical and spherical symmetry, and apply broadly to particles interacting via power-law forces and subject to periodic fields, e.g., co-trapped ions and atoms.

  5. Application of closed-form solutions to a mesh point field in silicon solar cells

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1985-01-01

A computer simulation method is discussed that provides equivalent simulation accuracy but exhibits significantly lower CPU running time per bias point compared to other techniques. This new method is applied to a mesh point field as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, which is typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and the closed-form solutions obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, along with the treatment of Auger recombination, ambipolar considerations, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities. Bandgap narrowing has been investigated using Fermi-Dirac statistics, and these results show that bandgap narrowing is more pronounced and temperature-dependent, in contrast to the results based on Boltzmann statistics.

  6. Space-by-Time Modular Decomposition Effectively Describes Whole-Body Muscle Activity During Upright Reaching in Various Directions

    PubMed Central

    Hilt, Pauline M.; Delis, Ioannis; Pozzo, Thierry; Berret, Bastien

    2018-01-01

    The modular control hypothesis suggests that motor commands are built from precoded modules whose specific combined recruitment can allow the performance of virtually any motor task. Despite considerable experimental support, this hypothesis remains tentative as classical findings of reduced dimensionality in muscle activity may also result from other constraints (biomechanical couplings, data averaging or low dimensionality of motor tasks). Here we assessed the effectiveness of modularity in describing muscle activity in a comprehensive experiment comprising 72 distinct point-to-point whole-body movements during which the activity of 30 muscles was recorded. To identify invariant modules of a temporal and spatial nature, we used a space-by-time decomposition of muscle activity that has been shown to encompass classical modularity models. To examine the decompositions, we focused not only on the amount of variance they explained but also on whether the task performed on each trial could be decoded from the single-trial activations of modules. For the sake of comparison, we confronted these scores to the scores obtained from alternative non-modular descriptions of the muscle data. We found that the space-by-time decomposition was effective in terms of data approximation and task discrimination at comparable reduction of dimensionality. These findings show that few spatial and temporal modules give a compact yet approximate representation of muscle patterns carrying nearly all task-relevant information for a variety of whole-body reaching movements. PMID:29666576
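    As a rough illustration of the space-by-time idea, the sketch below factorizes per-trial muscle activity matrices M_s into shared temporal modules W, shared spatial modules V, and trial-specific activations A_s, so that M_s ≈ W A_s V. It uses plain alternating least squares with nonnegativity clipping on synthetic data; the authors' actual decomposition algorithm and its update rules are not reproduced here:

```python
# A toy space-by-time decomposition, M_s ≈ W @ A_s @ V, where W holds temporal
# modules (time x P), V spatial modules (N x muscles), and A_s per-trial
# activations. Alternating least squares with clipping is a stand-in for the
# authors' method; the data are random, not real EMG.
import numpy as np

rng = np.random.default_rng(1)
T, M, S = 50, 30, 72          # time samples, muscles, trials
P, N = 4, 3                   # numbers of temporal and spatial modules
data = rng.random((S, T, M))  # stand-in EMG envelopes, one (T x M) matrix per trial

W = rng.random((T, P))
V = rng.random((N, M))
A = rng.random((S, P, N))

for _ in range(100):
    # update per-trial activations A_s with W, V fixed (least squares, clipped >= 0)
    Wp, Vp = np.linalg.pinv(W), np.linalg.pinv(V)
    A = np.clip(np.einsum('pt,stm,mn->spn', Wp, data, Vp), 0, None)
    # update W: stack trials so that data_s ≈ W @ (A_s V) is solved jointly
    B = np.concatenate([A[s] @ V for s in range(S)], axis=1)      # (P, S*M)
    D = np.concatenate([data[s] for s in range(S)], axis=1)       # (T, S*M)
    W = np.clip(D @ np.linalg.pinv(B), 0, None)
    # update V symmetrically: data_s ≈ (W A_s) @ V
    C = np.concatenate([W @ A[s] for s in range(S)], axis=0)      # (S*T, N)
    E = np.concatenate([data[s] for s in range(S)], axis=0)       # (S*T, M)
    V = np.clip(np.linalg.pinv(C) @ E, 0, None)

recon = np.einsum('tp,spn,nm->stm', W, A, V)
print('variance accounted for:', 1 - np.sum((data - recon)**2) / np.sum(data**2))
```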

  7. Considering the reversibility of passive and reactive transport problems: Are forward-in-time and backward-in-time models ever equivalent?

    NASA Astrophysics Data System (ADS)

    Engdahl, N.

    2017-12-01

Backward-in-time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward-in-time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent, but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of existing Lagrangian methods.
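    The forward/backward asymmetry introduced by diffusion can be illustrated with a one-dimensional random-walk toy model, sketched below. The velocity field, coefficients and particle counts are hypothetical; the point is only that reversing the advective step while diffusion keeps spreading leaves a residual spread when endpoints are tracked backward:

```python
# A minimal sketch of forward- vs backward-in-time random-walk particle
# tracking in 1-D with a hypothetical velocity field v(x). Backward tracking
# reverses the advective step, but diffusion (self-adjoint) spreads mass in
# both time directions, so the reversal is not exact.
import numpy as np

rng = np.random.default_rng(2)
D, dt, nsteps, npart = 0.01, 0.1, 500, 5000   # diffusion coeff., step, counts

def v(x):
    # stand-in heterogeneous velocity field (not from the paper)
    return 1.0 + 0.5 * np.sin(2 * np.pi * x / 10.0)

def track(x0, direction=+1):
    x = x0.copy()
    for _ in range(nsteps):
        # advection reverses with the time direction; diffusion does not
        x += direction * v(x) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(x.size)
    return x

x_start = np.zeros(npart)
x_fwd = track(x_start, +1)            # forward plume after nsteps
x_bwd = track(x_fwd, -1)              # run the endpoints backward in time
print('mean return error:', np.mean(x_bwd))
print('residual spread of returned particles:', np.std(x_bwd))
```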

  8. Mean-force-field and mean-spherical approximations for the electric microfield distribution at a charged point in the charged-hard-particles fluid

    NASA Astrophysics Data System (ADS)

    Rosenfeld, Yaakov

    1989-01-01

    The linearized mean-force-field approximation, leading to a Gaussian distribution, provides an exact formal solution to the mean-spherical integral equation model for the electric microfield distribution at a charged point in the general charged-hard-particles fluid. Lado's explicit solution for plasmas immediately follows this general observation.

  9. Chiral interface at the finite temperature transition point of QCD

    NASA Technical Reports Server (NTRS)

    Frei, Z.; Patkos, A.

    1990-01-01

The domain wall between coexisting chirally symmetric and broken symmetry regions is studied in a saddle point approximation to the effective three-flavor sigma model. In the chiral limit the surface tension varies in the range (40-50 MeV)^3. The width of the domain wall is estimated to be approximately 4.5 fm.

  10. Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm

    DTIC Science & Technology

    2014-11-01

experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The ... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed-form solution, a linearized ADM was proposed for the ... a closed-form solution, but the beta-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The

  11. Simultaneity: A Question of Time, Space, Resources and Purpose

    DTIC Science & Technology

    2001-05-01

operational headquarters responsible for execution. 61 Preceded by special operations forces, US Army Rangers parachuted onto Point Salines' airfield at ... approximately 0530 hours on 25 October. 62 The 82nd Airborne Division and a Marine Amphibious Unit followed the Rangers. Despite a multitude of deviations ... objectives in the south. 68 Vessey then approved the course of action that specified a coup de main in which US Army Rangers, Marines and airborne troops

  12. Minimizing Statistical Bias with Queries.

    DTIC Science & Technology

    1995-09-14

method for optimally selecting these points would offer enormous savings in time and money. An active learning system will typically attempt to select data ... research in active learning assumes that the second term of Equation 2 is approximately zero, that is, that the learner is unbiased. If this is the case ... outperforms the variance-minimizing algorithm and random exploration. and effective strategy for active learning. I have given empirical evidence that, with

  13. Time-Distance Helioseismology with the MDI Instrument: Initial Results

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Kosovichev, A. G.; Scherrer, P. H.; Bogart, R. S.; Bush, R. I.; DeForest, C.; Hoeksema, J. T.; Schou, J.; Saba, J. L. R.; Tarbell, T. D.; hide

    1997-01-01

In time-distance helioseismology, the travel time of acoustic waves is measured between various points on the solar surface. To some approximation, the waves can be considered to follow ray paths that depend only on a mean solar model, with the curvature of the ray paths being caused by the increasing sound speed with depth below the surface. The travel time is affected by various inhomogeneities along the ray path, including flows, temperature inhomogeneities, and magnetic fields. By measuring a large number of travel times between different locations and using an inversion method, it is possible to construct 3-dimensional maps of the subsurface inhomogeneities. The SOI/MDI experiment on SOHO has several unique capabilities for time-distance helioseismology. The great stability of the images, observed without benefit of an intervening atmosphere, is quite striking. It has made it possible for us to detect travel times for separations of points as small as 2.4 Mm in the high-resolution mode of MDI (0.6 arcsec/pixel). This has enabled the detection of the supergranulation flow. Coupled with the inversion technique, we can now study the 3-dimensional evolution of the flows near the solar surface.
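    A schematic version of the basic travel-time measurement is sketched below: the travel time between two surface points is taken as the lag maximizing the cross-correlation of their oscillation signals. Everything here (cadence, lag, noise level) is synthetic and merely illustrates the principle, not the MDI pipeline:

```python
# Toy travel-time estimate: the lag that maximizes the cross-correlation of
# two synthetic oscillation signals. Signals, cadence and the "true" lag are
# invented for illustration only.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 1.0, 4096                      # cadence [s], samples
true_lag = 840.0                       # hypothetical travel time [s]

# smooth random signal plus independent noise at each of the two points
sig = np.convolve(rng.standard_normal(n), np.hanning(50), mode='same')
s1 = sig + 0.5 * rng.standard_normal(n)
s2 = np.roll(sig, int(true_lag / dt)) + 0.5 * rng.standard_normal(n)

corr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode='full')
lags = (np.arange(corr.size) - (n - 1)) * dt
print('estimated travel time [s]:', lags[np.argmax(corr)])
```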

  14. Asymptotic safety of quantum gravity beyond Ricci scalars

    NASA Astrophysics Data System (ADS)

    Falls, Kevin; King, Callum R.; Litim, Daniel F.; Nikolakopoulos, Kostas; Rahmede, Christoph

    2018-04-01

    We investigate the asymptotic safety conjecture for quantum gravity including curvature invariants beyond Ricci scalars. Our strategy is put to work for families of gravitational actions which depend on functions of the Ricci scalar, the Ricci tensor, and products thereof. Combining functional renormalization with high order polynomial approximations and full numerical integration we derive the renormalization group flow for all couplings and analyse their fixed points, scaling exponents, and the fixed point effective action as a function of the background Ricci curvature. The theory is characterized by three relevant couplings. Higher-dimensional couplings show near-Gaussian scaling with increasing canonical mass dimension. We find that Ricci tensor invariants stabilize the UV fixed point and lead to a rapid convergence of polynomial approximations. We apply our results to models for cosmology and establish that the gravitational fixed point admits inflationary solutions. We also compare findings with those from f (R ) -type theories in the same approximation and pin-point the key new effects due to Ricci tensor interactions. Implications for the asymptotic safety conjecture of gravity are indicated.

  15. HORN-6 special-purpose clustered computing system for electroholography.

    PubMed

    Ichihashi, Yasuyuki; Nakayama, Hirotaka; Ito, Tomoyoshi; Masuda, Nobuyuki; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Sugie, Takashige

    2009-08-03

    We developed the HORN-6 special-purpose computer for holography. We designed and constructed the HORN-6 board to handle an object image composed of one million points and constructed a cluster system composed of 16 HORN-6 boards. Using this HORN-6 cluster system, we succeeded in creating a computer-generated hologram of a three-dimensional image composed of 1,000,000 points at a rate of 1 frame per second, and a computer-generated hologram of an image composed of 100,000 points at a rate of 10 frames per second, which is near video rate, when the size of a computer-generated hologram is 1,920 x 1,080. The calculation speed is approximately 4,600 times faster than that of a personal computer with an Intel 3.4-GHz Pentium 4 CPU.

  16. Adaptive Neuro-Fuzzy Modeling of UH-60A Pilot Vibration

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Malki, Heidar A.; Langari, Reza

    2003-01-01

Adaptive neuro-fuzzy relationships have been developed to model the UH-60A Black Hawk pilot floor vertical vibration. A 200-point database that approximates the entire UH-60A helicopter flight envelope is used for training and testing purposes. The NASA/Army Airloads Program flight test database was the source of the 200-point database. The present study is conducted in two parts. The first part involves level flight conditions and the second part involves the entire (200-point) database including maneuver conditions. The results show that a neuro-fuzzy model can successfully predict the pilot vibration. Also, it is found that the training phase of this neuro-fuzzy model takes only two or three iterations to converge for most cases. Thus, the proposed approach produces a potentially viable model for real-time implementation.

  17. Predictive Trip Detection for Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Rankin, Drew J.; Jiang, Jin

    2016-08-01

This paper investigates the use of a Kalman filter (KF) to predict, within the shutdown system (SDS) of a nuclear power plant (NPP), whether safety parameter measurements have reached a trip set-point. In addition, least squares (LS) estimation compensates for prediction error due to system-model mismatch. The motivation behind predictive shutdown is to reduce the amount of time between the occurrence of a fault or failure and the time of trip detection, referred to as time-to-trip. These reductions in time-to-trip can ultimately lead to increases in safety and productivity margins. The proposed predictive SDS differs from conventional SDSs in that it compares point predictions of the measurements, rather than sensor measurements, against trip set-points. The predictive SDS is validated through simulation and experiments for the steam generator water level safety parameter. Performance of the proposed predictive SDS is compared against a benchmark conventional SDS with respect to time-to-trip. In addition, this paper analyzes prediction uncertainty, as well as the conditions under which it is possible to achieve reduced time-to-trip. Simulation results demonstrate that on average the predictive SDS reduces time-to-trip by an amount of time equal to the length of the prediction horizon and that the distribution of times-to-trip is approximately Gaussian. Experimental results reveal that a reduced time-to-trip can be achieved in a real-world system with unknown system-model mismatch and that the predictive SDS can be implemented with a scan time of under 100 ms. Thus, this paper is a proof of concept for KF/LS-based predictive trip detection.
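    The following sketch illustrates the predictive-SDS idea under stated assumptions (a scalar level measurement, a constant-velocity Kalman filter, and illustrative noise figures and set-point; none of these come from the paper): the filter's h-step-ahead point prediction, rather than the raw measurement, is compared against the trip set-point.

```python
# Hedged sketch of KF-based predictive trip detection for a scalar safety
# parameter. A constant-velocity KF produces an h-step-ahead point prediction
# that is compared against the trip set-point instead of the raw measurement.
# Model, noise levels and the ramp fault are illustrative stand-ins.
import numpy as np

dt, h = 0.1, 10                     # scan time [s], prediction horizon [steps]
F = np.array([[1, dt], [0, 1]])     # constant-velocity state transition
H = np.array([[1.0, 0.0]])          # only the level is measured
Q = 1e-4 * np.eye(2)                # assumed process noise
R = np.array([[0.05]])              # assumed measurement noise
trip_setpoint = 2.0                 # hypothetical trip set-point

x = np.zeros((2, 1)); P = np.eye(2)
rng = np.random.default_rng(4)
level = 0.0
for k in range(200):
    level += 0.02                                  # hypothetical ramp fault
    z = level + 0.05 * rng.standard_normal()
    # standard KF predict/update
    x = F @ x; P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    # h-step-ahead point prediction of the level, compared to the set-point
    x_pred = np.linalg.matrix_power(F, h) @ x
    if (H @ x_pred)[0, 0] >= trip_setpoint:
        print(f'predicted trip at scan {k}, measured level {z:.2f}')
        break
```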

  18. Contour-based image warping

    NASA Astrophysics Data System (ADS)

    Chan, Kwai H.; Lau, Rynson W.

    1996-09-01

Image warping concerns transforming an image from one spatial coordinate system to another. It is widely used for the visual effect of deforming and morphing images in the film industry. A number of warping techniques have been introduced, mainly based on the corresponding-pair mapping of feature points, feature vectors or feature patches (mostly triangular or quadrilateral). However, warping of an image object with an arbitrary shape is often required. This calls for a warping technique based on a boundary contour instead of feature points or feature line-vectors. In addition, when feature point or feature vector based techniques are used, approximation of the object boundary by points or vectors is required. In this case, the matching process of the corresponding pairs will be very time-consuming if a fine approximation is required. In this paper, we propose a contour-based warping technique for warping image objects with arbitrary shapes. The novel idea of the new method is the introduction of mathematical morphology to allow a more flexible control of image warping. Two morphological operators are used as contour determinators. The erosion operator is used to warp image contents inside a user-specified contour while the dilation operator is used to warp image contents outside the contour. This new method is proposed to assist further development of a semi-automatic motion morphing system when accompanied by robust feature extractors such as deformable templates or active contour models.
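    A minimal sketch of the morphological ingredient is given below, using SciPy's grey-scale erosion and dilation as the two contour determinators on a synthetic image with a hypothetical disk-shaped contour mask; the full semi-automatic warper is of course not reproduced:

```python
# Toy illustration of erosion inside a user-supplied contour and dilation
# outside it. Image, mask and structuring-element size are stand-ins chosen
# only to show the roles of the two operators.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
img = rng.random((128, 128))

# hypothetical object contour: a filled disk defines the "inside" region
yy, xx = np.mgrid[:128, :128]
inside = (yy - 64)**2 + (xx - 64)**2 < 40**2

eroded = ndimage.grey_erosion(img, size=(5, 5))    # shrinks bright content
dilated = ndimage.grey_dilation(img, size=(5, 5))  # grows bright content

warped = np.where(inside, eroded, dilated)         # combine by contour mask
print(warped.shape, float(warped.min()), float(warped.max()))
```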

  19. Fibromyalgia Outcomes Over Time: Results from a Prospective Observational Study in the United States

    PubMed Central

    Schaefer, Caroline P.; Adams, Edgar H.; Udall, Margarita; Masters, Elizabeth T.; Mann, Rachael M.; Daniel, Shoshana R.; McElroy, Heather J.; Cappelleri, Joseph C.; Clair, Andrew G.; Hopps, Markay; Staud, Roland; Mease, Philip; Silverman, Stuart L.

    2016-01-01

    Background: Longitudinal research on outcomes of patients with fibromyalgia is limited. Objective: To assess clinician and patient-reported outcomes over time among fibromyalgia patients. Methods: At enrollment (Baseline) and follow-up (approximately 2 years later), consented patients were screened for chronic widespread pain (CWP), attended a physician site visit to determine fibromyalgia status, and completed an online questionnaire assessing pain, sleep, function, health status, productivity, medications, and healthcare resource use. Results: Seventy-six fibromyalgia patients participated at both time points (at Baseline: 86.8% white, 89.5% female, mean age 50.9 years, and mean duration of fibromyalgia 4.1 years). Mean number of tender points at each physician visit was 14.1 and 13.5, respectively; 11 patients no longer screened positive for CWP at follow-up. A majority reported medication use for pain (59.2% at Baseline, 62.0% at Follow-up). The most common medication classes were opioids (32.4%), SSRIs (16.9%), and tramadol (14.1%) at Follow-up. Significant mean changes over time were observed for fibromyalgia symptoms (modified American College of Rheumatology 2010 criteria: 18.4 to 16.9; P=0.004), pain interference with function (Brief Pain Inventory-Short Form: 5.9 to 5.3; P=0.013), and sleep (Medical Outcomes Study-Sleep Scale: 58.3 to 52.7; P=0.004). Patients achieving ≥2 point improvement in pain (14.5%) experienced greater changes in pain interference with function (6.8 to 3.4; P=0.001) and sleep (62.4 to 51.0; P=0.061). Conclusion: Fibromyalgia patients reported high levels of burden at both time points, with few significant changes observed over time. Outcomes were variable among patients over time and were better among those with greater pain improvement. PMID:28077978

  20. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
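    The non-dominated sorting step at the core of SOP can be sketched compactly. In the toy code below (synthetic 2-D points and a stand-in objective), each evaluated point is scored by its function value and by the negative of its minimum distance to the other evaluated points, and the first non-dominated front supplies the candidate centers:

```python
# Sketch of SOP's two-objective ranking: minimize the expensive function value
# and maximize isolation (minimize -min_dist). Data and objective are toys.
import numpy as np

rng = np.random.default_rng(6)
X = rng.random((30, 2))             # previously evaluated points (toy 2-D)
f = np.sum((X - 0.5)**2, axis=1)    # stand-in expensive objective values

# minimum distance of each point to any other evaluated point
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
min_dist = d.min(axis=1)

obj = np.column_stack([f, -min_dist])   # both objectives to be minimized

def first_front(obj):
    """Indices of points not dominated by any other point."""
    n = obj.shape[0]
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        # i is dominated if some j != i is <= in both objectives and < in one
        better_eq = np.all(obj <= obj[i], axis=1)
        strictly = np.any(obj < obj[i], axis=1)
        dominated[i] = np.any(better_eq & strictly & (np.arange(n) != i))
    return np.where(~dominated)[0]

print('non-dominated center candidates:', first_front(obj))
```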

  1. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficiency optimization of the design space using approximation models with different levels of fidelity and have been widely used in different fields. As the foundations of variable fidelity approximation models, the selection of sample points of variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.

  2. Data approximation using a blending type spline construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.

  3. Singular reduction of resonant Hamiltonians

    NASA Astrophysics Data System (ADS)

    Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia

    2018-06-01

    We investigate the dynamics of resonant Hamiltonians with n degrees of freedom to which we attach a small perturbation. Our study is based on the geometric interpretation of singular reduction theory. The flow of the Hamiltonian vector field is reconstructed from the cross sections corresponding to an approximation of this vector field in an energy surface. This approximate system is also built using normal forms and applying reduction theory obtaining the reduced Hamiltonian that is defined on the orbit space. Generically, the reduction is of singular character and we classify the singularities in the orbit space, getting three different types of singular points. A critical point of the reduced Hamiltonian corresponds to a family of periodic solutions in the full system whose characteristic multipliers are approximated accordingly to the nature of the critical point.

  4. The functional equation truncation method for approximating slow invariant manifolds: a rapid method for computing intrinsic low-dimensional manifolds.

    PubMed

    Roussel, Marc R; Tang, Terry

    2006-12-07

    A slow manifold is a low-dimensional invariant manifold to which trajectories nearby are rapidly attracted on the way to the equilibrium point. The exact computation of the slow manifold simplifies the model without sacrificing accuracy on the slow time scales of the system. The Maas-Pope intrinsic low-dimensional manifold (ILDM) [Combust. Flame 88, 239 (1992)] is frequently used as an approximation to the slow manifold. This approximation is based on a linearized analysis of the differential equations and thus neglects curvature. We present here an efficient way to calculate an approximation equivalent to the ILDM. Our method, called functional equation truncation (FET), first develops a hierarchy of functional equations involving higher derivatives which can then be truncated at second-derivative terms to explicitly neglect the curvature. We prove that the ILDM and FET-approximated (FETA) manifolds are identical for the one-dimensional slow manifold of any planar system. In higher-dimensional spaces, the ILDM and FETA manifolds agree to numerical accuracy almost everywhere. Solution of the FET equations is, however, expected to generally be faster than the ILDM method.

  5. On the role of the frozen surface approximation in small wave-height perturbation theory for moving surfaces

    NASA Astrophysics Data System (ADS)

    Keiffer, Richard; Novarini, Jorge; Scharstein, Robert

    2002-11-01

In the standard development of the small wave-height approximation (SWHA) perturbation theory for scattering from moving rough surfaces [e.g., E. Y. Harper and F. M. Labianca, J. Acoust. Soc. Am. 58, 349-364 (1975)] the necessity for any sort of frozen surface approximation is avoided by the replacement of the rough boundary by a flat (and static) boundary. In this paper, this seemingly fortuitous byproduct of the small wave-height approximation is examined and found to fail to fully agree with an analysis based on the kinematics of the problem. Specifically, the first-order correction term from the standard perturbation approach predicts a scattered amplitude that depends on the source frequency, whereas the kinematics of the problem point to a scattered amplitude that depends on the scattered frequency. It is shown that a perturbation approach in which an explicit frozen surface approximation is made before the SWHA is invoked predicts (first-order) scattered amplitudes that are in agreement with the kinematic analysis. [Work supported by ONR/NRL (PE 61153N-32) and by grants of computer time from the DoD HPC Shared Resource Center at Stennis Space Center, MS.]

  6. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
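    The kd-tree ingredient can be sketched as follows, with SciPy's cKDTree standing in for the kd-tree filtering described in the paper (the data, cluster count, and update loop are toy stand-ins, and ISODATA's split/merge logic is omitted):

```python
# Sketch of the kd-tree speed-up: answer all nearest-cluster-center queries
# with one tree built on the centers, then recompute the centers. This is a
# generic k-means-style step, not the paper's full ISODATA implementation.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
pixels = rng.random((200_000, 4))       # toy multispectral pixels
centers = rng.random((16, 4))           # current cluster centers

tree = cKDTree(centers)
dist, label = tree.query(pixels, k=1)   # nearest center for every pixel at once

# update step: recompute each center as the mean of its assigned pixels
for c in range(len(centers)):
    members = pixels[label == c]
    if len(members):
        centers[c] = members.mean(axis=0)
print('cluster populations (first 4):', np.bincount(label)[:4])
```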

  7. A Fast Implementation of the Isodata Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Le Moigne, Jacqueline; Mount, David M.; Netanyahu, Nathan S.

    2007-01-01

Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  8. Phonon vibrational frequencies of all single-wall carbon nanotubes at the lambda point: reduced matrix calculations.

    PubMed

    Wang, Yufang; Wu, Yanzhao; Feng, Min; Wang, Hui; Jin, Qinghua; Ding, Datong; Cao, Xuewei

    2008-12-01

Using a simple method, the reduced matrix method, we simplified the calculation of the phonon vibrational frequencies according to the SWNT structure and phonon symmetry properties, and obtained the dispersion properties at the Gamma point in the Brillouin zone for all SWNTs whose diameters lie between 0.6 and 2.5 nm. The computation time is reduced by about 2-4 orders of magnitude. A series of relationships between the diameters of SWNTs and the frequencies of Raman and IR active modes is given. Several fine structures, including "glazed tile" structures, are found in the omega versus d figures, which might predict a certain macro-quantum phenomenon of the phonons in SWNTs.

  9. Crossover from BCS to Bose superconductivity: A functional integral approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randeria, M.; Sa de Melo, C.A.R.; Engelbrecht, J.R.

    1993-04-01

We use a functional integral formulation to study the crossover from cooperative Cooper pairing to the formation and condensation of tightly bound pairs in a 3D continuum model of fermions with attractive interactions. The inadequacy of a saddle point approximation with increasing coupling is pointed out, and the importance of temporal (quantum) fluctuations for normal state properties at intermediate and strong coupling is emphasized. In addition to recovering the Nozieres-Schmitt-Rink interpolation scheme for T_c and the Leggett variational results for T = 0, we also present results for the evolution of the time-dependent Ginzburg-Landau equation and the collective mode spectrum as a function of the coupling.

  10. [Use of THP-1 for allergens identification method validation].

    PubMed

    Zhao, Xuezheng; Jia, Qiang; Zhang, Jun; Li, Xue; Zhang, Yanshu; Dai, Yufei

    2014-05-01

We sought an in vitro test method for evaluating sensitization using THP-1 cells, based on changes in cytokine expression, in order to provide more reliable markers for the identification of sensitizers. Monocyte-like THP-1 cells were induced to differentiate into THP-1 macrophages with PMA (0.1 microg/ml). Changes in cytokine expression were evaluated at different time points after the cells were treated, at various concentrations, with five known allergens, 2,4-dinitrochlorobenzene (DNCB), nickel sulfate (NiSO4), phenylenediamine (PPDA), potassium dichromate (K2Cr2O7) and toluene diisocyanate (TDI), and two non-allergens, sodium dodecyl sulfate (SDS) and isopropanol (IPA). IL-6 and TNF-alpha production was measured by ELISA, and the secretion of IL-1beta and IL-8 was analyzed by Cytometric Bead Array (CBA). Secretion of IL-6, TNF-alpha, IL-1beta and IL-8 was highest when THP-1 cells were exposed to NiSO4, DNCB and K2Cr2O7 for 6 h, and to PPDA and TDI for 12 h. At the optimum time points and optimal concentrations, IL-6 production was approximately 40, 25, 20, 50 and 50 times that of the control group for the five chemical allergens NiSO4, DNCB, K2Cr2O7, PPDA and TDI, respectively. TNF-alpha expression was 20, 12, 20, 8 and 5 times that of the control group, respectively; IL-1beta secretion was 30, 60, 25, 30 and 45 times the control; and IL-8 production was approximately 15, 12, 15, 12 and 7 times the control. Both non-allergens, SDS and IPA, significantly induced IL-6 secretion in a dose-dependent manner, with SDS causing higher production levels, approximately 20 times that of the control; IL-6 may therefore not be a reliable marker for the identification of allergens. TNF-alpha, IL-1beta and IL-8 expression did not change significantly after exposure to the two non-allergens. A test method using THP-1 cells that detects the production of the cytokines TNF-alpha, IL-1beta and IL-8 can thus effectively distinguish chemical allergens from non-allergens, and these three cytokines may be reliable markers for the identification of potentially sensitizing chemicals.

  11. Reentrant behaviors in the phase diagram of spin-1 planar ferromagnet with single-ion anisotropy

    NASA Astrophysics Data System (ADS)

    Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.; Mercaldo, M. T.

    2018-05-01

We used the two-time Green function framework to investigate the role played by the easy-axis single-ion anisotropy in the phase diagram of (d > 2)-dimensional spin-1 planar ferromagnets, which exhibit a magnetic-field-induced quantum phase transition. We tackled the problem using two different kinds of approximations: the Anderson-Callen decoupling scheme and the Devlin approach. In the latter scheme, the exchange anisotropy terms in the equations of motion are treated at the Tyablikov decoupling level while the crystal field anisotropy contribution is handled exactly. The emerging key result is a reentrant structure of the phase diagram close to the quantum critical point for certain values of the single-ion anisotropy parameter. We compare the results obtained within the two approximation schemes; in particular, we recover the same qualitative behavior. We show the phase diagram close to the field-induced quantum critical point and the behavior of the susceptibility for different values of the single-ion anisotropy parameter, highlighting the differences between the two scenarios (i.e. with and without reentrant behavior).

  12. 5-Aminouracil treatment. A method for estimating G2.

    PubMed

    Socher, S H; Davidson, D

    1971-02-01

Treatment of Vicia faba lateral roots with a range of concentrations of 5-aminouracil (5-AU) indicates that cells are stopped at a particular point in interphase. The timing of the fall in mitotic index suggests that cells are held at the S-G(2) transition. When cells are held at this point, treatments with 5-AU can be used to estimate the duration of G(2) + mitosis/2 of proliferating cells. Treatment with 5-AU can also be used to demonstrate the presence of subpopulations of dividing cells that differ in their G(2) duration. Using this method, 5-AU-induced inhibition, we have confirmed that in V. faba lateral roots there are two populations of dividing cells: (a) a fast-dividing population, which makes up approximately 85% of the proliferating cell population and has a G(2) + mitosis/2 duration of 3.3 hr, and (b) a slow-dividing population, which makes up approximately 15% of dividing cells and has a G(2) duration in excess of 12 hr. These estimates are similar to those obtained from percentage labeled mitosis (PLM) curves after incorporation of thymidine-(3)H.

  13. Reconstruction of vegetation and lake level at Moon Lake, North Dakota, from high-resolution pollen and diatom data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimm, E.C.; Laird, K.R.; Mueller, P.G.

High-resolution fossil-pollen and diatom data from Moon Lake, North Dakota, reveal major climate and vegetation changes near the western margin of the tall-grass prairie. Fourteen AMS radiocarbon dates provide excellent time control for the past approximately 11,800 14C years B.P. Picea dominated during the late-glacial until it abruptly declined approximately 10,300 B.P. During the early Holocene (approximately 10,300-8000 B.P.), deciduous trees and shrubs (Populus, Betula, Corylus, Quercus, and especially Ulmus) were common, but prairie taxa (Poaceae, Artemisia, and Chenopodiaceae/Amaranthaceae) gradually increased. During this period the diatoms indicate the lake becoming gradually more saline as the water level fell. By approximately 8000 B.P., salinity had increased to the point that the diatoms were no longer sensitive to further salinity increases. However, fluctuating pollen percentages of mud-flat weeds (Ambrosia and Iva) indicate frequently changing water levels during the mid-Holocene (approximately 8000-5000 B.P.). The driest millennium was 7000-6000 B.P., when Iva annua was common. After approximately 3000 B.P. the lake became less saline, and the diatoms were again sensitive to changing salinity. The Medieval Warm Period and Little Ice Age are clearly evident in the diatom data.

  14. A Tidal Disruption Event in a Nearby Galaxy Hosting an Intermediate Mass Black Hole

    NASA Technical Reports Server (NTRS)

    Donato, D; Cenko, S. B.; Covino, S.; Troja, E.; Pursimo, T.; Cheung, C. C.; Fox, O.; Kutyrev, A.; Campana, S.; Fugazza, D.; hide

    2014-01-01

We report the serendipitous discovery of a bright point source flare in the Abell cluster A1795 with archival EUVE and Chandra observations. Assuming the EUVE emission is associated with the Chandra source, the X-ray 0.5-7 kiloelectronvolt flux declined by a factor of approximately 2300 over a time span of 6 years, following a power-law decay with index approximately equal to 2.44 plus or minus 0.40. The Chandra data alone vary by a factor of approximately 20. The spectrum is well fit by a blackbody with a constant temperature of kT approximately equal to 0.09 kiloelectronvolts (approximately 10^6 Kelvin). The flare is spatially coincident with the nuclear region of a faint, inactive galaxy with a photometric redshift consistent at the 1 sigma level with the cluster (redshift = 0.062476). We argue that these properties are indicative of a tidal disruption of a star by a black hole (BH) with log(M_BH / M_Sun) approximately equal to 5.5 plus or minus 0.5. If so, such a discovery indicates that tidal disruption flares may be used to probe BHs in the intermediate mass range, which are very difficult to study by other means.

  15. Confirmation of Earth-Mass Planets Orbiting the Millisecond Pulsar PSR B1257 + 12.

    PubMed

    Wolszczan, A

    1994-04-22

The discovery of two Earth-mass planets orbiting an old (approximately 10^9 years), rapidly spinning neutron star, the 6.2-millisecond radio pulsar PSR B1257+12, was announced in early 1992. It was soon pointed out that the approximately 3:2 ratio of the planets' orbital periods should lead to accurately predictable and possibly measurable gravitational perturbations of their orbits. The unambiguous detection of this effect, after 3 years of systematic timing observations of PSR B1257+12 with the 305-meter Arecibo radiotelescope, as well as the discovery of another, moon-mass object in orbit around the pulsar, constitutes irrefutable evidence that the first planetary system around a star other than the sun has been identified.

  16. SU-F-R-17: Advancing Glioblastoma Multiforme (GBM) Recurrence Detection with MRI Image Texture Feature Extraction and Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, V; Ruan, D; Nguyen, D

Purpose: To test the potential of early Glioblastoma Multiforme (GBM) recurrence detection utilizing image texture pattern analysis in serial MR images post primary treatment intervention. Methods: MR image sets of six time points prior to the confirmed recurrence diagnosis of a GBM patient were included in this study, with each time point containing T1 pre-contrast, T1 post-contrast, T2-Flair, and T2-TSE images. Eight gray-level co-occurrence matrix (GLCM) texture features, including Contrast, Correlation, Dissimilarity, Energy, Entropy, Homogeneity, Sum-Average, and Variance, were calculated from all images, resulting in a total of 32 features at each time point. A confirmed recurrent volume was contoured, along with an adjacent non-recurrent region-of-interest (ROI), and both volumes were propagated to all prior time points via deformable image registration. A support vector machine (SVM) with radial-basis-function kernels was trained on the latest time point prior to the confirmed recurrence to construct a model for recurrence classification. The SVM model was then applied to all prior time points and the volumes classified as recurrence were obtained. Results: An increase in classified volume was observed over time as expected. The size of the classified recurrence remained at a stable level of approximately 0.1 cm^3 up to 272 days prior to confirmation. A noticeable volume increase to 0.44 cm^3 was demonstrated at 96 days prior, followed by a significant increase to 1.57 cm^3 at 42 days prior. Visualization of the classified volume shows the merging of the recurrence-susceptible region as the volume change became noticeable. Conclusion: Image texture pattern analysis in serial MR images appears to be sensitive to detecting recurrent GBM long before the recurrence is confirmed by a radiologist. The early detection may improve the efficacy of targeted intervention including radiosurgery. More patient cases will be included to create a generalizable classification model applicable to a larger patient cohort. NIH R43CA183390 and R01CA188300. NSF Graduate Research Fellowship DGE-1144087.
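    The feature-extraction and classification chain lends itself to a short sketch, given below with synthetic patches. scikit-image's graycomatrix/graycoprops supply most of the listed GLCM features (entropy is computed by hand; Sum-Average and Variance are omitted for brevity), and an RBF-kernel SVM plays the role of the recurrence classifier; none of the parameters come from the abstract:

```python
# Hedged sketch: GLCM texture features from image patches feed an RBF-kernel
# SVM labeling patches as recurrent vs non-recurrent. Patches are synthetic;
# in skimage < 0.19 the functions are named greycomatrix/greycoprops.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch):
    g = graycomatrix(patch, distances=[1], angles=[0], levels=32,
                     symmetric=True, normed=True)
    props = ['contrast', 'correlation', 'dissimilarity', 'energy', 'homogeneity']
    feats = [graycoprops(g, p)[0, 0] for p in props]
    p = g[:, :, 0, 0]
    feats.append(-np.sum(p[p > 0] * np.log2(p[p > 0])))   # entropy, by hand
    return feats

rng = np.random.default_rng(8)
# synthetic "normal" (fine) vs "recurrent" (coarser) texture, 32 gray levels
normal = [(rng.random((16, 16)) * 31).astype(np.uint8) for _ in range(40)]
recur = [((rng.random((8, 8)) * 31).repeat(2, 0).repeat(2, 1)).astype(np.uint8)
         for _ in range(40)]

X = np.array([glcm_features(p) for p in normal + recur])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel='rbf', gamma='scale').fit(X, y)
print('training accuracy:', clf.score(X, y))
```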

  17. The Laser Ranging Experiment of the Lunar Reconnaissance Orbiter: Five Years of Operations and Data Analysis

    NASA Technical Reports Server (NTRS)

    Mao, Dandan; McGarry, Jan F.; Mazarico, Erwan; Neumann, Gregory A.; Sun, Xiaoli; Torrence, Mark H.; Zagwodzki, Thomas W.; Rowlands, David D.; Hoffman, Evan D.; Horvath, Julie E.; hide

    2016-01-01

We describe the results of the Laser Ranging (LR) experiment carried out from June 2009 to September 2014 in order to make one-way time-of-flight measurements of laser pulses between Earth-based laser ranging stations and the Lunar Reconnaissance Orbiter (LRO) orbiting the Moon. Over 4,000 hours of successful LR data were obtained from 10 international ground stations. The 20-30 centimeter precision of the full-rate LR data is further improved to 5-10 centimeters after conversion into normal points. The main purpose of LR is to utilize the high-accuracy normal point data to improve the quality of the LRO orbits, which are nominally determined by the radiometric S-band tracking data. When independently used in the LRO precision orbit determination process with the high-resolution GRAIL (Gravity Recovery and Interior Laboratory) gravity model, LR data provide good orbit solutions, with an average difference of approximately 50 meters in total position, and approximately 20 centimeters in the radial direction, compared to the definitive LRO trajectory. When used in combination with the S-band tracking data, LR data help to improve the orbit accuracy in the radial direction to approximately 15 centimeters. In order to obtain highly accurate LR range measurements for precise orbit determination, it is critical to closely model the behavior of the clocks both at the ground stations and on the spacecraft. LR provides a unique data set to calibrate the spacecraft clock. The LRO spacecraft clock was characterized by the LR data to a timing knowledge of 0.015 milliseconds over the entire 5 years of LR operation. We present here both the engineering setup of the LR experiments and the detailed analysis results of the LR data.

  18. A spline-based approach for computing spatial impulse responses.

    PubMed

    Ellis, Michael A; Guenther, Drake; Walker, William F

    2007-05-01

    Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.

  19. A Statistical Guide to the Design of Deep Mutational Scanning Experiments.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia

    2016-09-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.
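
    As a concrete, hedged illustration of the estimation problem described above: in a bulk competition, the log-ratio of mutant to reference read counts grows approximately linearly in time with slope equal to the selection coefficient, so a least-squares slope is a simple estimator. The Poisson read-count model, the depth, and the time grid below are illustrative assumptions, not the authors' exact likelihood or web tool.

        import numpy as np

        def selection_coefficient(t, mut_counts, ref_counts):
            """Least-squares slope of log(mutant/reference) across time points t."""
            y = np.log(np.asarray(mut_counts) / np.asarray(ref_counts))
            slope, _ = np.polyfit(t, y, 1)
            return slope

        rng = np.random.default_rng(0)
        t = np.array([0.0, 2.0, 4.0, 12.0, 14.0, 16.0])   # clustered at both ends, per the text
        true_s = -0.05
        freq = 0.01 * np.exp(true_s * t)                  # mutant frequency trajectory
        depth = 100_000                                   # sequencing depth per time point
        mut = rng.poisson(freq * depth)
        ref = rng.poisson((1.0 - freq) * depth)
        print(selection_coefficient(t, mut, ref))         # recovers roughly -0.05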

  20. Registration of terrestrial mobile laser data on 2D or 3D geographic database by use of a non-rigid ICP approach.

    NASA Astrophysics Data System (ADS)

    Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.

    2013-10-01

    This article presents a generic and efficient method to register terrestrial mobile data with imperfect localization onto a geographic database that has better overall accuracy but fewer details. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point"). The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects. Other applications include fine geometric modelling and fine façade texturing, and object extraction such as trees, poles, road signs and marks, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by an Inertial Navigation System (INS), which can cause a significant drift. As this drift varies non-linearly, but slowly in time, it is modelled by a translation defined as a piecewise linear function of time whose variation over time is minimized (rigidity term). For each iteration of the ICP, the drift is estimated in order to minimise the distance between laser points and planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points registered on a 3D model of approximately 71,400 triangles).
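
    To make the drift model concrete, here is a minimal numpy sketch of the data-attachment and rigidity terms: the drift is a translation that is piecewise linear in time over a handful of temporal nodes, and one linearized step solves for the node translations minimizing point-to-plane distances plus a penalty on node-to-node variation. The matching step, the node count, and the weight lam are assumptions for illustration; this is not the authors' full pipeline.

        import numpy as np

        def estimate_drift(points, times, plane_pts, plane_nrm, n_nodes=10, lam=1.0):
            """Solve for per-node translations D (n_nodes x 3), linear in time between nodes."""
            s = (times - times.min()) / (times.max() - times.min()) * (n_nodes - 1)
            k = np.clip(s.astype(int), 0, n_nodes - 2)    # left node of each point
            w = s - k                                     # interpolation weight
            m = len(points)
            A = np.zeros((m + 3 * (n_nodes - 1), 3 * n_nodes))
            b = np.zeros(m + 3 * (n_nodes - 1))
            for i in range(m):                            # data attachment: point-to-plane
                A[i, 3*k[i]:3*k[i]+3] = (1 - w[i]) * plane_nrm[i]
                A[i, 3*k[i]+3:3*k[i]+6] = w[i] * plane_nrm[i]
                b[i] = plane_nrm[i] @ (plane_pts[i] - points[i])
            for j in range(n_nodes - 1):                  # rigidity: successive nodes stay close
                r = m + 3 * j
                A[r:r+3, 3*j:3*j+3] = -lam * np.eye(3)
                A[r:r+3, 3*j+3:3*j+6] = lam * np.eye(3)
            D, *_ = np.linalg.lstsq(A, b, rcond=None)
            return D.reshape(n_nodes, 3)

    Inside a full semi-rigid ICP, each iteration would re-match laser points to their nearest planar primitives, re-solve this least-squares problem, and apply the interpolated translations to the points.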

  1. Density-matrix simulation of small surface codes under current and projected experimental noise

    NASA Astrophysics Data System (ADS)

    O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.

    2017-09-01

    We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.

  2. Don't panic: interpretation bias is predictive of new onsets of panic disorder.

    PubMed

    Woud, Marcella L; Zhang, Xiao Chi; Becker, Eni S; McNally, Richard J; Margraf, Jürgen

    2014-01-01

    Psychological models of panic disorder postulate that interpretation of ambiguous material as threatening is an important maintaining factor for the disorder. However, demonstrations of whether such a bias predicts onset of panic disorder are missing. In the present study, we used data from the Dresden Prediction Study, in which an epidemiologic sample of young German women was tested at two time points approximately 17 months apart, allowing the study of biased interpretation as a potential risk factor. At time point one, participants completed an Interpretation Questionnaire including two types of ambiguous scenarios: panic-related and general threat-related. Analyses revealed that a panic-related interpretation bias predicted onset of panic disorder, even after controlling for two established risk factors: anxiety sensitivity and fear of bodily sensations. This is the first prospective study demonstrating the incremental validity of interpretation bias as a predictor of panic disorder onset. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. About one counterexample of applying method of splitting in modeling of plating processes

    NASA Astrophysics Data System (ADS)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Korobova, I. L.

    2018-05-01

    The paper presents the main factors that affect the uniformity of the thickness distribution of plating on the surface of the product. The experimental search for the optimal values of these factors is expensive and time-consuming, so the problem of adequate simulation of coating processes is highly relevant. The finite-difference approximation using seven-point and five-point templates, in combination with the splitting method, is considered as a solution method for the equations of the model. To study the correctness of the solution of the equations of the mathematical model by these methods, experiments were conducted on plating with a flat anode and cathode whose relative position in the bath was not changed. The studies have shown that the solution using the splitting method was up to 1.5 times faster, but it did not give adequate results due to the geometric features of the task under the given boundary conditions.

  4. Fluctuating asymmetry and wing size of Argia tinctipennis Selys (Zygoptera: Coenagrionidae) in relation to riparian forest preservation status.

    PubMed

    Pinto, N S; Juen, L; Cabette, H S R; De Marco, P

    2012-06-01

    Effects of riparian vegetation removal on body size and wing fluctuating asymmetry (FA) of Argia tinctipennis Selys (Odonata: Coenagrionidae) were studied in the River Suiá-Miçú basin, which is part of the Xingu basin in Brazilian Amazonia. A total of 70 specimens (n = 33 from preserved and n = 37 from degraded areas) was measured. Five measures of each wing (totaling ten measured characters) were taken. Preserved and degraded points presented non-overlapping variations of a Habitat Integrity Index, supporting the environmental differentiation between these two categories. FA increases in degraded areas approximately four times for the width between the nodus and proximal portion of the pterostigma of forewings (FW), two times for the width of the wing in the region of the nodus of FW, and approximately 1.7 times for the number of postnodal cells of FW. The increase is almost five times for the width between the nodus and the proximal portion of the pterostigma of hind wings (HW), three times for the number of postnodal cells of HW, and approximately 1.6 times for the width between quadrangle and nodus of HW. Individuals of preserved sites were nearly 3.3% larger than those of degraded sites, based on mean hind wing length. Our results support the conclusion that the development of A. tinctipennis in degraded areas is affected by riparian vegetation removal and may be reflected in wing FA variations. Consequently, these FA measures may be a useful tool for bioassessment using Odonata insects as a model.

  5. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
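
    The resilience criterion lends itself to a compact sketch. The Python code below implements a simplified, Picard-type deferred-correction sweep on a set of collocation nodes (SDC without the low-order propagator) and keeps sweeping until the residual is both small relative to the first sweep's residual and nearly unchanged between sweeps, as the abstract describes; the thresholds and the scalar test problem are illustrative assumptions.

        import numpy as np

        def integration_matrix(nodes):
            """S[i, j] = integral of the j-th Lagrange basis polynomial from nodes[0] to nodes[i]."""
            n = len(nodes)
            S = np.zeros((n, n))
            for j in range(n):
                e = np.zeros(n); e[j] = 1.0
                p = np.polyfit(nodes, e, n - 1)   # polynomial interpolating the j-th basis
                P = np.polyint(p)
                S[:, j] = np.polyval(P, nodes) - np.polyval(P, nodes[0])
            return S

        def sdc_resilient(f, y0, nodes, tol=1e-10, max_sweeps=100):
            S = integration_matrix(nodes)
            y = np.full(len(nodes), float(y0))    # provisional solution at the nodes
            resid_first, resid_prev = None, np.inf
            for _ in range(max_sweeps):
                y = y0 + S @ f(nodes, y)          # one Picard-type correction sweep
                resid = np.linalg.norm(y0 + S @ f(nodes, y) - y)
                if resid_first is None:
                    resid_first = max(resid, 1e-300)
                small = resid <= tol * resid_first                       # small vs. first sweep
                stagnant = abs(resid_prev - resid) <= 0.5 * resid_prev   # changing slowly
                if small and stagnant:
                    break
                resid_prev = resid
            return y

        nodes = np.linspace(0.0, 0.5, 5)
        y = sdc_resilient(lambda t, y: -y, 1.0, nodes)
        print(np.max(np.abs(y - np.exp(-nodes))))  # collocation-level accuracy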

  6. Shock characterization of toad pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weirick, L.J.; Navarro, M.J.

    1996-05-01

    The purpose of this program was to characterize Time Of Arrival Detectors (TOAD) pins response to shock loading with respect to risetime, amplitude, repeatability and consistency. TOAD pins were subjected to impacts of 35 to 420 kilobars amplitude and approximately 1 ms pulse width to investigate the timing spread of four pins and the voltage output profile of the individual pins. Sets of pins were also aged at 45°, 60° and 80°C for approximately nine weeks before shock testing at 315 kilobars impact stress. Four sets of pins were heated to 50.2°C (125°F) for approximately two hours and then impacted at either 50 or 315 kilobars. Also, four sets of pins were aged at 60°C for nine weeks and then heated to 50.2°C before shock testing at 50 and 315 kilobars impact stress, respectively. Particle velocity measurements at the contact point between the stainless steel targets and TOAD pins were made using a Velocity Interferometer System for Any Reflector (VISAR) to monitor both the amplitude and profile of the shock waves. © 1996 American Institute of Physics.

  9. Vacuum Stress in Schwarzschild Spacetime

    NASA Astrophysics Data System (ADS)

    Howard, Kenneth Webster

    Vacuum stress in the conformally invariant scalar field in the region exterior to the horizon of a Schwarzschild black hole is examined. In the Hartle-Hawking vacuum state, ⟨φ²⟩ and ⟨T_μν⟩ are calculated. Covariant point-splitting renormalization is used, as is a mode sum expression for the Hartle-Hawking propagator. It is found that ⟨φ²⟩ separates naturally into two parts, a part that has a simple analytic form coinciding with the approximate expression of Whiting and Page, and a small remainder. The results of our numerical evaluation of the remainder agree with, but are more accurate than, those previously given by Fawcett. We find that ⟨T_μν⟩ also separates into two terms. The first coincides with the approximate expression obtained by Page with a Gaussian approximation to the proper time Green function. The second term, composed of sums over mode functions, is evaluated numerically. It is found that the total expression is in good qualitative agreement with Page's approximation. Our results disagree with previous numerical results given by Fawcett. The error in Fawcett's calculation is explained.

  10. Mode instability in one-dimensional anharmonic lattices: Variational equation approach

    NASA Astrophysics Data System (ADS)

    Yoshimura, K.

    1999-03-01

    The stability of normal mode oscillations has been studied in detail under the single-mode excitation condition for the Fermi-Pasta-Ulam-β lattice. Numerical experiments indicate that the mode stability depends strongly on k/N, where k is the wave number of the initially excited mode and N is the number of degrees of freedom in the system. It has been found that this feature does not change when N increases. We propose an average variational equation, an approximate version of the variational equation, as a theoretical tool to facilitate a linear stability analysis. It is shown that this strong k/N dependence of the mode stability can be explained from the viewpoint of the linear stability of the relevant orbits. We introduce a low-dimensional approximation of the average variational equation, which approximately describes the time evolution of variations in four normal mode amplitudes. The linear stability analysis based on this four-mode approximation demonstrates that the parametric instability mechanism plays a crucial role in the strong k/N dependence of the mode stability.

  11. Voronoi distance based prospective space-time scans for point data sets: a dengue fever cluster analysis in a southeast Brazilian town

    PubMed Central

    2011-01-01

    Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cells boundaries intercepted by the line segment joining two cases points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predicted value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
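
    The Voronoi distance at the heart of VBScan has a compact sampling-based approximation: the number of Voronoi cell boundaries crossed by the segment joining two cases equals the number of times the nearest population point changes along that segment. The sketch below uses a k-d tree for the nearest-neighbor queries; the sampling resolution is an assumption, and the authors' exact computational-geometry implementation may differ.

        import numpy as np
        from scipy.spatial import cKDTree

        def voronoi_distance(a, b, population_xy, n_samples=200):
            """Approximate count of Voronoi boundaries crossed between points a and b."""
            tree = cKDTree(population_xy)
            ts = np.linspace(0.0, 1.0, n_samples)
            seg = np.outer(1 - ts, a) + np.outer(ts, b)   # dense samples along the segment
            _, owner = tree.query(seg)                    # index of nearest individual
            return int(np.sum(owner[1:] != owner[:-1]))   # changes of nearest neighbor

        rng = np.random.default_rng(1)
        population = rng.uniform(0, 10, size=(500, 2))    # cases and controls together
        print(voronoi_distance(population[0], population[1], population))

    Segments through densely populated regions cross more cells, so this distance adapts to the heterogeneous population density; the minimum spanning tree built on it links the cases, and removing MST edges yields the candidate clusters that the scan statistic then evaluates.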

  12. Nonequilibrium self-energy functional theory

    NASA Astrophysics Data System (ADS)

    Hofmann, Felix; Eckstein, Martin; Arrigoni, Enrico; Potthoff, Michael

    2013-10-01

    The self-energy functional theory (SFT) is generalized to describe the real-time dynamics of correlated lattice-fermion models far from thermal equilibrium. This is achieved by starting from a reformulation of the original equilibrium theory in terms of double-time Green's functions on the Keldysh-Matsubara contour. With the help of a generalized Luttinger-Ward functional, we construct a functional Ω̂[Σ] which is stationary at the physical (nonequilibrium) self-energy Σ and which yields the grand potential of the initial thermal state Ω at the physical point. Nonperturbative approximations can be defined by specifying a reference system that serves to generate trial self-energies. These self-energies are varied by varying the reference system's one-particle parameters on the Keldysh-Matsubara contour. In the case of thermal equilibrium, this approach reduces to the conventional SFT. Contrary to the equilibrium theory, however, “unphysical” variations, i.e., variations that are different on the upper and the lower branches of the Keldysh contour, must be considered to fix the time dependence of the optimal physical parameters via the variational principle. Functional derivatives in the nonequilibrium SFT Euler equation are carried out analytically to derive conditional equations for the variational parameters that are accessible to a numerical evaluation via a time-propagation scheme. Approximations constructed by means of the nonequilibrium SFT are shown to be inherently causal, internally consistent, and to respect macroscopic conservation laws resulting from gauge symmetries of the Hamiltonian. This comprises the nonequilibrium dynamical mean-field theory but also dynamical-impurity and variational-cluster approximations that are specified by reference systems with a finite number of degrees of freedom. In this way, nonperturbative and consistent approximations can be set up, the numerical evaluation of which is accessible to an exact-diagonalization approach.

  13. Design of barrier bucket kicker control system

    NASA Astrophysics Data System (ADS)

    Ni, Fa-Fu; Wang, Yan-Yu; Yin, Jun; Zhou, De-Tai; Shen, Guo-Dong; Zheng, Yang-De; Zhang, Jian-Chuan; Yin, Jia; Bai, Xiao; Ma, Xiao-Li

    2018-05-01

    The Heavy-Ion Research Facility in Lanzhou (HIRFL) contains two synchrotrons: the main cooler storage ring (CSRm) and the experimental cooler storage ring (CSRe). Beams are extracted from CSRm and injected into CSRe. To apply the Barrier Bucket (BB) method to CSRe beam accumulation, a new BB-technology-based kicker control system was designed and implemented. The controller of the system is implemented using an Advanced Reduced Instruction Set Computer (RISC) Machine (ARM) chip and a field-programmable gate array (FPGA) chip. Within this architecture, the ARM is responsible for data presetting and floating-point arithmetic processing, while the FPGA computes the RF phase point of the two rings and offers more accurate control of the time delay. An online preliminary experiment on HIRFL was also designed to verify the functionalities of the control system. The result shows that the reference trigger point of two different sinusoidal RF signals for an arbitrary phase point was acquired with a matched phase error below 1° (approximately 2.1 ns), and a step delay time of better than 2 ns was realized.

  14. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    NASA Technical Reports Server (NTRS)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic Ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The algorithm, implemented on the MPP, locates corresponding objects based on their translationally and rotationally invariant features. It first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  15. Infrared imaging results of an excited planar jet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrington, R.B.

    1991-12-01

    Planar jets are used for many applications including heating, cooling, and ventilation. Generally such a jet is designed to provide good mixing within an enclosure. In building applications, the jet provides both thermal comfort and adequate indoor air quality. Increased mixing rates may lead to lower short-circuiting of conditioned air, elimination of dead zones within the occupied zone, reduced energy costs, increased occupant comfort, and higher indoor air quality. This paper discusses using an infrared imaging system to show the effect of excitation of a jet on the spread angle and on the jet mixing efficiency. Infrared imaging captures a large number of data points in real time (over 50,000 data points per image), providing significant advantages over single-point measurements. We used a screen mesh with a time constant of approximately 0.3 seconds as a target for the infrared camera to detect temperature variations in the jet. The infrared images show increased jet spread due to excitation of the jet. Digital data reduction and analysis show the change in jet isotherms and quantify the increased mixing caused by excitation. 17 refs., 20 figs.

  16. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  17. Space Technology 5 Multi-point Measurements of Near-Earth Magnetic Fields: Initial Results

    NASA Technical Reports Server (NTRS)

    Slavin, James A.; Le, G.; Strangeway, R. L.; Wang, Y.; Boardsen, S.A.; Moldwin, M. B.; Spence, H. E.

    2007-01-01

    The Space Technology 5 (ST-5) mission successfully placed three micro-satellites in a 300 x 4500 km dawn-dusk orbit on 22 March 2006. Each spacecraft carried a boom-mounted vector fluxgate magnetometer that returned highly sensitive and accurate measurements of the geomagnetic field. These data allow, for the first time, the separation of temporal and spatial variations in field-aligned current (FAC) perturbations measured in low-Earth orbit on time scales of approximately 10 sec to 10 min. The constellation measurements are used to directly determine field-aligned current sheet motion, thickness and current density. In doing so, we demonstrate two multi-point methods for the inference of FAC current density that have not previously been possible in low-Earth orbit; 1) the "standard method," based upon s/c velocity, but corrected for FAC current sheet motion, and 2) the "gradiometer method" which uses simultaneous magnetic field measurements at two points with known separation. Future studies will apply these methods to the entire ST-5 data set and expand to include geomagnetic field gradient analyses as well as field-aligned and ionospheric currents.

  18. Spline approximation, Part 1: Basic methodology

    NASA Astrophysics Data System (ADS)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy, point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis, a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem, they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation, and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles, spline approximation is presented from a geodetic point of view. In this paper (Part 1), the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2, the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches, as well as the utilization of splines for deformation detection, will be investigated on numerical examples in Part 3.
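
    As a minimal illustration of the truncated-polynomial construction treated in this paper: a cubic polynomial plus one truncated cubic term per interior knot, fitted to noisy data by linear least squares. The data, knot placement, and degree below are illustrative assumptions.

        import numpy as np

        def truncated_power_design(x, knots, degree=3):
            """Design matrix: 1, x, ..., x^degree, then (x - k)_+^degree per knot."""
            cols = [x**d for d in range(degree + 1)]
            cols += [np.maximum(x - k, 0.0)**degree for k in knots]
            return np.column_stack(cols)

        x = np.linspace(0.0, 1.0, 60)
        y = np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(2).normal(size=x.size)
        A = truncated_power_design(x, knots=[0.25, 0.5, 0.75])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        y_hat = A @ coef        # C^2-continuous piecewise-cubic approximant

    The truncated-power basis is easy to write down but can be poorly conditioned as knots accumulate, which is one motivation for the B-spline formulation taken up in Part 2.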

  19. Approximations of e and π: An Exploration

    ERIC Educational Resources Information Center

    Brown, Philip R.

    2017-01-01

    Fractional approximations of e and π are discovered by searching for repetitions or partial repetitions of digit strings in their expansions in different number bases. The discovery of such fractional approximations is suggested for students and teachers as an entry point into mathematics research.

  20. Resumming the large-N approximation for time evolving quantum systems

    NASA Astrophysics Data System (ADS)

    Mihaila, Bogdan; Dawson, John F.; Cooper, Fred

    2001-05-01

    In this paper we discuss two methods of resumming the leading and next to leading order in 1/N diagrams for the quartic O(N) model. These two approaches have the property that they preserve both boundedness and positivity for expectation values of operators in our numerical simulations. These approximations can be understood either in terms of a truncation to the infinitely coupled Schwinger-Dyson hierarchy of equations, or by choosing a particular two-particle irreducible vacuum energy graph in the effective action of the Cornwall-Jackiw-Tomboulis formalism. We confine our discussion to the case of quantum mechanics where the Lagrangian is $L(x,\dot{x}) = \frac{1}{2}\sum_{i=1}^{N}\dot{x}_i^2 - \frac{g}{8N}\left[\sum_{i=1}^{N}x_i^2 - r_0^2\right]^2$. The key to these approximations is to treat both the x propagator and the x² propagator on a similar footing, which leads to a theory whose graphs have the same topology as QED with the x² propagator playing the role of the photon. The bare vertex approximation is obtained by replacing the exact vertex function by the bare one in the exact Schwinger-Dyson equations for the one- and two-point functions. The second approximation, which we call the dynamic Debye screening approximation, makes the further approximation of replacing the exact x² propagator by its value at leading order in the 1/N expansion. These two approximations are compared with exact numerical simulations for the quantum roll problem. The bare vertex approximation captures the physics at large and modest N better than the dynamic Debye screening approximation.

  1. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
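
    A hedged sketch of the incremental scheme described above: each arriving series joins the most similar existing cluster when its affinity to that cluster's centroid clears a threshold, and otherwise seeds a new cluster; predictions for a member series can then be read off the cheaper cluster centroid. The Pearson-correlation affinity and the threshold are stand-ins, not the paper's exact score.

        import numpy as np

        class IncrementalClusters:
            def __init__(self, affinity_min=0.8):
                self.affinity_min = affinity_min
                self.centroids, self.counts = [], []

            def add(self, series):
                series = np.asarray(series, dtype=float)
                scores = [np.corrcoef(series, c)[0, 1] for c in self.centroids]
                if scores and max(scores) >= self.affinity_min:
                    j = int(np.argmax(scores))            # join the best-matching cluster
                    n = self.counts[j]
                    self.centroids[j] = (self.centroids[j] * n + series) / (n + 1)
                    self.counts[j] = n + 1
                    return j
                self.centroids.append(series)             # otherwise start a new cluster
                self.counts.append(1)
                return len(self.centroids) - 1

        rng = np.random.default_rng(6)
        base = np.sin(np.linspace(0.0, 6.0, 48))
        clusters = IncrementalClusters()
        for _ in range(20):
            clusters.add(base + 0.1 * rng.normal(size=48))
        print(len(clusters.centroids))                    # noisy copies collapse to one cluster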

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhov, Dmitry V.; Shalashilin, Dmitrii V.; Glover, William J.

    We present a new algorithm for ab initio quantum nonadiabatic molecular dynamics that combines the best features of ab initio Multiple Spawning (AIMS) and Multiconfigurational Ehrenfest (MCE) methods. In this new method, ab initio multiple cloning (AIMC), the individual trajectory basis functions (TBFs) follow Ehrenfest equations of motion (as in MCE). However, the basis set is expanded (as in AIMS) when these TBFs become sufficiently mixed, preventing prolonged evolution on an averaged potential energy surface. We refer to the expansion of the basis set as "cloning," in analogy to the "spawning" procedure in AIMS. This synthesis of AIMS and MCE allows us to leverage the benefits of mean-field evolution during periods of strong nonadiabatic coupling while simultaneously avoiding mean-field artifacts in Ehrenfest dynamics. We explore the use of time-displaced basis sets, "trains," as a means of expanding the basis set for little cost. We also introduce a new bra-ket averaged Taylor expansion (BAT) to approximate the necessary potential energy and nonadiabatic coupling matrix elements. The BAT approximation avoids the necessity of computing electronic structure information at intermediate points between TBFs, as is usually done in saddle-point approximations used in AIMS. The efficiency of AIMC is demonstrated on the nonradiative decay of the first excited state of ethylene. The AIMC method has been implemented within the AIMS-MOLPRO package, which was extended to include Ehrenfest basis functions.

  3. Revisiting the diffusion approximation to estimate evolutionary rates of gene family diversification.

    PubMed

    Gjini, Erida; Haydon, Daniel T; David Barry, J; Cobbold, Christina A

    2014-01-21

    Genetic diversity in multigene families is shaped by multiple processes, including gene conversion and point mutation. Because multi-gene families are involved in crucial traits of organisms, quantifying the rates of their genetic diversification is important. With increasing availability of genomic data, there is a growing need for quantitative approaches that integrate the molecular evolution of gene families with their higher-scale function. In this study, we integrate a stochastic simulation framework with population genetics theory, namely the diffusion approximation, to investigate the dynamics of genetic diversification in a gene family. Duplicated genes can diverge and encode new functions as a result of point mutation, and become more similar through gene conversion. To model the evolution of pairwise identity in a multigene family, we first consider all conversion and mutation events in a discrete manner, keeping track of their details and times of occurrence; second we consider only the infinitesimal effect of these processes on pairwise identity accounting for random sampling of genes and positions. The purely stochastic approach is closer to biological reality and is based on many explicit parameters, such as conversion tract length and family size, but is more challenging analytically. The population genetics approach is an approximation accounting implicitly for point mutation and gene conversion, only in terms of per-site average probabilities. Comparison of these two approaches across a range of parameter combinations reveals that they are not entirely equivalent, but that for certain relevant regimes they do match. As an application of this modelling framework, we consider the distribution of nucleotide identity among VSG genes of African trypanosomes, representing the most prominent example of a multi-gene family mediating parasite antigenic variation and within-host immune evasion. © 2013 Published by Elsevier Ltd. All rights reserved.
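
    The discrete process that the diffusion approximation summarizes is easy to simulate directly, which is essentially what the stochastic arm of the study does. Below is a minimal Monte Carlo sketch for two duplicated genes diverging by point mutation and homogenizing by gene conversion, tracking pairwise identity; the rates, sequence length, and tract length are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        L, mu, conv_rate, tract, steps = 1000, 1e-3, 5e-4, 50, 5000
        genes = rng.integers(0, 4, size=(2, L))           # two family members, 4 bases

        identity = []
        for _ in range(steps):
            hits = rng.random((2, L)) < mu                # per-site point mutations
            genes[hits] = (genes[hits] + rng.integers(1, 4, size=hits.sum())) % 4
            if rng.random() < conv_rate * L:              # occasional conversion event
                donor = rng.integers(0, 2)
                start = rng.integers(0, L - tract)
                genes[1 - donor, start:start + tract] = genes[donor, start:start + tract]
            identity.append(np.mean(genes[0] == genes[1]))
        print(identity[-1])   # equilibrium identity set by the mutation/conversion balance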

  4. Gas Diffusion in Fluids Containing Bubbles

    NASA Technical Reports Server (NTRS)

    Zak, M.; Weinberg, M. C.

    1982-01-01

    Mathematical model describes movement of gases in fluid containing many bubbles. Model makes it possible to predict growth and shrink age of bubbles as function of time. New model overcomes complexities involved in analysis of varying conditions by making two simplifying assumptions. It treats bubbles as point sources, and it employs approximate expression for gas concentration gradient at liquid/bubble interface. In particular, it is expected to help in developing processes for production of high-quality optical glasses in space.

  5. Initial Results of a Survey of Earth's L4 Point for Possible Earth Trojan Asteroids

    NASA Astrophysics Data System (ADS)

    Connors, M.; Veillet, C.; Wiegert, P.; Innanen, K.; Mikkola, S.

    2000-10-01

    Using the Canada-France-Hawaii 3.6 m telescope and the new CFH12k wide-field CCD imager, a survey of the region near Earth's L4 (morning) Lagrange Point was conducted in May and July/August 2000, in hopes of finding asteroids at or near this point. This survey was motivated by the dynamical interest of a possible Earth Trojan asteroid (ETA) population and by the fact that they would be the easiest asteroids to access from Earth. Recent calculations (Wiegert, Innanen and Mikkola, 2000, Icarus v. 145, 33-43) indicate stability of objects in ETA orbits over a million year timescale and that their on-sky density would be greatest roughly five degrees sunward of the L4 position. An optimized search technique was used, with tracking at the anticipated rate of the target bodies, near real-time scanning of images, and duplication of fields to aid in detection and permit followup. Limited time is available on any given night to search near the Lagrange points, and operations must be conducted at large air mass. Approximately 9 square degrees were efficiently searched and two interesting asteroids were found, NEA 2000 PM8 and our provisionally named CFZ001. CFZ001 cannot be excluded from being an Earth Trojan, although that is not the optimal solution for the short arc we observed. This object, of R magnitude 22, was easily detected, suggesting that our search technique worked well. This survey supports the earlier conclusion of Whiteley and Tholen (1998, Icarus v. 136, 154-167) that a large population of several-hundred-meter-diameter ETAs does not exist. However, our effective search technique and the discovery of two interesting asteroids suggest the value of completing the survey, with approximately 10 more square degrees to be searched near L4 and a comparable search to be done at L5. Funding from Canada's NSERC and HIA and the Academic Research Fund of Athabasca University is gratefully acknowledged.

  6. Peripheral immunophenotype and viral promoter variants during the asymptomatic phase of feline immunodeficiency virus infection.

    PubMed

    Murphy, B; Hillman, C; McDonnel, S

    2014-01-22

    Feline immunodeficiency virus (FIV)-infected cats enter a clinically asymptomatic phase during chronic infection. Despite the lack of overt clinical disease, the asymptomatic phase is characterized by persistent immunologic impairment. In the peripheral blood obtained from cats experimentally infected with FIV-C for approximately 5 years, we identified a persistent inversion of the CD4/CD8 ratio. We cloned and sequenced the FIV-C long terminal repeat containing the viral promoter from cells infected with the inoculating virus and from in vivo-derived peripheral blood mononuclear cells and CD4 T cells isolated at multiple time points throughout the asymptomatic phase. Relative to the inoculating virus, viral sequences amplified from cells isolated from all of the infected animals demonstrated multiple single nucleotide mutations and a short deletion within the viral U3, R and U5 regions. A transcriptionally inactivating proviral mutation in the U3 promoter AP-1 site was identified at multiple time points from all of the infected animals but not within cell-associated viral RNA. In contrast, no mutations were identified within the sequence of the viral dUTPase gene amplified from PBMC isolated at approximately 5 years post-infection relative to the inoculating sequence. The possible implications of these mutations to viral pathogenesis are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. First Neutrino Point-Source Results from the 22 String IceCube Detector

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Heise, J.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Lennarz, D.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. P.; Meagher, K.; Merck, M.; Mészáros, P.; Middell, E.; Milke, N.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Voigt, B.; Walck, C.; Waldenmaier, T.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Ice Cube Collaboration

    2009-08-01

    We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2σ after accounting for all trials. The average upper limit over the northern sky for point sources of muon neutrinos with an E⁻² spectrum is E² Φ_νμ < 1.4 × 10⁻¹¹ TeV cm⁻² s⁻¹, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit, by the AMANDA-II detector, by a factor of 2.

  8. Influence of the distance between target surface and focal point on the expansion dynamics of a laser-induced silicon plasma with spatial confinement

    NASA Astrophysics Data System (ADS)

    Zhang, Dan; Chen, Anmin; Wang, Xiaowei; Wang, Ying; Sui, Laizhi; Ke, Da; Li, Suyu; Jiang, Yuanfei; Jin, Mingxing

    2018-05-01

    Expansion dynamics of a laser-induced plasma plume, with spatial confinement, for various distances between the target surface and focal point were studied by the fast photography technique. A silicon wafer was ablated to induce the plasma with a Nd:YAG laser in an atmospheric environment. The expansion dynamics of the plasma plume depended on the distance between the target surface and focal point. In addition, spatially confined time-resolved images showed the different structures of the plasma plumes at different distances between the target surface and focal point. By analyzing the plume images, the optimal distance for emission enhancement was found to be approximately 6 mm away from the geometrical focus using a 10 cm focal length lens. This optimized distance resulted in the strongest compression ratio of the plasma plume by the reflected shock wave. Furthermore, the duration of the interaction between the reflected shock wave and the plasma plume was also prolonged.

  9. Two-parametric δ′-interactions: approximation by Schrödinger operators with localized rank-two perturbations

    NASA Astrophysics Data System (ADS)

    Golovaty, Yuriy

    2018-06-01

    We construct a norm resolvent approximation to the two-parametric family of δ′-like point interactions by Schrödinger operators with localized rank-two perturbations coupled with short range potentials. In particular, a new approximation to the δ′-interactions is obtained.

  10. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which we show minimizes the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of data points in kernel space plus a constant. Thus, the k-means centers of data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
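
    For concreteness, here is a standard Nyström sketch with k-means landmarks: K ≈ C W⁺ Cᵀ, where W is the kernel among m landmark points and C the kernel between all data and the landmarks. Plain input-space k-means is used as a stand-in for the paper's kernel k-means selection, and the data and kernel width are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def rbf(X, Y, gamma=1.0):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def nystrom(X, m=50, gamma=1.0):
            centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
            C = rbf(X, centers, gamma)                    # n x m cross-kernel
            W = rbf(centers, centers, gamma)              # m x m landmark kernel
            return C @ np.linalg.pinv(W) @ C.T            # rank-m approximation of K

        X = np.random.default_rng(4).normal(size=(300, 5))
        K = rbf(X, X)
        rel_err = np.linalg.norm(K - nystrom(X), "fro") / np.linalg.norm(K, "fro")
        print(rel_err)    # Frobenius-norm error, the quantity bounded in the paper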

  11. Gene expression profiling of peripheral blood mononuclear cells (PBMC) from Mycobacterium bovis infected cattle after in vitro antigenic stimulation with purified protein derivative of tuberculin (PPD).

    PubMed

    Meade, Kieran G; Gormley, Eamonn; Park, Stephen D E; Fitzsimons, Tara; Rosa, Guilherme J M; Costello, Eamon; Keane, Joseph; Coussens, Paul M; MacHugh, David E

    2006-09-15

    Microarray analysis of messenger RNA (mRNA) abundance was used to investigate the gene expression program of peripheral blood mononuclear cells (PBMC) from cattle infected with Mycobacterium bovis, the causative agent of bovine tuberculosis. An immunospecific bovine microarray platform (BOTL-4) with spot features representing 1336 genes was used for transcriptional profiling of PBMC from six M. bovis-infected cattle stimulated in vitro with bovine purified protein derivative of tuberculin (PPD-bovine). Cells were harvested at four time points (3 h, 6 h, 12 h and 24 h post-stimulation) and a split-plot design with pooled samples was used for the microarray experiment to compare gene expression between PPD-bovine stimulated PBMC and unstimulated controls for each time point. Statistical analyses of these data revealed 224 genes (approximately 17% of transcripts on the array) differentially expressed between stimulated and unstimulated PBMC across the 24 h time course (P<0.05). Of the 224 genes, 87 genes were significantly upregulated and 137 genes were significantly downregulated in M. bovis-infected PBMC stimulated with PPD-bovine across the 24 h time course. However, perturbation of the PBMC transcriptome was most apparent at time points 3 h and 12 h post-stimulation, with 81 and 84 genes differentially expressed, respectively. In addition, a more stringent statistical threshold (P<0.01) revealed 35 genes (approximately 3%) that were differentially expressed across the time course. Real-time quantitative reverse transcription PCR (qRT-PCR) of selected genes validated the microarray results and demonstrated a wide range of differentially expressed genes in PPD-bovine-, PPD-avian- and Concanavalin A (ConA) stimulated PBMC, including the interferon-gamma gene (IFNG), which was upregulated in PBMC stimulated with PPD-bovine (40-fold), PPD-avian (10-fold) and ConA (8-fold) after in vitro culture for 12 h. The pattern of expression of these genes in PPD-bovine stimulated PBMC provides the first description of an M. bovis-specific signature of infection that may provide insights into the molecular basis of the host response to infection. Although the present study was carried out with mixed PBMC cell populations, it will guide future studies to dissect immune cell-specific gene expression patterns in response to M. bovis infection.

  12. A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu

    We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called "singular estimate control systems" (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004), which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.

  13. nu-Anomica: A Fast Support Vector Based Novelty Detection Technique

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Bhaduri, Kanishka; Oza, Nikunj C.; Srivastava, Ashok N.

    2009-01-01

    In this paper we propose nu-Anomica, a novel anomaly detection technique that can be trained on huge data sets with much reduced running time compared to the benchmark one-class Support Vector Machines algorithm. In nu-Anomica, the idea is to train the machine such that it can provide a close approximation to the exact decision plane using fewer training points and without losing much of the generalization performance of the classical approach. We have tested the proposed algorithm on a variety of continuous data sets under different conditions. We show that under all test conditions the developed procedure closely preserves the accuracy of standard one-class Support Vector Machines while reducing both the training time and the test time by 5-20 times.
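
    The trade-off nu-Anomica targets can be seen with the benchmark itself: a one-class SVM trained on a small subset of the data often reproduces the full model's decision surface at a fraction of the cost. The random subsample below is only a stand-in for the paper's point-selection scheme, and all sizes and parameters are illustrative assumptions.

        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(5)
        X = rng.normal(size=(20000, 10))                       # nominal training data
        X_test = np.vstack([rng.normal(size=(500, 10)),        # nominal test points
                            rng.normal(5.0, 1.0, size=(500, 10))])  # anomalies

        full = OneClassSVM(nu=0.05, gamma="scale").fit(X)
        subset = X[rng.choice(len(X), size=2000, replace=False)]
        small = OneClassSVM(nu=0.05, gamma="scale").fit(subset)
        agree = np.mean(full.predict(X_test) == small.predict(X_test))
        print(agree)    # fraction of identical accept/reject decisions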

  14. A time-series study of sick building syndrome: chronic, biotoxin-associated illness from exposure to water-damaged buildings.

    PubMed

    Shoemaker, Ritchie C; House, Dennis E

    2005-01-01

    The human health risk for chronic illnesses involving multiple body systems following inhalation exposure to the indoor environments of water-damaged buildings (WDBs) has remained poorly characterized and the subject of intense controversy. The current study assessed the hypothesis that exposure to the indoor environments of WDBs with visible microbial colonization was associated with illness. The study used a cross-sectional design with assessments at five time points, and the interventions of cholestyramine (CSM) therapy, exposure avoidance following therapy, and reexposure to the buildings after illness resolution. The methodological approach included oral administration of questionnaires, medical examinations, laboratory analyses, pulmonary function testing, and measurements of visual function. Of the 21 study volunteers, 19 completed assessment at each of the five time points. Data at Time Point 1 indicated multiple symptoms involving at least four organ systems in all study participants, a restrictive respiratory condition in four participants, and abnormally low visual contrast sensitivity (VCS) in 18 participants. Serum leptin levels were abnormally high and alpha melanocyte stimulating hormone (MSH) levels were abnormally low. Assessments at Time Point 2, following 2 weeks of CSM therapy, indicated a highly significant improvement in health status. Improvement was maintained at Time Point 3, which followed exposure avoidance without therapy. Reexposure to the WDBs resulted in illness reacquisition in all participants within 1 to 7 days. Following another round of CSM therapy, assessments at Time Point 5 indicated a highly significant improvement in health status. The group-mean number of symptoms decreased from 14.9+/-0.8 S.E.M. at Time Point 1 to 1.2+/-0.3 S.E.M., and the VCS deficit of approximately 50% at Time Point 1 was fully resolved. Leptin and MSH levels showed statistically significant improvement. The results indicated that CSM was an effective therapeutic agent, that VCS was a sensitive and specific indicator of neurologic function, and that illness involved systemic and hypothalamic processes. Although the results supported the general hypothesis that illness was associated with exposure to the WDBs, this conclusion was tempered by several study limitations. Exposure to specific agents was not demonstrated, study participants were not randomly selected, and double-blinding procedures were not used. Additional human and animal studies are needed to confirm this conclusion, investigate the role of complex mixtures of bacteria, fungi, mycotoxins, endotoxins, and antigens in illness causation, and characterize modes of action. Such data will improve the assessment of human health risk from chronic exposure to WDBs.

  14. Limiting Magnitude, τ, t_eff, and Image Quality in DES Year 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert

    The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data, and uses the results to determine which exposures are of acceptable quality and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that needs to be applied to the open shutter time to reach the same photometric signal-to-noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point-source limiting magnitude and signal-to-noise should therefore vary with τ in the same way they vary with exposure time. Measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor τ rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
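
    As a hedged illustration of the scaling described above: for sky-noise-limited point sources, S/N scales as sqrt(t)/(FWHM·sqrt(sky)), so a scale factor of the form below reproduces the stated behavior. The published DES definition also folds in atmospheric transparency, omitted here; the reference values are assumptions, not the survey's constants.

    ```python
    def tau(fwhm_arcsec, sky, fwhm_ref=0.9, sky_ref=1.0):
        """Scale factor on the open-shutter time needed to match the S/N the
        same exposure would reach under canonical conditions (fwhm_ref, sky_ref),
        assuming sky-limited point sources: S/N ~ sqrt(t) / (FWHM * sqrt(sky))."""
        return (fwhm_ref / fwhm_arcsec) ** 2 * (sky_ref / sky)

    print(tau(1.2, 1.5))  # poor seeing and bright sky give tau well below 1
    ```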

  16. A photoelectric amplifier as a dye detector

    USGS Publications Warehouse

    Ebel, Wesley J.

    1962-01-01

    A dye detector, based on a modified photoelectric amplifier, has been planned, built, and tested. It was designed to record automatically the time of arrival of fluorescein dye at predetermined points in a stream system. Laboratory tests and stream trials proved the instrument to be efficient. Small changes in color can be detected in turbid or clear water. The unit has been used successfully for timing intervals of more than 17 hours; significant savings of time and manpower have resulted. Replacement of the clock, included in the original device, with a recording milliammeter increases the efficiency of the unit by continuously recording changes in turbidity. The addition of this component would increase the cost from $75 to approximately $105.

  17. Time dependent semiclassical tunneling through one dimensional barriers using only real valued trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, Michael F.

    2015-10-28

    The time independent semiclassical treatment of barrier tunneling has been understood for a very long time. Several semiclassical approaches to time dependent tunneling through barriers have also been presented. These typically involve trajectories for which the position variable is a complex function of time. In this paper, a method is presented that uses only real valued trajectories, thus avoiding the complications that can arise when complex trajectories are employed. This is accomplished by expressing the time dependent wave packet as an integration over momentum. The action function in the exponent in this expression is expanded to second order in the momentum. The expansion is around the momentum, p_0^*, at which the derivative of the real part of the action is zero. The resulting Gaussian integral is then taken. The stationary phase approximation requires that the derivative of the full action is zero at the expansion point, and this leads to a complex initial momentum and complex tunneling trajectories. The “pseudo-stationary phase” approximation employed in this work results in real values for the initial momentum and real valued trajectories. The transmission probabilities obtained are found to be in good agreement with exact quantum results.

  18. Research study on stabilization and control: Modern sampled data control theory

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.; Yackel, R. A.

    1973-01-01

    A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation using point-by-point state comparison. The technique used is that of approximating a continuous-data system by a sampled-data model through comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one-axis dynamics of Skylab is presented.
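
    The report's Skylab model is not reproduced here; the sketch below only illustrates the underlying idea, approximating a continuous-data system by a zero-order-hold sampled-data model and comparing the states of the two systems point by point, with a generic double integrator as a stand-in plant.

    ```python
    import numpy as np
    from scipy.signal import cont2discrete
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (stand-in model)
    B = np.array([[0.0], [1.0]])
    T = 0.1                                   # sampling period

    Ad, Bd, _, _, _ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), T)

    # Exact continuous propagation over one period under constant input, via the
    # augmented matrix exponential; it must match the ZOH discretization above.
    n, m = 2, 1
    aug = np.zeros((n + m, n + m))
    aug[:n, :n], aug[:n, n:] = A * T, B * T
    M = expm(aug)
    assert np.allclose(Ad, M[:n, :n]) and np.allclose(Bd, M[:n, n:])

    # Point-by-point state comparison: step the sampled-data model forward.
    x, u = np.array([[1.0], [0.0]]), np.array([[-0.5]])
    for k in range(5):
        x = Ad @ x + Bd @ u
        print(k + 1, x.ravel())
    ```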

  19. Multiple scattered radiation emerging from continental haze layers. 1: Radiance, polarization, and neutral points

    NASA Technical Reports Server (NTRS)

    Kattawar, G. W.; Plass, G. N.; Hitzfelder, S. J.

    1975-01-01

    The complete radiation field is calculated for scattering layers of various optical thicknesses. Results obtained for Rayleigh and haze scattering are compared. Calculated radiances show differences as large as 23% compared to the approximate scalar theory of radiative transfer, while the same differences are approximately 0.1% for a continental haze phase function. The polarization of reflected and transmitted radiation is given for various optical thicknesses, solar zenith angles, and surface albedos. Two types of neutral points occur for aerosol phase functions. Rayleigh-like neutral points arise from zero polarization that occurs at scattering angles of 0 deg and 180 deg. For Rayleigh phase functions, the position of these points varies with the optical thickness of the scattering layer. Non-Rayleigh neutral points are associated with the zeros of polarization which occur between the end points of the single scattering curve, and are found over a wide range of azimuthal angles.

  20. Fixed point theorems of GPS carrier phase ambiguity resolution and their application to massive network processing: Ambizap

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey

    2008-12-01

    Precise point positioning (PPP) has become popular for Global Positioning System (GPS) geodetic network analysis because for n stations, PPP has O(n) processing time, yet solutions closely approximate those of O(n^3) full network analysis. Subsequent carrier phase ambiguity resolution (AR) further improves PPP precision and accuracy; however, full-network bootstrapping AR algorithms are O(n^4), limiting single network solutions to n < 100. In this contribution, fixed point theorems of AR are derived and then used to develop "Ambizap," an O(n) algorithm designed to give results that closely approximate full network AR. Ambizap has been tested to n ≈ 2800 and proves to be O(n) in this range, adding only ~50% to PPP processing time. Tests show that a 98-station network is resolved on a 3-GHz CPU in 7 min, versus 22 h using O(n^4) AR methods. Ambizap features a novel network adjustment filter, producing solutions that precisely match O(n^4) full network analysis. The resulting coordinates agree to ≪1 mm with current AR methods, much smaller than the ~3-mm RMS precision of PPP alone. A 2000-station global network can be ambiguity resolved in ~2.5 h. Together with PPP, Ambizap enables rapid, multiple reanalysis of large networks (e.g., the ~1000-station EarthScope Plate Boundary Observatory) and facilitates the addition of extra stations to an existing network solution without the need to reprocess all data. To meet future needs, PPP plus Ambizap is designed to handle ~10,000 stations per day on a 3-GHz dual-CPU desktop PC.

  1. Numerical methods for stiff systems of two-point boundary value problems

    NASA Technical Reports Server (NTRS)

    Flaherty, J. E.; Omalley, R. E., Jr.

    1983-01-01

    Numerical procedures are developed for constructing asymptotic solutions of certain nonlinear singularly perturbed vector two-point boundary value problems having boundary layers at one or both endpoints. The asymptotic approximations are generated numerically and can either be used as is or to furnish a general purpose two-point boundary value code with an initial approximation and the nonuniform computational mesh needed for such problems. The procedures are applied to a model problem that has multiple solutions and to problems describing the deformation of a thin nonlinear elastic beam resting on an elastic foundation.

  2. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner captures only one view of a 3D object at a time, registration of multiple scans is the key issue in 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing-angle images and then uses a 2D feature-based matching method, SURF, to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least-squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). The 3D model of an object can therefore be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the method is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
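
    A minimal sketch of the SVD-based orthogonal Procrustes step described above (the Kabsch algorithm); the surrounding pipeline (bearing-angle images, SURF matching, pair sorting) is not reproduced.

    ```python
    import numpy as np

    def procrustes_rigid(P, Q):
        """Rotation R and translation t minimizing sum ||R p_i + t - q_i||^2
        for corresponding (N, 3) point arrays P, Q (Kabsch algorithm)."""
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)                 # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        return R, q_mean - R @ p_mean

    # Self-check: recover a known rotation and translation.
    rng = np.random.default_rng(1)
    P = rng.normal(size=(100, 3))
    c, s = np.cos(0.3), np.sin(0.3)
    R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
    R, t = procrustes_rigid(P, Q)
    assert np.allclose(R, R_true) and np.allclose(t, [0.5, -0.2, 1.0])
    ```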

  3. Polarization independent thermally tunable erbium-doped fiber amplifier gain equalizer using a cascaded Mach-Zehnder coupler.

    PubMed

    Sahu, P P

    2008-02-10

    A thermally tunable erbium-doped fiber amplifier (EDFA) gain equalizer filter based on a compact point-symmetric cascaded Mach-Zehnder (CMZ) coupler is presented with its mathematical model; the analysis shows the device to be polarization dependent owing to stress anisotropy caused by local heating for the thermo-optic phase change. A thermo-optic delay line structure with a stress-releasing groove is proposed and designed to reduce the polarization-dependent characteristics of the high-index-contrast point-symmetric delay line structure of the device. Thermal analysis using an implicit finite difference method shows that the temperature gradient of the proposed structure, which mainly causes the release of the stress anisotropy, is approximately nine times larger than that of the conventional structure. It is also seen that the EDFA gain-equalized spectrum obtained with the point-symmetric CMZ device based on the proposed structure is almost polarization independent.

  4. MacBurn's cylinder test problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shestakov, Aleksei I.

    2016-02-29

    This note describes a test problem for MacBurn that illustrates its performance. The source is centered inside a cylinder whose axial-extent-to-radius ratio is such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source, which models a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both conservation and the energy fractions degrade. However, the errors decrease as the ratio of source radius to domain size decreases. Modeling the source as a disk increases run-times.

  5. Design of the stabilizing control of the orbital motion in the vicinity of the collinear libration point L1 using the analytical representation of the invariant manifold

    NASA Astrophysics Data System (ADS)

    Maliavkin, G. P.; Shmyrov, A. S.; Shmyrov, V. A.

    2018-05-01

    Vicinities of the collinear libration points of the Sun-Earth system are currently quite attractive for space navigation. Various projects placing Sun-observing spacecraft at the L1 libration point and telescopes at L2 have been implemented (e.g., the spacecraft WIND, SOHO, Herschel, and Planck). Because collinear libration points are unstable, the motion of a spacecraft in their vicinity must be stabilized. Laws of stabilizing motion control in the vicinity of the L1 point can be constructed using an analytical representation of the stable invariant manifold, and the efficiency of these control laws depends on the precision of that representation. Within Hill's approximation of the circular restricted three-body problem in the rotating geocentric coordinate system, one can obtain an analytical representation of an invariant manifold filled with bounded trajectories in the form of a series in powers of the phase variables. Approximate representations of orders one through four can be used to construct four laws of stabilizing feedback motion control under which trajectories approach the manifold. Numerical simulation then allows a comparison of how the precision of the representation of the invariant manifold influences the efficiency of the control, expressed as energy consumption (characteristic velocity). It shows that using higher-order approximations in constructing the control laws can significantly reduce the energy consumption of implementing the control compared to the linear approximation.

  6. Hazard ratio estimation and inference in clinical trials with many tied event times.

    PubMed

    Mehrotra, Devan V; Zhang, Yiwei

    2018-06-13

    The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software. Copyright © 2018 John Wiley & Sons, Ltd.
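
    As a sketch of the recommended practice, assuming the Python lifelines package (whose CoxPHFitter handles tied event times with the Efron approximation) and wholly synthetic data:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 400
    arm = rng.integers(0, 2, size=n)                       # 0 = control, 1 = treatment
    t_event = rng.exponential(scale=np.where(arm == 1, 12.0, 8.0), size=n)
    df = pd.DataFrame({
        "time": np.minimum(np.ceil(t_event), 20),          # coarse rounding -> many ties
        "event": (t_event <= 20).astype(int),              # administrative censoring at 20
        "arm": arm,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")    # Efron tie handling
    # Wald p-value and hazard-ratio CI, aligned by construction
    print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
    ```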

  7. Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination

    NASA Astrophysics Data System (ADS)

    Abbas, Fayçal; Babahenini, Mohamed Chaouki

    2018-06-01

    Global illumination of natural scenes such as forests in real time is one of the most complex problems to solve, because of the multiple inter-reflections between the light and the materials of the objects composing the scene. The major problem that arises is visibility computation: visibility must be computed for the set of leaves visible from the center of a given leaf, and given the enormous number of leaves present in a tree, this computation is performed for every leaf, which degrades performance. We describe a new approach that approximates visibility queries and proceeds in two steps. The first step generates a point cloud representing the foliage; we assume the point cloud is composed of two classes (visible, not visible) that are not linearly separable. The second step classifies the point cloud by applying a Gaussian radial basis function, which measures similarity in terms of distance between each leaf and a landmark leaf. This approximates the visibility queries and extracts the leaves used to calculate the amount of indirect illumination exchanged between neighboring leaves. Our approach efficiently treats the light exchanges in a forest scene, enables fast computation, and produces images of good visual quality, taking advantage of the immense computational power of the GPU.
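
    A minimal sketch of the Gaussian radial basis function scoring described above; the leaf positions, the gamma value, and the similarity threshold are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def rbf_similarity(points, landmark, gamma=0.5):
        """K(x, l) = exp(-gamma * ||x - l||^2) for each row x of `points`."""
        d2 = np.sum((points - landmark) ** 2, axis=1)
        return np.exp(-gamma * d2)

    leaves = np.random.default_rng(2).uniform(0.0, 10.0, size=(1000, 3))
    landmark = leaves[0]                      # a landmark leaf
    sim = rbf_similarity(leaves, landmark)
    candidates = leaves[sim > 0.1]            # leaves kept for indirect-light exchange
    ```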

  8. Computing aerodynamic sound using advanced statistical turbulence theories

    NASA Technical Reports Server (NTRS)

    Hecht, A. M.; Teske, M. E.; Bilanin, A. J.

    1981-01-01

    It is noted that the calculation of turbulence-generated aerodynamic sound requires knowledge of the spatial and temporal variation of Q sub ij (xi sub k, tau), the two-point, two-time turbulent velocity correlations. A technique is presented to obtain an approximate form of these correlations based on closure of the Reynolds stress equations by modeling of higher order terms. The governing equations for Q sub ij are first developed for a general flow. The case of homogeneous, stationary turbulence in a unidirectional constant shear mean flow is then assumed. The required closure form for Q sub ij is selected which is capable of qualitatively reproducing experimentally observed behavior. This form contains separation time dependent scale factors as parameters and depends explicitly on spatial separation. The approximate forms of Q sub ij are used in the differential equations and integral moments are taken over the spatial domain. The velocity correlations are used in the Lighthill theory of aerodynamic sound by assuming normal joint probability.

  9. Method of making carbon fiber-carbon matrix reinforced ceramic composites

    NASA Technical Reports Server (NTRS)

    Williams, Brian (Inventor); Benander, Robert (Inventor)

    2007-01-01

    A method of making a carbon fiber-carbon matrix reinforced ceramic composite wherein a carbon fiber-carbon matrix reinforcement is embedded within a ceramic matrix. The ceramic matrix does not penetrate into the carbon fiber-carbon matrix reinforcement to any significant degree. The ceramic matrix is a solid carbide, formed in situ, of at least one metal having a melting point above about 1850 degrees centigrade. At least when the composite is intended to operate between approximately 1500 and 2000 degrees centigrade for extended periods of time, the solid carbide with the embedded reinforcement is formed first by reaction infiltration. Molten silicon is then diffused into the carbide. The molten silicon diffuses preferentially into the carbide matrix but not to any significant degree into the carbon-carbon reinforcement. Where the composite is intended to operate between approximately 2000 and 2700 degrees centigrade for extended periods of time, such diffusion of molten silicon into the carbide is optional and generally preferred, but not essential.

  10. High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis

    USGS Publications Warehouse

    Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher

    2015-01-01

    Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image-editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image-based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case-study trench across the Wasatch fault in Alpine, Utah. Our results include a structure-from-motion workflow for the semiautomated creation of seamless, high-resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ~87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.

  11. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  12. Flywheel Energy Storage System Designed for the International Space Station

    NASA Technical Reports Server (NTRS)

    Delventhal, Rex A.

    2002-01-01

    Following successful operation of a developmental flywheel energy storage system in fiscal year 2000, researchers at the NASA Glenn Research Center began developing a flight design of a flywheel system for the International Space Station (ISS). In such an application, a two-flywheel system can replace one of the nickel-hydrogen battery strings in the ISS power system. The development unit, sized at approximately one-eighth the size needed for the ISS, was run at 60,000 rpm. The design point for the flight unit is a larger composite flywheel, approximately 17 in. long and 13 in. in diameter, running at 53,000 rpm when fully charged. A single flywheel system stores 2.8 kW-hr of useable energy, enough to light a 100-W light bulb for over 24 hr. When housed in an ISS orbital replacement unit, the flywheel would provide energy storage with approximately 3 times the service life of the nickel-hydrogen battery currently in use.

  13. Fractal-Based Analysis of the Influence of Music on Human Respiration

    NASA Astrophysics Data System (ADS)

    Reza Namazi, H.

    An important challenge in respiration-related studies is to investigate the influence of external stimuli on human respiration. Auditory stimuli are an important type of stimulus influencing human respiration, yet no trend has previously been found relating the characteristics of an auditory stimulus to the characteristics of the respiratory signal. In this paper, we investigate the correlation between auditory stimuli and the respiratory signal from a fractal point of view. We found that the fractal structure of the respiratory signal is correlated with the fractal structure of the applied music: music with a greater fractal dimension results in a respiratory signal with a smaller fractal dimension. To verify this result, we employ approximate entropy. The results show that the respiratory signal has smaller approximate entropy when music with smaller approximate entropy is chosen. The method of analysis could be further applied to the variations of different physiological time series under various types of stimuli when complexity is the main concern.
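
    For reference, a compact implementation of approximate entropy (in Pincus's standard formulation, which measurements like those above typically follow) might look like this; the tolerance default r = 0.2·std is a common convention, not a value from the paper.

    ```python
    import numpy as np

    def approx_entropy(x, m=2, r=None):
        """Approximate entropy ApEn(m, r) of a 1-D series; r defaults to 0.2*std."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        if r is None:
            r = 0.2 * np.std(x)

        def phi(m):
            emb = np.array([x[i:i + m] for i in range(n - m + 1)])   # templates
            # Chebyshev distance between every pair of templates
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            c = np.mean(d <= r, axis=1)     # self-matches included, as in ApEn
            return np.mean(np.log(c))

        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(3)
    sig = np.sin(np.linspace(0.0, 20.0 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)
    print(approx_entropy(sig))   # regular signals score lower than noisy ones
    ```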

  14. Earth Observations

    NASA Image and Video Library

    2010-09-20

    ISS024-E-015121 (20 Sept. 2010) --- Twitchell Canyon Fire in central Utah is featured in this image photographed by an Expedition 24 crew member on the International Space Station (ISS). The Twitchell Canyon Fire near central Utah's Fishlake National Forest is reported to have an area of approximately 13,383 hectares (approximately 134 square kilometers, or 33,071 acres). This detailed image shows smoke plumes generated by several fire spots close to the southwestern edge of the burned area. The fire was started by a lightning strike on July 20, 2010. Whereas many of the space station images of Earth are looking straight down (nadir), this photograph was exposed at an angle. The space station was located over a point approximately 509 kilometers (316 miles) to the northeast, near the Colorado/Wyoming border, at the time the image was taken on Sept. 20. Southwesterly winds were continuing to extend smoke plumes from the fire to the northeast. While the Twitchell Canyon region is sparsely populated, Interstate Highway 15 is visible at upper left.

  15. 33 CFR 334.430 - Neuse River and tributaries at Marine Corps Air Station Cherry Point, North Carolina; restricted...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Marine Corps Air Station Cherry Point, North Carolina; restricted area and danger zone. 334.430 Section... Air Station Cherry Point, North Carolina; restricted area and danger zone. (a) The restricted area... Station, Cherry Point, North Carolina, extending from the mouth of Hancock Creek to a point approximately...

  16. Nonlinear digital signal processing in mental health: characterization of major depression using instantaneous entropy measures of heartbeat dynamics.

    PubMed

    Valenza, Gaetano; Garcia, Ronald G; Citi, Luca; Scilingo, Enzo P; Tomaz, Carlos A; Barbieri, Riccardo

    2015-01-01

    Nonlinear digital signal processing methods that address system complexity have provided useful computational tools for helping in the diagnosis and treatment of a wide range of pathologies. More specifically, nonlinear measures have been successful in characterizing patients with mental disorders such as Major Depression (MD). In this study, we propose the use of instantaneous measures of entropy, namely the inhomogeneous point-process approximate entropy (ipApEn) and the inhomogeneous point-process sample entropy (ipSampEn), to describe a novel characterization of MD patients undergoing affective elicitation. Because these measures are built within a nonlinear point-process model, they allow for the assessment of complexity in cardiovascular dynamics at each moment in time. Heartbeat dynamics were characterized from 48 healthy controls and 48 patients with MD while emotionally elicited through either neutral or arousing audiovisual stimuli. Experimental results coming from the arousing tasks show that ipApEn measures are able to instantaneously track heartbeat complexity as well as discern between healthy subjects and MD patients. Conversely, standard heart rate variability (HRV) analysis performed in both time and frequency domains did not show any statistical significance. We conclude that measures of entropy based on nonlinear point-process models might contribute to devising useful computational tools for care in mental health.

  17. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems

    NASA Astrophysics Data System (ADS)

    Junge, Oliver; Kevrekidis, Ioannis G.

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
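
    A minimal sketch of the variational idea under stated assumptions: choose a finite point set X and minimize a Chamfer-type distance between X and its image under the dynamics (here the Hénon map, chosen purely for illustration). Without the Lennard-Jones spreading term mentioned above, the minimizer can collapse toward a fixed point, which is exactly the degeneracy that term is designed to prevent.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def henon(p, a=1.4, b=0.3):
        """Hénon map applied row-wise to an (N, 2) array of points."""
        x, y = p[:, 0], p[:, 1]
        return np.column_stack([1.0 - a * x**2 + y, b * x])

    def chamfer(A, B):
        """Symmetric Chamfer distance between two finite point sets."""
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
        return d2.min(axis=1).mean() + d2.min(axis=0).mean()

    N = 60
    x0 = np.random.default_rng(4).uniform(-1.0, 1.0, size=(N, 2))
    res = minimize(lambda v: chamfer(v.reshape(N, 2), henon(v.reshape(N, 2))),
                   x0.ravel(), method="L-BFGS-B")
    X = res.x.reshape(N, 2)   # candidate approximation of an invariant set
    print("residual set distance:", res.fun)
    ```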

  19. Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.

    1984-01-01

    Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per square arc-second extended source. A 23rd visual magnitude per square arc-second background was assumed. The models predicted signal-to-noise ratios of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours of integration.

  20. Discretized energy minimization in a wave guide with point sources

    NASA Technical Reports Server (NTRS)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  1. 27 CFR 9.225 - Middleburg Virginia.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... (downstream) for approximately 8.2 miles, crossing onto the Point of Rocks map, to the mouth of Catoctin Creek; then (2) Proceed southwesterly (upstream) along the meandering Catoctin Creek for approximately 4 miles... State Route 663 for approximately 0.1 mile to State Route 665 (locally known as Loyalty Road) in...

  2. 27 CFR 9.225 - Middleburg Virginia.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... (downstream) for approximately 8.2 miles, crossing onto the Point of Rocks map, to the mouth of Catoctin Creek; then (2) Proceed southwesterly (upstream) along the meandering Catoctin Creek for approximately 4 miles... State Route 663 for approximately 0.1 mile to State Route 665 (locally known as Loyalty Road) in...

  3. 27 CFR 9.141 - Escondido Valley.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... westerly direction approximately 17 miles; (5) The boundary continues to follow the 3000 foot contour line... intermittent stream approximately 18 miles east of the city of Fort Stockton (standard reference GE3317 on the... easterly direction approximately 9 miles until a southbound trail diverges from I-10 just past the point...

  4. 27 CFR 9.141 - Escondido Valley.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... westerly direction approximately 17 miles; (5) The boundary continues to follow the 3000 foot contour line... intermittent stream approximately 18 miles east of the city of Fort Stockton (standard reference GE3317 on the... easterly direction approximately 9 miles until a southbound trail diverges from I-10 just past the point...

  5. 27 CFR 9.139 - Santa Lucia Highlands.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... boundary follows Limekiln Creek for approximately 1.25 miles northeast to the 100 foot elevation. (2) Then following the 100 foot contour in a southeasterly direction for approximately 1 mile, where the boundary... approximately 6.50 miles, to the point where the 160 foot elevation crosses River Road. (6) Then following River...

  6. 27 CFR 9.141 - Escondido Valley.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... westerly direction approximately 17 miles; (5) The boundary continues to follow the 3000 foot contour line... intermittent stream approximately 18 miles east of the city of Fort Stockton (standard reference GE3317 on the... easterly direction approximately 9 miles until a southbound trail diverges from I-10 just past the point...

  7. 27 CFR 9.141 - Escondido Valley.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... westerly direction approximately 17 miles; (5) The boundary continues to follow the 3000 foot contour line... intermittent stream approximately 18 miles east of the city of Fort Stockton (standard reference GE3317 on the... easterly direction approximately 9 miles until a southbound trail diverges from I-10 just past the point...

  8. 27 CFR 9.139 - Santa Lucia Highlands.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... boundary follows Limekiln Creek for approximately 1.25 miles northeast to the 100 foot elevation. (2) Then following the 100 foot contour in a southeasterly direction for approximately 1 mile, where the boundary... approximately 6.50 miles, to the point where the 160 foot elevation crosses River Road. (6) Then following River...

  9. 27 CFR 9.141 - Escondido Valley.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... westerly direction approximately 17 miles; (5) The boundary continues to follow the 3000 foot contour line... intermittent stream approximately 18 miles east of the city of Fort Stockton (standard reference GE3317 on the... easterly direction approximately 9 miles until a southbound trail diverges from I-10 just past the point...

  10. 27 CFR 9.139 - Santa Lucia Highlands.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... boundary follows Limekiln Creek for approximately 1.25 miles northeast to the 100 foot elevation. (2) Then following the 100 foot contour in a southeasterly direction for approximately 1 mile, where the boundary... approximately 6.50 miles, to the point where the 160 foot elevation crosses River Road. (6) Then following River...

  11. Verification of floating-point software

    NASA Technical Reports Server (NTRS)

    Hoover, Doug N.

    1990-01-01

    Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, accept them as imprecisely defined, or ignore round-off error altogether and behave as if floating point operations are perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with minimal entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
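
    The paper verifies C programs in the Ariel system; purely to illustrate the example algorithm it cites (not the authors' actual source), a bisection zero-finder looks like this:

    ```python
    def bisect(f, a, b, tol=1e-12, max_iter=200):
        """Find x in [a, b] with f(x) ~= 0, assuming f(a) and f(b) differ in sign."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must bracket a root")
        for _ in range(max_iter):
            m = 0.5 * (a + b)
            fm = f(m)
            if fm == 0.0 or (b - a) < tol:
                return m
            if fa * fm < 0:
                b, fb = m, fm    # root lies in [a, m]
            else:
                a, fa = m, fm    # root lies in [m, b]
        return 0.5 * (a + b)

    print(bisect(lambda x: x * x - 2.0, 0.0, 2.0))   # ~1.41421356...
    ```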

  12. Automatic extraction of the mid-sagittal plane using an ICP variant

    NASA Astrophysics Data System (ADS)

    Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus

    2008-03-01

    Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
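
    A hedged sketch of the kind of closed-form least-squares step the abstract describes: the best orthogonal map with det = -1 (a reflection) between matched point sets, obtained by a sign-constrained variant of the SVD solution. The names and the plane-extraction note are illustrative, not the authors' code.

    ```python
    import numpy as np

    def best_reflection(P, Q):
        """Least-squares orthogonal map with det = -1 (a reflection) taking the
        (N, 3) data points P toward the model points Q, plus a translation."""
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)
        U, _, Vt = np.linalg.svd(H)
        d = -np.sign(np.linalg.det(Vt.T @ U.T))      # flip one axis: det(R) = -1
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, q_mean - R @ p_mean

    # For a (near-)pure planar reflection, the mirror-plane normal can be read
    # off as the eigenvector of R with eigenvalue -1.
    ```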

  13. Approximate analytical solutions in the analysis of thin elastic plates

    NASA Astrophysics Data System (ADS)

    Goloskokov, Dmitriy P.; Matrosov, Alexander V.

    2018-05-01

    Two approaches to the construction of approximate analytical solutions for bending of a rectangular thin plate are presented: the superposition method based on the method of initial functions (MIF) and the one built using the Green's function in the form of orthogonal series. Comparison of two approaches is carried out by analyzing a square plate clamped along its contour. Behavior of the moment and the shear force in the neighborhood of the corner points is discussed. It is shown that both solutions give identical results at all points of the plate except for the neighborhoods of the corner points. There are differences in the values of bending moments and generalized shearing forces in the neighborhoods of the corner points.

  14. Implicit approximate-factorization schemes for the low-frequency transonic equation

    NASA Technical Reports Server (NTRS)

    Ballhaus, W. F.; Steger, J. L.

    1975-01-01

    Two- and three-level implicit finite-difference algorithms for the low-frequency transonic small-disturbance equation are constructed using approximate factorization techniques. The schemes are unconditionally stable for the model linear problem. For nonlinear mixed flows, the schemes maintain stability by the use of conservatively switched difference operators, for which stability is maintained only if shock propagation is restricted to less than one spatial grid point per time step. The shock-capturing properties of the schemes were studied for various shock motions that might be encountered in problems of engineering interest. Computed results for a model airfoil problem that produces a flow field similar to that about a helicopter rotor in forward flight show the development of a shock wave and its subsequent propagation upstream off the front of the airfoil.
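
    The transonic small-disturbance schemes themselves are not reproduced here; the sketch below only illustrates approximate factorization on a simpler model problem (the 2-D heat equation), where the unfactored implicit operator I - Δt(Dxx + Dyy) is replaced by the product (I - Δt·Dxx)(I - Δt·Dyy) at the cost of an O(Δt²) splitting error, so each step needs only tridiagonal solves.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    def implicit_sweep(u, r):
        """Solve (I - r*D2) x = u along axis 0 for every column, where D2 is the
        second-difference stencil (boundary rows use the same stencil, for
        brevity of the sketch)."""
        n = u.shape[0]
        ab = np.zeros((3, n))
        ab[0, 1:] = -r            # superdiagonal
        ab[1, :] = 1.0 + 2.0 * r  # main diagonal
        ab[2, :-1] = -r           # subdiagonal
        return solve_banded((1, 1), ab, u)

    n, dt = 64, 1e-3
    h = 1.0 / (n - 1)
    r = dt / h**2
    xy = np.linspace(0.0, 1.0, n)
    u = np.exp(-100.0 * ((xy[:, None] - 0.5)**2 + (xy[None, :] - 0.5)**2))
    for _ in range(100):
        u = implicit_sweep(u, r)        # (I - dt*Dxx) sweep over rows
        u = implicit_sweep(u.T, r).T    # (I - dt*Dyy) sweep over columns
    ```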

  15. Elasticity and Strength of Biomacromolecular Crystals: Lysozyme

    NASA Technical Reports Server (NTRS)

    Holmes, A. M.; Witherow, W. K.; Chen, L. Q.; Chernov, A. A.

    2003-01-01

    The static Young's modulus, E = 0.1 to 0.5 GPa, the crystal critical strength sigma(sub c), and its ratio to E, sigma(sub c)/E approximately 10(exp -3), were measured for the first time for non-cross-linked lysozyme crystals in solution. Using a three-point bending apparatus, we also demonstrated that the crystals were purely elastic. The softness of protein crystals built of hard macromolecules (26 GPa for lysozyme) is explained by the large size of the macromolecules compared to the range of intermolecular forces, and by the weakness of intermolecular bonds compared to the peptide bond strength. The relatively large dynamic elastic moduli (approximately 8 GPa) reported from resonance light scattering should come from averaging over the moduli of intracrystalline water and intra- and intermolecular bonding.

  16. Guidance, Navigation, and Control Performance for the GOES-R Spacecraft

    NASA Technical Reports Server (NTRS)

    Chapel, Jim; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Gaylor, David; Freesland, Doug; Krimchansky, Alexander

    2014-01-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the first of the next generation geostationary weather satellites. The series represents a dramatic increase in Earth observation capabilities, with 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands. GOES-R also provides unprecedented availability, with less than 120 minutes per year of lost observation time. This paper presents the Guidance Navigation & Control (GN&C) requirements necessary to realize the ambitious pointing, knowledge, and Image Navigation and Registration (INR) objectives of GOES-R. Because the suite of instruments is sensitive to disturbances over a broad spectral range, a high fidelity simulation of the vehicle has been created with modal content over 500 Hz to assess the pointing stability requirements. Simulation results are presented showing acceleration, shock response spectra (SRS), and line of sight (LOS) responses for various disturbances from 0 Hz to 512 Hz. Simulation results demonstrate excellent performance relative to the pointing and pointing stability requirements, with LOS jitter for the isolated instrument platform of approximately 1 micro-rad. Attitude and attitude rate knowledge are provided directly to the instrument with an accuracy defined by the Integrated Rate Error (IRE) requirements. The data are used internally for motion compensation. The final piece of the INR performance is orbit knowledge, which GOES-R achieves with GPS navigation. Performance results are shown demonstrating compliance with the 50 to 75 m orbit position accuracy requirements. As presented in this paper, the GN&C performance supports the challenging mission objectives of GOES-R.

  17. Feedback Implementation of Zermelo's Optimal Control by Sugeno Approximation

    NASA Technical Reports Server (NTRS)

    Clifton, C.; Homaifax, A.; Bikdash, M.

    1997-01-01

    This paper proposes an approach to implement optimal control laws of nonlinear systems in real time. Our methodology does not require solving two-point boundary value problems online and may not require it off-line either. The optimal control law is learned using the original Sugeno controller (OSC) from a family of optimal trajectories. We compare the trajectories generated by the OSC and the trajectories yielded by the optimal feedback control law when applied to Zermelo's ship steering problem.

  18. A rapid method for the determination of some antihypertensive and antipyretic drugs by thermometric titrimetry.

    PubMed

    Abbasi, U M; Chand, F; Bhanger, M I; Memon, S A

    1986-02-01

    A simple and rapid method is described for the direct thermometric determination of milligram amounts of methyl dopa, propranolol hydrochloride, 1-phenyl-3-methylpyrazolone (MPP) and 2,3-dimethyl-1-phenylpyrazol-5-one (phenazone) in the presence of excipients. The compounds are reacted with N-bromosuccinimide and the heat of reaction is used to determine the end-point of the titration. The time required is approximately 2 min, and the accuracy is analytically acceptable.

  19. Simplified stock markets described by number operators

    NASA Astrophysics Data System (ADS)

    Bagarello, F.

    2009-06-01

    In this paper we continue our systematic analysis of the operatorial approach previously proposed in an economic context, and we discuss a mixed toy model of a simplified stock market, i.e., a model in which the price of the shares is given as an input. We deduce the time evolution of the portfolios of the various traders of the market, as well as of other observable quantities. As in a previous paper, we solve the equations of motion by means of a fixed-point-like approximation.

  20. Speed of recovery after arthroscopic rotator cuff repair.

    PubMed

    Kurowicki, Jennifer; Berglund, Derek D; Momoh, Enesi; Disla, Shanell; Horn, Brandon; Giveans, M Russell; Levy, Jonathan C

    2017-07-01

    The purpose of this study was to delineate the time taken to achieve maximum improvement (plateau of recovery) and the degree of recovery observed at various time points (speed of recovery) for pain and function after arthroscopic rotator cuff repair. An institutional shoulder surgery registry query identified 627 patients who underwent arthroscopic rotator cuff repair between 2006 and 2015. Measured range of motion, patient satisfaction, and patient-reported outcome measures were analyzed for preoperative, 3-month, 6-month, 1-year, and 2-year intervals. Subgroup analysis was performed on the basis of tear size by retraction grade and number of anchors used. As an entire group, the plateau of maximum recovery for pain, function, and motion occurred at 1 year. Satisfaction with surgery was >96% at all time points. At 3 months, 74% of improvement in pain and 45% to 58% of functional improvement were realized. However, only 22% of elevation improvement was achieved (P < .001). At 6 months, 89% of improvement in pain, 81% to 88% of functional improvement, and 78% of elevation improvement were achieved (P < .001). Larger tears had a slower speed of recovery for Single Assessment Numeric Evaluation scores, forward elevation, and external rotation. Smaller tears had higher motion and functional scores across all time points. Tear size did not influence pain levels. The plateau of maximum recovery after rotator cuff repair occurred at 1 year with high satisfaction rates at all time points. At 3 months, approximately 75% of pain relief and 50% of functional recovery can be expected. Larger tears have a slower speed of recovery. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  1. Fermi-LAT Detection of Gravitational Lens Delayed Gamma-Ray Flares from Blazar B0218+357

    NASA Technical Reports Server (NTRS)

    Cheung, C. C.; Larsson, S.; Scargle, J. D.; Amin, M. A.; Blandford, R. D.; Bulmash, D.; Chiang, J.; Ciprini, S.; Corbet, R. D. H.; Falco, E. E.; et al.

    2014-01-01

    Using data from the Fermi Large Area Telescope (LAT), we report the first clear gamma-ray measurement of a delay between flares from the gravitationally lensed images of a blazar. The delay was detected in B0218+357, a known double-image lensed system, during a period of enhanced gamma-ray activity with peak fluxes consistently observed to reach greater than 20-50 times its previous average flux. An auto-correlation function analysis identified a delay in the gamma-ray data of 11.46 plus or minus 0.16 days (1 sigma) that is approximately 1 day greater than previous radio measurements. Considering that it is beyond the capabilities of the LAT to spatially resolve the two images, we nevertheless decomposed individual sequences of superposing gamma-ray flares/delayed emissions. In three such approximately 8-10 day-long sequences within an approximately 4-month span, considering confusion due to overlapping flaring emission and flux measurement uncertainties, we found flux ratios consistent with approximately 1, thus systematically smaller than those from radio observations. During the first, best-defined flare, the delayed emission was detailed with a Fermi pointing, and we observed flux doubling timescales of approximately 3-6 hours implying as well extremely compact gamma-ray emitting regions.
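
    Schematically, the autocorrelation delay search works because the unresolved light curve is a superposition f(t) + a·f(t - Δ), whose autocorrelation has a secondary peak near Δ. The toy flare shapes, noise level, and search window below are illustrative assumptions, not the Fermi-LAT analysis itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(0.0, 120.0, 0.25)                     # days, 6-hour bins
    def flare(t0, w):
        return np.exp(-0.5 * ((t - t0) / w) ** 2)

    f = flare(20, 1.0) + flare(55, 1.5) + flare(80, 0.8)    # intrinsic flaring
    delay_true, ratio = 11.5, 1.0
    lensed = f + ratio * np.interp(t - delay_true, t, f, left=0.0)
    lc = lensed + 0.02 * rng.normal(size=t.size)        # unresolved sum + noise

    x = lc - lc.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags
    lags = np.arange(acf.size) * 0.25
    window = (lags > 5.0) & (lags < 30.0)               # exclude the zero-lag peak
    print("estimated delay (days):", lags[window][np.argmax(acf[window])])
    ```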

  2. Methane bubbling from northern lakes: present and future contributions to the global methane budget.

    PubMed

    Walter, Katey M; Smith, Laurence C; Chapin, F Stuart

    2007-07-15

    Large uncertainties in the budget of atmospheric methane (CH4) limit the accuracy of climate change projections. Here we describe and quantify an important source of CH4 -- point-source ebullition (bubbling) from northern lakes -- that has not been incorporated in previous regional or global methane budgets. Employing a method recently introduced to measure ebullition more accurately by taking into account its spatial patchiness in lakes, we estimate point-source ebullition for 16 lakes in Alaska and Siberia that represent several common northern lake types: glacial, alluvial floodplain, peatland and thermokarst (thaw) lakes. Extrapolation of measured fluxes from these 16 sites to all lakes north of 45 degrees N using circumpolar databases of lake and permafrost distributions suggests that northern lakes are a globally significant source of atmospheric CH4, emitting approximately 24.2 ± 10.5 Tg CH4 per year. Thermokarst lakes have particularly high emissions because they release CH4 produced from organic matter previously sequestered in permafrost. A carbon mass balance calculation of CH4 release from thermokarst lakes on the Siberian yedoma ice complex suggests that these lakes alone would emit as much as approximately 49,000 Tg CH4 if this ice complex were to thaw completely. Using a space-for-time substitution based on the current lake distributions in permafrost-dominated and permafrost-free terrains, we estimate that lake emissions would be reduced by approximately 12% in a more probable transitional permafrost scenario and by approximately 53% in a 'permafrost-free' Northern Hemisphere. Long-term decline in CH4 ebullition from lakes due to lake area loss and permafrost thaw would occur only after the large release of CH4 associated with thermokarst lake development in the zone of continuous permafrost.

  3. Characteristics of downward leaders in a cloud-to-ground lightning strike on a lightning rod

    NASA Astrophysics Data System (ADS)

    Wang, Caixia; Sun, Zhuling; Jiang, Rubin; Tian, Yangmeng; Qie, Xiushu

    2018-05-01

    A natural downward negative cloud-to-ground (CG) lightning flash was observed at a close distance of 370 m using electric field change measurements and a high-speed camera at 5400 frames per second (fps). Two subsequent leader-return strokes of the lightning hit a lightning rod installed on top of a seven-story building in Beijing, while the grounding point of the stepped leader-first return stroke was 12 m away, on the roof of the building. The 2-D average speed of the downward stepped leader (L1) before the first return stroke (R1) was approximately 5.1 × 10^4 m/s during its propagation over the 306 m above the building, and the speeds before the subsequent strokes (R2 and R3) ranged from 1.1 × 10^6 m/s to 2.2 × 10^6 m/s. An attempted leader (AL), occurring 201 ms after R1 and 10 ms before R2, reached approximately 99 m above the roof and failed to connect to the ground. The 2-D average speed of the AL was approximately 7.4 × 10^4 m/s. The luminosity at the tip of the leader was brighter than the channel behind it. The leader inducing R2, with an alteration of the terminating point, was a dart-stepped leader (DSL), which propagated through the channel of the AL and continued to develop downward with new branches starting about 17 m above the roof. The 2-D speed of the DSL over the bottom 99 m was 6.6 × 10^5 m/s. The average time interval between the stepped pulses of the DSL was approximately 10 μs, smaller than that of L1, which was about 17 μs. The average step length of the DSL was approximately 6.6 m. The study shows that the stepped leader-first return stroke of a lightning flash will not always hit the tip of a tall metal rod, owing to the significant branching of the leader. Under certain conditions, however, the subsequent return strokes may shift the grounding point to the tip of a tall metal rod. For the lightning rod, the protection against subsequent return strokes may therefore be better than that against the first return stroke.

  4. Indirect synchronization control in a starlike network of phase oscillators

    NASA Astrophysics Data System (ADS)

    Kuptsov, Pavel V.; Kuptsova, Anna V.

    2018-04-01

    A starlike network of non-identical phase oscillators is considered that contains a hub and three rays, each having a single node. In such a network an effect of indirect synchronization control is reported: by changing the natural frequency and the coupling strength of one of the peripheral oscillators, one can switch the synchronization of the others on and off. The controlling oscillator itself is not synchronized with them and has a frequency approximately four times higher than the frequency of the synchronization. Parameter planes showing the corresponding synchronization tongue are presented, and time dependencies of the phase differences are plotted for points within and outside the tongue.
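
    A toy sketch of such a star network, assuming Kuramoto-type phase oscillators with the rays coupled only through the hub; the frequencies, couplings, and integration settings are illustrative, not the paper's values.

    ```python
    import numpy as np

    def kuramoto_star(omega_hub, omega_rays, k_rays, T=2000.0, dt=0.01, seed=6):
        """Euler integration of a Kuramoto star: theta[0] is the hub; each ray
        couples only to the hub with its own strength k_rays[i]."""
        omega = np.concatenate([[omega_hub], omega_rays])
        k = np.asarray(k_rays, dtype=float)
        theta = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, omega.size)
        for _ in range(int(T / dt)):
            d = np.empty_like(theta)
            d[0] = omega[0] + np.sum(k * np.sin(theta[1:] - theta[0]))
            d[1:] = omega[1:] + k * np.sin(theta[0] - theta[1:])
            theta += dt * d
        return theta

    # The third ray plays the "controlling" role, detuned to roughly four times
    # the common frequency; sweeping its frequency and coupling and re-running
    # shows when the phase difference of the other two rays locks.
    theta = kuramoto_star(1.0, [0.98, 1.02, 4.0], [0.3, 0.3, 0.5])
    ```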

  5. A new continuous light source for high-speed imaging

    NASA Astrophysics Data System (ADS)

    Paton, R. T.; Hall, R. E.; Skews, B. W.

    2017-02-01

    Xenon arc lamps have been identified as a suitable continuous light source for high-speed imaging, specifically high-speed schlieren and shadowgraphy. One issue when setting up such systems is the time it takes to reduce a finite source to an approximation of a point source for z-type schlieren. A preliminary design of a compact compound lens for use with a commercial Xenon arc lamp was tested for suitability. While there is some dimming of the illumination at the spot periphery, the overall spectral and luminance distribution of the compact source is quite acceptable, especially considering the time benefit that it represents.

  6. Simulated groundwater flow paths, travel time, and advective transport of nitrogen in the Kirkwood-Cohansey aquifer system, Barnegat Bay–Little Egg Harbor Watershed, New Jersey

    USGS Publications Warehouse

    Voronin, Lois M.; Cauller, Stephen J.

    2017-07-31

    Elevated concentrations of nitrogen in groundwater that discharges to surface-water bodies can degrade surface-water quality and habitats in the New Jersey Coastal Plain. An analysis of groundwater flow in the Kirkwood-Cohansey aquifer system and deeper confined aquifers that underlie the Barnegat Bay–Little Egg Harbor (BB-LEH) watershed and estuary was conducted by using groundwater-flow simulation, in conjunction with a particle-tracking routine, to provide estimates of groundwater flow paths and travel times to streams and the BB-LEH estuary. Water-quality data from the Ambient Groundwater Quality Monitoring Network, a long-term monitoring network of wells distributed throughout New Jersey, were used to estimate the initial nitrogen concentration in recharge for five different land-use classes—agricultural cropland or pasture, agricultural orchard or vineyard, urban non-residential, urban residential, and undeveloped. Land use at the point of recharge within the watershed was determined using a geographic information system (GIS). Flow path starting locations were plotted on land-use maps for 1930, 1973, 1986, 1997, and 2002. Information on the land use at the time and location of recharge, the time of travel to the discharge location, and the point of discharge was determined for each simulated flow path. Particle-tracking analysis provided the link from the point of recharge, along the particle flow path, to the point of discharge, and the particle travel time. The travel time of each simulated particle established the recharge year, and land use during the year of recharge was used to define the nitrogen concentration associated with each flow path. The recharge-weighted average nitrogen concentration for all flow paths that discharge to the Toms River upstream from streamflow-gaging station 01408500 or to the BB-LEH estuary was calculated. Groundwater input into the Barnegat Bay–Little Egg Harbor estuary from two main sources—indirect discharge from base flow to streams that eventually flow into the bay, and groundwater discharge directly into the estuary and adjoining coastal wetlands—is summarized by quantity, travel time, and estimated nitrogen concentration. Simulated average groundwater discharge to streams in the watershed that flow into the BB-LEH estuary is approximately 400 million gallons per day. Particle-tracking results indicate that the travel time of 56 percent of this discharge is less than 7 years. Fourteen percent of the groundwater discharge to the streams in the BB-LEH watershed has a travel time of less than 7 years and originates in urban land. Analysis of flow-path simulations indicates that approximately 13 percent of the total groundwater flow through the study area discharges directly to the estuary and adjoining coastal wetlands (approximately 64 million gallons per day). The travel time of 19 percent of this discharge is less than 7 years. Ten percent of this discharge (1 percent of the total groundwater flow through the study area) originates in urban areas and has a travel time of less than 7 years.
Groundwater that discharges to the streams that flow into the BB-LEH, in general, has shorter travel times, and a higher percentage of it originates in urban areas, than does direct groundwater discharge to the Barnegat Bay–Little Egg Harbor estuary. The simulated average nitrogen concentration in groundwater that discharges to the Toms River, upstream from streamflow-gaging station 01408500, was computed and compared to summary concentrations determined from analysis of multiple surface-water samples. The nitrogen concentration in groundwater that discharges directly to the estuary and adjoining coastal wetlands is a current data gap; the particle-tracking methodology used in this study provides an estimate of this concentration.
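
    The recharge-weighted averaging described above is straightforward bookkeeping: each flow path's recharge year follows from its travel time, the land use in that year sets the nitrogen concentration, and concentrations are averaged with recharge-rate weights. The sketch below is a hypothetical illustration of that arithmetic, not the report's implementation; the land-use classes follow the report, but the field names, lookup values, and function signature are assumptions made for the example.

        # Hypothetical sketch of recharge-weighted nitrogen averaging.
        LANDUSE_N_MG_L = {  # assumed recharge concentrations, mg/L (illustrative)
            "cropland_or_pasture": 6.0, "orchard_or_vineyard": 4.0,
            "urban_nonresidential": 2.5, "urban_residential": 3.0,
            "undeveloped": 0.5,
        }

        def recharge_weighted_nitrogen(paths, discharge_year=2002):
            """paths: iterable of dicts with 'recharge_rate' (a flux weight),
            'travel_time_yr', and 'landuse_at', a callable mapping a recharge
            year to one of the land-use classes above."""
            num = den = 0.0
            for p in paths:
                recharge_year = discharge_year - p["travel_time_yr"]
                conc = LANDUSE_N_MG_L[p["landuse_at"](recharge_year)]
                num += p["recharge_rate"] * conc
                den += p["recharge_rate"]
            return num / den if den else float("nan")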

  7. Adolescents' Religiousness and Substance Use Are Linked via Afterlife Beliefs and Future Orientation.

    PubMed

    Holmes, Christopher J; Kim-Spoon, Jungmeen

    2017-10-01

    Although religiousness has been identified as a protective factor against adolescent substance use, the processes through which these effects may operate are unclear. The current longitudinal study examined sequential mediation by afterlife beliefs and future orientation in the relation between adolescent religiousness and cigarette, alcohol, and marijuana use. Participants included 131 adolescents (mean age at Time 1 = 12 years) assessed at three time points at approximately two-year intervals. Structural equation modeling indicated that higher religiousness at Time 1 was associated with higher afterlife beliefs at Time 2. Higher afterlife beliefs at Time 2 were associated with higher future orientation at Time 2, which in turn was associated with lower use of cigarettes, alcohol, and marijuana at Time 3. Our findings highlight the roles of afterlife beliefs and future orientation in explaining the protective effects of religiousness against adolescent substance use.

  8. Effects of agriculture upon the air quality and climate: research, policy, and regulations.

    PubMed

    Aneja, Viney P; Schlesinger, William H; Erisman, Jan Willem

    2009-06-15

    Scientific assessment of agricultural air quality, including estimates of emissions and potential sequestration of greenhouse gases, is an important emerging area of environmental science that offers significant challenges to policy and regulatory authorities. Improvements are needed in measurements, modeling, emission controls, and farm operation management. Controlling emissions of gases and particulate matter from agriculture is notoriously difficult, as this sector affects the most basic need of humans, i.e., food. Current policies combine an inadequate science base, covering a very disparate range of activities in a complex industry, with social and political overlays. Moreover, agricultural emissions derive from both area and point sources. In the United States, agricultural emissions play an important role in several atmospherically mediated processes of environmental and public health concern. These atmospheric processes affect local and regional environmental quality, including odor, particulate matter (PM) exposure, eutrophication, acidification, exposure to toxics, climate, and pathogens. Agricultural emissions also contribute to the global problems caused by greenhouse gas emissions. Agricultural emissions are variable in space and time and in how they interact within the various processes and media affected. Most important in the U.S. are ammonia (where agriculture accounts for approximately 90% of total emissions), reduced sulfur (unquantified), PM2.5 (approximately 16%), PM10 (approximately 18%), methane (approximately 29%), nitrous oxide (approximately 72%), and odor and emissions of pathogens (both unquantified). Agriculture also consumes fossil fuels for fertilizer production and farm operations, thus emitting carbon dioxide (CO2), oxides of nitrogen (NOx), sulfur oxides (SOx), and particulates. Current research priorities include the quantification of point and nonpoint sources; the biosphere-atmosphere exchange of ammonia, reduced sulfur compounds, volatile organic compounds, greenhouse gases, odor, and pathogens; the quantification of landscape processes; and the primary and secondary emissions of PM. Given the serious concerns raised regarding the amount and the impacts of agricultural air emissions, policies must be pursued and regulations must be enacted in order to make real progress in reducing these emissions and their associated environmental impacts.

  9. Geologic, geophysical, and in situ stress investigations in the vicinity of the Dining Car Chimney, Dining Car/Hybla Gold tunnels, Nevada Test Site, with sections on geologic investigations, geophysical investigations, and in situ stress investigations

    USGS Publications Warehouse

    Townsend, D.R.; Baldwin, M.J.; Carroll, R.D.; Ellis, W.L.; Magner, J.E.

    1982-01-01

    The Hybla Gold experiment was conducted in the U12e.20 drifts of the E-tunnel complex beneath the surface of Rainier Mesa at the Nevada Test Site. Though the proximity of the Hybla Gold working point to the chimney of the Dining Car event was important to the experiment, the observable geologic effects from Dining Car on the Hybla Gold site were minor. Overburden above the working point is approximately 385 m (1,263 ft). The pre-Tertiary surface, probably quartzite, lies approximately 254 m (833 ft) below the working point. The drifts are mined in zeolitized ash-fall tuffs of tunnel bed 4, subunits K and J, all of Miocene age. The working point is in subunit 4J. Geologic structure in the region around the working point is not complex. The U12e.20 main drift follows the axis of a shallow depositional syncline. A northeast-dipping fault with displacement of approximately 3 m (10 ft) passes within 15.2 m (50 ft) of the Hybla Gold working point. Three faults of smaller displacement pass within 183-290 m (600-950 ft) of the working point, and are antithetic to the 3-m (10-ft) fault. Three exploratory holes were drilled to investigate the chimney of the nearby Dining Car event. Four horizontal holes were drilled during the construction of the U12e.20 drifts to investigate the geology of the Hybla Gold working point.

  10. Beyond Clausius-Mossotti - Wave propagation on a polarizable point lattice and the discrete dipole approximation. [electromagnetic scattering and absorption by interstellar grains]

    NASA Technical Reports Server (NTRS)

    Draine, B. T.; Goodman, Jeremy

    1993-01-01

    We derive the dispersion relation for electromagnetic waves propagating on a lattice of polarizable points. From this dispersion relation we obtain a prescription for choosing dipole polarizabilities so that an infinite lattice with finite lattice spacing will mimic a continuum with a given dielectric constant. The discrete dipole approximation is used to calculate scattering and absorption by a finite target by replacing the target with an array of point dipoles. We compare different prescriptions for determining the dipole polarizabilities. We show that the most accurate results are obtained when the lattice dispersion relation is used to set the polarizabilities.
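
    For orientation, the baseline against which such prescriptions are compared is the classic Clausius-Mossotti relation, which assigns each site of a lattice with spacing d a polarizability determined by the desired dielectric constant. This is standard background rather than a result of the paper, written here in Gaussian units:

        \alpha_{\mathrm{CM}} = \frac{3 d^{3}}{4\pi}\,\frac{\epsilon - 1}{\epsilon + 2}

    Lattice-dispersion-based prescriptions modify this value by corrections of order (kd)^2 (plus a radiative-reaction term) so that waves on the discrete lattice propagate as they would in the continuum.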

  11. Perspective view looking from the northwest from approximately the same ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Perspective view looking from the northwest from approximately the same vantage point as in MD-1109-19 - National Park Seminary, Colonial House, 2745 Dewitt Circle, Silver Spring, Montgomery County, MD

  12. Perspective view looking from the northeast, from approximately the same ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Perspective view looking from the northeast, from approximately the same vantage point as in MD-1109-K-12 - National Park Seminary, Japanese Bungalow, 2801 Linden Lane, Silver Spring, Montgomery County, MD

  13. A class of reduced-order models in the theory of waves and stability.

    PubMed

    Chapman, C J; Sorokin, S V

    2016-02-01

    This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.

  14. Finding the Best Quadratic Approximation of a Function

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2011-01-01

    This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…

  15. 27 CFR 9.106 - North Yuba.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 1,900-foot contour line, approximately 0.75 mile north of Finley Ranch; (11) Then north along said... Indiana Creek, approximately 0.87 mile, to the point where Indiana Creek meets the 2,000-foot contour line.... 17 N., R. 6 E., the boundary proceeds northeasterly in a meandering line approximately 1.5 miles...

  16. 27 CFR 9.106 - North Yuba.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 1,900-foot contour line, approximately 0.75 mile north of Finley Ranch; (11) Then north along said... Indiana Creek, approximately 0.87 mile, to the point where Indiana Creek meets the 2,000-foot contour line.... 17 N., R. 6 E., the boundary proceeds northeasterly in a meandering line approximately 1.5 miles...

  17. 27 CFR 9.134 - Oakville.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Highway 29, then continuing in a straight line approximately .1 mile to the peak of the 320+ foot hill... direction in a straight line approximately 1.7 miles along Skellenger Lane, past its intersection with Conn... quadrangle map); (2) Then south along the center of the river bed approximately .4 miles to the point where...

  18. 27 CFR 9.134 - Oakville.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Highway 29, then continuing in a straight line approximately .1 mile to the peak of the 320+ foot hill... direction in a straight line approximately 1.7 miles along Skellenger Lane, past its intersection with Conn... quadrangle map); (2) Then south along the center of the river bed approximately .4 miles to the point where...

  19. 27 CFR 9.106 - North Yuba.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 1,900-foot contour line, approximately 0.75 mile north of Finley Ranch; (11) Then north along said... Indiana Creek, approximately 0.87 mile, to the point where Indiana Creek meets the 2,000-foot contour line.... 17 N., R. 6 E., the boundary proceeds northeasterly in a meandering line approximately 1.5 miles...

  20. 33 CFR 165.T01-0470 - Safety Zones; Marine Events in Captain of the Port Long Island Sound Zone.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... vessel or the designated representative, by siren, radio, flashing light or other means, the operator of...:30 p.m. • Location: Water of the Great South Bay, Blue Point, NY in approximate position 40°44′06.28... South Bay, Blue Point, NY in approximate position, 40°44′06.28″ N, 073°01′02.50″ W (NAD 83). 7.15Montauk...

  1. Coordinated XTE/EUVE Observations of Algol

    NASA Technical Reports Server (NTRS)

    Stern, Robert A.

    1997-01-01

    EUVE, ASCA, and XTE observed the eclipsing binary Algol (Beta Per) from 1-7 Feb. 1996. The coordinated observation covered approximately 2 binary orbits of the system, with a net exposure of approximately 160 ksec for EUVE, 40 ksec for ASCA (in 4 pointings), and 90 ksec for XTE (in 45 pointings). We discuss results of modeling the combined EUVE, ASCA, and XTE data using continuous differential emission measure distributions, and provide constraints on the Fe abundance in the Algol system.

  2. Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.

    PubMed

    de Barros, Louis; Dietrich, Michel

    2008-03-01

    Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Frechet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Frechet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.

  3. The Spacelab IPS Star Simulator

    NASA Astrophysics Data System (ADS)

    Wessling, Francis C., III

    The cost of doing business in space is very high. If errors occur while in orbit, the costs grow and desired scientific data may be corrupted or even lost. The Spacelab Instrument Pointing System (IPS) Star Simulator is a unique test bed that allows star trackers to interface with simulated stars in a laboratory before going into orbit. This hardware-in-the-loop testing of equipment on Earth increases the probability of success in space. The IPS Star Simulator provides three fields of view, 2.55 x 2.55 deg each, for input into star trackers. The fields of view are produced on three separate monitors. Each monitor has 4096 x 4096 addressable points and can display a maximum of 50 stars (pixels) at a given time. The pixel refresh rate is 1000 Hz. The spectral output is approximately 550 nm. The available relative visual magnitude range is two to eight visual magnitudes. The star size is less than 100 arcsec. The minimum star movement is less than 5 arcsec and the relative position accuracy is approximately 40 arcsec. The purpose of this paper is to describe the IPS Star Simulator design and to provide an operational scenario so others may gain from the approach and possible use of the system.

  4. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media.

    PubMed

    Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A

    2015-01-01

    To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, the half-value layer (HVL), percentage depth doses (PDDs), and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8 and 10.3 mm Al versus 3.8 and 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model reduced the run time approximately 9500-fold compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams.
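
    As background to the HVL figures quoted above: for a narrow, monoenergetic beam with linear attenuation coefficient \mu, the half-value layer follows directly from exponential attenuation (a standard relation, not specific to this paper):

        \mathrm{HVL} = \frac{\ln 2}{\mu}

    A real 120 kVp spectrum hardens as it is filtered, which is why the second HVL (about 10 mm Al) is substantially larger than the first (3.8 mm Al).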

  5. Approximations to galaxy star formation rate histories: properties and uses of two examples

    NASA Astrophysics Data System (ADS)

    Cohn, J. D.

    2018-05-01

    Galaxies evolve via a complex interaction of numerous different physical processes, scales and components. In spite of this, overall trends often appear. Simplified models for galaxy histories can be used to search for and capture such emergent trends, and thus to interpret and compare results of galaxy formation models to each other and to nature. Here, two approximations are applied to galaxy integrated star formation rate histories, drawn from a semi-analytic model grafted onto a dark matter simulation. Both a lognormal functional form and principal component analysis (PCA) approximate the integrated star formation rate histories fairly well. Machine learning, based upon simplified galaxy halo histories, is somewhat successful at recovering both fits. The fits to the histories give fixed time star formation rates which have notable scatter from their true final time rates, especially for quiescent and "green valley" galaxies, and more so for the PCA fit. For classifying galaxies into subfamilies sharing similar integrated histories, both approximations are better than using final stellar mass or specific star formation rate. Several subsamples from the simulation illustrate how these simple parameterizations provide points of contact for comparisons between different galaxy formation samples, or more generally, models. As a side result, the halo masses of simulated galaxies with early peak star formation rate (according to the lognormal fit) are bimodal. The galaxies with a lower halo mass at peak star formation rate appear to stall in their halo growth, even though they are central in their host halos.
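
    A lognormal star formation rate history of the kind used above can be fitted with a few lines of standard tooling. The sketch below is a minimal illustration, assuming the common parameterization SFR(t) proportional to exp(-(ln t - t0)^2 / (2 tau^2)) / t with amplitude A; the mock data, parameter names, and initial guesses are assumptions for the example, not values from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def lognormal_sfr(t, A, t0, tau):
            """Lognormal SFR history; t is cosmic time in Gyr (t > 0)."""
            return A / (t * tau * np.sqrt(2.0*np.pi)) * np.exp(
                -(np.log(t) - t0)**2 / (2.0 * tau**2))

        # mock history for illustration only
        t = np.linspace(0.1, 13.7, 200)
        rng = np.random.default_rng(0)
        sfr = lognormal_sfr(t, 10.0, 1.5, 0.6) + rng.normal(0.0, 0.01, t.size)

        popt, _ = curve_fit(lognormal_sfr, t, sfr, p0=(5.0, 1.0, 0.5))
        t_peak = np.exp(popt[1] - popt[2]**2)   # analytic peak time of the fit

    The analytic peak time makes the "early peak star formation rate" classification mentioned above a one-line computation once the fit parameters are in hand.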

  6. A 10-year Study of the Academic Progress of Students Identified as Low Performers after Their First Semester of Pharmacy School

    PubMed Central

    Battise, Dawn M.; Neville, Michael W.

    2016-01-01

    Objective. To examine whether pharmacy students characterized as low performers at the conclusion of their first semester remained low performers throughout their academic career. Methods. Bottom-quartile performance on first-semester grade point average (GPA) was compared to licensing examination success, cumulative GPA at the end of the didactic education, and whether the student graduated on time, using cross-tabulation analysis. Relative risk ratios and confidence intervals were calculated. Results. Students in the bottom quartile for GPA at the end of their first semester in pharmacy school were approximately six times more likely not to graduate on time, not to pass the North American Pharmacist Licensure Exam on their first attempt, and to remain in the bottom quartile for GPA at the end of their didactic education. Conclusion. This study suggests that pharmacy students who score in the bottom quartile for GPA at the end of their first semester are more likely to underperform academically unless they take corrective action. PMID:27756926
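
    The relative risk ratios and confidence intervals reported above come from standard 2x2 cross-tabulation arithmetic. As a hedged illustration (generic epidemiology conventions, not the study's code or data), a relative risk with a 95% confidence interval via the usual log-RR normal approximation looks like this:

        import math

        def relative_risk(a, b, c, d):
            """2x2 table: a = exposed with outcome, b = exposed without,
            c = unexposed with outcome, d = unexposed without."""
            rr = (a / (a + b)) / (c / (c + d))
            se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of ln(RR)
            lo = math.exp(math.log(rr) - 1.96 * se)
            hi = math.exp(math.log(rr) + 1.96 * se)
            return rr, lo, hi

        # illustrative counts only, not the study's data:
        print(relative_risk(18, 14, 9, 87))   # RR = 6.0 with its 95% CI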

  7. Accelerated antioxidant bioavailability of OPC-3 bioflavonoids administered as isotonic solution.

    PubMed

    Cesarone, Maria R; Grossi, Maria Giovanni; Di Renzo, Andrea; Errichi, Silvia; Schönlau, Frank; Wilmer, James L; Lange, Mark; Blumenfeld, Julian

    2009-06-01

    The degree of absorption of bioflavonoids, a diverse and complex group of plant-derived phytonutrients, has been a frequent debate among scientists. Monomeric flavonoid species are known to be absorbed within 2 h. The kinetics of plasma reactive oxygen species, a reflection of bioactivity, of a commercial blend of flavonoids, OPC-3, was investigated. OPC-3 was selected to compare absorption of an isotonic flavonoid solution vs. tablet form with the equivalent amount of fluid. In the case of isotonic OPC-3, the reactive oxygen species in the subjects' plasma decreased significantly (p < 0.05), a reduction six times greater than that produced by OPC-3 tablets by 10 min post-consumption. After 20 min the isotonic formulation was approximately four times more bioavailable than the tablet, and after 40 min twice as bioavailable. At time points 1 h and later, both isotonic and tablet formulations lowered oxidative stress, although the isotonic formulation values remained significantly better throughout the 4 h investigation period. These findings point to a dramatically accelerated bioavailability of flavonoids delivered in an isotonic formulation.

  8. Automating the Generation of the Cassini Tour Atlas Database

    NASA Technical Reports Server (NTRS)

    Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.

    2010-01-01

    The Tour Atlas is a large database of geometrical tables, plots, and graphics used by Cassini science planning engineers and scientists primarily for science observation planning. Over time, as the contents of the Tour Atlas grew, the amount of time it took to recreate the Tour Atlas similarly grew--to the point that it took one person a week of effort. When Cassini tour designers estimated that they were going to create approximately 30 candidate Extended Mission trajectories--which needed to be analyzed for science return in a short amount of time--it became a necessity to automate. We report on the automation methodology that reduced the amount of time it took one person to (re)generate a Tour Atlas from a week to, literally, one UNIX command.

  9. Application of Four-Point Newton-EGSOR iteration for the numerical solution of 2D Porous Medium Equations

    NASA Astrophysics Data System (ADS)

    Chew, J. V. L.; Sulaiman, J.

    2017-09-01

    Partial differential equations that describe nonlinear heat and mass transfer phenomena are difficult to solve. Where the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve a particular partial differential equation. In terms of numerical procedure, a method can be considered efficient if it gives an approximate solution within the specified error at the least computational cost. In this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized by using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large, sparse nonlinear system. Using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving the 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations to reach converged solutions, computation time, and the maximum absolute errors produced by the methods.
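
    To make the Newton-plus-relaxation structure concrete, here is a minimal sketch for the one-dimensional analogue u_t = (u^m)_xx, with an implicit time step, an outer Newton linearization, and plain Gauss-Seidel sweeps as the inner solver. It illustrates the NGS baseline rather than the paper's four-point EGSOR variant, and all parameter values are assumptions for the example:

        import numpy as np

        def pme_step(u, m=2, dt=1e-4, dx=1e-2, newton_iters=8, gs_sweeps=50):
            """One implicit step of u_t = (u^m)_xx in 1D with fixed (Dirichlet)
            boundary values: Newton outer loop, Gauss-Seidel inner solver."""
            lam = dt / dx**2
            un = u.copy()
            for _ in range(newton_iters):
                w = u**m
                F = u - un
                F[1:-1] -= lam * (w[2:] - 2.0*w[1:-1] + w[:-2])   # residual
                dw = m * u**(m - 1)                               # d(u^m)/du
                delta = np.zeros_like(u)
                for _ in range(gs_sweeps):                        # solve J*delta = -F
                    for i in range(1, len(u) - 1):
                        rhs = -F[i] + lam * (dw[i+1]*delta[i+1] + dw[i-1]*delta[i-1])
                        delta[i] = rhs / (1.0 + 2.0*lam*dw[i])
                u = u + delta                                     # Newton update
            return u

        # toy usage: a compactly supported hump spreading under m = 2
        x = np.linspace(0.0, 1.0, 101)
        u = np.maximum(0.0, 0.5 - 4.0*np.abs(x - 0.5))
        for _ in range(100):
            u = pme_step(u)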

  10. Time integration algorithms for the two-dimensional Euler equations on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Slack, David C.; Whitaker, D. L.; Walters, Robert W.

    1994-01-01

    Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method, and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher order accurate solutions using several mesh sizes; higher order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.

  11. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages, such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid-body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to a single physical point; in our approach, we minimize the distances between the circular subsets of each image, which ideally intersect at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.

  12. 16. WEST ELEVATION. MONOMOY POINT LT. STATION, MASS., SHOWING PROPOSED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. WEST ELEVATION. MONOMOY POINT LT. STATION, MASS., SHOWING PROPOSED ALTERATION AND IMPROVEMENT OF DWELLING. No. 1343. SHEET 3 of 5. July 1899. - Monomoy Point Light Station, Approximately 3500 feet Northeast Powder Hole Pond, Monomoy National Wildlife Refuge, Chatham, Barnstable County, MA

  13. 17. WEST ELEVATION. MONOMOY POINT LT. STATION, MASS., SHOWING PROPOSED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. WEST ELEVATION. MONOMOY POINT LT. STATION, MASS., SHOWING PROPOSED ALTERATION AND IMPROVEMENT OF DWELLING. No. 1343. Sheet 4 of 5. July 1899. - Monomoy Point Light Station, Approximately 3500 feet Northeast Powder Hole Pond, Monomoy National Wildlife Refuge, Chatham, Barnstable County, MA

  14. 14. FIRST FLOOR PLAN. MONOMOY POINT LT. STATION, MASS., SHOWING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. FIRST FLOOR PLAN. MONOMOY POINT LT. STATION, MASS., SHOWING PROPOSED ALTERATION AND IMPROVEMENT OF DWELLING. No. 1343. SHEET 1 of 5. July 1899. - Monomoy Point Light Station, Approximately 3500 feet Northeast Powder Hole Pond, Monomoy National Wildlife Refuge, Chatham, Barnstable County, MA

  15. 15. SECOND FLOOR PLAN. MONOMOY POINT LT. STATION, MASS., SHOWING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. SECOND FLOOR PLAN. MONOMOY POINT LT. STATION, MASS., SHOWING PROPOSED ALTERATION AND IMPROVEMENT OF DWELLING. No. 1343. SHEET 2 OF 5. July 1899. - Monomoy Point Light Station, Approximately 3500 feet Northeast Powder Hole Pond, Monomoy National Wildlife Refuge, Chatham, Barnstable County, MA

  16. Early-life predictors of leisure-time physical inactivity in midadulthood: findings from a prospective British birth cohort.

    PubMed

    Pinto Pereira, Snehal M; Li, Leah; Power, Chris

    2014-12-01

    Much adult physical inactivity research ignores early-life factors from which later influences may originate. In the 1958 British birth cohort (followed from 1958 to 2008), leisure-time inactivity, defined as activity frequency of less than once a week, was assessed at ages 33, 42, and 50 years (n = 12,776). Early-life factors (at ages 0-16 years) were categorized into 3 domains (i.e., physical, social, and behavioral). We assessed associations of adult inactivity 1) with factors within domains, 2) with the 3 domains combined, and 3) allowing for adult factors. At each age, approximately 32% of subjects were inactive. When domains were combined, factors associated with inactivity (e.g., at age 50 years) were prepubertal stature (5% lower odds per 1-standard deviation higher height), hand control/coordination problems (14% higher odds per 1-point increase on a 4-point scale), cognition (10% lower odds per 1-standard deviation greater ability), parental divorce (21% higher odds), institutional care (29% higher odds), parental social class at child's birth (9% higher odds per 1-point reduction on a 4-point scale), minimal parental education (13% higher odds), household amenities (2% higher odds per increase (representing poorer amenities) on a 19-point scale), inactivity (8% higher odds per 1-point reduction in activity on a 4-point scale), low sports aptitude (13% higher odds), and externalizing behaviors (i.e., conduct problems) (5% higher odds per 1-standard deviation higher score). Adjustment for adult covariates weakened associations slightly. Factors from early life were associated with adult leisure-time inactivity, allowing for early identification of groups vulnerable to inactivity.

  17. Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-05-01

    Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures with single finite values within an observation window and are thus unable to characterize the system's evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with cardiac heart failure, and from gait recordings of short walks by young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
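
    For reference, the windowed estimator that these instantaneous indices generalize is the classic sample entropy. The sketch below is the standard algorithm (template length m, tolerance r as a fraction of the standard deviation, Chebyshev distance, self-matches excluded); it is background for the abstract, not the paper's point-process estimator:

        import numpy as np

        def sample_entropy(x, m=2, r=0.2):
            """Classic windowed sample entropy (Chebyshev distance,
            self-matches excluded); r is a fraction of the std deviation."""
            x = np.asarray(x, dtype=float)
            tol = r * x.std()
            def match_count(mm):
                t = np.lib.stride_tricks.sliding_window_view(x, mm)
                c = 0
                for i in range(len(t) - 1):
                    d = np.max(np.abs(t[i+1:] - t[i]), axis=1)
                    c += int(np.sum(d <= tol))
                return c
            B = match_count(m)
            A = match_count(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        # toy usage: white noise scores higher (more complex) than a sine
        rng = np.random.default_rng(0)
        print(sample_entropy(np.sin(np.linspace(0, 40*np.pi, 2000))))
        print(sample_entropy(rng.standard_normal(2000)))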

  18. A new NASA/MSFC mission analysis global cloud cover data base

    NASA Technical Reports Server (NTRS)

    Brown, S. C.; Jeffries, W. R., III

    1985-01-01

    A global cloud cover data set, derived from the USAF 3D NEPH Analysis, was developed for use in climate studies and for Earth-viewing applications. This data set contains a single parameter - total sky cover - separated in time by 3 or 6 hr intervals and in space by approximately 50 n.mi. Cloud cover amount is recorded for each grid point (of a square grid) by a single alphanumeric character representing each 5 percent increment of sky cover. The data are arranged in both quarterly and monthly formats. The data base currently provides daily, 3-hr observed total sky cover for the Northern Hemisphere from 1972 through 1977, excluding 1976. For the Southern Hemisphere, there are data at 6-hr intervals for 1976 through 1978 and at 3-hr intervals for 1979 and 1980. More years of data are being added. To validate the data base, the percent frequency of cloud cover of 0.3 or less and of 0.8 or more was compared with ground-observed cloud amounts at several locations, with generally good agreement. Mean or other desired cloud amounts can be calculated for any time period and any size area, from a single grid point to a hemisphere. The data base is especially useful in evaluating the consequence of cloud cover on Earth-viewing space missions. The temporal and spatial frequency of the data allow simulations that closely approximate any projected viewing mission. No adjustments are required to account for cloud continuity.

  19. Comparison of methods for assessing photoprotection against ultraviolet A in vivo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaidbey, K.; Gange, R.W.

    Photoprotection against ultraviolet A (UVA) by three sunscreens was evaluated in humans, with erythema and pigmentation used as end points in normal skin and in skin sensitized with 8-methoxypsoralen and anthracene. The test sunscreens were Parsol 1789 (2%), Eusolex 8020 (2%), and oxybenzone (3%). UVA was obtained from two filtered xenon-arc sources. UVA protection factors were found to be significantly higher in sensitized skin compared with normal skin. Both Parsol and Eusolex provided better and comparable photoprotection (approximately 3.0) than oxybenzone (approximately 2.0) in sensitized skin, regardless of whether 8-methoxypsoralen or anthracene was used. In normal unsensitized skin, Parsol 1789 and Eusolex 8020 were also comparable and provided slightly better photoprotection (approximately 1.8) than oxybenzone (approximately 1.4) when pigmentation was used as an end point. The three sunscreens, however, were similar in providing photoprotection against UVA-induced erythema. Protection factors obtained in artificially sensitized skin are probably not relevant to normal skin. It is concluded that pigmentation, either immediate or delayed, is a reproducible and useful end point for the routine assessment of photoprotection of normal skin against UVA.

  20. High-yield exfoliation of tungsten disulphide nanosheets by rational mixing of low-boiling-point solvents

    NASA Astrophysics Data System (ADS)

    Sajedi-Moghaddam, Ali; Saievar-Iranizad, Esmaiel

    2018-01-01

    Developing high-throughput, reliable, and facile approaches for producing atomically thin sheets of transition metal dichalcogenides is of great importance to pave the way for their use in real applications. Here, we report a highly promising route for exfoliating two-dimensional tungsten disulphide sheets by using binary combination of low-boiling-point solvents. Experimental results show significant dependence of exfoliation yield on the type of solvents as well as relative volume fraction of each solvent. The highest yield was found for appropriate combination of isopropanol/water (20 vol% isopropanol and 80 vol% water) which is approximately 7 times higher than that in pure isopropanol and 4 times higher than that in pure water. The dramatic increase in exfoliation yield can be attributed to perfect match between the surface tension of tungsten disulphide and binary solvent system. Furthermore, solvent molecular size also has a profound impact on the exfoliation efficiency, due to the steric repulsion.

  1. The flux qubit revisited to enhance coherence and reproducibility

    PubMed Central

    Yan, Fei; Gustavsson, Simon; Kamal, Archana; Birenbaum, Jeffrey; Sears, Adam P; Hover, David; Gudmundsen, Ted J.; Rosenberg, Danna; Samach, Gabriel; Weber, S; Yoder, Jonilyn L.; Orlando, Terry P.; Clarke, John; Kerman, Andrew J.; Oliver, William D.

    2016-01-01

    The scalable application of quantum information science will stand on reproducible and controllable high-coherence quantum bits (qubits). Here, we revisit the design and fabrication of the superconducting flux qubit, achieving a planar device with broad-frequency tunability, strong anharmonicity, high reproducibility and relaxation times in excess of 40 μs at its flux-insensitive point. Qubit relaxation times T1 across 22 qubits are consistently matched with a single model involving resonator loss, ohmic charge noise and 1/f-flux noise, a noise source previously considered primarily in the context of dephasing. We furthermore demonstrate that qubit dephasing at the flux-insensitive point is dominated by residual thermal-photons in the readout resonator. The resulting photon shot noise is mitigated using a dynamical decoupling protocol, resulting in T2≈85 μs, approximately the 2T1 limit. In addition to realizing an improved flux qubit, our results uniquely identify photon shot noise as limiting T2 in contemporary qubits based on transverse qubit–resonator interaction. PMID:27808092
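
    The "2T1 limit" quoted above follows from the standard decomposition of qubit decoherence into energy relaxation and pure dephasing (a textbook relation, not specific to this device):

        \frac{1}{T_2} = \frac{1}{2T_1} + \frac{1}{T_\varphi}

    When dynamical decoupling suppresses the photon-shot-noise dephasing so that T_phi becomes very long, T_2 approaches 2T_1, consistent with the reported T_2 of approximately 85 μs for T_1 in excess of 40 μs.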

  2. Level-crossing statistics of the horizontal wind speed in the planetary surface boundary layer

    NASA Astrophysics Data System (ADS)

    Edwards, Paul J.; Hurst, Robert B.

    2001-09-01

    The probability density of the times for which the horizontal wind remains above or below a given threshold speed is of some interest in the fields of renewable energy generation and pollutant dispersal. However, there appear to be no analytic or conceptual models that account for the observed power-law form of the distribution of these episode lengths, which holds over more than three decades, from a few tens of seconds to a day or more. We reanalyze high-resolution wind data and demonstrate the fractal character of the point process generated by the wind speed level crossings. We simulate the fluctuating wind speed by a Markov process which approximates the characteristics of the real (non-Markovian) wind and successfully generates a power-law distribution of episode lengths. However, fundamental questions concerning the physical basis for this behavior, and the connection between the properties of a continuous-time stochastic process and the fractal statistics of the point process generated by its level crossings, remain unanswered.
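
    The Markov simulation experiment described above is easy to reproduce in outline. The sketch below is a minimal, assumption-laden illustration: an AR(1) process stands in for the wind speed (the paper's Markov process is matched to real wind statistics, which this is not), and episode lengths are extracted as run lengths above a threshold:

        import numpy as np

        rng = np.random.default_rng(1)

        # AR(1) surrogate for the fluctuating wind speed (illustrative values)
        n, mean, phi, sigma = 200_000, 5.0, 0.999, 0.05
        v = np.empty(n)
        v[0] = mean
        for t in range(1, n):
            v[t] = mean + phi * (v[t-1] - mean) + sigma * rng.standard_normal()

        def episode_lengths(series, threshold):
            """Lengths (in samples) of maximal runs with series > threshold."""
            above = np.concatenate(([0], (series > threshold).astype(int), [0]))
            d = np.diff(above)
            return np.flatnonzero(d == -1) - np.flatnonzero(d == 1)

        lengths = episode_lengths(v, mean)
        # a power-law candidate appears as a straight line on log-log axes
        counts, edges = np.histogram(lengths, bins=np.logspace(0, 4, 25))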

  3. Smart-Geology for the World's largest fossil oyster reef

    NASA Astrophysics Data System (ADS)

    Dorninger, Peter; Nothegger, Clemens; Djuricic, Ana; Rasztovits, Sascha; Harzhauser, Mathias

    2014-05-01

    The geo-edutainment park "Fossilienwelt Weinviertel" at Stetten in Lower Austria exposes the world's largest fossil oyster biostrome. In the past decade, significant progress has been made in 3D digitizing sensor technology. To cope with the high amount of data, processing methods have been automated to a high degree. Consequently, we formulated the hypothesis that appropriate application of state-of-the-art 3D digitizing, data processing, and visualization technologies allows for a significant automation in paleontological prospection, making an evaluation of huge areas commercially feasible in both time and costs. We call the necessary processing steps "Smart Geology", being characterized by automation and large volumes of data. The Smart Geology project (FWF P 25883-N29) investigates three topics, 3D digitizing, automated geological and paleontological analysis and interpretation and finally investigating the applicability of smart devices for on-site accessibility of project data in order to support the two scientific hypotheses concerning the emerging process of the shell bed, i.e. was it formed by a tsunami or a major storm, and does it preserve pre- and post-event features. This contribution concentrates on the innovative and sophisticated 3D documentation and visualization processes being applied to virtualise approximately 15.000 fossil oysters at the approximately 25 by 17 m accessible shell bad. We decided to use a Terrestrial Laserscanner (TLS) for the determination of the geometrical 3D structures. The TLS achieves about 2 mm single point measurement accuracy. The scanning campaign provides a "raw" point cloud of approximately 1 bio. points at the respective area. Due to the scanning configuration used, the occurrence of occluded ares is minimized hence the full 3D structure of this unique site can be modelled. In addition, approximately 300 photos were taken with a nominal resolution of 0.6 mm per pixel. Sophisticated artificial lightning (close to studio conditions) is used in order to minimize the occurrence of shadows. The resulting datasets can be characterized as follows: A virtual 3D representation with a nominal resolution of 1 mm, a local accuracy of 1 mm (after noise minimization), a global accuracy of < 3 mm with respect to a network of reference points and integrated colour information with a resolution of 0.6 mm per pixel. In order to support both interactive and automated geological and palaeontologcial research questions of the entire site in an economically feasible manner, various data reduction and representation methods were evaluated. Within this contribution we will present and discuss results of 2D image representations, 3D documentation models, and combinations, i.e. textured models. Effects of data reduction (i.e. to make them more convenient for the analysis of large areas) and data acquisition configuration (e.g. the necessity for high-resolution data acquisition) as well as the applicability of the data for advanced visualization purposes (e.g. 3D real-time rendering; foundation for augmented- reality based applications) will be discussed.

  4. Economic Expansion Is a Major Determinant of Physician Supply and Utilization

    PubMed Central

    Cooper, Richard A; Getzen, Thomas E; Laud, Prakash

    2003-01-01

    Objective To assess the relationship between levels of economic development and the supply and utilization of physicians. Data Sources Data were obtained from the American Medical Association, American Osteopathic Association, Organization for Economic Cooperation and Development (OECD), Bureau of Health Professions, Bureau of Labor Statistics, Bureau of Economic Analysis, Census Bureau, Health Care Financing Administration, and historical sources. Study Design Economic development, expressed as real per capita gross domestic product (GDP) or personal income, was correlated with per capita health care labor and physician supply within countries and states over periods of time spanning 25–70 years and across countries, states, and metropolitan statistical areas (MSAs) at multiple points in time over periods of up to 30 years. Longitudinal data were analyzed in four complementary ways: (1) simple univariate regressions; (2) regressions in which temporal trends were partialled out; (3) time series comparing percentage differences across segments of time; and (4) a bivariate Granger causality test. Cross-sectional data were assessed at multiple time points by means of univariate regression analyses. Principal Findings Under each analytic scenario, physician supply correlated with differences in GDP or personal income. Longitudinal correlations were associated with temporal lags of approximately 5 years for health employment and 10 years for changes in physician supply. The magnitude of changes in per capita physician supply in the United States was equivalent to differences of approximately 0.75 percent for each 1.0 percent difference in GDP. The greatest effects of economic expansion were on the medical specialties, whereas the surgical and hospital-based specialties were affected to a lesser degree, and levels of economic expansion had little influence on family/general practice. Conclusions Economic expansion has a strong, lagged relationship with changes in physician supply. This suggests that economic projections could serve as a gauge for projecting the future utilization of physician services. PMID:12785567

  5. A Library of ATMO Forward Model Transmission Spectra for Hot Jupiter Exoplanets

    NASA Technical Reports Server (NTRS)

    Goyal, Jayesh M.; Mayne, Nathan; Sing, David K.; Drummond, Benjamin; Tremblin, Pascal; Amundsen, David S.; Evans, Thomas; Carter, Aarynn L.; Spake, Jessica; Baraffe, Isabelle; et al.

    2017-01-01

    We present a grid of forward model transmission spectra, adopting an isothermal temperature-pressure profile, alongside corresponding equilibrium chemical abundances for 117 observationally significant hot exoplanets (equilibrium temperatures of 547-2710 K). This model grid has been developed using a 1D radiative-convective-chemical equilibrium model termed ATMO, with up-to-date high-temperature opacities. We present an interpretation of observations of 10 exoplanets, including best-fitting parameters and chi-squared maps. In agreement with previous works, we find a continuum from clear to hazy/cloudy atmospheres for this sample of hot Jupiters. The data for all 10 planets are consistent with subsolar to solar C/O ratios, 0.005 to 10 times solar metallicity, and water- rather than methane-dominated infrared spectra. We then explore the range of simulated atmospheric spectra for different exoplanets, based on characteristics such as temperature, metallicity, C/O ratio, haziness, and cloudiness. We find a transition value for the metallicity between 10 and 50 times solar, which leads to substantial changes in the transmission spectra. We also find a transition value of the C/O ratio, from water- to carbon-species-dominated infrared spectra, as found by previous works, revealing a temperature dependence of this transition point ranging from approximately 0.56 to approximately 1-1.3 for equilibrium temperatures from approximately 900 to approximately 2600 K. We highlight the potential of the spectral features of HCN and C2H2 to constrain the metallicities and C/O ratios of planets, using James Webb Space Telescope (JWST) observations. Finally, our entire grid (approximately 460 000 simulations) is publicly available and can be used directly with the JWST simulator PandExo for planning observations.

  6. Convergence results for pseudospectral approximations of hyperbolic systems by a penalty type boundary treatment

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele; Gottlieb, David

    1989-01-01

    A new method of imposing boundary conditions in the pseudospectral approximation of hyperbolic systems of equations is proposed. It is suggested that the equations be collocated not only at the inner grid points but also at the boundary points, with the boundary conditions treated as penalty terms. For the pseudospectral Legendre method with the new boundary treatment, a stability analysis for the case of a constant-coefficient hyperbolic system is presented and error estimates are derived.

  7. The Spiral of Life

    NASA Astrophysics Data System (ADS)

    Cajiao Vélez, F.; Kamiński, J. Z.; Krajewska, K.

    2018-04-01

    High-energy photoionization driven by short, circularly polarized laser pulses is studied in the framework of the relativistic strong-field approximation. The saddle-point analysis of the integrals defining the probability amplitude is used to determine the general properties of the probability distributions. Additionally, an approximate solution to the saddle-point equation is derived. This leads to the concept of the three-dimensional spiral of life in momentum space, around which the ionization probability distribution is maximum. We demonstrate that such a spiral is also obtained from a classical treatment.

  8. Frequency-Tracking CW Doppler Radar Solving Small-Angle Approximation and Null Point Issues in Non-Contact Vital Signs Monitoring.

    PubMed

    Mercuri, Marco; Liu, Yao-Hong; Lorato, Ilde; Torfs, Tom; Bourdoux, Andre; Van Hoof, Chris

    2017-06-01

    A Doppler radar operating as a phase-locked loop (PLL) in a frequency demodulator configuration is presented and discussed. The proposed radar has a unique architecture, using a single-channel mixer, and allows contactless detection of vital-sign parameters while solving the null-point issue and without requiring the small-angle approximation. Spectral analysis, simulations, and experimental results are presented and detailed to demonstrate the feasibility and the operating principle of the proposed radar architecture.

  9. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires a short patient-to-image registration time to make the registration procedure convenient. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on the design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers, on the basis of an optical tracking system, for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three-dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of the fiducial markers found in image space. A head phantom was constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The average target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and of patient movement during the operation.
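
    At the core of any such fiducial-based registration is a least-squares rigid alignment between corresponding point sets. The sketch below shows the standard SVD (Kabsch) solution for that step; it is generic background assuming known correspondences, not the paper's full surface-registration pipeline:

        import numpy as np

        def rigid_register(P, Q):
            """Least-squares rigid transform (R, t) with R @ p + t matching q,
            for corresponding (N, 3) point sets P and Q, via SVD (Kabsch)."""
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)        # cross-covariance of centered sets
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T               # determinant check rejects reflections
            t = cq - R @ cp
            return R, t

    The determinant check matters here because nearly planar marker layouts can otherwise yield a reflection instead of a rotation.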

  10. Selection of the optimum font type and size interface for on screen continuous reading by young adults: an ergonomic approach.

    PubMed

    Banerjee, Jayeeta; Bhattacharyya, Moushum

    2011-12-01

    There is a rapid shifting of media from printed paper to computer screens. This transition is modifying the process of how we read and understand text. The efficiency of reading depends on how ergonomically the visual information is presented. Font type and size characteristics have been shown to affect reading. A detailed investigation of the effect of font type and size on reading on computer screens was carried out using subjective, objective, and physiological evaluation methods on young adults. A group of young participants volunteered for this study. Two types of fonts were used: serif fonts (Times New Roman, Georgia, Courier New) and sans serif fonts (Verdana, Arial, Tahoma). All fonts were presented in 10, 12, and 14 point sizes. This study used a 6 x 3 (font type x size) design matrix. Participants read 18 passages of approximately the same length and reading level on a computer monitor. Reading time, ranking, and overall mental workload were measured. Eye movements were recorded by a binocular eye movement recorder. Reading time was minimum for Courier New 14 point. The participants' ranking was highest and mental workload was least for Verdana 14 point. The pupil diameter, fixation duration, and gaze duration were least for Courier New 14 point. The present study recommends using 14-point fonts for reading on computer screens: Courier New is recommended for fast reading, while Verdana is recommended for on-screen presentation. The outcome of this study will serve as a guideline for PC users, software developers, web page designers, and the computer industry as a whole.

  11. 18. EXISTING FIRST FLOOR PLAN. MONOMOY POINT LT. STATION, MASS., ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. EXISTING FIRST FLOOR PLAN. MONOMOY POINT LT. STATION, MASS., SHOWING PROPOSED ALTERATION AND IMPROVEMENT OF DWELLING. No. 1343. Sheet 5 of 5. July 1899. - Monomoy Point Light Station, Approximately 3500 feet Northeast Powder Hole Pond, Monomoy National Wildlife Refuge, Chatham, Barnstable County, MA

  12. Indirect iterative learning control for a discrete visual servo without a camera-robot model.

    PubMed

    Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan

    2007-08-01

    This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and the robot is then tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.

  13. Microwave Heating of Metal Powder Clusters

    NASA Astrophysics Data System (ADS)

    Rybakov, K. I.; Semenov, V. E.; Volkovskaya, I. I.

    2018-01-01

    The results of simulating the rapid microwave heating of spherical clusters of metal particles to the melting point are reported. In the simulation, the cluster is subjected to a plane electromagnetic wave. The cluster size is comparable to the wavelength; the perturbations of the field inside the cluster are accounted for within an effective medium approximation. It is shown that the time of heating in vacuum to the melting point does not exceed 1 s when the electric field strength in the incident wave is about 2 kV/cm at a frequency of 24 GHz or 5 kV/cm at a frequency of 2.45 GHz. The obtained results demonstrate feasibility of using rapid microwave heating for the spheroidization of metal particles with an objective to produce high-quality powders for additive manufacturing technologies.
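
    The abstract does not specify which effective medium approximation is used; a common choice for a cluster of metal particles in a host medium is the Maxwell Garnett mixing rule, quoted here purely as illustrative background (f is the particle volume fraction, \epsilon_p and \epsilon_m the particle and matrix permittivities):

        \epsilon_{\mathrm{eff}} = \epsilon_m\,\frac{1 + 2f\beta}{1 - f\beta}, \qquad \beta = \frac{\epsilon_p - \epsilon_m}{\epsilon_p + 2\epsilon_m}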

  14. Performance characterization of a Bosch CO2 reduction subsystem

    NASA Technical Reports Server (NTRS)

    Heppner, D. B.; Hallick, T. M.; Schubert, F. H.

    1980-01-01

    The performance of Bosch hardware at the subsystem level (up to five-person capacity) in terms of five operating parameters was investigated. The five parameters were: (1) reactor temperature, (2) recycle loop mass flow rate, (3) recycle loop gas composition (percent hydrogen), (4) recycle loop dew point and (5) catalyst density. Experiments were designed and conducted in which the five operating parameters were varied and Bosch performance recorded. A total of 12 carbon collection cartridges provided approximately 250 hours of operating time; generally, one cartridge was used for each parameter that was varied. The Bosch hardware was found to perform reliably and reproducibly. No startup, reaction initiation or carbon containment problems were observed. Optimum performance points/ranges were identified for the five parameters investigated. The performance curves agreed with theoretical projections.

  15. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded because they maintain the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  16. Durango delta: Complications on San Juan basin Cretaceous linear strandline theme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zech, R.S.; Wright, R.

    1989-09-01

    The Upper Cretaceous Point Lookout Sandstone generally conforms to a predictable cyclic shoreface model in which prograding linear strandline lithosomes dominate formation architecture. Multiple transgressive-regressive cycles result in systematic repetition of lithologies deposited in beach to inner shelf environments. Deposits of approximately five cycles are locally grouped into bundles. Such bundles extend at least 20 km along depositional strike and change from foreshore sandstone to offshore, time-equivalent Mancos mud rock in a downdip distance of 17 to 20 km. Excellent hydrocarbon reservoirs exist where well-sorted shoreface sandstone bundles stack and the formation thickens. This depositional model breaks down in the vicinity of Durango, Colorado, where a fluvial-dominated delta front and associated large distributary channels characterize the Point Lookout Sandstone and overlying Menefee Formation.

  17. Mass Spectrometry Using Nanomechanical Systems: Beyond the Point-Mass Approximation.

    PubMed

    Sader, John E; Hanay, M Selim; Neumann, Adam P; Roukes, Michael L

    2018-03-14

    The mass measurement of single molecules, in real time, is performed routinely using resonant nanomechanical devices. This approach models the molecules as point particles. A recent development now allows the spatial extent (and, indeed, image) of the adsorbate to be characterized using multimode measurements (Hanay, M. S., et al., Nature Nanotechnol. 10, 2015, pp. 339-344). This "inertial imaging" capability is achieved through virtual re-engineering of the resonator's vibrating modes, by linear superposition of their measured frequency shifts. Here, we present a complementary and simplified methodology for the analysis of these inertial imaging measurements that exhibits similar performance while streamlining implementation. This development, together with the software that we provide, enables the broad implementation of inertial imaging and opens the door to a range of novel characterization studies of nanoscale adsorbates.

  18. Production of isoprene by leaf tissue.

    PubMed

    Jones, C A; Rasmussen, R A

    1975-06-01

    Isoprene production by Hamamelis virginiana L. and Quercus borealis Michx. leaves was studied. When ambient CO(2) concentrations were maintained with bicarbonate buffers, the rate of isoprene production at 125 microliters per liter of CO(2) was approximately four times that at 250 microliters per liter of CO(2). Isoprene production was drastically inhibited by 97% O(2). Dichlorodimethylphenylurea (0.1 mM), NaHSO(3) (10 mM), and alpha-hydroxy-2-pyridinemethanesulfonic acid (10 mM) inhibited isoprene production but increased the compensation point of the tissue. Isonicotinic acid hydrazide neither inhibited isoprene emission nor increased the compensation point of the tissue significantly. Inhibition of isoprene production does not seem to correlate with stomatal resistance. Isoprene was labeled by intermediates of the glycolate pathway, and similarities are noted between the biosynthesis of isoprene and that of beta-carotene.

  19. New developments in ALFT's soft x-ray point sources

    NASA Astrophysics Data System (ADS)

    Cintron, Dario F.; Guo, Xiaoming; Xu, Meisheng; Ye, Rubin; Antoshko, Yuriy; Antoshko, Yuriy; Drew, Steve; Philippe, Albert; Panarella, Emilio

    2002-07-01

    The new development in ALFT's soft X-ray point source VSX-400 consists mainly of an improved nozzle design to reduce the source size, together with the introduction of a novel trigger system, capable of triggering the discharge hundreds of millions of times without failure, and a debris removal system. Continuous operation for 8 hours at 20 kHz allows us to achieve 400 mW of useful soft X-ray radiation around 1 nm wavelength. In another regime of operation with a high energy machine, the VSX-Z, we have been able to achieve consistently 10 J of X-rays per pulse at a repetition rate that can reach 1 Hz with an input electrical energy of approximately 3 kJ and an efficiency in excess of 10⁻³.

  20. Improved detection limits for electrospray ionization on a magnetic sector mass spectrometer by using an array detector.

    PubMed

    Cody, R B; Tamura, J; Finch, J W; Musselman, B D

    1994-03-01

    Array detection was compared with point detection for solutions of hen egg-white lysozyme, equine myoglobin, and ubiquitin analyzed by electrospray ionization with a magnetic sector mass spectrometer. The detection limits for samples analyzed by using the array detector system were at least 10 times lower than could be achieved by using a point detector on the same mass spectrometer. The minimum detectable quantity of protein corresponded to a signal-to-background ratio of approximately 2:1 for a 500 amol/μL solution of hen egg-white lysozyme. However, the ultimate practical sample concentrations appeared to be in the 10-100 fmol/μL range for the analysis of dilute solutions of relatively pure proteins or simple mixtures.

  1. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18 meter long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only offers a huge time advantage; it also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, matches feature points across all image pairs, recovers the camera positions and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created; difference calculations between the pre and post models then allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained by scaling against reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, excessive motion blur), the sharpness-based selection yields many more matching features; the point densities of the 3D models are thereby increased, which in turn sharpens the difference calculations.
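
    The frame-selection step can be illustrated with a short sketch. The record only says a derivative-based metric was used; the variance-of-the-Laplacian measure below is one common such metric, and its use here, along with OpenCV and all function names, is our assumption rather than the authors' implementation.

      import cv2

      def sharpness(frame):
          """Variance of the Laplacian: a simple derivative-based sharpness metric."""
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          return cv2.Laplacian(gray, cv2.CV_64F).var()

      def select_sharpest_frames(video_path, interval=15):
          """Keep the sharpest frame out of every `interval` consecutive frames."""
          cap = cv2.VideoCapture(video_path)
          selected, window = [], []
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              window.append(frame)
              if len(window) == interval:
                  selected.append(max(window, key=sharpness))
                  window = []
          cap.release()
          return selected

      # Hypothetical usage: frames = select_sharpest_frames("rill_video.mp4")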

  2. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise linear best guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
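
    To make the Gamma-process idea concrete, the sketch below simulates many random accumulation histories between two absolutely dated tie points and reads off empirical confidence intervals on the age at each depth. The shape parameter, layer count and function names are our own illustrative choices, not values from the paper.

      import numpy as np

      def simulate_age_depth(n_sim=5000, n_layers=100, shape=2.0,
                             t_top=0.0, t_bottom=10000.0, seed=0):
          """Simulate age-depth profiles whose per-layer accumulation increments
          follow a Gamma process, rescaled so every profile hits both tie points."""
          rng = np.random.default_rng(seed)
          inc = rng.gamma(shape, 1.0, size=(n_sim, n_layers))  # random increments
          cum = np.cumsum(inc, axis=1)
          # Rescale each profile so cumulative time spans t_top..t_bottom exactly.
          return t_top + (t_bottom - t_top) * cum / cum[:, -1:]

      ages = simulate_age_depth()                      # shape (n_sim, n_layers)
      lo, mid, hi = np.percentile(ages, [2.5, 50.0, 97.5], axis=0)
      print(f"95% age interval at mid-depth: {lo[49]:.0f}-{hi[49]:.0f} yr")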

  3. Research activities at the Center for Modeling of Turbulence and Transition

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing

    1993-01-01

    The main research activities at the Center for Modeling of Turbulence and Transition (CMOTT) are described. The research objective of CMOTT is to improve and/or develop turbulence and transition models for propulsion systems. The flows of interest in propulsion systems can be compressible or incompressible, three dimensional, bounded by complex wall geometries, chemically reacting, and can involve 'bypass' transition. The most relevant turbulence and transition models for the above flows are one- and two-equation eddy viscosity models, Reynolds stress algebraic- and transport-equation models, pdf models, and multiple-scale models. All these models are classified as one-point closure schemes, since only one-point (in time and space) turbulent correlations, such as second moments (Reynolds stresses and turbulent heat fluxes) and third moments, are involved. In computational fluid dynamics, all turbulent quantities are one-point correlations; therefore, the study of one-point turbulent closure schemes is the focus of our turbulence research. However, other research, such as the renormalization group theory, the direct interaction approximation method, and numerical simulations, is also pursued to support the development of turbulence modeling.

  4. Perspective view of the Indian Mission looking from approximately the ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Perspective view of the Indian Mission looking from approximately the same vantage point as that seen in MD-1109-N-12 - National Park Seminary, Indian Mission, 2790 Linden Lane, Silver Spring, Montgomery County, MD

  5. Satellite test of the isotropy of the one-way speed of light using ExTRAS

    NASA Technical Reports Server (NTRS)

    Wolf, Peter

    1995-01-01

    A test of the second postulate of special relativity, the universality of the speed of light, using the ExTRAS (Experiment on Timing Ranging and Atmospheric Sounding) payload to be flown on board a Russian Meteor-3M satellite (launch date January 1997) is proposed. The propagation time of a light signal transmitted from one point to another without reflection would be measured directly by comparing the phases of two hydrogen maser clocks, one on board and one on the ground, using laser or microwave time transfer systems. An estimated uncertainty budget of the proposed measurements is given, resulting in an expected sensitivity of the experiment of Δc/c < 8 × 10⁻¹⁰, which would be an improvement by a factor of approximately 430 over previous direct measurements and by a factor of approximately 4 over the best indirect measurement. The proposed test would require no equipment beyond what is already planned and is therefore inherently low-cost. It could be carried out by anyone having access to a laser or microwave ground station and a hydrogen maser.

  6. External radiation exposure, excretion, and effective half-life in 177Lu-PSMA-targeted therapies.

    PubMed

    Kurth, J; Krause, B J; Schwarzenböck, S M; Stegger, L; Schäfers, M; Rahbar, K

    2018-04-12

    Prostate-specific membrane antigen (PSMA)-targeted therapy with 177 Lu-PSMA-617 is a therapeutic option for patients with metastatic castration-resistant prostate cancer (mCRPC). To optimize the therapy procedure, it is necessary to determine relevant parameters to define radiation protection and safety necessities. Therefore, this study aimed at estimating the ambient radiation exposure received by the patient. Moreover, the excreted activity was quantified. In total, 50 patients with mCRPC and treated with 177 Lu-PSMA-617 (mean administered activity 6.3 ± 0.5 GBq) were retrospectively included in a bi-centric study. Whole-body dose rates were measured at a distance of 2 m at various time points after application of 177 Lu-PSMA-617, and effective half-lives for different time points were calculated and compared. Radiation exposure to the public was approximated using the dose integral. For the estimation of the excreted activity, whole body measurements of 25 patients were performed at 7 time points. Unbound 177 Lu-PSMA-617 was rapidly cleared from the body. After 4 h, approximately 50% and, after 12 h, approximately 70% of the administered activity were excreted, primarily via urine. The mean dose rates were the following: 3.6 ± 0.7 μSv/h at 2 h p. i., 1.6 ± 0.6 μSv/h at 24 h, 1.1 ± 0.5 μSv/h at 48 h, and 0.7 ± 0.4 μSv/h at 72 h. The mean effective half-life of the cohort was 40.5 ± 9.6 h (min 21.7 h; max 85.7 h). The maximum dose to individual members of the public per treatment cycle was ~ 250 ± 55 μSv when the patient was discharged from the clinic after 48 h and ~ 190 ± 36 μSv when the patient was discharged after 72 h. In terms of the radiation exposure to the public, 177 Lu-PSMA is a safe option of radionuclide therapy. As usually four (sometimes more) cycles of the therapy are performed, it must be conducted in a way that ensures that applicable legal requirements can be followed. In other words, the radiation exposure to the public and the concentration of activity in wastewater must be sub-marginal. Therefore, in certain countries, hospitalization of these patients is mandatory.
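
    The mean effective half-life quoted above is derived from the serial dose-rate measurements; a minimal sketch of one standard approach, a log-linear least-squares fit assuming mono-exponential clearance, is shown below. The numerical values are illustrative stand-ins roughly matching the cohort means, not patient data.

      import numpy as np

      # Whole-body dose rates (μSv/h) at 2 m, measured t hours post-injection.
      # Illustrative values chosen to roughly match the cohort means reported above.
      t = np.array([2.0, 24.0, 48.0, 72.0])
      dose_rate = np.array([3.6, 1.6, 1.1, 0.7])

      # Assume mono-exponential clearance: D(t) = D0 * exp(-lam * t).
      # A straight-line fit of ln D(t) against t gives the decay constant lam.
      slope, intercept = np.polyfit(t, np.log(dose_rate), 1)
      t_eff = np.log(2) / -slope
      print(f"effective half-life ~ {t_eff:.0f} h")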

  7. Chaotic scattering in an open vase-shaped cavity: Topological, numerical, and experimental results

    NASA Astrophysics Data System (ADS)

    Novick, Jaison Allen

    We present a study of trajectories in a two-dimensional, open, vase-shaped cavity in the absence of forces. The classical trajectories freely propagate between elastic collisions. Bound trajectories, regular scattering trajectories, and chaotic scattering trajectories are present in the vase. Most importantly, we find that classical trajectories passing through the vase's mouth escape without return. In our simulations, we propagate bursts of trajectories from point sources located along the vase walls. We record the time for escaping trajectories to pass through the vase's neck. Constructing a plot of escape time versus the initial launch angle for the chaotic trajectories reveals a vastly complicated recursive structure, or fractal. This fractal structure can be understood by a suitable coordinate transform. Reducing the dynamics to two dimensions reveals that the chaotic dynamics are organized by a homoclinic tangle, which is formed by the union of infinitely long, intersecting stable and unstable manifolds. This study is broken down into three major components. We first present a topological theory that extracts the essential topological information from a finite subset of the tangle and encodes this information in a set of symbolic dynamical equations. These equations can be used to predict a topologically forced minimal subset of the recursive structure seen in numerically computed escape time plots. We present three applications of the theory and compare these predictions to our simulations. The second component is a presentation of an experiment in which the vase was constructed from Teflon walls using an ultrasound transducer as a point source. We compare the escaping signal to a classical simulation and find agreement between the two. Finally, we present an approximate solution to the time-independent Schrödinger equation for escaping waves. We choose a set of points at which to evaluate the wave function and interpolate trajectories connecting the source point to each "detector point". We then construct the wave function directly from these classical trajectories using the two-dimensional WKB approximation. The wave function is Fourier transformed using a Fast Fourier Transform algorithm, resulting in a spectrum in which each peak corresponds to an interpolated trajectory. Our predictions are based on an imagined experiment that uses microwave propagation within an electromagnetic waveguide. Such an experiment exploits the fact that under suitable conditions both Maxwell's equations and the Schrödinger equation can be reduced to the Helmholtz equation. Therefore, our predictions, while compared to the electromagnetic experiment, contain information about the quantum system. Identifying peaks in the transmission spectrum with chaotic trajectories will allow for an additional experimental verification of the intermediate recursive structure. Finally, we summarize our results and discuss possible extensions of this project.

  8. Parametrization of semiempirical models against ab initio crystal data: evaluation of lattice energies of nitrate salts.

    PubMed

    Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav

    2005-09-01

    A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g from point charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point charge approximation.
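
    As a concrete illustration of the point charge approximation used above, the sketch below evaluates the pairwise Coulomb energy of a finite set of point charges. A real lattice energy would require a periodic (e.g., Ewald) summation, which is omitted here; the geometry and charge values are invented for illustration.

      import numpy as np

      COULOMB = 1389.35  # e^2/(4*pi*eps0) in kJ/mol * angstrom

      def electrostatic_energy(charges, positions):
          """Pairwise Coulomb energy (kJ/mol) of point charges; positions in angstroms."""
          q = np.asarray(charges, dtype=float)
          r = np.asarray(positions, dtype=float)
          e = 0.0
          for i in range(len(q)):
              for j in range(i + 1, len(q)):
                  e += COULOMB * q[i] * q[j] / np.linalg.norm(r[i] - r[j])
          return e

      # Toy cluster: a cation above a planar nitrate-like anion, Mulliken-style charges.
      q = [1.0, 0.9, -0.633, -0.633, -0.634]
      xyz = [[0.0, 0.0, 2.2], [0.0, 0.0, 0.0],
             [1.25, 0.0, 0.0], [-0.62, 1.08, 0.0], [-0.62, -1.08, 0.0]]
      print(f"E_elec = {electrostatic_energy(q, xyz):.1f} kJ/mol")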

  9. Initial-value semiclassical propagators for the Wigner phase space representation: Formulation based on the interpretation of the Moyal equation as a Schrödinger equation.

    PubMed

    Koda, Shin-ichi

    2015-12-28

    We formulate various semiclassical propagators for the Wigner phase space representation from a unified point of view. As is shown in several studies, the Moyal equation, which is an equation of motion for the Wigner distribution function, can be regarded as the Schrödinger equation of an extended Hamiltonian system where its "position" and "momentum" correspond to the middle point of two points of the original phase space and the difference between them, respectively. Then we show that various phase-space semiclassical propagators can be formulated just by applying existing semiclassical propagators to the extended system. As a result, a phase space version of the Van Vleck propagator, the initial-value Van Vleck propagator, the Herman-Kluk propagator, and the thawed Gaussian approximation are obtained. In addition, we numerically compare the initial-value phase-space Van Vleck propagator, the phase-space Herman-Kluk propagator, and the classical mechanical propagation as approximation methods for the time propagation of the Wigner distribution function in terms of both accuracy and convergence speed. As a result, we find that the convergence speed of the Van Vleck propagator is far slower than others as is the case of the Hilbert space, and the Herman-Kluk propagator keeps its accuracy for a long period compared with the classical mechanical propagation while the convergence speed of the latter is faster than the former.
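
    The reinterpretation described above rests on standard definitions, which can be written out explicitly (our notation, for one degree of freedom): the Wigner transform of the density operator and the Moyal evolution equation are

      W(q,p,t) = \frac{1}{2\pi\hbar}\int \mathrm{d}s\; e^{-ips/\hbar}\,
                 \langle q + s/2 |\, \hat{\rho}(t) \,| q - s/2 \rangle ,

      i\hbar\,\partial_t W = H \star W - W \star H , \qquad
      \star \equiv \exp\!\Big[ \frac{i\hbar}{2}\big( \overleftarrow{\partial_q}\overrightarrow{\partial_p}
                 - \overleftarrow{\partial_p}\overrightarrow{\partial_q} \big) \Big] .

    Writing the two evaluation points of the density matrix as x₁,₂ = q ± s/2, the midpoint q and the difference s play the roles of the extended system's conjugate variables; this is the sense in which the Moyal equation can be read as a Schrödinger equation and attacked with the semiclassical propagators listed above.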

  10. Prevalence of sleep deficiency in early gestation and its associations with stress and depressive symptoms.

    PubMed

    Okun, Michele L; Kline, Christopher E; Roberts, James M; Wettlaufer, Barbara; Glover, Khaleelah; Hall, Martica

    2013-12-01

    Sleep deficiency is an emerging concept denoting a deficit in the quantity or quality of sleep. This may be particularly salient for pregnant women since they report considerable sleep complaints. Sleep deficiency is linked with morbidity, including degradations in psychosocial functioning, (e.g., depression and stress), which are recognized risk factors for adverse pregnancy outcomes. We sought to describe the frequency of sleep deficiency across early gestation (10-20 weeks) and whether sleep deficiency is associated with reports of more depressive symptoms and stress. Pregnant women (N=160) with no self-reported sleep or psychological disorder provided sleep data collected via diary and actigraphy during early pregnancy: 10-12, 14-16, and 18-20 weeks' gestation. Sleep deficiency was defined as short sleep duration, insufficient sleep, or insomnia. Symptoms of depression and stress were collected at the same three time points. Linear mixed effects models were used to analyze the data. Approximately 28%-38% met criteria for sleep deficiency for at least one time point in early gestation. Women who were sleep deficient across all time points reported more perceived stress than those who were not sleep deficient (p<0.01). Depressive symptoms were higher among women with diary-defined sleep deficiency across all time points (p=0.02). Sleep deficiency is a useful concept to describe sleep recognized to be disturbed in pregnancy. Women with persistent sleep deficiency appear to be at greater risk for impairments in psychosocial functioning during early gestation. These associations are important since psychosocial functioning is a recognized correlate of adverse pregnancy outcomes. Sleep deficiency may be another important risk factor for adverse pregnancy outcomes.
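
    The record states only that linear mixed effects models were used; a minimal sketch of such a model in Python's statsmodels is given below. The column names, synthetic data and model form are hypothetical, chosen to mirror the repeated-measures design described above.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical long-format data: one row per woman per gestational window.
      rng = np.random.default_rng(0)
      n = 160
      df = pd.DataFrame({
          "id": np.repeat(np.arange(n), 3),
          "window": ["10-12", "14-16", "18-20"] * n,
          "sleep_deficient": rng.integers(0, 2, 3 * n),
      })
      df["stress"] = 20 + 3 * df["sleep_deficient"] + rng.normal(0, 5, 3 * n)

      # Random intercept per participant; fixed effects for sleep deficiency and window.
      model = smf.mixedlm("stress ~ sleep_deficient + C(window)", df, groups=df["id"])
      print(model.fit().summary())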

  11. Blood Harmane (1-methyl-9h-pyrido[3,4-b]indole) Concentrations in Essential Tremor: Repeat Observation in Cases and Controls in New York

    PubMed Central

    Louis, Elan D.; Jiang, Wendy; Gerbin, Marina; Viner, Amanda S.; Factor-Litvak, Pam; Zheng, Wei

    2012-01-01

    Essential tremor (ET) is a widespread late-life neurological disease. Genetic and environmental factors are likely to play important etiological roles. Harmane (1-methyl-9H-pyrido[3,4-b]indole) is a potent tremor-producing neurotoxin. Previously, elevated blood harmane concentrations were demonstrated in ET cases compared to controls, but these observations have all been cross-sectional, assessing each subject at only one time point. Thus, no one has ever repeat-assayed blood harmane in the same subjects twice. Whether the observed case-control difference persists at a second time point, years later, is unknown. The current goal was to re-assess a sample of our ET cases and controls to determine whether blood harmane concentration remained elevated in ET at a second time point. Blood harmane concentrations were quantified by a well-established high performance liquid chromatography method in 63 ET cases and 70 controls. A mean of approximately 6 years elapsed between the initial and this subsequent blood harmane determination. The mean log blood harmane concentration was significantly higher in cases than controls (0.30 ± 0.61 vs. 0.08 ± 0.55 ×10⁻¹⁰ g/ml), and the median value in cases was double that of controls: 0.22 vs. 0.11 ×10⁻¹⁰ g/ml. The log blood harmane concentration was highest in cases with a family history of ET. Blood harmane concentration was elevated in ET cases compared to controls when re-assessed at a second time point several years later, indicating what seems to be a stable association between this environmental toxin and ET. PMID:22757671

  12. Blood harmane (1-methyl-9H-pyrido[3,4-b]indole) concentrations in essential tremor: repeat observation in cases and controls in New York.

    PubMed

    Louis, Elan D; Jiang, Wendy; Gerbin, Marina; Viner, Amanda S; Factor-Litvak, Pam; Zheng, Wei

    2012-01-01

    Essential tremor (ET) is a widespread late-life neurological disease. Genetic and environmental factors are likely to play important etiological roles. Harmane (1-methyl-9H-pyrido[3,4-b]indole) is a potent tremor-producing neurotoxin. Previously, elevated blood harmane concentrations were demonstrated in ET cases compared to controls, but these observations have all been cross-sectional, assessing each subject at only one time point. Thus, no one has ever repeat-assayed blood harmane in the same subjects twice. Whether the observed case-control difference persists at a second time point, years later, is unknown. The current goal was to reassess a sample of our ET cases and controls to determine whether blood harmane concentration remained elevated in ET at a second time point. Blood harmane concentrations were quantified by a well-established high-performance liquid chromatography method in 63 ET cases and 70 controls. A mean of approximately 6 yr elapsed between the initial and this subsequent blood harmane determination. The mean log blood harmane concentration was significantly higher in cases than controls (0.30 ± 0.61 versus 0.08 ± 0.55 ×10⁻¹⁰ g/ml), and the median value in cases was double that of controls: 0.22 versus 0.11 ×10⁻¹⁰ g/ml. The log blood harmane concentration was highest in cases with a family history of ET. Blood harmane concentration was elevated in ET cases compared to controls when reassessed at a second time point several years later, indicating what seems to be a stable association between this environmental toxin and ET.

  13. Alcohol consumption during adolescence is associated with reduced grey matter volumes.

    PubMed

    Heikkinen, Noora; Niskanen, Eini; Könönen, Mervi; Tolmunen, Tommi; Kekkonen, Virve; Kivimäki, Petri; Tanila, Heikki; Laukkanen, Eila; Vanninen, Ritva

    2017-04-01

    Cognitive impairment has been associated with excessive alcohol use, but its neural basis is poorly understood. Chronic excessive alcohol use in adolescence may lead to neuronal loss and volumetric changes in the brain. Our objective was to compare the grey matter volumes of heavy- and light-drinking adolescents. This was a longitudinal study: heavy-drinking adolescents without an alcohol use disorder and their light-drinking controls were followed up for 10 years using questionnaires at three time points, and magnetic resonance imaging was conducted at the last time point. The study was set in the area near Kuopio University Hospital, Finland. The 62 participants were aged 22-28 years and included 35 alcohol users and 27 controls who had been followed up for approximately 10 years. Alcohol use was measured by the Alcohol Use Disorders Identification Test (AUDIT)-C at three time points during the 10 years, and participants were selected based on their AUDIT-C score. Grey matter volume was determined and compared between the heavy- and light-drinking groups using voxel-based morphometry on three-dimensional T1-weighted magnetic resonance images, with predefined regions of interest and a threshold of P < 0.05 with small volume correction applied at cluster level. Grey matter volumes were significantly smaller among heavy-drinking participants in the bilateral anterior cingulate cortex, right orbitofrontal and frontopolar cortex, right superior temporal gyrus and right insular cortex compared to the control group (P < 0.05, family-wise error-corrected cluster level). Excessive alcohol use during adolescence appears to be associated with abnormal development of brain grey matter. Moreover, the structural changes detected in the insula of alcohol users may reflect a reduced sensitivity to alcohol's negative subjective effects. © 2016 Society for the Study of Addiction.

  14. Time-dependent density functional theory beyond Kohn-Sham Slater determinants.

    PubMed

    Fuks, Johanna I; Nielsen, Søren E B; Ruggenthaler, Michael; Maitra, Neepa T

    2016-08-03

    When running time-dependent density functional theory (TDDFT) calculations for real-time simulations of non-equilibrium dynamics, the user has a choice of initial Kohn-Sham state, and typically a Slater determinant is used. We explore the impact of this choice on the exchange-correlation potential when the physical system begins in a 50 : 50 superposition of the ground and first-excited state of the system. We investigate the possibility of judiciously choosing a Kohn-Sham initial state that minimizes errors when adiabatic functionals are used. We find that if the Kohn-Sham state is chosen to have a configuration matching the one that dominates the interacting state, this can be achieved for a finite time duration for some but not all such choices. When the Kohn-Sham system does not begin in a Slater determinant, we further argue that the conventional splitting of the exchange-correlation potential into exchange and correlation parts has limited value, and instead propose a decomposition into a "single-particle" contribution that we denote v, and a remainder. The single-particle contribution can be readily computed as an explicit orbital-functional, reduces to exchange in the Slater determinant case, and offers an alternative to the adiabatic approximation as a starting point for TDDFT approximations.

  15. Quasiparticle band offset at the (001) interface and band gaps in ultrathin superlattices of GaAs-AlAs heterojunctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.B.; Cohen, M.L.; Louie, S.G.

    1990-05-15

    A newly developed first-principles quasiparticle theory is used to calculate the band offset at the (001) interface and band gaps in 1×1 and 2×2 superlattices of GaAs-AlAs heterojunctions. We find a sizable many-body contribution to the valence-band offset which is dominated by the many-body corrections to bulk GaAs and AlAs quasiparticle energies. The resultant offset ΔE_v = 0.53 ± 0.05 eV is in good agreement with the recent experimental values of 0.50-0.56 eV. Our calculated direct band gaps for ultrathin superlattices are also in good agreement with experiment. The X_1c-derived state at the point Γ̄ is, however, above the Γ_1c-derived state for both the 1×1 and 2×2 lattices, contrary to results obtained under the virtual-crystal approximation (a limiting case of the Kronig-Penney model) and some previous local-density-approximation (corrected) calculations. The differences are explained in terms of atomic-scale localizations and many-body effects. Oscillator strengths and the effects of disorder on the spectra are discussed.

  16. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
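
    A toy version of the idea: if analysis supplies a bounding set that contains the entire failure region and whose probability is known analytically, one can sample only inside that set and rescale, instead of sampling the whole uncertain-parameter domain. Everything concrete below (the failure region, the box bound, the uniform distribution) is our own illustration, not the authors' formulation.

      import numpy as np

      rng = np.random.default_rng(1)

      def fails(x):
          # Toy failure set: a small corner of the uniform unit square.
          return (x[:, 0] > 0.98) & (x[:, 1] > 0.95)

      # Suppose a bounding set B = [0.98, 1] x [0.90, 1] is known to contain all
      # failures; for the uniform distribution its probability is analytic.
      p_box = 0.02 * 0.10

      n = 10_000
      # Plain Monte Carlo over the whole domain: almost every sample is wasted.
      plain = fails(rng.uniform(0.0, 1.0, size=(n, 2))).mean()

      # Conditional sampling: draw only inside B, then rescale by P(B).
      xb = np.column_stack([rng.uniform(0.98, 1.0, n), rng.uniform(0.90, 1.0, n)])
      conditional = p_box * fails(xb).mean()

      print(f"plain MC: {plain:.5f}  conditional: {conditional:.5f}  exact: 0.00100")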

  17. Analysis of the phase transition in the two-dimensional Ising ferromagnet using a Lempel-Ziv string-parsing scheme and black-box data-compression utilities

    NASA Astrophysics Data System (ADS)

    Melchert, O.; Hartmann, A. K.

    2015-02-01

    In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L² = 128² for different system temperatures T. The latter were chosen from an interval enclosing the critical point T_c of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how well the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
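
    In the spirit of the black-box approach above, the sketch below turns a general-purpose compressor into a crude entropy-rate estimator for a binary spin sequence: the compressed size in bits divided by the number of symbols. This is our own minimal stand-in, not the estimators implemented in the paper.

      import zlib
      import numpy as np

      def compression_entropy_rate(bits):
          """Crude entropy-rate estimate (bits/symbol) from the compressed size
          of the bit-packed sequence."""
          packed = np.packbits(np.asarray(bits, dtype=np.uint8)).tobytes()
          return 8.0 * len(zlib.compress(packed, 9)) / len(bits)

      rng = np.random.default_rng(0)
      ordered = np.zeros(100_000, dtype=np.uint8)             # low T: fully ordered
      random_ = rng.integers(0, 2, 100_000, dtype=np.uint8)   # high T: maximally random
      print(compression_entropy_rate(ordered))   # close to 0
      print(compression_entropy_rate(random_))   # close to 1, plus compressor overhead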

  18. Search for Gamma-Ray Emission from Local Primordial Black Holes with the Fermi Large Area Telescope

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Berenji, B.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Bregeon, J.; Bruel, P.; Buehler, R.; Cameron, R. A.; Caputo, R.; Caraveo, P. A.; Cavazzuti, E.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Conrad, J.; Costantin, D.; D’Ammando, F.; de Palma, F.; Digel, S. W.; Di Lalla, N.; Di Mauro, M.; Di Venere, L.; Favuzzi, C.; Fegan, S. J.; Focke, W. B.; Franckowiak, A.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giordano, F.; Giroletti, M.; Green, D.; Grenier, I. A.; Guillemot, L.; Guiriec, S.; Horan, D.; Jóhannesson, G.; Johnson, C.; Kensei, S.; Kocevski, D.; Kuss, M.; Larsson, S.; Latronico, L.; Li, J.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J. D.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mazziotta, M. N.; McEnery, J. E.; Meyer, M.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monzani, M. E.; Moretti, E.; Morselli, A.; Moskalenko, I. V.; Negro, M.; Nuss, E.; Ojha, R.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Palatiello, M.; Paliya, V. S.; Paneque, D.; Persic, M.; Pesce-Rollins, M.; Piron, F.; Principe, G.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sánchez-Conde, M.; Sgrò, C.; Siskind, E. J.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Thayer, J. G.; Thayer, J. B.; Torres, D. F.; Tosti, G.; Troja, E.; Valverde, J.; Vianello, G.; Wood, K.; Wood, M.; Zaharijas, G.

    2018-04-01

    Black holes with masses below approximately 10¹⁵ g are expected to emit gamma-rays with energies above a few tens of MeV, which can be detected by the Fermi Large Area Telescope (LAT). Although black holes with these masses cannot be formed as a result of stellar evolution, they may have formed in the early universe and are therefore called primordial black holes (PBHs). Previous searches for PBHs have focused on either short-timescale bursts or the contribution of PBHs to the isotropic gamma-ray emission. We show that, in the case of individual PBHs, the Fermi-LAT is most sensitive to PBHs with temperatures above approximately 16 GeV and masses below approximately 6 × 10¹¹ g, which it can detect out to a distance of about 0.03 pc. These PBHs have a remaining lifetime of months to years at the start of the Fermi mission. They would appear as potentially moving point sources with gamma-ray emission that becomes spectrally harder and brighter with time until the PBH completely evaporates. In this paper, we develop a new algorithm to detect the proper motion of gamma-ray point sources, and apply it to 318 unassociated point sources at high galactic latitude in the third Fermi-LAT source catalog. None of the unassociated point sources with spectra consistent with PBH evaporation show significant proper motion. Using the nondetection of PBH candidates, we derive a 99% confidence limit on the PBH evaporation rate in the vicinity of Earth, ρ̇_PBH < 7.2 × 10³ pc⁻³ yr⁻¹. This limit is similar to the limits obtained with ground-based gamma-ray observatories.

  19. Two Propositions on the Application of Point Elasticities to Finite Price Changes.

    ERIC Educational Resources Information Center

    Daskin, Alan J.

    1992-01-01

    Considers counterintuitive propositions about using point elasticities to estimate quantity changes in response to price changes. Suggests that elasticity increases with price along a linear demand curve, but that the falling quantity demanded offsets it. Argues that point elasticity with a finite percentage change in price only approximates the percentage change…
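
    The linear-demand observation can be made explicit with a standard derivation (our own rendering; the record itself is truncated):

      Q(P) = a - bP , \qquad
      \varepsilon(P) = \frac{dQ}{dP}\,\frac{P}{Q} = \frac{-bP}{a - bP} ,
      \qquad \Delta Q = -b\,\Delta P .

    So |ε| rises monotonically from 0 at P = 0 toward infinity as Q → 0, while the absolute quantity response to a finite price change, ΔQ = −b ΔP, is the same everywhere on the curve. Because the point elasticity differs at the two ends of a finite change, a percentage prediction of the form ΔQ/Q ≈ ε(P)·ΔP/P depends on which base point is used; in general it only approximates the true percentage change.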

  20. Glacier retreat in New Zealand during the Younger Dryas stadial.

    PubMed

    Kaplan, Michael R; Schaefer, Joerg M; Denton, George H; Barrell, David J A; Chinn, Trevor J H; Putnam, Aaron E; Andersen, Bjørn G; Finkel, Robert C; Schwartz, Roseanne; Doughty, Alice M

    2010-09-09

    Millennial-scale cold reversals in the high latitudes of both hemispheres interrupted the last transition from full glacial to interglacial climate conditions. The presence of the Younger Dryas stadial (approximately 12.9 to approximately 11.7 kyr ago) is established throughout much of the Northern Hemisphere, but the global timing, nature and extent of the event are not well established. Evidence in mid to low latitudes of the Southern Hemisphere, in particular, has remained perplexing. The debate has in part focused on the behaviour of mountain glaciers in New Zealand, where previous research has found equivocal evidence for the precise timing of increased or reduced ice extent. The interhemispheric behaviour of the climate system during the Younger Dryas thus remains an open question, fundamentally limiting our ability to formulate realistic models of global climate dynamics for this time period. Here we show that New Zealand's glaciers retreated after approximately 13 kyr bp, at the onset of the Younger Dryas, and in general over the subsequent approximately 1.5-kyr period. Our evidence is based on detailed landform mapping, a high-precision (10)Be chronology and reconstruction of former ice extents and snow lines from well-preserved cirque moraines. Our late-glacial glacier chronology matches climatic trends in Antarctica, Southern Ocean behaviour and variations in atmospheric CO(2). The evidence points to a distinct warming of the southern mid-latitude atmosphere during the Younger Dryas and a close coupling between New Zealand's cryosphere and southern high-latitude climate. These findings support the hypothesis that extensive winter sea ice and curtailed meridional ocean overturning in the North Atlantic led to a strong interhemispheric thermal gradient during late-glacial times, in turn leading to increased upwelling and CO(2) release from the Southern Ocean, thereby triggering Southern Hemisphere warming during the northern Younger Dryas.

  1. An automated approach to measuring child movement and location in the early childhood classroom.

    PubMed

    Irvin, Dwight W; Crutchfield, Stephen A; Greenwood, Charles R; Kearns, William D; Buzhardt, Jay

    2018-06-01

    Children's movement is an important issue in child development and outcome in early childhood research, intervention, and practice. Digital sensor technologies offer improvements in naturalistic movement measurement and analysis. We conducted validity and feasibility testing of a real-time, indoor mapping and location system (Ubisense, Inc.) within a preschool classroom. Real-time indoor mapping has several implications with respect to efficiently and conveniently: (a) determining the activity areas where children are spending the most and least time per day (e.g., music); and (b) mapping a focal child's atypical real-time movements (e.g., lapping behavior). We calibrated the accuracy of Ubisense point-by-point location estimates (i.e., X and Y coordinates) against laser rangefinder measurements using several stationary points and atypical movement patterns as reference standards. Our results indicate that activity areas occupied and atypical movement patterns could be plotted with an accuracy of 30.48 cm (1 ft) using a Ubisense transponder tag attached to the participating child's shirt. The accuracy parallels findings of other researchers employing Ubisense to study atypical movement patterns in individuals at risk for dementia in an assisted living facility. The feasibility of Ubisense was tested in an approximately 90-min assessment of two children, one typically developing and one with Down syndrome, during natural classroom activities, and the results proved positive. Implications for employing Ubisense in early childhood classrooms as a data-based decision-making tool to support children's development and its potential integration with other wearable sensor technologies are discussed.

  2. Effect of chronic right ventricular apical pacing on left ventricular function.

    PubMed

    O'Keefe, James H; Abuissa, Hussam; Jones, Philip G; Thompson, Randall C; Bateman, Timothy M; McGhie, A Iain; Ramza, Brian M; Steinhaus, David M

    2005-03-15

    The determinants of change in left ventricular (LV) ejection fraction (EF) over time in patients with impaired LV function at baseline have not been clearly established. Using a nuclear database to assess changes in LV function over time, we included patients with a baseline LVEF of 25% to 40% on a gated single-photon emission computed tomographic study at rest, and only if a second gated study performed approximately 18 months after the initial study showed an improvement in LVEF at rest of ≥10 points or a decrease in LVEF at rest of ≥7 points. In all, 148 patients qualified for the EF increase group and 59 patients for the EF decrease group. LVEF on average increased from 33 ± 4% to 51 ± 8% in the EF increase group and decreased from 35 ± 4% to 25 ± 5% in the EF decrease group. The strongest multivariable predictor of improvement in LVEF was beta-blocker therapy (odds ratio 3.9, p = 0.002). The strongest independent predictor of LVEF decrease was the presence of a permanent right ventricular apical pacemaker (odds ratio 6.6, p = 0.002). Thus, this study identified beta-blocker therapy as the major independent predictor of improvement in LVEF of ≥10 points, whereas a permanent pacemaker (right ventricular apical pacing) was the strongest predictor of an LVEF decrease of ≥7 points.

  3. Haptic force-feedback devices for the office computer: performance and musculoskeletal loading issues.

    PubMed

    Dennerlein, J T; Yang, M C

    2001-01-01

    Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests, p < 0.001). Perceived user discomfort and pain, as measured through a questionnaire, were also smaller with the addition of force feedback (p < 0.001). However, this difference decreased as additional distracting force fields were added to the task environment, simulating a more realistic work situation. These results suggest that for a given task, use of a force-feedback device improves performance, and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.

  4. A summary of measured hydraulic data for the series of steady and unsteady flow experiments over patterned roughness

    USGS Publications Warehouse

    Collins, Dannie L.; Flynn, Kathleen M.

    1979-01-01

    This report summarizes and makes available to other investigators the measured hydraulic data collected during a series of experiments designed to study the effect of patterned bed roughness on steady and unsteady open-channel flow. The patterned effect of the roughness was obtained by clear-cut mowing of designated areas of an otherwise fairly dense coverage of coastal Bermuda grass approximately 250 mm high. All experiments were conducted in the Flood Plain Simulation Facility during the period of October 7 through December 12, 1974. Data from 18 steady flow experiments and 10 unsteady flow experiments are summarized. The measured data include ground-surface elevations, grass heights and densities, water-surface elevations and point velocities for all experiments. Additional tables of water-surface elevations and measured point velocities are included for the clear-cut areas for most experiments. One complete set of average water-surface elevations and one complete set of measured point velocities are tabulated for each steady flow experiment. Time series data, on a 2-minute time interval, are tabulated for both water-surface elevations and point velocities for each unsteady flow experiment. All data collected, including individual records of water-surface elevations for the steady flow experiments, have been stored on computer disk storage and can be retrieved using the computer programs listed in the attachment to this report. (Kosco-USGS)

  5. Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions

    NASA Astrophysics Data System (ADS)

    Hussain, N.

    2008-02-01

    The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.

  6. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation, but little attention has been given to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from becoming stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
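
    The record describes the objective (minimum-variance parameter estimates) without the details. A common, simpler illustration of the same objective is D-optimal design: greedily choose time points that maximize the determinant of the Fisher information matrix. The sketch below does this for a two-parameter exponential-decay model; the model, unit-noise assumption and greedy strategy are our illustration, not the paper's quantum-inspired algorithm.

      import numpy as np

      def sensitivities(t, A=1.0, k=0.5):
          """Sensitivities of y(t) = A*exp(-k*t) w.r.t. the parameters (A, k)."""
          return np.array([np.exp(-k * t), -A * t * np.exp(-k * t)])

      def greedy_d_optimal(candidates, n_points):
          """Greedily pick time points maximizing det of the Fisher information
          (unit measurement noise assumed), i.e. shrinking parameter variance."""
          chosen = []
          M = 1e-9 * np.eye(2)   # small ridge so the first pick is well-defined
          for _ in range(n_points):
              best = max(candidates,
                         key=lambda t: np.linalg.det(
                             M + np.outer(sensitivities(t), sensitivities(t))))
              chosen.append(best)
              s = sensitivities(best)
              M += np.outer(s, s)
          return sorted(chosen)

      print("chosen time points:", greedy_d_optimal(np.linspace(0.1, 10.0, 100), 4))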

  7. Duplicate view to show interior of the gymnasium from approximately ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Duplicate view to show interior of the gymnasium from approximately the same vantage point as in MD-1109-S-12 - National Park Seminary, Gymnasium, North of Linden Lane, south of Aloha House, Silver Spring, Montgomery County, MD

  8. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    Three statistical tests were developed for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and their statistical power was compared under different trend-curve data. For large sample sizes, with independent normal assumptions among strata and across consecutive time points, Z and Chi-square test statistics were developed; these are functions of the outcome estimates and their standard errors at each of the study time points for the two strata. For small sample sizes under the independent normal assumption, an F-test statistic was derived as a function of the sample sizes of the two strata and the parameters estimated across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If the two trend curves cross with low interaction, the power of the Z-test is higher than or equal to that of the Chi-square and F-tests; at high interaction, however, the powers of the Chi-square and F-tests exceed that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates for standard vaccine series in National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
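
    The record says only that the Z statistic is a function of the outcome estimates and standard errors at each time point. One plausible pooled form, written here purely for illustration under the stated independence and normality assumptions, is sketched below; the paper's exact statistic may differ.

      import numpy as np

      def z_trend_test(est1, se1, est2, se2):
          """Pooled Z comparing two trend curves: sum of per-time-point differences
          over the standard error of that sum, assuming independence throughout.
          This particular pooling is an illustrative choice, not the paper's."""
          est1, se1, est2, se2 = map(np.asarray, (est1, se1, est2, se2))
          return (est1 - est2).sum() / np.sqrt((se1**2 + se2**2).sum())

      # Example: coverage estimates (%) for two strata over 8 survey years.
      z = z_trend_test([81, 82, 84, 85, 86, 86, 87, 88], [1.2] * 8,
                       [78, 79, 80, 82, 83, 84, 85, 85], [1.4] * 8)
      print(f"Z = {z:.2f}")   # refer to the standard normal, e.g. |Z| > 1.96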

  9. Etiology of the stability of reading difficulties: the longitudinal twin study of reading disabilities.

    PubMed

    Astrom, Raven L; Wadsworth, Sally J; DeFries, John C

    2007-06-01

    Results obtained from previous longitudinal studies of reading difficulties indicate that reading deficits are generally stable. However, little is known about the etiology of this stability. Thus, the primary objective of this first longitudinal twin study of reading difficulties is to provide an initial assessment of genetic and environmental influences on the stability of reading deficits. Data were analyzed from a sample of 56 twin pairs, 18 identical (monozygotic, MZ) and 38 fraternal (dizygotic, DZ), in which at least one member of each pair was classified as reading-disabled in the Colorado Learning Disabilities Research Center, and on whom follow-up data were available. The twins were tested at two time points (average age of 10.3 years at initial assessment and 16.1 years at follow-up). A composite measure of reading performance (PIAT Reading Recognition, Reading Comprehension and Spelling) was highly stable, with a stability correlation of .84. Data from the initial time point were first subjected to univariate DeFries-Fulker multiple regression analysis and the resulting estimate of the heritability of the group deficit (h2g) was .84 (+/-.26). When the initial and follow-up data were then fitted to a bivariate extension of the basic DF model, bivariate heritability was estimated at .65, indicating that common genetic influences account for approximately 75% of the stability between reading measures at the two time points.

  10. Do parents lead their children by the hand?

    PubMed

    Ozçalişkan, Seyda; Goldin-Meadow, Susan

    2005-08-01

    The types of gesture + speech combinations children produce during the early stages of language development change over time. This change, in turn, predicts the onset of two-word speech and thus might reflect a cognitive transition that the child is undergoing. An alternative, however, is that the change merely reflects changes in the types of gesture + speech combinations that their caregivers produce. To explore this possibility, we videotaped 40 American child-caregiver dyads in their homes for 90 minutes when the children were 1;2, 1;6, and 1;10. Each gesture was classified according to type (deictic, conventional, representational) and the relation it held to speech (reinforcing, disambiguating, supplementary). Children and their caregivers produced the same types of gestures and in approximately the same distribution. However, the children differed from their caregivers in the way they used gesture in relation to speech. Over time, children produced many more REINFORCING (bike + point at bike), DISAMBIGUATING (that one + point at bike), and SUPPLEMENTARY combinations (ride + point at bike). In contrast, the frequency and distribution of caregivers' gesture + speech combinations remained constant over time. Thus, the changing relation between gesture and speech observed in the children cannot be traced back to the gestural input the children receive. Rather, it appears to reflect changes in the children's own skills, illustrating once again gesture's ability to shed light on developing cognitive and linguistic processes.

  11. EM-ANIMATE - COMPUTER PROGRAM FOR DISPLAYING AND ANIMATING THE STEADY-STATE TIME-HARMONIC ELECTROMAGNETIC NEAR FIELD AND SURFACE-CURRENT SOLUTIONS

    NASA Technical Reports Server (NTRS)

    Hom, K. W.

    1994-01-01

    The EM-ANIMATE program is a specialized visualization program that displays and animates the near-field and surface-current solutions obtained from an electromagnetics program, in particular MOM3D (LAR-15074). The EM-ANIMATE program is windows-based and contains a user-friendly graphical interface for setting viewing options, case selection, file manipulation, etc. EM-ANIMATE displays the field and surface-current magnitudes as smooth shaded color fields (color contours) ranging from a minimum contour value to a maximum contour value. The program can display either the total electric field or the scattered electric field in either time-harmonic animation mode or in root mean square (RMS) average mode. The default contour range is initially set to the minimum and maximum values within the field and surface-current data and can optionally be set by the user. The field and surface-current values are animated by calculating and viewing the solution at user-selectable radian time increments between 0 and 2π. The surface currents can also be displayed in either time-harmonic animation mode or in RMS average mode. In RMS mode, the color contours do not vary with time but show the constant time-averaged field and surface-current magnitude solution. The electric field and surface-current directions can be displayed as scaled vector arrows whose length is proportional to the magnitude at each field grid point or surface node point. These vector properties can be viewed separately or concurrently with the field or surface-current magnitudes. Animation speed is improved by turning off the display of the vector arrows. In RMS mode, the direction vectors are still displayed as varying with time, since time-averaged direction vectors would have zero length. Other surface properties can optionally be viewed, including the surface grid, the resistance value assigned to each element of the grid, and the power dissipation of each element that has an assigned resistance value. The EM-ANIMATE program will accept up to 10 different surface-current cases, each consisting of up to 20,000 node points and 10,000 triangle definitions, and will animate one of these cases. This capability is used to compare surface-current distributions due to various initial excitation directions or electric field orientations. The program can accept up to 50 planes of field data, each consisting of a grid of 100 by 100 field points. These planes of data are user-selectable and can be viewed individually or concurrently. With these preset limits, the program requires 55 megabytes of core memory to run. The limits can be changed in the header files to accommodate the available core memory of an individual workstation. An estimate of the required memory can be made as follows: data memory in bytes is approximately (number of nodes × number of surface cases × 14 variables × bytes per word) + (number of field planes × number of nodes per plane × 21 variables × bytes per word), with typically 4 bytes per floating-point word. The total memory required is approximately 400,000 bytes plus this data memory. The animation calculations are performed in real time at any user-set time step. For Silicon Graphics workstations that have multiple processors, the program has been optimized to perform these calculations on multiple processors to increase animation rates.
The optimized program uses the SGI PFA (Power FORTRAN Accelerator) library. On single-processor machines, the parallelization directives are seen as comments to the program and have no effect on compilation or execution. EM-ANIMATE is written in FORTRAN 77 for implementation on SGI IRIS workstations running IRIX 3.0 or later. A minimum of 55 MB of RAM is required for execution; however, the code may be modified to accommodate the available memory of an individual workstation. For program execution, twenty-four-bit, double-buffered color capability is suggested but not required. Sample input and output files and a sample executable are provided on the distribution medium. Electronic documentation is provided in PostScript format and in the form of IRIX man pages. The standard distribution medium for EM-ANIMATE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. EM-ANIMATE is also available as part of a bundled package, COS-10048, that includes MOM3D, an IRIS program that produces electromagnetic near-field and surface-current solutions. This program was developed in 1993.
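
    The memory estimate quoted above is simple arithmetic, and the helper below just implements it; at the documented preset limits it gives roughly 54 megabytes, consistent with the stated 55-megabyte requirement.

    ```python
    BYTES_PER_WORD = 4  # typical single-precision floating-point word

    def em_animate_memory_bytes(n_nodes, n_surface_cases, n_planes,
                                nodes_per_plane):
        """Memory estimate from the program notes: 14 variables per node
        per surface case, 21 variables per field-plane point, plus about
        400,000 bytes of fixed program size."""
        surface = n_nodes * n_surface_cases * 14 * BYTES_PER_WORD
        field = n_planes * nodes_per_plane * 21 * BYTES_PER_WORD
        return 400_000 + surface + field

    # Documented preset limits: 10 surface-current cases of 20,000 nodes,
    # and 50 field planes of 100 x 100 points.
    print(em_animate_memory_bytes(20_000, 10, 50, 100 * 100) / 1e6, "MB")
    ```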

  12. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.

    PubMed

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-15

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs depends greatly on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than that of the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the construction of preconditioners for other techniques to improve their performance in the future.
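
    The central idea, Krylov iterations with matrix-free Jacobian-vector products preconditioned by an approximate analytical Jacobian, can be sketched generically as follows; this is not the authors' solver, and the model problem and direct inversion of the preconditioner are illustrative simplifications.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def nkm_step(F, u, J_analytic, eps=1e-7):
        """One Newton-Krylov step for F(u) = 0: the Jacobian-vector
        product is matrix-free (finite difference), while the approximate
        analytical Jacobian preconditions GMRES."""
        n, Fu = u.size, F(u)
        Jv = lambda v: (F(u + eps * v) - Fu) / eps      # matrix-free J @ v
        J_op = LinearOperator((n, n), matvec=Jv)
        Minv = np.linalg.inv(J_analytic(u))             # small demo only
        M_op = LinearOperator((n, n), matvec=lambda v: Minv @ v)
        du, _ = gmres(J_op, -Fu, M=M_op)
        return u + du

    # Tiny nonlinear demo: F(u) = A u + u**3 - b
    rng = np.random.default_rng(0)
    A = np.diag(np.full(20, 4.0)) + 0.1 * rng.standard_normal((20, 20))
    b = rng.standard_normal(20)
    F = lambda u: A @ u + u**3 - b
    J = lambda u: A + np.diag(3 * u**2)                 # analytical Jacobian
    u = np.zeros(20)
    for _ in range(10):
        u = nkm_step(F, u, J)
    print(np.linalg.norm(F(u)))
    ```

    In a practical flow solver the preconditioner would be applied via an incomplete factorization or, as in the paper's fastest variant, just the diagonal of the analytical Jacobian rather than a dense inverse.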

  13. A Newton–Krylov method with an approximate analytical Jacobian for implicit solution of Navier–Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    PubMed Central

    Asgharzadeh, Hafez; Borazjani, Iman

    2016-01-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs depends greatly on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than that of the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the construction of preconditioners for other techniques to improve their performance in the future. PMID:28042172

  14. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs depends greatly on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than that of the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the construction of preconditioners for other techniques to improve their performance in the future.

  15. Time-integrated passive sampling as a complement to conventional point-in-time sampling for investigating drinking-water quality, McKenzie River Basin, Oregon, 2007 and 2010-11

    USGS Publications Warehouse

    McCarthy, Kathleen A.; Alvarez, David A.

    2014-01-01

    The Eugene Water & Electric Board (EWEB) supplies drinking water to approximately 200,000 people in Eugene, Oregon. The sole source of this water is the McKenzie River, which has consistently excellent water quality relative to established drinking-water standards. To ensure that this quality is maintained as land use in the source basin changes and water demands increase, EWEB has developed a proactive management strategy that combines conventional point-in-time discrete water sampling and time-integrated passive sampling with a combination of chemical analyses and bioassays to explore water quality and identify where vulnerabilities may lie. In this report, we present the results from six passive-sampling deployments at six sites in the basin, including the intake and outflow of the EWEB drinking-water treatment plant (DWTP). This is the first known use of passive samplers to investigate both the source and finished water of a municipal DWTP. Results indicate that low concentrations of several polycyclic aromatic hydrocarbons and organohalogen compounds are consistently present in source waters, and that many of these compounds are also present in finished drinking water. The nature and patterns of compounds detected suggest that land-surface runoff and atmospheric deposition act as ongoing sources of polycyclic aromatic hydrocarbons, some currently used pesticides, and several legacy organochlorine pesticides. Comparison of results from point-in-time and time-integrated sampling indicates that these two methods are complementary and, when used together, provide a clearer understanding of contaminant sources than either method alone.

  16. Bread: CDC 7600 program that processes Spent Fuel Test Climax data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hage, G.L.

    BREAD will process a family of files copied from a data tape made by Hewlett-Packard equipment employed for data acquisition on the Spent Fuel Test-Climax at NTS. Tapes are delivered to Livermore approximately monthly. The process at this stage consists of four steps: read the binary files and convert from H-P 16-bit words to CDC 7600 60-bit words; check identification and data ranges; write the data in 6-bit ASCII (BCD) format, one data point per line; then sort the file by identifier and time.
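
    A minimal Python sketch of the four steps follows; the 16-bit record layout (identifier, time, value) is a hypothetical stand-in for the actual Hewlett-Packard format, and the word-size conversion is implicit in the unpacking.

    ```python
    import struct

    def process_tape_file(path, id_ranges):
        """Sketch of the BREAD pipeline for one binary file (record
        layout assumed: three big-endian 16-bit words per data point)."""
        records = []
        with open(path, "rb") as f:
            while len(chunk := f.read(6)) == 6:
                ident, t, raw = struct.unpack(">HHH", chunk)  # step 1: read/convert
                lo, hi = id_ranges.get(ident, (0, 0xFFFF))
                if lo <= raw <= hi:                  # step 2: ID and range check
                    records.append((ident, t, raw))
        records.sort()                               # step 4: sort by id, then time
        with open(path + ".ascii", "w") as out:      # step 3: one data point per line
            for ident, t, raw in records:
                out.write(f"{ident:5d} {t:6d} {raw:6d}\n")
    ```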

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogatskaya, A. V., E-mail: annabogatskaya@gmail.com; Volkova, E. A.; Popov, A. M.

    The time evolution of a nonequilibrium plasma channel created in a noble gas by a high-power femtosecond KrF laser pulse is investigated. It is shown that such a channel possesses specific electrodynamic properties and can be used as a waveguide for efficient transportation and amplification of microwave pulses. The propagation of microwave radiation in a plasma waveguide is analyzed by self-consistently solving (i) the Boltzmann kinetic equation for the electron energy distribution function at different spatial points and (ii) the wave equation in the parabolic approximation for a microwave pulse transported along the plasma channel.

  18. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed both in theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive, because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. The technique has previously been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sets arising both from synthetic sources and from real applications.
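
    The tapering idea is easy to show in a Python sketch: multiply the covariance by a compactly supported taper, append the unbiasedness constraint of ordinary kriging, and hand the resulting sparse system to a sparse solver. The exponential covariance, Wendland-type taper, and taper radius are illustrative choices, not those of the paper.

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix, bmat
    from scipy.sparse.linalg import spsolve

    def exp_cov(h, sill=1.0, corr_len=10.0):
        return sill * np.exp(-h / corr_len)

    def wendland_taper(h, theta):
        """Compactly supported taper: exactly zero for h >= theta."""
        t = np.clip(1.0 - h / theta, 0.0, None)
        return t**4 * (4.0 * h / theta + 1.0)

    def ordinary_kriging_tapered(X, y, x0, theta=15.0):
        """Ordinary kriging at query point x0 with covariance tapering;
        the tapered kriging matrix is sparse (illustrative sketch)."""
        H = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        C = csc_matrix(exp_cov(H) * wendland_taper(H, theta))
        n = len(y)
        ones = csc_matrix(np.ones((n, 1)))
        A = bmat([[C, ones], [ones.T, None]], format="csc")  # unbiasedness row
        h0 = np.linalg.norm(X - x0, axis=1)
        rhs = np.append(exp_cov(h0) * wendland_taper(h0, theta), 1.0)
        return spsolve(A, rhs)[:n] @ y

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 100, (500, 2))
    y = np.sin(X[:, 0] / 20.0) + 0.1 * rng.standard_normal(500)
    print(ordinary_kriging_tapered(X, y, np.array([50.0, 50.0])))
    ```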

  19. Efficient generation of discontinuity-preserving adaptive triangulations from range images.

    PubMed

    Garcia, Miguel Angel; Sappa, Angel Domingo

    2004-10-01

    This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image, in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two-and-one-half-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of the surface and orientation discontinuities (jump and crease edges) present in the original range image, the triangular mesh is iteratively modified by applying an efficient edge-flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
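
    The two-stage structure can be sketched compactly: sample pixels with probability proportional to local variation, then Delaunay-triangulate the samples. The gradient-magnitude weighting below is an illustrative proxy for the paper's curvature-based selection, and the discontinuity-preserving edge flipping of the second stage is omitted.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def adaptive_mesh(range_img, n_points, seed=0):
        """Sketch: sample more points where local variation is high, then
        triangulate the (row, col) sites, giving a 2.5D mesh whose vertex
        heights come from the range image."""
        rng = np.random.default_rng(seed)
        gy, gx = np.gradient(range_img.astype(float))
        weight = np.hypot(gx, gy) + 1e-3      # avoid zero-probability pixels
        p = (weight / weight.sum()).ravel()
        idx = rng.choice(range_img.size, size=n_points, replace=False, p=p)
        rows, cols = np.unravel_index(idx, range_img.shape)
        sites = np.column_stack([rows, cols])
        tri = Delaunay(sites)                 # initial triangular mesh
        return sites, range_img[rows, cols], tri.simplices

    img = np.fromfunction(lambda r, c: np.where(c < 32, 1.0, 2.0), (64, 64))
    sites, depths, tris = adaptive_mesh(img, 300)
    print(len(tris), "triangles")
    ```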

  20. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
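
    One reading of these two requirements can be checked numerically: form the combined standard deviation, allow an undetected systematic error of half that value, and evaluate the 95% maximum absolute deviation as a folded-normal quantile. With the permitted analytical standard deviation of 0.15 times the intra-individual value, this reproduces an inflation of roughly 12% over biology alone (the interpretation in code is mine, not the authors' derivation).

    ```python
    import numpy as np
    from scipy import optimize, stats

    def dev95(bias, sd):
        """95% maximum absolute deviation from the set point for results
        distributed N(bias, sd^2): solve P(|X| <= d) = 0.95 for d."""
        f = lambda d: (stats.norm.cdf((d - bias) / sd)
                       - stats.norm.cdf((-d - bias) / sd) - 0.95)
        return optimize.brentq(f, 1e-9, 10.0 * (abs(bias) + sd))

    s_intra = 1.0                    # intra-individual SD (arbitrary units)
    s_anal = 0.15 * s_intra          # allowed analytical SD (requirement 2)
    s_comb = np.hypot(s_anal, s_intra)
    bias = 0.5 * s_comb              # largest systematic error that may
                                     # escape 90%-probability detection
    inflation = dev95(bias, s_comb) / dev95(0.0, s_intra) - 1.0
    print(f"total deviation inflated by {inflation:.1%}")   # about 12%
    ```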

  1. Molecular dynamics of acetamide based ionic deep eutectic solvents

    NASA Astrophysics Data System (ADS)

    Srinivasan, H.; Dubey, P. S.; Sharma, V. K.; Biswas, R.; Mitra, S.; Mukhopadhyay, R.

    2018-04-01

    Deep eutectic solvents are multi-component mixtures that have a freezing point lower than those of their individual components. Mixtures of acetamide + lithium nitrate in the molar ratio 78:22 and acetamide + lithium perchlorate in the molar ratio 81:19 are found to form deep eutectic solvents with melting points below room temperature. It is known that the depression in freezing point is due to the hydrogen-bond-breaking ability of the anions in the system. Quasielastic neutron scattering experiments on these systems were carried out to study the dynamics of acetamide molecules, which may be influenced by this hydrogen-bond-breaking phenomenon. The motion of acetamide molecules is modeled using a jump-diffusion mechanism to describe the continuous breaking and reforming of hydrogen bonds in the solvent. Using the jump-diffusion model, it is inferred that the jump lengths of acetamide molecules are better approximated by a Gaussian distribution. The shorter residence time of acetamide in the presence of perchlorate ions suggests that perchlorate ions have a higher hydrogen-bond-breaking ability than nitrate ions.
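
    For reference, the quasielastic broadening for jump diffusion with a Gaussian jump-length distribution is commonly written in the Hall-Ross form sketched below, whose half-width grows as D*Q**2 at small Q and saturates at 1/tau; the parameter values are illustrative, not the fitted values of this study.

    ```python
    import numpy as np

    def hwhm_gaussian_jump(Q, mean_sq_jump, tau):
        """Hall-Ross-type HWHM for a Gaussian jump-length distribution;
        the small-Q limit is D*Q**2 with D = <l**2> / (6 * tau)."""
        return (1.0 / tau) * (1.0 - np.exp(-(Q**2) * mean_sq_jump / 6.0))

    Q = np.linspace(0.2, 2.0, 10)                    # inverse angstroms
    print(hwhm_gaussian_jump(Q, mean_sq_jump=1.5**2, tau=2.0))  # 1/ps
    ```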

  2. Vernal Point and Anthropocene

    NASA Astrophysics Data System (ADS)

    Chavez-Campos, Teodosio; Chavez S, Nadia; Chavez-Sumarriva, Israel

    2014-05-01

    The time scale was based on the internationally recognized formal chronostratigraphical/geochronological subdivisions of time: the Phanerozoic Eonothem/Eon; the Cenozoic Erathem/Era; the Quaternary System/Period; and the Pleistocene and Holocene Series/Epochs. The Quaternary was divided into: (1) the Pleistocene, characterized by cycles of glaciations (intervals between 40,000 and 100,000 years); and (2) the Holocene, an interglacial period that began about 12,000 years ago. It was believed that the Milankovitch cycles (eccentricity, axial tilt, and the precession of the equinoxes) were responsible for the glacial and interglacial periods. Magnetostratigraphic units have been widely used for global correlations valid for the Quaternary. The gravitational influence of the Sun and Moon on the equatorial bulges of the mantle of the rotating Earth causes the precession of the Earth. The retrograde motion of the vernal point through the zodiacal band takes 26,000 years. The vernal point passes through each constellation in an average of 2,000 years, and this period of time was correlated to Bond events, North Atlantic climate fluctuations occurring approximately every 1,470 ± 500 years throughout the Holocene. The vernal point retrogrades approximately one precessional degree every 72 years (the Gleissberg cycle) and entered the Aquarius constellation on approximately March 20, 1940. On Earth this entry was verified through: (a) the stability of the magnetic equator in the south-central zone of Peru and the northern zone of Bolivia, and (b) the greater intensity of the equatorial electrojet (EEJ) in Peru and Bolivia since 1940. With the completion of the Holocene and the beginning of the Anthropocene (a term widely popularized by Paul Crutzen), the date of March 20, 1940 was proposed as the beginning of the Anthropocene. The proposed date was correlated to the work presented at IUGG (Italy, 2007) under the title "Cusco base meridian for the study of geophysical data"; Cusco was proposed as a prime meridian based on the following: (1) the new prime meridian (72°W = 0°) is parallel to the Andes, and its projection, the meridian (108°E = 180°), intersects the Tibetan plate (Asia); (2) on Earth these two areas present the greatest thickness of the crust, with an average depth of 70 kilometers. The aim was to synchronize earth-science phenomena (e.g., geology, geophysics, etc.). During the Holocene the vernal point retrograded for 12,000 years and entered the Aquarius constellation on March 20, 1940. That date was proposed as the beginning of the Anthropocene because on it the vernal point passes from the Pisces constellation to the Aquarius constellation; moreover, around the proposed date the Second World War began, a global change on Earth. The base of the Anthropocene was thus defined by the passage of the vernal point from the Pisces constellation to the Aquarius constellation.

  3. Early Life Factors and Adult Leisure Time Physical Inactivity Stability and Change.

    PubMed

    Pinto Pereira, Snehal M; Li, Leah; Power, Chris

    2015-09-01

    Physical inactivity has a high prevalence and associated disease burden. A better understanding of influences on sustaining and changing inactive lifestyles is needed. We aimed to establish whether leisure time inactivity was stable in midadulthood and whether early life factors were associated with inactivity patterns. In the 1958 British birth cohort (n = 12,271), leisure time inactivity (frequency, less than once a week) assessed at 33 and 50 yr was categorized as "never inactive," "persistently inactive," "deteriorating," or "improving." Early life factors (birth to 16 yr) were categorized into three domains (physical, social, and behavioral). Using multinomial logistic regression, we assessed associations with inactivity persistence and change of factors within each early life domain and the three domains combined, with and without adjustment for adult factors. Inactivity prevalence was similar at 33 and 50 yr (approximately 31%), but 17% deteriorated and 18% improved with age. In models adjusted for all domains simultaneously, factors associated with inactivity persistence versus never inactive were prepubertal stature (8% lower risk/height SD), poor hand control/coordination (17% higher risk/increase on four-point scale), cognition (16% lower/SD in ability) (physical); parental divorce (25% higher), class at birth (7% higher/reduction on four-point scale), minimal parental education (16% higher), household amenities (2% higher/increase in 19-point score (high = poor)) (social); and inactivity (22% higher/reduction in activity on four-point scale), low sports aptitude (47% higher), smoking (30% higher) (behavioral). All except stature, parental education, sports aptitude, and smoking were also associated with inactivity deterioration. Poor hand control/coordination was the only factor associated with improved status (13% lower/increase on four-point scale) versus persistently inactive. Adult leisure time inactivity is moderately stable. Early life factors are associated with persistent and deteriorating inactivity over decades in midadulthood but rarely with improvement.

  4. ARM - Midlatitude Continental Convective Clouds Microwave Radiometer Profiler (jensen-mwr)

    DOE Data Explorer

    Jensen, Mike

    2012-02-01

    A major component of the Mid-latitude Continental Convective Clouds Experiment (MC3E) field campaign was the deployment of an enhanced radiosonde array designed to capture the vertical profile of atmospheric state variables (pressure, temperature, humidity, wind speed, and wind direction) for the purpose of deriving the large-scale forcing for use in modeling studies. The radiosonde array included six sites (enhanced Central Facility [CF-1] plus five new sites) launching radiosondes at 3-6 hour sampling intervals. The network covered an area of approximately 300 km × 300 km with five outer sounding launch sites and one central launch location. The five outer sounding launch sites are: S01 Pratt, KS [37.7°N, 98.75°W]; S02 Chanute, KS [37.674, -95.488]; S03 Vici, Oklahoma [36.071, -99.204]; S04 Morris, Oklahoma [35.687, -95.856]; and S05 Purcell, Oklahoma [34.985, -97.522]. Soundings from the SGP Central Facility during MC3E can be retrieved from the regular ARM archive. During routine MC3E operations, four radiosondes were launched from each of these sites (approx. 0130, 0730, 1330, and 1930 UTC). On days that were forecast to be convective, up to four additional radiosondes were launched at each site (approx. 0430, 1030, 1630, 2230 UTC). There were a total of approximately 14 of these high-frequency launch days over the course of the experiment. These files contain brightness temperatures observed at Purcell during MC3E. The measurements were made with a 5-channel (22.235, 23.035, 23.835, 26.235, 30.000 GHz) microwave radiometer at one-minute intervals. The results have been separated into daily files, and the day of observations is indicated in the file name. All observations were zenith pointing. Included in the files are the time variables base_time and time_offset, which follow the ARM time conventions: base_time is the number of seconds since January 1, 1970 at 00:00:00 for the first data point of the file, and time_offset is the offset in seconds from base_time.
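
    Decoding the two time variables is straightforward; the snippet below follows the stated convention (base_time in seconds since January 1, 1970 at 00:00:00 UTC for the first data point, time_offset in seconds from base_time), with illustrative values.

    ```python
    from datetime import datetime, timedelta, timezone

    def arm_times(base_time, time_offset):
        """Convert ARM base_time/time_offset pairs to UTC datetimes."""
        epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
        return [epoch + timedelta(seconds=base_time + off)
                for off in time_offset]

    # Hypothetical values for three one-minute radiometer samples
    print(arm_times(1303948800, [0.0, 60.0, 120.0]))
    ```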

  5. ARM - Midlatitude Continental Convective Clouds - Ultra High Sensitivity Aerosol Spectrometer (tomlinson-uhsas)

    DOE Data Explorer

    Tomlinson, Jason; Jensen, Mike

    2012-02-28

    Ultra High Sensitivity Aerosol Spectrometer (UHSAS). A major component of the Mid-latitude Continental Convective Clouds Experiment (MC3E) field campaign was the deployment of an enhanced radiosonde array designed to capture the vertical profile of atmospheric state variables (pressure, temperature, humidity, wind speed, and wind direction) for the purpose of deriving the large-scale forcing for use in modeling studies. The radiosonde array included six sites (enhanced Central Facility [CF-1] plus five new sites) launching radiosondes at 3-6 hour sampling intervals. The network covered an area of approximately 300 km × 300 km with five outer sounding launch sites and one central launch location. The five outer sounding launch sites are: S01 Pratt, KS [37.7°N, 98.75°W]; S02 Chanute, KS [37.674, -95.488]; S03 Vici, Oklahoma [36.071, -99.204]; S04 Morris, Oklahoma [35.687, -95.856]; and S05 Purcell, Oklahoma [34.985, -97.522]. Soundings from the SGP Central Facility during MC3E can be retrieved from the regular ARM archive. During routine MC3E operations, four radiosondes were launched from each of these sites (approx. 0130, 0730, 1330, and 1930 UTC). On days that were forecast to be convective, up to four additional radiosondes were launched at each site (approx. 0430, 1030, 1630, 2230 UTC). There were a total of approximately 14 of these high-frequency launch days over the course of the experiment. These files contain brightness temperatures observed at Purcell during MC3E. The measurements were made with a 5-channel (22.235, 23.035, 23.835, 26.235, 30.000 GHz) microwave radiometer at one-minute intervals. The results have been separated into daily files, and the day of observations is indicated in the file name. All observations were zenith pointing. Included in the files are the time variables base_time and time_offset, which follow the ARM time conventions: base_time is the number of seconds since January 1, 1970 at 00:00:00 for the first data point of the file, and time_offset is the offset in seconds from base_time.

  6. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
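
    The off-line/on-line split can be illustrated on a toy affinely parametrized system: snapshots at N parameter points form the reduced basis, the affine blocks are projected once off-line, and each on-line query solves only an N-by-N system. The model problem below is an illustrative stand-in, and the a posteriori error bounds of the paper are omitted.

    ```python
    import numpy as np

    n, N = 400, 5
    rng = np.random.default_rng(2)
    A0 = np.diag(np.full(n, 2.0)) + 0.01 * rng.standard_normal((n, n))
    A1 = np.diag(np.linspace(0.1, 1.0, n))      # affine: A(mu) = A0 + mu*A1
    f = np.ones(n)
    ell = rng.standard_normal(n)                # output functional l(u)

    # Off-line: snapshots at N parameter points; orthonormalize via QR.
    mus_train = np.linspace(0.1, 2.0, N)
    W, _ = np.linalg.qr(np.column_stack(
        [np.linalg.solve(A0 + mu * A1, f) for mu in mus_train]))

    # On-line: cost depends only on N because the projected affine
    # blocks are precomputed once.
    A0r, A1r, fr, lr = W.T @ A0 @ W, W.T @ A1 @ W, W.T @ f, W.T @ ell
    mu = 0.73
    uN = np.linalg.solve(A0r + mu * A1r, fr)
    print("RB output:", lr @ uN,
          " truth:", ell @ np.linalg.solve(A0 + mu * A1, f))
    ```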

  7. Time domain simulations of preliminary breakdown pulses in natural lightning

    PubMed Central

    Carlson, B E; Liang, C; Bitzer, P; Christian, H

    2015-01-01

    Lightning discharge is a complicated process with relevant physical scales spanning many orders of magnitude. In an effort to understand the electrodynamics of lightning and connect physical properties of the channel to observed behavior, we construct a simulation of charge and current flow on a narrow conducting channel embedded in three-dimensional space with the time domain electric field integral equation, the method of moments, and the thin-wire approximation. The method includes approximate treatment of resistance evolution due to lightning channel heating and the corona sheath of charge surrounding the lightning channel. Focusing our attention on preliminary breakdown in natural lightning by simulating stepwise channel extension with a simplified geometry, our simulation reproduces the broad features observed in data collected with the Huntsville Alabama Marx Meter Array. Some deviations in pulse shape details are evident, suggesting future work focusing on the detailed properties of the stepping mechanism. Key Points: (1) preliminary breakdown pulses can be reproduced by simulated channel extension; (2) channel heating and corona sheath formation are crucial to proper pulse shape; (3) extension processes and channel orientation significantly affect observations. PMID:26664815

  8. The Stochastic X-Ray Variability of the Accreting Millisecond Pulsar MAXI J0911-655

    NASA Technical Reports Server (NTRS)

    Bult, Peter

    2017-01-01

    In this work, I report on the stochastic X-ray variability of the 340 Hz accreting millisecond pulsar MAXI J0911-655. Analyzing pointed observations of the XMM-Newton and NuSTAR observatories, I find that the source shows broad band-limited stochastic variability in the 0.01-10 Hz range, with a total fractional variability of approximately 24 percent (rms) in the 0.4 to 3 keV energy band that increases to approximately 40 percent (rms) in the 3 to 10 keV band. Additionally, a pair of harmonically related quasi-periodic oscillations (QPOs) is discovered. The fundamental frequency of this harmonic pair is observed between frequencies of 62 and 146 millihertz. Like the band-limited noise, the amplitudes of the QPOs show a steep increase as a function of energy; this suggests that they share a similar origin, likely the inner accretion flow. Based on their energy dependence and frequency relation with respect to the noise terms, the QPOs are identified as low-frequency oscillations and discussed in terms of the Lense-Thirring precession model.

  9. Identification of differentially expressed genes in cucumber (Cucumis sativus L.) root under waterlogging stress by digital gene expression profile.

    PubMed

    Qi, Xiao-Hua; Xu, Xue-Wen; Lin, Xiao-Jian; Zhang, Wen-Jie; Chen, Xue-Hao

    2012-03-01

    High-throughput tag-sequencing (Tag-seq) analysis based on the Solexa Genome Analyzer platform was applied to analyze the gene expression profile of cucumber plants at 5 time points over a 24-h period of waterlogging treatment. Approximately 5.8 million total clean sequence tags per library were obtained, with 143,013 distinct clean tag sequences. Approximately 23.69%-29.61% of the distinct clean tags were mapped unambiguously to the unigene database, and 53.78%-60.66% of the distinct clean tags were mapped to the cucumber genome database. Analysis of the differentially expressed genes revealed that most of the genes were down-regulated in the waterlogging stages, and that the differentially expressed genes were mainly linked to carbon metabolism, photosynthesis, reactive oxygen species generation/scavenging, and hormone synthesis/signaling. Finally, quantitative real-time polymerase chain reaction on nine genes independently verified the tag-mapped results. The present study reveals the comprehensive mechanisms of waterlogging-responsive transcription in cucumber. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Vendor compliance with Ontario's tobacco point of sale legislation.

    PubMed

    Dubray, Jolene M; Schwartz, Robert M; Garcia, John M; Bondy, Susan J; Victor, J Charles

    2009-01-01

    On May 31, 2006, Ontario joined a small group of international jurisdictions to implement legislative restrictions on tobacco point of sale promotions. This study compares the presence of point of sale promotions in the retail tobacco environment from three surveys: one prior to and two following implementation of the legislation. Approximately 1,575 tobacco vendors were randomly selected for each survey. Each regionally-stratified sample included equal numbers of tobacco vendors categorized into four trade classes: chain convenience, independent convenience and discount, gas stations, and grocery. Data regarding the six restricted point of sale promotions were collected using standardized protocols and inspection forms. Weighted estimates and 95% confidence intervals were produced at the provincial, regional and vendor trade class level using the bootstrap method for estimating variance. At baseline, the proportion of tobacco vendors who did not engage in each of the six restricted point of sale promotions ranged from 41% to 88%. Within four months following implementation of the legislation, compliance with each of the six restricted point of sale promotions exceeded 95%. Similar levels of compliance were observed one year later. Grocery stores had the fewest point of sale promotions displayed at baseline. Compliance rates did not differ across vendor trade classes at either follow-up survey. Point of sale promotions did not differ across regions in any of the three surveys. Within a short period of time, a high level of compliance with six restricted point of sale promotions was achieved.
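
    As a small illustration of the variance estimation mentioned above, a weighted bootstrap for a vendor-level compliance proportion might look as follows; the data, weights, and resampling scheme are hypothetical, and the survey's stratified design is ignored here.

    ```python
    import numpy as np

    def bootstrap_ci(values, weights, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for a weighted proportion: resample
        vendors with replacement and recompute the estimate each time."""
        rng = np.random.default_rng(seed)
        values = np.asarray(values, float)
        weights = np.asarray(weights, float)
        n = values.size
        boot = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)
            boot[i] = np.average(values[idx], weights=weights[idx])
        return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

    vals = np.array([1] * 95 + [0] * 5)   # 95% compliant (hypothetical)
    wts = np.ones(100)
    print(bootstrap_ci(vals, wts))
    ```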

  11. Breakdown of the Migdal approximation at Lifshitz transitions with giant zero-point motion in the H3S superconductor.

    PubMed

    Jarlborg, Thomas; Bianconi, Antonio

    2016-04-20

    While the 203 K high-temperature superconductivity in H3S has been interpreted with BCS theory in the dirty limit, here we focus on the effects of hydrogen zero-point motion and the multiband electronic structure relevant for multigap superconductivity near Lifshitz transitions. We describe how the topology of the Fermi surfaces evolves with pressure, giving different Lifshitz transitions. A neck-disrupting Lifshitz transition (type 2) occurs where the van Hove singularity (vHs) crosses the chemical potential at 210 GPa, and new small 2D Fermi surface portions with slow Fermi velocity appear, where the Migdal approximation becomes questionable. We show that the usually neglected hydrogen zero-point motion (ZPM) plays a key role at Lifshitz transitions: it induces an energy shift of about 600 meV of the vHs. The other Lifshitz transition (type 1), for the appearance of a new Fermi surface, occurs at 130 GPa, where new Fermi surfaces appear at the Γ point of the Brillouin zone; here the Migdal approximation breaks down and the zero-point motion induces large fluctuations. The maximum Tc = 203 K occurs at 160 GPa, where EF/ω0 = 1 in the small Fermi surface pocket at Γ. A Feshbach-like resonance between a possible BEC-BCS condensate at Γ and the BCS condensate in different k-space spots is proposed.

  12. Prosociality: the contribution of traits, values, and self-efficacy beliefs.

    PubMed

    Caprara, Gian Vittorio; Alessandri, Guido; Eisenberg, Nancy

    2012-06-01

    The present study examined how agreeableness, self-transcendence values, and empathic self-efficacy beliefs predict individuals' tendencies to engage in prosocial behavior (i.e., prosociality) across time. Participants were 340 young adults, 190 women and 150 men, age approximately 21 years at Time 1 and 25 years at Time 2. Measures of agreeableness, self-transcendence, empathic self-efficacy beliefs, and prosociality were collected at 2 time points. The findings corroborated the posited paths of relations, with agreeableness directly predicting self-transcendence and indirectly predicting empathic self-efficacy beliefs and prosociality. Self-transcendence mediated the relation between agreeableness and empathic self-efficacy beliefs. Empathic self-efficacy beliefs mediated the relation of agreeableness and self-transcendence to prosociality. Finally, earlier prosociality predicted agreeableness and empathic self-efficacy beliefs assessed at Time 2. The posited conceptual model accounted for a significant portion of variance in prosociality and provides guidance to interventions aimed at promoting prosociality. 2012 APA, all rights reserved

  13. SU-E-J-11: A New Optical Method to Register Patient External Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbes, B; Azcona, J; Moreno, M

    2014-06-01

    Purpose: To devise and implement a new system to measure and register patient motion during radiotherapy treatments. Methods: The system obtains the position of several points in 3D space from their projections in the 2D images recorded by two cameras. The algorithm needs a series of constants, which are obtained using the images of a calibrated phantom. To test the system, adhesive labels were placed on the surface of an object, and two cameras recorded the moving object over time. In-house software localized the labels in each image. In the first pair of images, the program used a first approximation given by the user; in subsequent images, it used the last position as the approximate location. The final exact coordinates of each point were obtained in a two-step process using the contrast of the images. From the 2D positions of a point in each frame, the 3D trajectories of each of these marks were obtained. The system was tested with linear displacements, oscillations of a mechanical oscillator, circular trajectories of a rotating disk, and the respiratory motion of a volunteer. Results: Trajectories of several points were reproduced with sub-millimeter accuracy in the three directions of space. The system was able to follow periodic motion with amplitudes lower than 0.5 mm and trajectories of rotating points at speeds up to 200 mm/s. The software could also accurately track the respiratory motion of a person. Conclusion: A new, inexpensive optical tracking system for patient motion has been demonstrated. The system detects motion with high accuracy. Installation and calibration of the system are simple and quick. Data collection is not expected to involve any discomfort for the patient, nor any delay of the treatment. The system could also be used as a method of warning for patient movements, and for gating. We acknowledge financial support from Fundacion Mutua Madrilena, Madrid, Spain.
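
    Recovering a 3D point from its two 2D projections is classically done by linear (DLT) triangulation; the sketch below uses generic 3-by-4 camera projection matrices, which play the role of the calibration constants mentioned in the abstract, and the demo geometry is assumed.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear triangulation of one 3D point from pixel coordinates
        x1, x2 seen by two cameras with projection matrices P1, P2."""
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        X = np.linalg.svd(A)[2][-1]          # null vector of A
        return X[:3] / X[3]                  # dehomogenize

    # Toy rig: two identical cameras one meter apart (assumed geometry)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])
    Xtrue = np.array([0.2, -0.1, 5.0, 1.0])
    x1 = P1 @ Xtrue; x1 = x1[:2] / x1[2]
    x2 = P2 @ Xtrue; x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))       # ~ [0.2, -0.1, 5.0]
    ```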

  14. Superadiabatic driving of a three-level quantum system

    NASA Astrophysics Data System (ADS)

    Theisen, M.; Petiziol, F.; Carretta, S.; Santini, P.; Wimberger, S.

    2017-07-01

    We study superadiabatic quantum control of a three-level quantum system whose energy spectrum exhibits multiple avoided crossings. In particular, we investigate the possibility of treating the full control task in terms of independent two-level Landau-Zener problems. We first show that the time profiles of the elements of the full control Hamiltonian are characterized by peaks centered around the crossing times. These peaks decay algebraically for large times. In principle, such a power-law scaling invalidates the hypothesis of perfect separability. Nonetheless, we address the problem from a pragmatic point of view by studying the fidelity obtained through separate control as a function of the intercrossing separation. This procedure may be a good approach to achieve approximate adiabatic driving of a specific instantaneous eigenstate in realistic implementations.

  15. JANUS: a bit-wise reversible integrator for N-body dynamics

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2018-01-01

    Hamiltonian systems such as the gravitational N-body problem have time-reversal symmetry. However, all numerical N-body integration schemes, including symplectic ones, respect this property only approximately. In this paper, we present the new N-body integrator JANUS, for which we achieve exact time-reversal symmetry by combining integer and floating point arithmetic. JANUS is explicit, formally symplectic, and satisfies Liouville's theorem exactly. Its order is even and can be adjusted between two and ten. We discuss the implementation of JANUS and present tests of its accuracy and speed by performing and analysing long-term integrations of the Solar system. We show that JANUS is fast and accurate enough to tackle a broad class of dynamical problems. We also discuss the practical and philosophical implications of running exactly time-reversible simulations.
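
    The integer-arithmetic idea behind bit-wise reversibility can be shown with a toy fixed-point leapfrog: each drift increment depends only on the velocity and each kick increment only on the position, so with integer state the reversed iteration undoes every addition exactly. This is a schematic of the principle only, not the JANUS implementation.

    ```python
    SCALE = 2**32                  # fixed-point units per unit length

    def acc(xi):
        """Harmonic-oscillator acceleration a = -x in integer units."""
        return -xi

    def drift(xi, vi, dt):
        return xi + round(dt * vi)         # increment depends only on v

    def kick(xi, vi, dt):
        return vi + round(dt * acc(xi))    # increment depends only on x

    def step(xi, vi, dt):
        xi = drift(xi, vi, dt / 2)
        vi = kick(xi, vi, dt)
        xi = drift(xi, vi, dt / 2)
        return xi, vi

    x0, v0 = round(1.0 * SCALE), 0
    x, v, dt = x0, v0, 0.01
    for _ in range(1000):
        x, v = step(x, v, dt)
    for _ in range(1000):                  # run backwards with -dt
        x, v = step(x, v, -dt)
    print(x == x0, v == v0)                # bit-wise recovery: True True
    ```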

  16. The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps

    NASA Astrophysics Data System (ADS)

    Simpson, D. J. W.

    2018-05-01

    In two-parameter bifurcation diagrams of piecewise-linear continuous maps on ℝ^N, mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
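
    For concreteness, a continuous, degree-one, piecewise-linear circle map with two slopes can be written as below, together with a numerically estimated rotation number; this is an illustrative parametrization (the break point d, slope a, and offset c are the three parameters, with the second slope fixed by continuity), not the paper's normal form.

    ```python
    import numpy as np

    def skew_sawtooth(x, a, d, c):
        """Lift of a continuous, degree-one, piecewise-linear circle map:
        slope a on [0, d) and slope b on [d, 1), with b chosen so that
        a*d + b*(1 - d) = 1 (continuity across the wrap)."""
        b = (1.0 - a * d) / (1.0 - d)
        frac = x - np.floor(x)
        lift = np.where(frac < d, a * frac, a * d + b * (frac - d))
        return np.floor(x) + c + lift

    def rotation_number(a, d, c, n=100_000):
        x = 0.0
        for _ in range(n):
            x = skew_sawtooth(x, a, d, c)
        return x / n

    print(rotation_number(a=0.6, d=0.5, c=0.3))
    ```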

  17. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  18. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin; Larsen, Marvin

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  19. A Comprehensive Search for Gamma-Ray Lines in the First Year of Data from the INTEGRAL Spectrometer

    NASA Technical Reports Server (NTRS)

    Teegarden, B. J.; Watanabe, K.

    2006-01-01

    Gamma-ray lines are produced in nature by a variety of different physical processes. They can be valuable astrophysical diagnostics, providing information that may be unobtainable by other means. We have carried out an extensive search for gamma-ray lines in the first year of public data from the Spectrometer (SPI) on the INTEGRAL mission. INTEGRAL has spent a large fraction of its observing time in the Galactic Plane, with particular concentration on the Galactic Center (GC) region (approximately 3 Ms in the first year). Hence the most sensitive search regions are in the Galactic Plane and Center. The phase space of the search spans the energy range 20-8000 keV and line widths from 0 to 1000 keV (FWHM), and includes both diffuse and point-like emission. We have searched for variable emission on time scales down to approximately 1000 s. Diffuse emission has been searched for on a range of different spatial scales, from approximately 20 degrees (the approximate field of view of the spectrometer) up to the entire Galactic Plane. Our search procedures were verified by the recovery of the known gamma-ray lines at 511 keV and 1809 keV at the appropriate intensities and significances. We find no evidence for any previously unknown gamma-ray lines. The upper limits range from a few ×10⁻⁵ cm⁻² s⁻¹ to a few ×10⁻³ cm⁻² s⁻¹, depending on line width, energy, and exposure. Comparison is made between our results and various prior predictions of astrophysical lines.

  20. 27 CFR 9.83 - Lake Erie.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Cazenovia Creek and thence up the west branch of Cazenovia Creek to a point approximately one mile north of Colden, New York, exactly 12 statute miles inland from any point on the shore of Lake Erie. (3) The boundary proceeds southwestward and along a line exactly 12 statute miles inland from any point on the...
