Sample records for simple path merging

  1. Universal photonic quantum gates assisted by ancilla diamond nitrogen-vacancy centers coupled to resonators

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Long, Gui Lu

    2015-03-01

    We propose two compact, economic, and scalable schemes for implementing optical controlled-phase-flip and controlled-controlled-phase-flip gates by using the input-output process of a single-sided cavity strongly coupled to a single nitrogen-vacancy-center defect in diamond. Additional photonic qubits, necessary for procedures based on the parity-check measurement or controlled-path and merging gates, are not employed in our schemes. In the controlled-path gate, the paths of the target photon are conditionally controlled by the control photon, and these two paths can be merged back into one by using a merging gate. Only one half-wave plate is employed in our scheme for the controlled-phase-flip gate. Compared with the conventional synthesis procedures for constructing a controlled-controlled-phase-flip gate, the cost of which is two controlled-path gates and two merging gates, or six controlled-not gates, our scheme is more compact and simpler. Our schemes could be performed with high fidelity and high efficiency using currently achievable experimental techniques.

  2. Quadrilateral finite element mesh coarsening

    DOEpatents

    Staten, Matthew L; Dewey, Mark W; Benzley, Steven E

    2012-10-16

    Techniques for coarsening a quadrilateral mesh are described. These techniques include identifying a coarsening region within the quadrilateral mesh to be coarsened. Quadrilateral elements along a path through the coarsening region are removed. Node pairs along opposite sides of the path are identified. The node pairs along the path are then merged to collapse the path.
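    A minimal sketch (assuming a quad-as-tuple-of-node-ids representation; the function and argument names are hypothetical, not taken from the patent) of the collapse step described above: elements along the path are dropped, and node pairs straddling the path are merged by remapping one node id of each pair onto the other.

    ```python
    def collapse_path(quads, path_quads, node_pairs):
        """quads: list of 4-tuples of node ids; path_quads: set of indices of the
        quadrilaterals along the path to remove; node_pairs: list of (keep, drop)
        node-id pairs identified along opposite sides of the path."""
        remap = {drop: keep for keep, drop in node_pairs}
        coarsened = []
        for i, quad in enumerate(quads):
            if i in path_quads:
                continue  # remove the elements along the path
            # merge node pairs so surviving elements reconnect across the collapsed path
            coarsened.append(tuple(remap.get(n, n) for n in quad))
        return coarsened
    ```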

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Animesh, E-mail: animesh@zedat.fu-berlin.de; Delle Site, Luigi, E-mail: dellesite@fu-berlin.de

    Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. The above-mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitations is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at the path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement.

  4. Surface Hold Advisor Using Critical Sections

    NASA Technical Reports Server (NTRS)

    Law, Caleb Hoi Kei (Inventor); Hsiao, Thomas Kun-Lung (Inventor); Mittler, Nathan C. (Inventor); Couluris, George J. (Inventor)

    2013-01-01

    The Surface Hold Advisor Using Critical Sections is a system and method for providing hold advisories to surface controllers to prevent gridlock and resolve crossing and merging conflicts among vehicles traversing a vertex-edge graph representing a surface traffic network on an airport surface. The Advisor performs pair-wise comparisons of current position and projected path of each vehicle with other surface vehicles to detect conflicts, determine critical sections, and provide hold advisories to traffic controllers recommending vehicles stop at entry points to protected zones around identified critical sections. A critical section defines a segment of the vertex-edge graph where vehicles are in crossing or merging or opposite direction gridlock contention. The Advisor detects critical sections without reference to scheduled, projected or required times along assigned vehicle paths, and generates hold advisories to prevent conflicts without requiring network path direction-of-movement rules and without requiring rerouting, rescheduling or other network optimization solutions.

  5. Modeling of the merging of two colliding field reversed configuration plasmoids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Guanqiong; Wang, Xiaoguang; Li, Lulu

    2016-06-15

    The field reversed configuration (FRC) is one of the candidate plasma targets for magneto-inertial fusion, and a high temperature FRC can be formed by using collision-merging technology. Although the merging process and mechanism of the FRC are quite complicated, it is feasible to build a simple model to investigate the macroscopic equilibrium parameters including the density, the temperature and the separatrix volume, which may play an important role in the collision-merging process of the FRC. The estimates based on our simple model agree with the simulation results of a two-dimensional magneto-hydrodynamic code (MFP-2D), which has been developed by our group over the last few years, and these results qualitatively fit the results of the C-2 experiments by Tri Alpha Energy. On the other hand, the simple model can be used to investigate how to increase the density of the merged FRC. It is found that the amplification of the density depends on the poloidal flux-increase factor, and that the temperature increases with the translation speed of the two plasmoids.

  6. The Challenge of Wider Library Units: Merging Libraries and Developing Taxing Districts May Be a Way to Stabilize Funding, but the Path Is Not Always Clear

    ERIC Educational Resources Information Center

    Hennen, Thomas J., Jr.

    2004-01-01

    Last year, Louisville, KY, grew overnight from the country's 64th largest city to the 16th largest, the result of the merger of the city with Jefferson County. Pittsburgh and Buffalo, NY, are among other communities discussing city-county mergers. Many smaller communities are considering merging services, such as police and fire, or consolidating…

  7. Monitoring weaving sections

    DOT National Transportation Integrated Search

    2001-10-01

    Traffic control in highway weaving sections is complicated since vehicles are crossing paths, changing lanes, or merging with through traffic as they enter or exit an expressway. There are two types of weaving sections: (a) single weaving sections wh...

  8. Merging domino and redox chemistry: stereoselective access to di- and trisubstituted β,γ-unsaturated acids and esters.

    PubMed

    Tejedor, David; Méndez-Abt, Gabriela; Cotos, Leandro; García-Tellado, Fernando

    2012-03-19

    Merging is the game! The coupling of a domino reaction and an internal neutral redox reaction constitutes an excellent manifold for the stereoselective synthesis of di- and trisubstituted olefins featuring a malonate unit, an ester, or a free carboxylic acid as substituents at the allylic position (see scheme; MW=microwave). The reaction utilizes simple starting materials (propargyl vinyl ethers), methanol or water as solvents, and a very simple and bench-friendly protocol. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Using State Merging and State Pruning to Address the Path Explosion Problem Faced by Symbolic Execution

    DTIC Science & Technology

    2014-06-19

    urgent and compelling. Recent efforts in this area automate program analysis techniques using model checking and symbolic execution [2, 5–7]. These...bounded model checking tool for x86 binary programs developed at the Air Force Institute of Technology (AFIT). Jiseki creates a bit-vector logic model based...assume there are n different paths through the function foo . The program could potentially call the function foo a bound number of times, resulting in n

  10. Do Poor Students Benefit from China's Merger Program? Transfer Path and Educational Performance

    ERIC Educational Resources Information Center

    Chen, Xinxin; Yi, Hongmei; Zhang, Linxiu; Mo, Di; Chu, James; Rozelle, Scott

    2014-01-01

    Aiming to provide better education facilities and improve the educational attainment of poor rural students, China's government has been merging remote rural primary schools into centralized village, town, or county schools since the late 1990s. To accompany the policy, boarding facilities have been constructed that allow (mandate) primary…

  11. Path integral learning of multidimensional movement trajectories

    NASA Astrophysics Data System (ADS)

    André, João; Santos, Cristina; Costa, Lino

    2013-10-01

    This paper explores the use of Path Integral Methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm in multidimensional movement parametrized policy learning. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods in the learning of optimal policy parameters, according to different cost functions that inherently encode movement objectives. Additionally we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and iterative final cost, which leads to an increased interest in its application to more complex robotic problems.
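    As background for the PIBB variant mentioned above, a minimal sketch of one black-box, reward-weighted parameter update (exponentiated-cost weights over sampled perturbations) is shown below. It omits the DMP rollout machinery and the covariance adaptation of PI2-CMA; all names and constants are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def pibb_update(theta, cost_fn, n_rollouts=20, sigma=0.1, h=10.0):
        """One PI^BB-style update: sample perturbed parameter vectors, weight them
        by exponentiated (negative, normalized) rollout cost, and average."""
        eps = np.random.randn(n_rollouts, theta.size) * sigma
        costs = np.array([cost_fn(theta + e) for e in eps])
        c = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
        weights = np.exp(-h * c)
        weights /= weights.sum()
        return theta + weights @ eps  # reward-weighted average of perturbations
    ```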

  12. Path-Following Solutions Of Nonlinear Equations

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.; Walters, Robert W.

    1989-01-01

    Report describes some path-following techniques for solution of nonlinear equations and compares with other methods. Use of multipurpose techniques applicable at more than one stage of path-following computation results in system relatively simple to understand, program, and use. Comparison of techniques with method of parametric differentiation (MPD) reveals definite advantages for path-following methods. Emphasis in investigation on multiuse techniques being applied at more than one stage of path-following computation. Incorporation of multipurpose techniques results in concise computer code relatively simple to use.

  13. Coal workers pneumoconiosis - stage II (image)

    MedlinePlus

    ... borders, representing coalescence (merging together) of previously distinct light areas. Diseases which may explain these x-ray findings include simple coal workers pneumoconiosis (CWP) - stage ...

  14. Binary partition tree analysis based on region evolution and its application to tree simplification.

    PubMed

    Lu, Huihai; Woods, John C; Ghanbari, Mohammed

    2007-04-01

    Pyramid image representations via tree structures are recognized methods for region-based image analysis. Binary partition trees can be applied which document the merging process, with small details found at the bottom levels and larger ones close to the root. Hindsight of the merging process is stored within the tree structure and provides the change histories of an image property from the leaf to the root node. In this work, the change histories are modelled by evolvement functions and their second order statistics are analyzed by using a knee function. Knee values show the reluctance of each merge. We have systematically formulated these findings to provide a novel framework for binary partition tree analysis, where tree simplification is demonstrated. Based on an evolvement function, for each upward path in a tree, the tree node associated with the first reluctant merge is considered a pruning candidate. The result is a simplified version providing a reduced solution space and still complying with the definition of a binary tree. The experiments show that image details are preserved whilst the number of nodes is dramatically reduced. An image filtering tool also results which preserves object boundaries and has applications in segmentation.

  15. A method for automatic grain segmentation of multi-angle cross-polarized microscopic images of sandstone

    NASA Astrophysics Data System (ADS)

    Jiang, Feng; Gu, Qing; Hao, Huizhen; Li, Na; Wang, Bingqian; Hu, Xiumian

    2018-06-01

    Automatic grain segmentation of sandstone partitions mineral grains into separate regions in the thin section, which is the first step for computer-aided mineral identification and sandstone classification. The sandstone microscopic images contain a large number of mixed mineral grains where differences among adjacent grains, i.e., quartz, feldspar and lithic grains, are usually ambiguous, which makes grain segmentation difficult. In this paper, we take advantage of multi-angle cross-polarized microscopic images and propose a method for grain segmentation with high accuracy. The method consists of two stages. In the first stage, we enhance the SLIC (Simple Linear Iterative Clustering) algorithm, named MSLIC, to make use of multi-angle images and segment the images into boundary-adherent superpixels. In the second stage, we propose a region merging technique which combines coarse merging and fine merging algorithms. The coarse merging merges adjacent superpixels with less evident boundaries, and the fine merging merges the ambiguous superpixels using spatially enhanced fuzzy clustering. Experiments are designed on 9 sets of multi-angle cross-polarized images taken from the three major types of sandstones. The results demonstrate both the effectiveness and potential of the proposed method, compared to the available segmentation methods.

  16. Hubble Views Two Galaxies Merging

    NASA Image and Video Library

    2017-12-08

    This image, taken with the Wide Field Planetary Camera 2 on board the NASA/ESA Hubble Space Telescope, shows the galaxy NGC 6052, located around 230 million light-years away in the constellation of Hercules. It would be reasonable to think of this as a single abnormal galaxy, and it was originally classified as such. However, it is in fact a “new” galaxy in the process of forming. Two separate galaxies have been gradually drawn together, attracted by gravity, and have collided. We now see them merging into a single structure. As the merging process continues, individual stars are thrown out of their original orbits and placed onto entirely new paths, some very distant from the region of the collision itself. Since the stars produce the light we see, the “galaxy” now appears to have a highly chaotic shape. Eventually, this new galaxy will settle down into a stable shape, which may not resemble either of the two original galaxies. Image credit: ESA/Hubble & NASA, Acknowledgement: Judy Schmidt

  17. Navier-Stokes structure of merged layer flow on the spherical nose of a space vehicle

    NASA Technical Reports Server (NTRS)

    Jain, A. C.; Woods, G. H.

    1988-01-01

    Hypersonic merged layer flow on the forepart of a spherical surface of a space vehicle has been investigated on the basis of the full steady-state Navier-Stokes equations using slip and temperature jump boundary conditions at the surface and free-stream conditions far from the surface. The shockwave-like structure was determined as part of the computations. Using an equivalent body concept, computations were carried out under conditions that the Aeroassist Flight Experiment (AFE) Vehicle would encounter at 15 and 20 seconds in its flight path. Emphasis was placed on understanding the basic nature of the flow structure under low density conditions. Particular attention was paid to the understanding of the structure of the outer shockwave-like region as the fluid expands around the sphere. Plots were drawn for flow profiles and surface characteristics to understand the role of dissipation processes in the merged layer of the spherical nose of the vehicle.

  18. Formal language constrained path problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
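    The regular-language case admits the textbook product construction: run Dijkstra over pairs (graph vertex, DFA state), advancing the automaton with each edge label so only label sequences in the language are explored. The sketch below illustrates that construction under assumed data structures; it is not necessarily the authors' improved-space-and-time algorithm.

    ```python
    import heapq

    def regular_constrained_shortest_path(graph, dfa, source, target):
        """graph: {u: [(v, label, weight), ...]}; dfa: (start_state, accept_states,
        delta) with delta[(state, label)] -> state. Returns the length of the
        shortest source-target path whose label sequence is accepted by the DFA."""
        start, accept, delta = dfa
        dist = {(source, start): 0.0}
        pq = [(0.0, source, start)]
        while pq:
            d, u, q = heapq.heappop(pq)
            if d > dist.get((u, q), float("inf")):
                continue
            if u == target and q in accept:
                return d
            for v, label, w in graph.get(u, []):
                q2 = delta.get((q, label))
                if q2 is None:
                    continue  # label not allowed from this automaton state
                if d + w < dist.get((v, q2), float("inf")):
                    dist[(v, q2)] = d + w
                    heapq.heappush(pq, (d + w, v, q2))
        return None  # no path whose labels form a word of the language
    ```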

  19. Cosmic ray modulation and merged interaction regions

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Goldstein, M. L.; Mcdonald, F. B.

    1985-01-01

    Beyond several AU, interactions among shocks and streams give rise to merged interaction regions in which the magnetic field is turbulent. The integral intensity of >75 MeV/nuc cosmic rays at Voyager is generally observed to decrease when a merged interaction region moves past the spacecraft and to increase during the passage of a rarefaction region. When the separation between interaction regions is relatively large, the cosmic ray intensity tends to increase on a scale of a few months. This was the case at Voyager 1 from July 1, 1983 to May 1, 1984, when the spacecraft moved from 16.7 to 19.6 AU. Changes in cosmic ray intensity were related to the magnetic field strength in a simple way. It is estimated that the diffusion coefficient in merged interaction regions at this distance is ~0.6 × 10²² cm²/s.

  20. Redundant via insertion in self-aligned double patterning

    NASA Astrophysics Data System (ADS)

    Song, Youngsoo; Jung, Jinwook; Shin, Youngsoo

    2017-03-01

    Redundant via (RV) insertion is employed to enhance via manufacturability, and has been extensively studied. The self-aligned double patterning (SADP) process brings a new challenge to RV insertion, since the newly created cut for each RV has to be taken care of. Specifically, when a cut for an RV, which we simply call an RV-cut, is formed, a cut conflict may occur with nearby line-end cuts, which results in a decrease in RV candidates. We introduce cut merging to reduce the number of cut conflicts; merged cuts are processed with a stitch using the litho-etch-litho-etch (LELE) multi-patterning method. In this paper, we propose a new RV insertion method with cut merging in SADP for the first time. In our experiments, a simple RV insertion yields 55.3% of vias receiving RVs; our proposed method, which considers cut merging, increases that number to 69.6% on average across the test circuits.

  1. P-Finder: Reconstruction of Signaling Networks from Protein-Protein Interactions and GO Annotations.

    PubMed

    Young-Rae Cho; Yanan Xin; Speegle, Greg

    2015-01-01

    Because most complex genetic diseases are caused by defects of cell signaling, illuminating a signaling cascade is essential for understanding their mechanisms. We present three novel computational algorithms to reconstruct signaling networks between a starting protein and an ending protein using genome-wide protein-protein interaction (PPI) networks and gene ontology (GO) annotation data. A signaling network is represented as a directed acyclic graph in a merged form of multiple linear pathways. An advanced semantic similarity metric is applied for weighting PPIs as the preprocessing of all three methods. The first algorithm repeatedly extends the list of nodes based on path frequency towards an ending protein. The second algorithm repeatedly appends edges based on the occurrence of network motifs which indicate the link patterns more frequently appearing in a PPI network than in a random graph. The last algorithm uses the information propagation technique which iteratively updates edge orientations based on the path strength and merges the selected directed edges. Our experimental results demonstrate that the proposed algorithms achieve higher accuracy than previous methods when they are tested on well-studied pathways of S. cerevisiae. Furthermore, we introduce an interactive web application tool, called P-Finder, to visualize reconstructed signaling networks.

  2. Multi-chord fiber-coupled interferometer with a long coherence length laser

    NASA Astrophysics Data System (ADS)

    Merritt, Elizabeth C.; Lynn, Alan G.; Gilmore, Mark A.; Hsu, Scott C.

    2012-03-01

    This paper describes a 561 nm laser heterodyne interferometer that provides time-resolved measurements of line-integrated plasma electron density within the range of 10¹⁵–10¹⁸ cm⁻². Such plasmas are produced by railguns on the plasma liner experiment, which aims to produce μs-, cm-, and Mbar-scale plasmas through the merging of 30 plasma jets in a spherically convergent geometry. A long coherence length, 320 mW laser allows for a strong, sub-fringe phase-shift signal without the need for closely matched probe and reference path lengths. Thus, only one reference path is required for all eight probe paths, and an individual probe chord can be altered without altering the reference or other probe path lengths. Fiber-optic decoupling of the probe chord optics on the vacuum chamber from the rest of the system allows the probe paths to be easily altered to focus on different spatial regions of the plasma. We demonstrate that sub-fringe resolution capability allows the interferometer to operate down to line-integrated densities of the order of 5 × 10¹⁵ cm⁻².
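    For context, the measured phase shift of such an interferometer relates to the line-integrated electron density through the standard textbook relation below (valid when the laser frequency far exceeds the plasma frequency); this is background, not a formula quoted from the paper.

    ```latex
    \Delta\phi \;=\; r_e\,\lambda \int n_e\,\mathrm{d}l ,
    \qquad
    r_e \;=\; \frac{e^2}{4\pi\varepsilon_0 m_e c^2} \;\approx\; 2.82\times10^{-15}\ \mathrm{m},
    ```

    where λ is the probe wavelength, so sub-fringe phase resolution translates directly into the quoted line-integrated density sensitivity.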

  3. A fast algorithm for identifying friends-of-friends halos

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Modi, C.

    2017-07-01

    We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary expensive neighbor queries, uses minimal memory overhead, and rejects slowdown in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast 2-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure, and use a splay operation to reduce the average cost of identifying the root of a feature from O(log L) to O(1) (L is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from O(L log L) to O(L), reducing the number of operations per splay by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high density peaks from O(δ²) to O(δ). We show that for a cosmological data set the algorithm eliminates more than half of the merge operations for typically used linking lengths b ∼ 0.2 (relative to the mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
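    The merge and root-identification steps of a friends-of-friends pass are essentially a disjoint-set (union-find) computation. The generic sketch below uses path compression, which plays a role analogous to the splay operation described above in keeping root lookups cheap; it is an illustration, not the paper's dual KD-tree implementation.

    ```python
    def friends_of_friends(pairs, n):
        """Group n particles into FoF features, given an enumeration of particle
        index pairs closer than the linking length."""
        parent = list(range(n))

        def find(i):
            root = i
            while parent[root] != root:
                root = parent[root]
            while parent[i] != root:          # path compression: flatten the tree
                parent[i], i = root, parent[i]
            return root

        for a, b in pairs:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra               # merge the two features
        return [find(i) for i in range(n)]    # feature label per particle
    ```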

  4. Automatic tracking of dynamical evolutions of oceanic mesoscale eddies with satellite observation data

    NASA Astrophysics Data System (ADS)

    Sun, Liang; Li, Qiu-Yang

    2017-04-01

    Oceanic mesoscale eddies play a major role in the ocean climate system. To analyse the spatiotemporal dynamics of oceanic mesoscale eddies, the Genealogical Evolution Model (GEM) based on satellite data is developed, which is an efficient logical model used to track the dynamic evolution of mesoscale eddies in the ocean. It can distinguish different dynamic processes (e.g., merging and splitting) within a dynamic evolution pattern, which is difficult to accomplish using other tracking methods. To this end, a mononuclear eddy detection method was first developed with simple segmentation strategies, e.g., the watershed algorithm. The algorithm is very fast, searching along the steepest descent path. Second, the GEM uses a two-dimensional similarity vector (i.e., a pair of ratios of the overlap area between two eddies to the area of each eddy) rather than a scalar to measure the similarity between eddies, which effectively solves the "missing eddy" problem (an eddy temporarily lost in tracking). Third, for tracking when an eddy splits, GEM uses both the "parent" (the original eddy) and the "child" (the eddy split from the parent), and the dynamic processes are described as births and deaths of different generations. Additionally, a new look-ahead approach with selection rules effectively simplifies computation and recording. All of the computational steps are linear and do not include iteration. Given the pixel number of the target region L, the maximum number of eddies M, the number N of look-ahead time steps, and the total number of time steps T, the total computer time is O(LM(N+1)T). The tracking of each eddy is very smooth because we require that the snapshots of each eddy on adjacent days overlap one another. Although eddy splitting and merging are ubiquitous in the ocean, they have different geographic distributions in the Northern Pacific Ocean. Both the merging and splitting rates of the eddies are high, especially at the western boundary, in currents and in "eddy deserts". GEM is useful not only for satellite-based observational data but also for numerical simulation outputs. It is potentially useful for studying dynamic processes in other related fields, e.g., the dynamics of cyclones in meteorology.
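    A minimal sketch of the two-dimensional similarity vector described above, assuming each eddy snapshot is represented by the set of pixels it covers (the representation and names are illustrative):

    ```python
    def similarity_vector(eddy_a, eddy_b):
        """Return the pair of overlap ratios (overlap/area_a, overlap/area_b) used
        as the 2-D similarity measure between two eddy snapshots."""
        overlap = len(eddy_a & eddy_b)
        return overlap / len(eddy_a), overlap / len(eddy_b)
    ```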

  5. Sensor-Oriented Path Planning for Multiregion Surveillance with a Single Lightweight UAV SAR

    PubMed Central

    Li, Jincheng; Chen, Jie; Wang, Pengbo; Li, Chunsheng

    2018-01-01

    In the surveillance of interested regions by unmanned aerial vehicle (UAV), system performance relies greatly on the motion control strategy of the UAV and the operation characteristics of the onboard sensors. This paper investigates the 2D path planning problem for the lightweight UAV synthetic aperture radar (SAR) system in an environment of multiple regions of interest (ROIs), the sizes of which are comparable to the radar swath width. Taking into account the special requirements of the SAR system on the motion of the platform, we model path planning for UAV SAR as a constrained multiobjective optimization problem (MOP). Based on the fact that the UAV route can be designed in the map image, an image-based path planner is proposed in this paper. First, the neighboring ROIs are merged by the morphological operation. Then, the parts of routes for data collection of the ROIs can be located according to the geometric features of the ROIs and the observation geometry of UAV SAR. Lastly, the route segments for ROIs surveillance are connected by a path planning algorithm named the sampling-based sparse A* search (SSAS) algorithm. Simulation experiments in real scenarios demonstrate that the proposed sensor-oriented path planner can improve the reconnaissance performance of lightweight UAV SAR greatly compared with the conventional zigzag path planner. PMID:29439447

  6. Sensor-Oriented Path Planning for Multiregion Surveillance with a Single Lightweight UAV SAR.

    PubMed

    Li, Jincheng; Chen, Jie; Wang, Pengbo; Li, Chunsheng

    2018-02-11

    In the surveillance of interested regions by unmanned aerial vehicle (UAV), system performance relies greatly on the motion control strategy of the UAV and the operation characteristics of the onboard sensors. This paper investigates the 2D path planning problem for the lightweight UAV synthetic aperture radar (SAR) system in an environment of multiple regions of interest (ROIs), the sizes of which are comparable to the radar swath width. Taking into account the special requirements of the SAR system on the motion of the platform, we model path planning for UAV SAR as a constrained multiobjective optimization problem (MOP). Based on the fact that the UAV route can be designed in the map image, an image-based path planner is proposed in this paper. First, the neighboring ROIs are merged by the morphological operation. Then, the parts of routes for data collection of the ROIs can be located according to the geometric features of the ROIs and the observation geometry of UAV SAR. Lastly, the route segments for ROIs surveillance are connected by a path planning algorithm named the sampling-based sparse A* search (SSAS) algorithm. Simulation experiments in real scenarios demonstrate that the proposed sensor-oriented path planner can improve the reconnaissance performance of lightweight UAV SAR greatly compared with the conventional zigzag path planner.

  7. End-to-end simulation of bunch merging for a muon collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Yu; Stratakis, Diktys; Hanson, Gail G.

    2015-05-03

    Muon accelerator beams are commonly produced indirectly through pion decay by interaction of a charged particle beam with a target. Efficient muon capture requires the muons to be first phase-rotated by rf cavities into a train of 21 bunches with much reduced energy spread. Since luminosity is proportional to the square of the number of muons per bunch, it is crucial for a Muon Collider to use relatively few bunches with many muons per bunch. In this paper we will describe a bunch merging scheme that should achieve this goal. We present for the first time a complete end-to-end simulation of a 6D bunch merger for a Muon Collider. The 21 bunches arising from the phase-rotator, after some initial cooling, are merged in longitudinal phase space into seven bunches, which then go through seven paths with different lengths and reach the final collecting "funnel" at the same time. The final single bunch has a transverse and a longitudinal emittance that matches well with the subsequent 6D rectilinear cooling scheme.

  8. Laboratory plasma physics experiments using merging supersonic plasma jets

    DOE PAGES

    Hsu, S. C.; Moser, A. L.; Merritt, E. C.; ...

    2015-04-01

    We describe a laboratory plasma physics experiment at Los Alamos National Laboratory that uses two merging supersonic plasma jets formed and launched by pulsed-power-driven railguns. The jets can be formed using any atomic species or mixture available in a compressed-gas bottle and have the following nominal initial parameters at the railgun nozzle exit: n_e ≈ n_i ~ 10¹⁶ cm⁻³, T_e ≈ T_i ≈ 1.4 eV, V_jet ≈ 30–100 km/s, mean charge Z̄ ≈ 1, sonic Mach number M_s ≡ V_jet/C_s > 10, jet diameter = 5 cm, and jet length ≈ 20 cm. Experiments to date have focused on the study of merging-jet dynamics and the shocks that form as a result of the interaction, in both collisional and collisionless regimes with respect to the inter-jet classical ion mean free path, and with and without an applied magnetic field. However, many other studies are also possible, as discussed in this paper.

  9. Laboratory plasma physics experiments using merging supersonic plasma jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, S. C.; Moser, A. L.; Merritt, E. C.

    We describe a laboratory plasma physics experiment at Los Alamos National Laboratory that uses two merging supersonic plasma jets formed and launched by pulsed-power-driven railguns. The jets can be formed using any atomic species or mixture available in a compressed-gas bottle and have the following nominal initial parameters at the railgun nozzle exit: n_e ≈ n_i ~ 10¹⁶ cm⁻³, T_e ≈ T_i ≈ 1.4 eV, V_jet ≈ 30–100 km/s, mean charge Z̄ ≈ 1, sonic Mach number M_s ≡ V_jet/C_s > 10, jet diameter = 5 cm, and jet length ≈ 20 cm. Experiments to date have focused on the study of merging-jet dynamics and the shocks that form as a result of the interaction, in both collisional and collisionless regimes with respect to the inter-jet classical ion mean free path, and with and without an applied magnetic field. However, many other studies are also possible, as discussed in this paper.

  10. Study on traffic characteristics for a typical expressway on-ramp bottleneck considering various merging behaviors

    NASA Astrophysics Data System (ADS)

    Sun, Jie; Li, Zhipeng; Sun, Jian

    2015-12-01

    Recurring bottlenecks at freeways/expressways are considered the main cause of traffic congestion in urban traffic systems, while on-ramp bottlenecks are the most significant sites that may result in congestion. In this paper, the traffic bottleneck characteristics of a simple and typical expressway on-ramp are investigated by means of simulation modeling under open boundary conditions. In simulations, the running behavior of each vehicle is described by a car-following model with a calibrated optimal velocity function, and lane-changing actions at the merging section are modeled by a novel set of rules. We numerically derive the traffic volume of the on-ramp bottleneck under different upstream arrival rates of mainline and ramp flows. It is found that vehicles from the ramp strongly affect the passage of mainline vehicles and that the merging ratio changes as the ramp inflow increases, once the arrival rate of the mainline flow exceeds a critical value. In addition, we clarify the dependence of the merging ratio of the on-ramp bottleneck on the probability of lane changing and on the length of the merging section, and some corresponding intelligent control strategies are proposed for actual traffic applications.
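    As a rough illustration of the car-following component referred to above, a Bando-type optimal-velocity rule has the form sketched below; the optimal velocity function actually calibrated in the paper is not reproduced here, so the functional form and constants are placeholder assumptions.

    ```python
    import numpy as np

    def ov_acceleration(v, headway, a=1.0, v_max=30.0, hc=25.0):
        """Optimal-velocity car-following acceleration: relax the vehicle speed v
        toward an optimal velocity determined by the headway to the leader."""
        v_opt = 0.5 * v_max * (np.tanh(headway - hc) + np.tanh(hc))
        return a * (v_opt - v)
    ```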

  11. A Spatial Cognitive Map and a Human-Like Memory Model Dedicated to Pedestrian Navigation in Virtual Urban Environments

    NASA Astrophysics Data System (ADS)

    Thomas, Romain; Donikian, Stéphane

    Many articles dealing with agent navigation in an urban environment involve the use of various heuristics. Among them, one is prevalent: the search for the shortest path between two points. This strategy impairs the realism of the resulting behaviour. Indeed, psychological studies state that such navigation behaviour is conditioned by the knowledge the subject has of its environment. Furthermore, the path a city dweller follows may be influenced by many factors, such as daily habits or the path's simplicity in terms of a minimum number of direction changes. It appeared interesting to us to investigate how to mimic human navigation behaviour with an autonomous agent. The solution we propose relies on an architecture based on a generic model of an informed environment and a spatial cognitive map model merged with a human-like memory model, representing the temporal knowledge of the environment that the agent gained along its navigation experiences.

  12. Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3

    DTIC Science & Technology

    2015-12-01

    through visiting the inferred automata o Fuzzing of an implementation by generating altered message formats We tested with 3 versions of Netzob. First...relationships. Afterwards, we used the Automata module to generate state machines using different functions: “generateChainedStateAutomata...The “generatePTAAutomata” takes as input several communication sessions and then identifies common paths and merges these into a single automata . The

  13. Segmentation of remotely sensed data using parallel region growing

    NASA Technical Reports Server (NTRS)

    Tilton, J. C.; Cox, S. C.

    1983-01-01

    The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
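    A toy version of the similarity criterion described above (region mean and variance), with an illustrative merge test and threshold rather than the exact statistic used in the paper:

    ```python
    def should_merge(region_a, region_b, threshold=1.0):
        """Merge two spatially adjacent regions if their means are close relative
        to the pooled standard deviation. Each region is a dict with 'mean',
        'var' and 'n' (pixel count); all names are illustrative."""
        n = region_a['n'] + region_b['n']
        pooled_var = (region_a['n'] * region_a['var'] + region_b['n'] * region_b['var']) / n
        return abs(region_a['mean'] - region_b['mean']) <= threshold * (pooled_var ** 0.5 + 1e-12)
    ```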

  14. Efficient Merge and Insert Operations for Binary Heaps and Trees

    NASA Technical Reports Server (NTRS)

    Kuszmaul, Christopher Lee; Woo, Alex C. (Technical Monitor)

    2000-01-01

    Binary heaps and binary search trees merge efficiently. We introduce a new amortized analysis that allows us to prove the cost of merging either binary heaps or balanced binary trees is O(1), in the amortized sense. The standard set of other operations (create, insert, delete, extract minimum, in the case of binary heaps and balanced binary trees, as well as a search operation for balanced binary trees) remains with a cost of O(log n). For binary heaps implemented as arrays, we show a new merge algorithm with a single-operation cost for merging two heaps, a and b, of O(|a| + min(log|b| log log|b|, log|a| log|b|)). This is an improvement over O(|a| + log|a| log|b|). The cost of the new merge is so low that it can be used in a new structure, which we call shadow heaps, to implement the insert operation to a tunable efficiency. Shadow heaps support the insert operation for simple priority queues in an amortized time of O(f(n)) and other operations in time O((log n log log n)/f(n)), where 1 ≤ f(n) ≤ log log n. More generally, the results here show that any data structure with operations that change its size by at most one, with the exception of a merge (aka meld) operation, can efficiently amortize the cost of the merge under conditions that are true for most implementations of binary heaps and search trees.
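    For contrast with the amortized bounds quoted above, the two baseline strategies for merging array-based binary heaps are a full rebuild, costing O(|a| + |b|), and pushing the smaller heap's elements into the larger, costing O(min(|a|, |b|) log(|a| + |b|)). The sketch shows the latter; it is not the paper's merge algorithm or the shadow-heap structure.

    ```python
    import heapq

    def merge_heaps_baseline(a, b):
        """Merge two list-based binary min-heaps by inserting the smaller heap's
        elements into the larger one (mutates and returns the larger heap)."""
        if len(a) < len(b):
            a, b = b, a
        for x in b:
            heapq.heappush(a, x)
        return a
    ```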

  15. Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images.

    PubMed

    Watson, Jeffrey R; Gainer, Christian F; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G Michael; Anton, Rein; Romanowski, Marek

    2015-10-01

    Intraoperative applications of near-infrared (NIR) fluorescent contrast agents can be aided by instrumentation capable of merging the view of surgical field with that of NIR fluorescence. We demonstrate augmented microscopy, an intraoperative imaging technique in which bright-field (real) and electronically processed NIR fluorescence (synthetic) images are merged within the optical path of a stereomicroscope. Under luminance of 100,000 lx, representing typical illumination of the surgical field, the augmented microscope detects 189 nM concentration of indocyanine green and produces a composite of the real and synthetic images within the eyepiece of the microscope at 20 fps. Augmentation described here can be implemented as an add-on module to visualize NIR contrast agents, laser beams, or various types of electronic data within the surgical microscopes commonly used in neurosurgical, cerebrovascular, otolaryngological, and ophthalmic procedures.

  16. Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images

    NASA Astrophysics Data System (ADS)

    Watson, Jeffrey R.; Gainer, Christian F.; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G. Michael, Jr.; Anton, Rein; Romanowski, Marek

    2015-10-01

    Intraoperative applications of near-infrared (NIR) fluorescent contrast agents can be aided by instrumentation capable of merging the view of surgical field with that of NIR fluorescence. We demonstrate augmented microscopy, an intraoperative imaging technique in which bright-field (real) and electronically processed NIR fluorescence (synthetic) images are merged within the optical path of a stereomicroscope. Under luminance of 100,000 lx, representing typical illumination of the surgical field, the augmented microscope detects 189 nM concentration of indocyanine green and produces a composite of the real and synthetic images within the eyepiece of the microscope at 20 fps. Augmentation described here can be implemented as an add-on module to visualize NIR contrast agents, laser beams, or various types of electronic data within the surgical microscopes commonly used in neurosurgical, cerebrovascular, otolaryngological, and ophthalmic procedures.

  17. Coarse-grained representation of the quasi adiabatic propagator path integral for the treatment of non-Markovian long-time bath memory

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Fingerhut, Benjamin P.

    2017-06-01

    The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to substantial reduction of considered path segments for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biologically significant long-time bath memory on the order of a hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath assisted long range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex, where in all cases excellent agreement with numerically exact reference data is obtained.

  18. Merging photoredox and nickel catalysis: decarboxylative cross-coupling of carboxylic acids with vinyl halides.

    PubMed

    Noble, Adam; McCarver, Stefan J; MacMillan, David W C

    2015-01-21

    Decarboxylative cross-coupling of alkyl carboxylic acids with vinyl halides has been accomplished through the synergistic merger of photoredox and nickel catalysis. This new methodology has been successfully applied to a variety of α-oxy and α-amino acids, as well as simple hydrocarbon-substituted acids. Diverse vinyl iodides and bromides give rise to vinylation products in high efficiency under mild, operationally simple reaction conditions.

  19. Controlling lightwave in Riemann space by merging geometrical optics with transformation optics.

    PubMed

    Liu, Yichao; Sun, Fei; He, Sailing

    2018-01-11

    In geometrical optical design, we only need to choose a suitable combination of lenses, prisms, and mirrors to design an optical path. It is a simple and classic method for engineers. However, people cannot design fantastical optical devices such as invisibility cloaks, optical wormholes, etc. using geometrical optics. Transformation optics has paved the way for these complicated designs. However, controlling the propagation of light by transformation optics is not a direct design process like geometrical optics. In this study, a novel mixed method for optical design is proposed which has both the simplicity of classic geometrical optics and the flexibility of transformation optics. This mixed method overcomes the limitations of classic optical design; at the same time, it gives intuitive guidance for optical design by transformation optics. Three novel optical devices with fantastic functions have been designed using this mixed method, including asymmetrical transmissions, bidirectional focusing, and bidirectional cloaking. These optical devices cannot be implemented by classic optics alone and are also too complicated to be designed by pure transformation optics. Numerical simulations based on both the ray tracing method and the full-wave simulation method are carried out to verify the performance of these three optical devices.

  20. Verifying Air Force Weather Passive Satellite Derived Cloud Analysis Products

    NASA Astrophysics Data System (ADS)

    Nobis, T. E.

    2017-12-01

    Air Force Weather (AFW) has developed an hourly World-Wide Merged Cloud Analysis (WWMCA) using imager data from 16 geostationary and polar-orbiting satellites. The analysis product contains information on cloud fraction, height, type and various optical properties including optical depth and integrated water path. All of these products are derived using a suite of algorithms which rely exclusively on passively sensed short-, mid- and long-wave imager data. The system integrates satellites with a wide range of capabilities, from the relatively simple two-channel OLS imager to the 16-channel ABI/AHI, to create a seamless global analysis in real time. Over the last couple of years, AFW has started utilizing independent verification data from actively sensed cloud measurements to better understand the performance limitations of the WWMCA. Sources utilized include space-based lidars (CALIPSO, CATS) and radar (CloudSat) as well as ground-based lidars from the Department of Energy ARM sites and several European cloud radars. This work will present findings from our efforts to compare actively and passively sensed cloud information, including comparison techniques/limitations as well as the performance of the passively derived cloud information against the active measurements.

  1. PathNER: a tool for systematic identification of biological pathway mentions in the literature

    PubMed Central

    2013-01-01

    Background: Biological pathways are central to many biomedical studies and are frequently discussed in the literature. Several curated databases have been established to collate the knowledge of molecular processes constituting pathways. Yet, there has been little focus on enabling systematic detection of pathway mentions in the literature. Results: We developed a tool, named PathNER (Pathway Named Entity Recognition), for the systematic identification of pathway mentions in the literature. PathNER is based on soft dictionary matching and rules, with the dictionary generated from public pathway databases. The rules utilise general pathway-specific keywords, syntactic information and gene/protein mentions. Detection results from both components are merged. On a gold-standard corpus, PathNER achieved an F1-score of 84%. To illustrate its potential, we applied PathNER on a collection of articles related to Alzheimer's disease to identify associated pathways, highlighting cases that can complement an existing manually curated knowledgebase. Conclusions: In contrast to existing text-mining efforts that target the automatic reconstruction of pathway details from molecular interactions mentioned in the literature, PathNER focuses on identifying specific named pathway mentions. These mentions can be used to support large-scale curation and pathway-related systems biology applications, as demonstrated in the example of Alzheimer's disease. PathNER is implemented in Java and made freely available online at http://sourceforge.net/projects/pathner/. PMID:24555844
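    A toy illustration of soft dictionary matching in the spirit described above: score token windows of a sentence against pathway names with a string-similarity ratio and keep matches above a threshold. PathNER's actual matcher, rules and thresholds are not reproduced here; everything in the sketch is an assumption.

    ```python
    from difflib import SequenceMatcher

    def soft_dictionary_match(sentence, pathway_names, threshold=0.85):
        """Return (pathway name, token offset) pairs whose similarity to some
        token window of the sentence exceeds the threshold."""
        tokens = sentence.lower().split()
        hits = []
        for name in pathway_names:
            width = len(name.split())
            for i in range(len(tokens) - width + 1):
                window = " ".join(tokens[i:i + width])
                if SequenceMatcher(None, window, name.lower()).ratio() >= threshold:
                    hits.append((name, i))
                    break
        return hits
    ```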

  2. Bäcklund transformations for the Boussinesq equation and merging solitons

    NASA Astrophysics Data System (ADS)

    Rasin, Alexander G.; Schiff, Jeremy

    2017-08-01

    The Bäcklund transformation (BT) for the ‘good’ Boussinesq equation and its superposition principles are presented and applied. Unlike other standard integrable equations, the Boussinesq equation does not have a strictly algebraic superposition principle for 2 BTs, but it does for 3. We present this and discuss associated lattice systems. Applying the BT to the trivial solution generates both standard solitons and what we call ‘merging solitons’—solutions in which two solitary waves (with related speeds) merge into a single one. We use the superposition principles to generate a variety of interesting solutions, including superpositions of a merging soliton with 1 or 2 regular solitons, and solutions that develop a singularity in finite time which then disappears at a later finite time. We prove a Wronskian formula for the solutions obtained by applying a general sequence of BTs on the trivial solution. Finally, we obtain the standard conserved quantities of the Boussinesq equation from the BT, and show how the hierarchy of local symmetries follows in a simple manner from the superposition principle for 3 BTs.

  3. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K_II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and the differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  4. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE PAGES

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    2017-08-04

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K_II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and the differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  5. Merging Photoredox and Nickel Catalysis: Decarboxylative Cross-Coupling of Carboxylic Acids with Vinyl Halides

    PubMed Central

    2015-01-01

    Decarboxylative cross-coupling of alkyl carboxylic acids with vinyl halides has been accomplished through the synergistic merger of photoredox and nickel catalysis. This new methodology has been successfully applied to a variety of α-oxy and α-amino acids, as well as simple hydrocarbon-substituted acids. Diverse vinyl iodides and bromides give rise to vinylation products in high efficiency under mild, operationally simple reaction conditions. PMID:25521443

  6. Path Integration on the Upper Half-Plane

    NASA Astrophysics Data System (ADS)

    Kubo, R.

    1987-10-01

    Feynman's path integral is considered on the Poincaré upper half-plane. It is shown that the fundamental solution to the heat equation ∂f/∂t = Δ_H f can be expressed in terms of a path integral. A simple relation between the path integral and the Selberg trace formula is discussed briefly.
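    For reference, on the Poincaré upper half-plane (coordinates x, y with y > 0) the Laplace-Beltrami operator appearing in the heat equation above takes the standard form

    ```latex
    \Delta_{H} \;=\; y^{2}\!\left(\frac{\partial^{2}}{\partial x^{2}}
                     + \frac{\partial^{2}}{\partial y^{2}}\right),
    \qquad
    \frac{\partial f}{\partial t} \;=\; \Delta_{H}\, f .
    ```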

  7. Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images

    PubMed Central

    Watson, Jeffrey R.; Gainer, Christian F.; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G. Michael; Anton, Rein; Romanowski, Marek

    2015-01-01

    Intraoperative applications of near-infrared (NIR) fluorescent contrast agents can be aided by instrumentation capable of merging the view of surgical field with that of NIR fluorescence. We demonstrate augmented microscopy, an intraoperative imaging technique in which bright-field (real) and electronically processed NIR fluorescence (synthetic) images are merged within the optical path of a stereomicroscope. Under luminance of 100,000 lx, representing typical illumination of the surgical field, the augmented microscope detects 189 nM concentration of indocyanine green and produces a composite of the real and synthetic images within the eyepiece of the microscope at 20 fps. Augmentation described here can be implemented as an add-on module to visualize NIR contrast agents, laser beams, or various types of electronic data within the surgical microscopes commonly used in neurosurgical, cerebrovascular, otolaryngological, and ophthalmic procedures. PMID:26440760

  8. Multiple-path model of spectral reflectance of a dyed fabric.

    PubMed

    Rogers, Geoffrey; Dalloz, Nicolas; Fournel, Thierry; Hebert, Mathieu

    2017-05-01

    Experimental results are presented of the spectral reflectance of a dyed fabric as analyzed by a multiple-path model of reflection. The multiple-path model provides simple analytic expressions for reflection and transmission of turbid media by applying the Beer-Lambert law to each path through the medium and summing over all paths, each path weighted by its probability. The path-length probability is determined by a random-walk analysis. The experimental results presented here show excellent agreement with predictions made by the model.
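    Schematically, the multiple-path model sums a Beer-Lambert attenuation over the distribution of path lengths through the dyed layer; the notation below is generic rather than the paper's:

    ```latex
    T(\lambda) \;=\; \sum_{k} p_k\, e^{-\alpha(\lambda)\,\ell_k},
    \qquad \sum_{k} p_k \;=\; 1 ,
    ```

    where p_k is the random-walk probability of a path of length ℓ_k and α(λ) is the absorption coefficient of the dye.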

  9. Semiautomated Management Of Arriving Air Traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Nedell, William

    1992-01-01

    System of computers, graphical workstations, and computer programs developed for semiautomated management of approach and arrival of numerous aircraft at airport. System comprises three subsystems: traffic-management advisor, used for controlling traffic into terminal area; descent advisor generates information integrated into plan-view display of traffic on monitor; and final-approach-spacing tool used to merge traffic converging on final approach path while making sure aircraft are properly spaced. Not intended to restrict decisions of air-traffic controllers.

  10. Path integration: effect of curved path complexity and sensory system on blindfolded walking.

    PubMed

    Koutakis, Panagiotis; Mukherjee, Mukul; Vallabhajosula, Srikant; Blanke, Daniel J; Stergiou, Nicholas

    2013-02-01

    Path integration refers to the ability to integrate continuous information of the direction and distance traveled by the system relative to the origin. Previous studies have investigated path integration through blindfolded walking along simple paths such as straight line and triangles. However, limited knowledge exists regarding the role of path complexity in path integration. Moreover, little is known about how information from different sensory input systems (like vision and proprioception) contributes to accurate path integration. The purpose of the current study was to investigate how sensory information and curved path complexity affect path integration. Forty blindfolded participants had to accurately reproduce a curved path and return to the origin. They were divided into four groups that differed in the curved path, circle (simple) or figure-eight (complex), and received either visual (previously seen) or proprioceptive (previously guided) information about the path before they reproduced it. The dependent variables used were average trajectory error, walking speed, and distance traveled. The results indicated that (a) both groups that walked on a circular path and both groups that received visual information produced greater accuracy in reproducing the path. Moreover, the performance of the group that received proprioceptive information and later walked on a figure-eight path was less accurate than their corresponding circular group. The groups that had the visual information also walked faster compared to the group that had proprioceptive information. Results of the current study highlight the roles of different sensory inputs while performing blindfolded walking for path integration. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Sensory feedback in a bump attractor model of path integration.

    PubMed

    Poll, Daniel B; Nguyen, Khanh; Kilpatrick, Zachary P

    2016-04-01

    Mammalian spatial navigation systems utilize several different sensory information channels. This information is converted into a neural code that represents the animal's current position in space by engaging place cell, grid cell, and head direction cell networks. In particular, sensory landmark (allothetic) cues can be utilized in concert with an animal's knowledge of its own velocity (idiothetic) cues to generate a more accurate representation of position than path integration provides on its own (Battaglia et al. The Journal of Neuroscience 24(19):4541-4550 (2004)). We develop a computational model that merges path integration with feedback from external sensory cues that provide a reliable representation of spatial position along an annular track. Starting with a continuous bump attractor model, we explore the impact of synaptic spatial asymmetry and heterogeneity, which disrupt the position code of the path integration process. We use asymptotic analysis to reduce the bump attractor model to a single scalar equation whose potential represents the impact of asymmetry and heterogeneity. Such imperfections cause errors to build up when the network performs path integration, but these errors can be corrected by an external control signal representing the effects of sensory cues. We demonstrate that there is an optimal strength and decay rate of the control signal when cues appear either periodically or randomly. A similar analysis is performed when errors in path integration arise from dynamic noise fluctuations. Again, there is an optimal strength and decay of discrete control that minimizes the path integration error.
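
    The reduction described above, a scalar equation whose potential encodes asymmetry and heterogeneity and which is corrected by a decaying control signal, lends itself to a small simulation. The Python sketch below is an illustrative stand-in for that kind of reduced model rather than the paper's actual equations; every parameter value and the cue schedule are assumptions.

```python
import numpy as np

def simulate_position_error(T=200.0, dt=0.01, drift=0.05, noise=0.1,
                            cue_strength=0.5, cue_period=20.0, cue_decay=2.0):
    """Euler-Maruyama sketch in the spirit of the abstract: the path-integration
    error x drifts (asymmetry/heterogeneity) and diffuses (dynamic noise), while
    periodic sensory cues pull it back toward zero with an exponentially decaying
    control signal. All parameter values are illustrative, not the paper's."""
    rng = np.random.default_rng(0)
    n = int(T / dt)
    x = np.zeros(n)
    for i in range(1, n):
        t = i * dt
        time_since_cue = t % cue_period
        control = -cue_strength * np.exp(-time_since_cue / cue_decay) * x[i - 1]
        x[i] = (x[i - 1] + (drift + control) * dt
                + noise * np.sqrt(dt) * rng.standard_normal())
    return x

print(np.abs(simulate_position_error()).max())
```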

  12. TURBULENT COSMIC-RAY REACCELERATION AT RADIO RELICS AND HALOS IN CLUSTERS OF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujita, Yutaka; Takizawa, Motokazu; Yamazaki, Ryo

    Radio relics are synchrotron emission found on the periphery of galaxy clusters. From the position and the morphology, it is often believed that the relics are generated by cosmic-ray (CR) electrons accelerated at shocks through a diffusive shock acceleration (DSA) mechanism. However, some radio relics have harder spectra than the prediction of the standard DSA model. One example is observed in the cluster 1RXS J0603.3+4214, which is often called the “Toothbrush Cluster.” Interestingly, the position of the relic is shifted from that of a possible shock. In this study, we show that these discrepancies in the spectrum and the position can be solved if turbulent (re)acceleration is very effective behind the shock. This means that for some relics turbulent reacceleration may be the main mechanism to produce high-energy electrons, contrary to the common belief that it is the DSA. Moreover, we show that for efficient reacceleration, the effective mean free path of the electrons has to be much smaller than their Coulomb mean free path. We also study the merging cluster 1E 0657−56, or the “Bullet Cluster,” in which a radio relic has not been found at the position of the prominent shock ahead of the bullet. We indicate that a possible relic at the shock is obscured by the observed large radio halo that is generated by strong turbulence behind the shock. We propose a simple explanation of the morphological differences of radio emission among the Toothbrush, the Bullet, and the Sausage (CIZA J2242.8+5301) Clusters.

  13. Learning to merge: a new tool for interactive mapping

    NASA Astrophysics Data System (ADS)

    Porter, Reid B.; Lundquist, Sheng; Ruggiero, Christy

    2013-05-01

    The task of turning raw imagery into semantically meaningful maps and overlays is a key area of remote sensing activity. Image analysts, in applications ranging from environmental monitoring to intelligence, use imagery to generate and update maps of terrain, vegetation, road networks, buildings and other relevant features. Often these tasks can be cast as a pixel labeling problem, and several interactive pixel labeling tools have been developed. These tools exploit training data, which is generated by analysts using simple and intuitive paint-program annotation tools, in order to tailor the labeling algorithm for the particular dataset and task. In other cases, the task is best cast as a pixel segmentation problem. Interactive pixel segmentation tools have also been developed, but these tools typically do not learn from training data like the pixel labeling tools do. In this paper we investigate tools for interactive pixel segmentation that also learn from user input. The input has the form of segment merging (or grouping). Merging examples are 1) easily obtained from analysts using vector annotation tools, and 2) more challenging to exploit than traditional labels. We outline the key issues in developing these interactive merging tools, and describe their application to remote sensing.
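
    The abstract does not spell out how merge examples are applied, but the bookkeeping they require can be illustrated with a minimal union-find structure over segment identifiers: each analyst-supplied merge example joins two segments into one group. All names and the toy input below are hypothetical.

```python
class SegmentMerger:
    """Union-find over segment IDs; each analyst merge example joins two segments."""
    def __init__(self, n_segments):
        self.parent = list(range(n_segments))

    def find(self, s):
        # Path compression keeps look-ups near O(1) amortized.
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]
            s = self.parent[s]
        return s

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# Hypothetical analyst input: pairs of segments annotated as the same object
merger = SegmentMerger(n_segments=6)
for a, b in [(0, 1), (1, 2), (4, 5)]:
    merger.merge(a, b)
print([merger.find(s) for s in range(6)])  # grouped segment labels
```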

  14. Some path-following techniques for solution of nonlinear equations and comparison with parametric differentiation

    NASA Technical Reports Server (NTRS)

    Barger, R. L.; Walters, R. W.

    1986-01-01

    Some path-following techniques are described and compared with other methods. Use of multipurpose techniques that can be used at more than one stage of the path-following computation results in a system that is relatively simple to understand, program, and use. Comparison of path-following methods with the method of parametric differentiation reveals definite advantages for the path-following methods. The fact that parametric differentiation has found a broader range of applications indicates that path-following methods have been underutilized.
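
    As a concrete, if simplified, illustration of the path-following idea, the Python sketch below performs natural-parameter continuation on a scalar equation f(x, lambda) = 0: step the parameter, then correct with Newton iterations so the solution stays on the path. The test problem and tolerances are assumptions, and the paper's techniques are more general.

```python
import numpy as np

def continuation(f, dfdx, x0, lambdas):
    """Natural-parameter path following: for each parameter value, correct the
    previous solution with Newton's method so f(x, lam) = 0 stays satisfied.
    Illustrative only; not the methods compared in the paper."""
    x = float(x0)
    path = []
    for lam in lambdas:
        for _ in range(20):              # Newton corrector
            step = f(x, lam) / dfdx(x, lam)
            x -= step
            if abs(step) < 1e-12:
                break
        path.append((lam, x))
    return path

# Hypothetical test problem: x^3 - lam = 0, followed from lam = 1 to lam = 8
f = lambda x, lam: x**3 - lam
dfdx = lambda x, lam: 3 * x**2
print(continuation(f, dfdx, x0=1.0, lambdas=np.linspace(1.0, 8.0, 8))[-1])
```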

  15. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  16. Raman Lidar MERGE Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newsom, Rob; Goldsmith, John; Sivaraman, Chitra

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility Raman lidars (RLs) are semi-autonomous, land-based, laser remote sensing systems that provide height- and time-resolved measurements of water vapor mixing ratio, temperature, aerosol backscatter, extinction, and linear depolarization ratio from about 200 m to greater than 10 km AGL. These systems transmit at a wavelength of 355 nm with 300 mJ, ~5 ns pulses, and a pulse repetition frequency of 30 Hz. The receiver incorporates nine detection channels, including two water vapor channels at 408 nm, two nitrogen channels at 387 nm, three elastic channels, and two rotational Raman channels for temperature profiling at 354 and 353 nm. Figure 1 illustrates the layout of the ARM RL receiver system. Backscattered light from the atmosphere enters the telescope and is directed into the receiver system (i.e., aft optics). This signal is then split between a narrow-field-of-view radiometer (NFOV) path (blue) and a wide-field-of-view zenith radiometer (WFOV) path (red). The WFOV (2 mrad) path contains three channels (water vapor, nitrogen, and unpolarized elastic), and the NFOV (0.3 mrad) path contains six channels (water vapor, nitrogen, parallel and perpendicular elastic, and two rotational Raman). All nine detection channels use Electron Tubes 9954B photomultiplier tubes (PMTs). The signals from each of the nine PMTs are acquired using transient data recorders from Licel GbR (Berlin, Germany). The Licel data recorders provide simultaneous measurements of both analog photomultiplier current and photon counts at a height resolution of 7.5 m and a time resolution of 10 s. The analog signal provides good linearity in the strong signal regime, but poor sensitivity at low signal levels. Conversely, the photon counting signal provides good sensitivity in the weak signal regime, but is strongly nonlinear at higher signal levels. The advantage in recording both signals is that they can be combined (or merged) into a single signal with improved dynamic range. The process of combining the analog and photon counting data has become known as “gluing” (Whiteman et al., 2006).
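
    The "gluing" step mentioned at the end of the abstract is commonly done by regressing the analog signal against photon counts in the range where both detection modes behave linearly, then using the fit to convert the analog profile into virtual count rates. The Python sketch below illustrates only that idea; the thresholds are placeholders rather than the ARM production values, and the operational procedure follows Whiteman et al. (2006).

```python
import numpy as np

def glue_signals(analog, counts, count_min=1.0, count_max=10.0):
    """Combine analog and photon-counting lidar profiles ("gluing").
    A linear fit in the overlap regime scales the analog signal into virtual
    count rates; photon counting is kept where it is linear, analog elsewhere.
    Thresholds are illustrative, not the ARM production values."""
    analog = np.asarray(analog, dtype=float)
    counts = np.asarray(counts, dtype=float)
    overlap = (counts > count_min) & (counts < count_max)
    slope, intercept = np.polyfit(analog[overlap], counts[overlap], 1)
    virtual_counts = slope * analog + intercept
    return np.where(counts < count_max, counts, virtual_counts)
```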

  17. THE EVOLUTION OF PRIMORDIAL BINARY OPEN STAR CLUSTERS: MERGERS, SHREDDED SECONDARIES, AND SEPARATED TWINS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De la Fuente Marcos, R.; De la Fuente Marcos, C., E-mail: raul@galaxy.suffolk.e

    2010-08-10

    The properties of the candidate binary star cluster population in the Magellanic Clouds and Milky Way are similar. The fraction of candidate binaries is ~10% and the pair separation histogram exhibits a bimodal distribution commonly attributed to their transient nature. However, if primordial pairs cannot survive for long as recognizable bound systems, how are they ending up? Here, we use simulations to confirm that merging, extreme tidal distortion, and ionization are possible depending on the initial orbital elements and mass ratio of the cluster pair. Merging is observed for initially close pairs but also for wider systems in nearly parabolic orbits. Its characteristic timescale depends on the initial orbital semi-major axis, eccentricity, and cluster pair mass ratio, becoming shorter for closer, more eccentric equal mass pairs. Shredding of the less massive cluster and subsequent separation is observed in all pairs with appreciably different masses. Wide pairs evolve into separated twins characterized by the presence of tidal bridges and separations of 200-500 pc after one Galactic orbit. Most observed binary candidates appear to be following this evolutionary path which translates into the dominant peak (25-30 pc) in the observed pair separation distribution. The secondary peak at smaller separations (10-15 pc) can be explained as due to close pairs in almost circular orbits and/or undergoing merging. Merged clusters exhibit both peculiar radial density and velocity dispersion profiles shaped by synchronization and gravogyro instabilities. Simulations and observations show that long-term binary open cluster stability is unlikely.

  18. A droplet-merging platform for comparative functional analysis of M1 and M2 macrophages in response to E. coli-induced stimuli.

    PubMed

    Hondroulis, Evangelia; Movila, Alexandru; Sabhachandani, Pooja; Sarkar, Saheli; Cohen, Noa; Kawai, Toshihisa; Konry, Tania

    2017-03-01

    Microfluidic droplets are used to isolate cell pairs and prevent crosstalk with neighboring cells, while permitting free motility and interaction within the confined space. Dynamic analysis of cellular heterogeneity in droplets has provided insights in various biological processes. Droplet manipulation methods such as fusion and fission make it possible to precisely regulate the localized environment of a cell in a droplet and deliver reagents as required. Droplet fusion strategies achieved by passive mechanisms preserve cell viability and are easier to fabricate and operate. Here, we present a simple and effective method for the co-encapsulation of polarized M1 and M2 macrophages with Escherichia coli (E. coli) by passive merging in an integrated droplet generation, merging, and docking platform. This approach facilitated live cell profiling of effector immune functions in situ and quantitative functional analysis of macrophage heterogeneity. Biotechnol. Bioeng. 2017;114: 705-709. © 2016 Wiley Periodicals, Inc.

  19. Application of modern control theory to scheduling and path-stretching maneuvers of aircraft in the near terminal area

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1974-01-01

    A design concept of the dynamic control of aircraft in the near terminal area is discussed. An arbitrary set of nominal air routes, with possible multiple merging points, all leading to a single runway, is considered. The system allows for the automated determination of acceleration/deceleration of aircraft along the nominal air routes, as well as for the automated determination of path-stretching delay maneuvers. In addition to normal operating conditions, the system accommodates: (1) variable commanded separations over the outer marker to allow for takeoffs and between successive landings and (2) emergency conditions under which aircraft in distress have priority. The system design is based on a combination of three distinct optimal control problems involving a standard linear-quadratic problem, a parameter optimization problem, and a minimum-time rendezvous problem.

  20. Interaction of Saturn's dual rotation periods

    NASA Astrophysics Data System (ADS)

    Smith, C. G. A.

    2018-03-01

    We develop models of the interaction of Rossby wave disturbances in the northern and southern ionospheres of Saturn. We show that interhemispheric field-aligned currents allow the exchange of vorticity, modifying the background Rossby wave propagation speed. This leads to interaction of the northern and southern Rossby wave periods. In a very simple symmetric model without a plasma disk the periods merge when the overall conductivity is sufficiently high. A more complex model taking account of the inertia of the plasma disk and the asymmetry of the two hemispheres predicts a rich variety of possible wave modes. We find that merging of the northern and southern periods can only occur when (i) the conductivities of both hemispheres are sufficiently low (a criterion that is fulfilled for realistic parameters) and (ii) the background Rossby wave periods in the two hemispheres are identical. We reconcile the second criterion with the observations of a merged period that also drifts by noting that ranges of Rossby wave propagation speeds are possible in each hemisphere. We suggest that a merged disturbance in the plasma disk may act as an 'anchor' and drive Rossby waves in each hemisphere within the range of possible propagation speeds. This suggestion predicts behaviour that qualitatively matches the observed merging and splitting of the northern and southern rotation periods that occurred in 2013 and 2014. Low conductivity modes also show long damping timescales that are consistent with the persistence of the periodic signals.

  1. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation (NMQ/Q2) system based on the WSR-88D Next-generation Radar (Nexrad) network over the Continental United States (CONUS) is completed for the period covering 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed by using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges related to incorporating differing resolution and quality networks to generate long-term large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower resolution QPEs derived from ground based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
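
    Of the merging techniques listed, inverse distance weighting is the simplest to sketch: interpolate the gauge-minus-radar differences to the grid and add them to the radar-only field. The Python example below assumes a grid of pixel coordinates with shape (H, W, 2) and is purely illustrative; it is not the project's implementation, and the kriging variants are not shown.

```python
import numpy as np

def idw_adjust(radar_grid, grid_xy, gauge_xy, gauge_minus_radar, power=2.0):
    """Add an inverse-distance-weighted interpolation of gauge-minus-radar
    differences to the radar-only field (one simple merging option among
    those listed above)."""
    radar_grid = np.asarray(radar_grid, dtype=float)
    gauge_xy = np.asarray(gauge_xy, dtype=float)
    gauge_minus_radar = np.asarray(gauge_minus_radar, dtype=float)
    flat_xy = np.asarray(grid_xy, dtype=float).reshape(-1, 2)
    d = np.linalg.norm(flat_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)                 # avoid division by zero at gauge pixels
    w = 1.0 / d**power                      # inverse-distance weights
    correction = (w @ gauge_minus_radar) / w.sum(axis=1)
    return radar_grid + correction.reshape(radar_grid.shape)
```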

  2. From grid cells and visual place cells to multimodal place cell: a new robotic architecture

    PubMed Central

    Jauffret, Adrien; Cuperlier, Nicolas; Gaussier, Philippe

    2015-01-01

    In the present study, a new architecture for the generation of grid cells (GC) was implemented on a real robot. In order to test this model, a simple place cell (PC) model merging visual PC activity and GC was developed. GC were first built from a simple “several to one” projection (similar to a modulo operation) performed on a neural field coding for path integration (PI). Robotics experiments raised several practical and theoretical issues. To limit the important angular drift of PI, head direction information was introduced in addition to the robot proprioceptive signal coming from the wheel rotation. Next, a simple associative learning between visual place cells and the neural field coding for the PI has been used to recalibrate the PI and to limit its drift. Finally, the parameters controlling the shape of the PC built from the GC have been studied. Increasing the number of GC obviously improves the shape of the resulting place field. Yet, other parameters such as the discretization factor of PI or the lateral interactions between GC can have an important impact on the place field quality and avoid the need for a very large number of GC. In conclusion, our results show that our GC model, based on the compression of PI, is congruent with neurobiological studies made on rodents. GC firing patterns can be the result of a modulo transformation of PI information. We argue that such a transformation may be a general property of the connectivity from the cortex to the entorhinal cortex. Our model predicts that the effect of similar transformations on other kinds of sensory information (visual, tactile, auditory, etc…) in the entorhinal cortex should be observed. Consequently, a given EC cell should react to non-contiguous input configurations in non-spatial conditions according to the projection from its different inputs. PMID:25904862
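
    The "several to one", modulo-like projection from the path-integration field to grid cells can be illustrated with a one-dimensional toy example. The Python sketch below wraps a path-integration coordinate onto a small periodic population of cells with cosine tuning; the parameters and tuning curve are assumptions, not the architecture used on the robot.

```python
import numpy as np

def grid_cell_activity(position, spacing, n_cells):
    """Toy version of the modulo-like projection described above: a 1-D
    path-integration coordinate is wrapped onto a periodic population of
    grid cells. Parameters and tuning are illustrative only."""
    phase = (position % spacing) / spacing        # wrapped PI coordinate in [0, 1)
    preferred = np.arange(n_cells) / n_cells      # each cell's preferred phase
    # Cosine tuning on the circle gives periodic, grid-like firing fields.
    return np.maximum(0.0, np.cos(2 * np.pi * (phase - preferred)))

print(np.round(grid_cell_activity(position=7.3, spacing=2.0, n_cells=4), 2))
```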

  3. Tracking trade transactions in water resource systems: A node-arc optimization formulation

    NASA Astrophysics Data System (ADS)

    Erfani, Tohid; Huskova, Ivana; Harou, Julien J.

    2013-05-01

    We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate the arc-path model solves the problem with fewer constraints, but the proposed formulation allows using a simple network connectivity matrix, which simplifies modeling large or complex networks. The proposed algorithm allows converting existing node-arc hydroeconomic models that broadly represent water trading to ones that also track individual supplier-receiver relationships (trade transactions).
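
    A node-arc formulation expresses mass balance through a node-arc incidence matrix rather than through enumerated flow paths. The Python sketch below sets up a small minimum-cost flow problem in that form and solves it with scipy.optimize.linprog; the network, costs, and supplies are hypothetical, and the authors' multicommodity trade-tracking model is considerably richer.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 4-node network; arcs listed as (from_node, to_node, unit_cost).
arcs = [(0, 1, 1.0), (0, 2, 2.0), (1, 3, 3.0), (2, 3, 1.0), (1, 2, 0.5)]
n_nodes = 4
supply = np.array([10.0, 0.0, 0.0, -10.0])   # node 0 supplies, node 3 demands

# Node-arc incidence matrix: +1 where an arc leaves a node, -1 where it enters.
A_eq = np.zeros((n_nodes, len(arcs)))
for j, (i, k, _) in enumerate(arcs):
    A_eq[i, j] = 1.0
    A_eq[k, j] = -1.0

cost = [c for (_, _, c) in arcs]
res = linprog(cost, A_eq=A_eq, b_eq=supply, bounds=[(0, None)] * len(arcs))
print(res.x)   # flow on each arc in the minimum-cost solution
```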

  4. Temperature and pressure determination of the tin melt boundary from a combination of pyrometry, spectral reflectance, and velocity measurements along release paths

    NASA Astrophysics Data System (ADS)

    La Lone, Brandon; Asimow, Paul; Fatyanov, Oleg; Hixson, Robert; Stevens, Gerald

    2017-06-01

    Plate impact experiments were conducted on tin samples backed by LiF windows to determine the tin melt curve. Thin copper flyers were used so that a release wave followed the 30-40 GPa shock wave in the tin. The release wave at the tin-LiF interface was about 300 ns long. Two sets of experiments were conducted. In one set, spectral emissivity was measured at six wavelengths using a flashlamp-illuminated integrating sphere. In the other set, thermal radiance was measured at two wavelengths. The emissivity and thermal radiance measurements were combined to obtain temperature histories of the tin-LiF interface during the release. PDV was used to obtain stress histories. All measurements were combined to obtain temperature vs. stress release paths. A kink or steepening in the release paths indicates where the releases merge onto the melt boundary, and release paths originating from different shock stresses overlap on the melt boundary. Our temperature-stress release path measurements provide a continuous segment of the tin melt boundary that is in good agreement with some of the published melt curves. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy, and supported by the Site-Directed Research and Development Program. DOE/NV/259463133.

  5. On the optical path length in refracting media

    NASA Astrophysics Data System (ADS)

    Hasbun, Javier E.

    2018-04-01

    The path light follows as it travels through a substance depends on the substance's index of refraction. This path is commonly known as the optical path length (OPL). In geometrical optics, the laws of reflection and refraction are simple examples for understanding the path of light travel from source to detector for constant values of the traveled substances' refraction indices. In more complicated situations, the Euler equation can be quite useful and quite important in optics courses. Here, the well-known Euler differential equation (EDE) is used to obtain the OPL for several index of refraction models. For pedagogical completeness, the OPL is also obtained through a modified Monte Carlo (MC) method, versus which the various results obtained through the EDE are compared. The examples developed should be important in projects involving undergraduate as well as graduate students in an introductory optics course. A simple matlab script (program) is included that can be modified by students who wish to pursue the subject further.
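
    Numerically, the optical path length is the integral of the refractive index along the ray. The Python sketch below evaluates that integral with the trapezoidal rule for a sampled ray and an assumed linear index profile; it is meant only as an illustration and is not one of the paper's specific models or its MATLAB script.

```python
import numpy as np

def optical_path_length(n_of_xy, x, y):
    """Numerically evaluate OPL = integral of n ds along a sampled ray (x[i], y[i])
    using the trapezoidal rule. The index model is an assumption for illustration."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ds = np.hypot(np.diff(x), np.diff(y))                            # segment lengths
    n_mid = 0.5 * (n_of_xy(x[:-1], y[:-1]) + n_of_xy(x[1:], y[1:]))  # mean index per segment
    return np.sum(n_mid * ds)

# Example: index increasing linearly with height, straight-line ray (illustrative)
n_linear = lambda x, y: 1.0 + 0.1 * y
xs = np.linspace(0.0, 1.0, 200)
ys = np.linspace(0.0, 0.5, 200)
print(optical_path_length(n_linear, xs, ys))
```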

  6. Vervet monkeys use paths consistent with context-specific spatial movement heuristics.

    PubMed

    Teichroeb, Julie A

    2015-10-01

    Animal foraging routes are analogous to the computationally demanding "traveling salesman problem" (TSP), where individuals must find the shortest path among several locations before returning to the start. Humans approximate solutions to TSPs using simple heuristics or "rules of thumb," but our knowledge of how other animals solve multidestination routing problems is incomplete. Most nonhuman primate species have shown limited ability to route plan. However, captive vervets were shown to solve a TSP for six sites. These results were consistent with either planning three steps ahead or a risk-avoidance strategy. I investigated how wild vervet monkeys (Chlorocebus pygerythrus) solved a path problem with six, equally rewarding food sites; where site arrangement allowed assessment of whether vervets found the shortest route and/or used paths consistent with one of three simple heuristics to navigate. Single vervets took the shortest possible path in fewer than half of the trials, usually in ways consistent with the most efficient heuristic (the convex hull). When in competition, vervets' paths were consistent with different, more efficient heuristics dependent on their dominance rank (a cluster strategy for dominants and the nearest neighbor rule for subordinates). These results suggest that, like humans, vervets may solve multidestination routing problems by applying simple, adaptive, context-specific "rules of thumb." The heuristics that were consistent with vervet paths in this study are the same as some of those asserted to be used by humans. These spatial movement strategies may have common evolutionary roots and be part of a universal mental navigational toolkit. Alternatively, they may have emerged through convergent evolution as the optimal way to solve multidestination routing problems.
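
    One of the heuristics discussed, the nearest neighbor rule, is easy to state in code: always move to the closest unvisited site and finally return to the start. The Python sketch below is an illustrative implementation with hypothetical site coordinates; it is not the analysis code used in the study.

```python
import numpy as np

def nearest_neighbor_route(sites, start):
    """Nearest-neighbor heuristic for a multi-destination route (one of the
    simple rules of thumb discussed above): always move to the closest
    unvisited site, then return to the start."""
    sites = np.asarray(sites, dtype=float)
    unvisited = set(range(len(sites)))
    route, current = [start], start
    unvisited.discard(start)
    while unvisited:
        nxt = min(unvisited, key=lambda j: np.linalg.norm(sites[j] - sites[current]))
        route.append(nxt)
        unvisited.discard(nxt)
        current = nxt
    return route + [start]

# Six hypothetical, equally rewarding food sites (coordinates illustrative)
sites = [(0, 0), (1, 0), (2, 1), (2, 2), (0, 2), (1, 1)]
print(nearest_neighbor_route(sites, start=0))
```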

  7. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile change to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviation during descent add uncertainty to computing the estimated time of arrival, a key piece of information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile and defines an OPD optimization problem that uses both path stretching and speed profile change while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to a pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques under an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems, which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solutions to other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.

  8. Short-Path Statistics and the Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Blanco, Stéphane; Fournier, Richard

    2006-12-01

    In the field of first return time statistics in bounded domains, short paths may be defined as those paths for which the diffusion approximation is inappropriate. This is at the origin of numerous open questions concerning the characterization of residence time distributions. We show here how general integral constraints can be derived that make it possible to address short-path statistics indirectly by application of the diffusion approximation to long paths. Application to the moments of the distribution at the low-Knudsen limit leads to simple practical results and novel physical pictures.

  9. A simple model for the estimation of rain-induced attenuation along earth-space paths at millimeter wavelengths

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Dishman, W. K.

    1982-01-01

    A simple attenuation model (SAM) is presented for estimating rain-induced attenuation along an earth-space path. The rain model uses an effective spatial rain distribution which is uniform for low rain rates and which has an exponentially shaped horizontal rain profile for high rain rates. When compared to other models, the SAM performed well in the important region of low percentages of time, and had the lowest percent standard deviation of all percent time values tested.

  10. Modeling shared resources with generalized synchronization within a Petri net bottom-up approach.

    PubMed

    Ferrarini, L; Trioni, M

    1996-01-01

    This paper proposes a simple and effective way to represent shared resources in manufacturing systems within a Petri net model previously developed. Such a model relies on the bottom-up and modular approach to synthesis and analysis. The designer may define elementary tasks and then connect them with one another with three kinds of connections: self-loops, inhibitor arcs and simple synchronizations. A theoretical framework has been established for the analysis of liveness and reversibility of such models. The generalized synchronization, here formalized, represents an extension of the simple synchronization, allowing the merging of suitable subnets among elementary tasks. It is proved that under suitable, but not restrictive, hypotheses the generalized synchronization may be substituted for a simple one, thus being compatible with all the developed theoretical body.

  11. Bisphenol A and Other Metabolites in Human Saliva and Urine Associated with the Placement of Composite Restorations

    DTIC Science & Technology

    2009-07-01

    patient identifiers at the NIDCR after merging clinical, questionnaire and laboratory data. Backup copies will be made by staff in the Biostatistics Core...not changed. This dental treatment is simple restorative dentistry and this will not change a patient's health status. There is no intended benefit

  12. Microsco-Pi: A Novel and Inexpensive Way of Merging Biology and IT

    ERIC Educational Resources Information Center

    Kent, Harry R.; Bacon, Jonathan P.

    2016-01-01

    It is well known that schools and colleges often have budget limitations that can hamper the effectiveness of practical education. This article looks at how cheap, off-the-shelf components can be used to produce a simple DIY digital microscope, and how this provides novel opportunities to integrate biology, physics, design technology and computer…

  13. Are merging black holes born from stellar collapse or previous mergers?

    NASA Astrophysics Data System (ADS)

    Gerosa, Davide; Berti, Emanuele

    2017-06-01

    Advanced LIGO detectors at Hanford and Livingston made two confirmed and one marginal detection of binary black holes during their first observing run. The first event, GW150914, was from the merger of two black holes much heavier than those whose masses have been estimated so far, indicating a formation scenario that might differ from "ordinary" stellar evolution. One possibility is that these heavy black holes resulted from a previous merger. When the progenitors of a black hole binary merger result from previous mergers, they should (on average) merge later, be more massive, and have spin magnitudes clustered around a dimensionless spin of ~0.7. Here we ask the following question: can gravitational-wave observations determine whether merging black holes were born from the collapse of massive stars ("first generation"), rather than being the end product of earlier mergers ("second generation")? We construct simple, observationally motivated populations of black hole binaries, and we use Bayesian model selection to show that measurements of the masses, luminosity distance (or redshift), and "effective spin" of black hole binaries can indeed distinguish between these different formation scenarios.

  14. The metallicity and elemental abundance gradients of simulated galaxies and their environmental dependence

    NASA Astrophysics Data System (ADS)

    Taylor, Philip; Kobayashi, Chiaki

    2017-11-01

    The internal distribution of heavy elements, in particular the radial metallicity gradient, offers insight into the merging history of galaxies. Using our cosmological, chemodynamical simulations that include both detailed chemical enrichment and feedback from active galactic nuclei (AGN), we find that stellar metallicity gradients in the most massive galaxies (≳3 × 1010M⊙) are made flatter by mergers and are unable to regenerate due to the quenching of star formation by AGN feedback. The fitting range is chosen on a galaxy-by-galaxy basis in order to mask satellite galaxies. The evolutionary paths of the gradients can be summarized as follows: (I) creation of initial steep gradients by gas-rich assembly, (II) passive evolution by star formation and/or stellar accretion at outskirts, and (III) sudden flattening by mergers. There is a significant scatter in gradients at a given mass, which originates from the last path, and therefore from galaxy type. Some variation remains at given galaxy mass and type because of the complexity of merging events, and hence we find only a weak environmental dependence. Our early-type galaxies (ETGs), defined from the star formation main sequence rather than their morphology, are in excellent agreement with the observed stellar metallicity gradients of ETGs in the SAURON and ATLAS3D surveys. We find small positive [O/Fe] gradients of stars in our simulated galaxies, although they are smaller with AGN feedback. Gas-phase metallicity and [O/Fe] gradients also show variation, the origin of which is not as clear as for stellar populations.

  15. Development of a prototype multi-processing interactive software invocation system

    NASA Technical Reports Server (NTRS)

    Berman, W. J.

    1983-01-01

    The Interactive Software Invocation System (NASA-ISIS) was first transported to the M68000 microcomputer, and then rewritten in the programming language Path Pascal. Path Pascal is a significantly enhanced derivative of Pascal, allowing concurrent algorithms to be expressed using the simple and elegant concept of Path Expressions. The primary result of this contract was to verify the viability of Path Pascal as a system's development language. The NASA-ISIS implementation using Path Pascal is a prototype of a large, interactive system in Path Pascal. As such, it is an excellent demonstration of the feasibility of using Path Pascal to write even more extensive systems. It is hoped that future efforts will build upon this research and, ultimately, that a full Path Pascal/ISIS Operating System (PPIOS) might be developed.

  16. Gravitational Waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Jonah Maxwell

    This report has slides on Gravitational Waves; Pound and Rebka: A Shocking Fact; Light is a Ruler; Gravity is the Curvature of Spacetime; Gravitational Waves Made Simple; How a Gravitational Wave Affects Stuff Here; LIGO; This Detection: Neutron Stars; What the Gravitational Wave Looks Like; The Sound of Merging Neutron Stars; Neutron Star Mergers: More than GWs; The Radioactive Cloud; The Kilonova; and finally Summary, Multimessenger Astronomy.

  17. Underwater Multi-Vehicle Trajectory Alignment and Mapping Using Acoustic and Optical Constraints

    PubMed Central

    Campos, Ricard; Gracias, Nuno; Ridao, Pere

    2016-01-01

    Multi-robot formations are an important advance in recent robotic developments, as they allow a group of robots to merge their capacities and perform surveys in a more convenient way. With the aim of keeping the costs and acoustic communications to a minimum, cooperative navigation of multiple underwater vehicles is usually performed at the control level. In order to maintain the desired formation, individual robots just react to simple control directives extracted from range measurements or ultra-short baseline (USBL) systems. Thus, the robots are unaware of their global positioning, which presents a problem for the further processing of the collected data. The aim of this paper is two-fold. First, we present a global alignment method to correct the dead reckoning trajectories of multiple vehicles to resemble the paths followed during the mission using the acoustic messages passed between vehicles. Second, we focus on the optical mapping application of these types of formations and extend the optimization framework to allow for multi-vehicle geo-referenced optical 3D mapping using monocular cameras. The inclusion of optical constraints is not performed using the common bundle adjustment techniques, but in a form improving the computational efficiency of the resulting optimization problem and presenting a generic process to fuse optical reconstructions with navigation data. We show the performance of the proposed method on real datasets collected within the Morph EU-FP7 project. PMID:26999144

  18. Development of Thin-Walled Magnesium Alloy Extrusions for Improved Crash Performance Based Upon Texture Control

    NASA Astrophysics Data System (ADS)

    Williams, Bruce W.; Agnew, Sean R.; Klein, Robert W.; McKinley, Jonathan

    Recent investigations suggest that it is possible to achieve dramatic modifications to both strength and ductility of magnesium alloys through a combination of alloying, grain refinement, and texture control. The current work explores the possibility of altering the texture in extruded thin-walled magnesium alloy tubes for improved ductility during axial crush in which energy is absorbed through progressive buckling. The texture evolution was predicted using the viscoplastic self-consistent (VPSC) crystal plasticity model, with strain path input from continuum-based finite element simulations of extrusion. A limited diversity of textures can be induced by altering the strain path through the extrusion die design. In some cases, such as for simple bar extrusion, the textures predicted can be connected with simple shape change. In other cases, a subtle influence of strain path involving shear-reverse-shear is predicted. The most promising textures predicted for a variety of strain paths are selected for subsequent experimental study.

  19. Subaru adaptive-optics high-spatial-resolution infrared K- and L'-band imaging search for deeply buried dual AGNs in merging galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imanishi, Masatoshi; Saito, Yuriko, E-mail: masa.imanishi@nao.ac.jp

    2014-01-01

    We present the results of infrared K- (2.2 μm) and L'-band (3.8 μm) high-spatial-resolution (<0.''2) imaging observations of nearby gas- and dust-rich infrared luminous merging galaxies, assisted by the adaptive optics system on the Subaru 8.2 m telescope. We investigate the presence and frequency of red K – L' compact sources, which are sensitive indicators of active galactic nuclei (AGNs), including AGNs that are deeply buried in gas and dust. We observed 29 merging systems and confirmed at least one AGN in all but one system. However, luminous dual AGNs were detected in only four of the 29 systems (∼14%), despite our method's being sensitive to buried AGNs. For multiple nuclei sources, we compared the estimated AGN luminosities with supermassive black hole (SMBH) masses inferred from large-aperture K-band stellar emission photometry in individual nuclei. We found that mass accretion rates onto SMBHs are significantly different among multiple SMBHs, such that larger-mass SMBHs generally show higher mass accretion rates when normalized to SMBH mass. Our results suggest that non-synchronous mass accretion onto SMBHs in gas- and dust-rich infrared luminous merging galaxies hampers the observational detection of kiloparsec-scale multiple active SMBHs. This could explain the significantly smaller detection fraction of kiloparsec-scale dual AGNs when compared with the number expected from simple theoretical predictions. Our results also indicate that mass accretion onto SMBHs is dominated by local conditions, rather than by global galaxy properties, reinforcing the importance of observations to our understanding of how multiple SMBHs are activated and acquire mass in gas- and dust-rich merging galaxies.

  20. Path integral molecular dynamic simulation of flexible molecular systems in their ground state: Application to the water dimer

    NASA Astrophysics Data System (ADS)

    Schmidt, Matthew; Roy, Pierre-Nicholas

    2018-03-01

    We extend the Langevin equation Path Integral Ground State (LePIGS), a ground state quantum molecular dynamics method, to simulate flexible molecular systems and calculate both energetic and structural properties. We test the approach with the H2O and D2O monomers and dimers. We systematically optimize all simulation parameters and use a unity trial wavefunction. We report ground state energies, dissociation energies, and structural properties using three different water models, two of which are empirically based, q-TIP4P/F and q-SPC/Fw, and one which is ab initio, MB-pol. We demonstrate that our energies calculated from LePIGS can be merged seamlessly with low temperature path integral molecular dynamics calculations and note the similarities between the two methods. We also benchmark our energies against previous diffusion Monte Carlo calculations using the same potentials and compare to experimental results. We further demonstrate that accurate vibrational energies of the H2O and D2O monomer can be calculated from imaginary time correlation functions generated from the LePIGS simulations using solely the unity trial wavefunction.

  1. Mail merge can be used to create personalized questionnaires in complex surveys.

    PubMed

    Taljaard, Monica; Chaudhry, Shazia Hira; Brehaut, Jamie C; Weijer, Charles; Grimshaw, Jeremy M

    2015-10-16

    Low response rates and inadequate question comprehension threaten the validity of survey results. We describe a simple procedure to implement personalized (as opposed to generically worded) questionnaires in the context of a complex web-based survey of corresponding authors of a random sample of 300 published cluster randomized trials. The purpose of the survey was to gather more detailed information about informed consent procedures used in the trial, over and above basic information provided in the trial report. We describe our approach, which allowed extensive personalization without the need for specialized computer technology, and discuss its potential application in similar settings. The mail merge feature of standard word processing software was used to generate unique, personalized questionnaires for each author by incorporating specific information from the article, including naming the randomization unit (e.g., family practice, school, worksite), and identifying specific individuals who may have been considered research participants at the cluster level (family doctors, teachers, employers) and individual level (patients, students, employees) in questions regarding informed consent procedures in the trial. The response rate was relatively high (64%, 182/285) and did not vary significantly by author, publication, or study characteristics. The refusal rate was low (7%). While controlled studies are required to examine the specific effects of our approach on comprehension, quality of responses, and response rates, we showed how mail merge can be used as a simple but useful tool to add personalized fields to complex survey questionnaires, or to request additional information required from study authors. One potential application is in eliciting specific information about published articles from study authors when conducting systematic reviews and meta-analyses.

  2. An anomalous CO2 uptake measured over asphalt surface by open-path eddy-covariance system

    NASA Astrophysics Data System (ADS)

    Bogoev, Ivan; Santos, Eduardo

    2017-04-01

    Measurements of net ecosystem exchange of CO2 in desert environments made by Wohlfahrt et al. (2008) and Ma (2014) indicate a strong CO2 sink. The results of these studies have been challenged by Schlesinger (2016) because the rates of the CO2 uptake are incongruent with the increase of biomass in the vegetation and accumulation of organic and inorganic carbon in the soil. Consequently, the accuracy of the open-path eddy-covariance systems in arid and semi-arid ecosystems has been questioned. A new technology merging the sensing paths of the gas analyzer and the sonic anemometer has recently been developed. This integrated open-path system allows a direct measurement of the CO2 mixing ratio in the open air and has the potential to improve the quality of the temperature-related density and spectroscopic corrections by synchronously measuring the sensible heat flux in the optical path of the gas analyzer. We evaluate the performance and the accuracy of this new sensor over a large parking lot with an asphalt surface where the water vapor and CO2 fluxes are expected to be low and the interfering sensible heat fluxes are above 200 Wm-2. For independent CO2 flux reference measurements, we use a co-located closed-path analyzer with a short intake tube and a standalone sonic anemometer. We compare energy and carbon dioxide fluxes between the open- and the closed-path systems. During periods with sensible heat flux above 100 W m-2, the open-path system reports an apparent CO2 uptake of 0.02 mg m-2 s-1, while the closed-path system consistently measures a more acceptable upward flux of 0.015 mg m-2 s-1. We attribute this systematic bias to inadequate fast-response temperature compensation of absorption-line broadening effects. We demonstrate that this bias can be eliminated by using the humidity-corrected fast-response sonic temperature to compensate for the abovementioned spectroscopic effects in the open-path analyzer.

  3. Two arm robot path planning in a static environment using polytopes and string stretching. Thesis

    NASA Technical Reports Server (NTRS)

    Schima, Francis J., III

    1990-01-01

    The two arm robot path planning problem has been analyzed and reduced into components to be simplified. This thesis examines one component in which two Puma-560 robot arms are simultaneously holding a single object. The problem is to find a path between two points around obstacles which is relatively fast and minimizes the distance. The thesis involves creating a structure on which to form an advanced path planning algorithm which could ideally find the optimum path. An actual path planning method is implemented which is simple though effective in most common situations. Given the limits of computer technology, a 'good' path is currently found. Objects in the workspace are modeled with polytopes. These are used because they can be used for rapid collision detection and still provide a representation which is adequate for path planning.

  4. User Manual for the PROTEUS Mesh Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Micheal A.; Shemon, Emily R

    2016-09-19

    PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into place to create multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, the UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFmesh input allows for simple assembly mesh generation while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. The NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given that one has an input mesh format acceptable for PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e. mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.

  5. Repeated bubble breakup and coalescence in perturbed Hele-Shaw channels

    NASA Astrophysics Data System (ADS)

    Thompson, Alice; Franco-Gomez, Andres; Hazel, Andrew; Juel, Anne

    2017-11-01

    The introduction of an axially-uniform, centred constriction in a Hele-Shaw channel leads to multiple propagation modes for both air fingers and bubbles, including symmetric and asymmetric steadily propagating modes along with oscillations. These multiple modes correspond to a non-trivial bifurcation structure, and relate to the plethora of steadily propagating bubbles and fingers which exist in the Saffman-Taylor system. In both experiments and depth-averaged computations, a very small centred occlusion can be enough to trigger bubble breakup, with a single large centred bubble splitting into two smaller bubbles which propagate along each side of the channel. We present numerical simulations for the depth-averaged model, implementing geometric criteria for pinchoff and coalescence in order to track the bubble before and beyond breakup. We find that the two-bubble state is itself unstable, with finger competition causing one bubble to move ahead; the trailing bubble then moves across the channel to merge with the leading bubble. However, the story is not always so simple, enabling complicated cascades of splitting and merging bubbles. We compare the general dynamical behaviour, basins of attraction, and the details of merging and splitting, to experimental observations.

  6. Real-time Feynman path integral with Picard–Lefschetz theory and its applications to quantum tunneling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanizaki, Yuya, E-mail: yuya.tanizaki@riken.jp; Theoretical Research Division, Nishina Center, RIKEN, Wako 351-0198; Koike, Takayuki, E-mail: tkoike@ms.u-tokyo.ac.jp

    Picard–Lefschetz theory is applied to path integrals of quantum mechanics, in order to compute real-time dynamics directly. After discussing basic properties of real-time path integrals on Lefschetz thimbles, we demonstrate its computational method in a concrete way by solving three simple examples of quantum mechanics. It is applied to quantum mechanics of a double-well potential, and quantum tunneling is discussed. We identify all of the complex saddle points of the classical action, and their properties are discussed in detail. However, a significant theoretical difficulty turns out to appear in rewriting the original path integral into a sum of path integrals on Lefschetz thimbles. We discuss the generality of that problem and mention its importance. Real-time tunneling processes are shown to be described by those complex saddle points, and thus semi-classical description of real-time quantum tunneling becomes possible on solid ground if we could solve that problem. - Highlights: • Real-time path integral is studied based on Picard–Lefschetz theory. • Lucid demonstration is given through simple examples of quantum mechanics. • This technique is applied to quantum mechanics of the double-well potential. • Difficulty for practical applications is revealed, and we discuss its generality. • Quantum tunneling is shown to be closely related to complex classical solutions.

  7. Final report on the Magnetized Target Fusion Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Slough

    Nuclear fusion has the potential to satisfy the prodigious power that the world will demand in the future, but it has yet to be harnessed as a practical energy source. The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. It is the contention here that a simpler path to fusion can be achieved by creating fusion conditions in a different regime at small scale (~ a few cm). One such program now under study, referred to as Magnetized Target Fusion (MTF), is directed at obtaining fusion in this high energy density regime by rapidly compressing a compact toroidal plasmoid commonly referred to as a Field Reversed Configuration (FRC). To make fusion practical at this smaller scale, an efficient method for compressing the FRC to fusion gain conditions is required. In one variant of MTF a conducting metal shell is imploded electrically. This radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target plasmoid suppresses the thermal transport to the confining shell, thus lowering the imploding power needed to compress the target. The undertaking to be described in this proposal is to provide a suitable target FRC, as well as a simple and robust method for inserting and stopping the FRC within the imploding liner. The timescale for testing and development can be rapidly accelerated by taking advantage of a new facility funded by the Department of Energy. At this facility, two inductive plasma accelerators (IPA) were constructed and tested. Recent experiments with these IPAs have demonstrated the ability to rapidly form, accelerate and merge two hypervelocity FRCs into a compression chamber. The resultant FRC that was formed was hot (T_ion ~ 400 eV), stationary, and stable with a configuration lifetime several times that necessary for the MTF liner experiments. The accelerator length was less than 1 meter, and the time from the initiation of formation to the establishment of the final equilibrium was less than 10 microseconds. With some modification, each accelerator was made capable of producing FRCs suitable for the production of the target plasma for the MTF liner experiment. Based on the initial FRC merging/compression results, the design and methodology for an experimental realization of the target plasma for the MTF liner experiment can now be defined. A high density FRC plasmoid is to be formed and accelerated out of each IPA into a merging/compression chamber similar to the imploding liner at AFRL. The properties of the resultant FRC plasma (size, temperature, density, flux, lifetime) are obtained in the relevant regime of interest. The process still needs to be optimized, and a final design for implementation at AFRL must now be carried out. When implemented at AFRL it is anticipated that the colliding/merging FRCs will then be compressed by the liner. In this manner it is hoped that ultimately a plasma with ion temperatures reaching the 10 keV range and fusion gain near unity can be obtained.

  8. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditionally, aerospace software test engineers rely on their own work experience and on communication with software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and prone to gaps. With the high-reliability MBT (model-based testing) tool developed by our company, a single modeling pass can automatically generate test case documents, efficiently and accurately. A UML model that accurately describes the process must express the paths by which the requirements are reached. Existing path generation algorithms are either too simple, unable to combine branch paths and loop paths into complete paths, or too cumbersome, generating so many arranged paths that most are meaningless and superfluous for aerospace software testing. Drawing on our aerospace engineering experience, we developed a tailored path generation algorithm for the UML graphic modeling of aerospace test software.
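
    As an illustration of the kind of path enumeration the abstract argues for (covering branch and loop paths while bounding useless permutations), the following is a minimal, hypothetical sketch; the graph representation, node names, and the loop-visit bound are assumptions for illustration, not the authors' algorithm.

```python
from collections import defaultdict

def enumerate_paths(edges, start, end, max_visits=2):
    """Enumerate start-to-end paths in a control-flow graph.

    Branches are explored exhaustively; loops are bounded by allowing each
    node to be entered at most `max_visits` times, so loop bodies are
    covered without generating endless or meaningless permutations."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    paths, stack = [], [(start, [start], defaultdict(int))]
    while stack:
        node, path, visits = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for nxt in graph[node]:
            if visits[nxt] < max_visits:          # bound loop re-entry
                new_visits = visits.copy()
                new_visits[nxt] += 1
                stack.append((nxt, path + [nxt], new_visits))
    return paths

# Example: a branch (B -> C or B -> D) combined with a loop (D -> B).
edges = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "B"), ("C", "E"), ("D", "E")]
for p in enumerate_paths(edges, "A", "E"):
    print(" -> ".join(p))
```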

  9. Field evaluation of open and closed-path CO2 flux systems over asphalt surface

    NASA Astrophysics Data System (ADS)

    Bogoev, I.; Santos, E.

    2016-12-01

    Eddy covariance (EC) is a widely used method for quantifying surface fluxes of heat, water vapor and carbon dioxide between ecosystems and the atmosphere. A typical EC system consists of an ultrasonic anemometer measuring the 3D wind vector and a fast-response infrared gas analyzer for sensing the water vapor and CO2 density in the air. When using an open-path analyzer that detects the constituent's density in situ, a correction for concurrent air temperature and humidity fluctuations must be applied (Webb et al., 1980). In environments with small magnitudes of CO2 flux (<5 µmol m-2 s-1) and in the presence of high sensible heat flux, such as wintertime over boreal forest, open-path flux measurements have been challenging since the density corrections can be as large as the uncorrected CO2 flux itself. A new technology merging the sensing paths of the gas analyzer and the sonic anemometer has recently been developed. This new integrated instrument allows a direct measurement of the CO2 mixing ratio in the open air and has the potential to improve the quality of the temperature-related density corrections by synchronously measuring the sensible heat flux in the optical path of the gas analyzer. We evaluate the performance and the accuracy of this new sensor over a large parking lot with an asphalt surface, where the CO2 fluxes are considered low and the interfering sensible heat fluxes are above 200 W m-2. A co-located closed-path EC system is used as a reference measurement to examine any systematic biases and apparent CO2 uptake observed with open-path sensors under high sensible heat flux regimes. Half-hour means and variances of CO2 and water vapor concentrations are evaluated. The relative spectral responses, covariances and corrected turbulent fluxes using a common sonic anemometer are analyzed. The influence of sensor separation and frequency response attenuation on the density corrections is discussed.
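
    For reference, the density correction mentioned above (Webb et al., 1980) has the standard form below; the symbols follow the usual convention (ρ_c, ρ_v, ρ_a: CO2, water vapor and dry air densities; T: air temperature) and are not defined in the record itself.

```latex
% WPL-corrected CO2 flux for an open-path analyzer (standard form):
F_{c} \;=\; \overline{w'\rho_c'}
      \;+\; \mu\,\frac{\overline{\rho_c}}{\overline{\rho_a}}\,\overline{w'\rho_v'}
      \;+\; (1+\mu\sigma)\,\frac{\overline{\rho_c}}{\overline{T}}\,\overline{w'T'},
\qquad
\mu = \frac{m_a}{m_v}\approx 1.61, \qquad \sigma = \frac{\overline{\rho_v}}{\overline{\rho_a}} .
```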

  10. Path spectra derived from inversion of source and site spectra for earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.

    2017-12-01

    A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) is derived from the path term, currently represented as a simple geometric spreading and intrinsic attenuation term. Including additional physical relationships between the path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and the Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog of events, we investigate several collections of event region-to-station pairs, each of which share similar origin locations and stations so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects, and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural-log record spectrum as the sum of its natural-log event and site spectra, while constraining the inversion to a reference site or a Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra, and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event spectrum times site spectrum). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the residuals for our different sets independently to see how path terms differ between event-to-station collections. The path-specific information gained from this can inform the development of path terms for regional GMPEs, through understanding of these seismological phenomena.
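
    A compact statement of the decomposition described above, in our own notation rather than the authors': each record spectrum is the product of an event and a site spectrum (a sum in natural log), and the path residual is interpreted through a common attenuation parameterization.

```latex
% Log-linear spectral inversion with a path residual (illustrative notation):
\ln R_{ij}(f) \;=\; \ln E_{i}(f) \;+\; \ln S_{j}(f) \;+\; \delta_{ij}(f),
\qquad
\delta_{ij}(f) \;\approx\; -\pi f\, t^{*}_{ij} \;=\; -\,\frac{\pi f\, r_{ij}}{Q\,\beta},
% where r_ij is the path length, Q the quality factor, and beta the shear-wave speed.
```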

  11. Demonstrating Fermat's Principle in Optics

    ERIC Educational Resources Information Center

    Paleiov, Orr; Pupko, Ofir; Lipson, S. G.

    2011-01-01

    We demonstrate Fermat's principle in optics by a simple experiment using reflection from an arbitrarily shaped one-dimensional reflector. We investigated a range of possible light paths from a lamp to a fixed slit by reflection in a curved reflector and showed by direct measurement that the paths along which light is concentrated have either…

  12. Earth Model with Laser Beam Simulating Seismic Ray Paths.

    ERIC Educational Resources Information Center

    Ryan, John Arthur; Handzus, Thomas Jay, Jr.

    1988-01-01

    Described is a simple device, that uses a laser beam to simulate P waves. It allows students to follow ray paths, reflections and refractions within the earth. Included is a set of exercises that lead students through the steps by which the presence of the outer and inner cores can be recognized. (Author/CW)

  13. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exist. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
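
    As a reminder of what "minimizing an appropriate action functional" means here, the generic Freidlin–Wentzell/Onsager–Machlup form for an overdamped system dx = b(x) dt + √ε dW is sketched below (schematic only; the geophysical models in the paper involve field-valued analogues and additional constraints).

```latex
% Most probable (instanton) transition path between attractors x_- and x_+:
\mathcal{A}_{T}[x] \;=\; \tfrac{1}{2}\int_{0}^{T}\big\lVert \dot{x}(t)-b\big(x(t)\big)\big\rVert^{2}\,dt,
\qquad
x^{\star} \;=\; \underset{x(0)=x_-,\;x(T)=x_+}{\arg\min}\;\mathcal{A}_{T}[x],
\qquad
P \;\asymp\; e^{-\mathcal{A}_{T}[x^{\star}]/\varepsilon}.
```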

  14. Computer Storage and Retrieval of Position - Dependent Data.

    DTIC Science & Technology

    1982-06-01

    This thesis covers the design of a new digital database system to replace the merged (observation and geographic location) record, one file per cruise...68 "The Digital Data Library System: Library Storage and Retrieval of Digital Geophysical Data" by Robert C. Groan) provided a relatively simple...dependent, ’geophysical’ data. The system is operational on a Digital Equipment Corporation VAX-11/780 computer. Values of measured and computed

  15. Stochastic Adaptive Particle Beam Tracker Using Meer Filter Feedback.

    DTIC Science & Technology

    1986-12-01

    breakthrough required in controlling the beam location. In 1983, Zicker [27] conducted a feasibility study of a simple proportional gain controller... Zicker synthesized his stochastic controller designs from a deterministic optimal LQ controller assuming full state feedback. An LQ controller is a..."Merge" Method 2.5 Simplifying the Meer Filter... Zicker ran a performance analysis on the Meer filter and found the Meer filter virtually insensitive to

  16. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection by breeders for this trait difficult. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the mean (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
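
    For readers unfamiliar with the REML/BLUP step, the underlying linear mixed model has the generic form below; the specific fixed and random design matrices used in the study are not given in the record, so this is only a schematic statement.

```latex
% Linear mixed model: REML estimates the variance components, BLUP predicts g.
\mathbf{y} \;=\; \mathbf{X}\boldsymbol{\beta} \;+\; \mathbf{Z}\mathbf{g} \;+\; \boldsymbol{\varepsilon},
\qquad
\mathbf{g}\sim\mathcal{N}\!\big(\mathbf{0},\,\mathbf{I}\sigma_{g}^{2}\big),
\qquad
\boldsymbol{\varepsilon}\sim\mathcal{N}\!\big(\mathbf{0},\,\mathbf{I}\sigma_{\varepsilon}^{2}\big).
```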

  17. Ocean Color and Earth Science Data Records

    NASA Astrophysics Data System (ADS)

    Maritorena, S.

    2014-12-01

    The development of consistent, high quality time series of biogeochemical products from a single ocean color sensor is a difficult task that involves many aspects related to pre- and post-launch instrument calibration and characterization, stability monitoring and the removal of the contribution of the atmosphere which represents most of the signal measured at the sensor. It is even more challenging to build Climate Data Records (CDRs) or Earth Science Data Records (ESDRs) from multiple sensors as design, technology and methodologies (bands, spectral/spatial resolution, Cal/Val, algorithms) differ from sensor to sensor. NASA MEaSUREs, ESA Climate Change Initiative (CCI) and IOCCG Virtual Constellation are some of the underway efforts that investigate or produce ocean color CDRs or ESDRs from the recent and current global missions (SeaWiFS, MODIS, MERIS). These studies look at key aspects of the development of unified data records from multiple sensors, e.g. the concatenation of the "best" individual records vs. the merging of multiple records or band homogenization vs. spectral diversity. The pros and cons of the different approaches are closely dependent upon the overall science purpose of the data record and its temporal resolution. While monthly data are generally adequate for biogeochemical modeling or to assess decadal trends, higher temporal resolution data records are required to look into changes in phenology or the dynamics of phytoplankton blooms. Similarly, short temporal resolution (daily to weekly) time series may benefit more from being built through the merging of data from multiple sensors while a simple concatenation of data from individual sensors might be better suited for longer temporal resolution (e.g. monthly time series). Several Ocean Color ESDRs were developed as part of the NASA MEaSUREs project. Some of these time series are built by merging the reflectance data from SeaWiFS, MODIS-Aqua and Envisat-MERIS in a semi-analytical ocean color model that generates both merged reflectance and merged biogeochemical products. The benefits and limitations of this merging approach to develop ESDRs will be presented and discussed along with those of alternative approaches.

  18. Constraining the Merging History of Massive Galaxies Since Redshift 3 Using Close Pairs. I. Major Pairs from Candels and the SDSS

    NASA Astrophysics Data System (ADS)

    Mantha, Kameswara Bharadwaj; McIntosh, Daniel H.; Brennan, Ryan; Cook, Joshua; Kodra, Dritan; Newman, Jeffrey; Somerville, Rachel S.; Barro, Guillermo; Behroozi, Peter; Conselice, Christopher; Dekel, Avishai; Faber, Sandra M.; Closson Ferguson, Henry; Finkelstein, Steven L.; Fontana, Adriano; Galametz, Audrey; Perez-Gonzalez, Pablo; Grogin, Norman A.; Guo, Yicheng; Hathi, Nimish P.; Hopkins, Philip F.; Kartaltepe, Jeyhan S.; Kocevski, Dale; Koekemoer, Anton M.; Koo, David C.; Lee, Seong-Kook; Lotz, Jennifer M.; Lucas, Ray A.; Nayyeri, Hooshang; Peth, Michael; Pforr, Janine; Primack, Joel R.; Santini, Paola; Simmons, Brooke D.; Stefanon, Mauro; Straughn, Amber; Snyder, Gregory F.; Wuyts, Stijn

    2017-01-01

    Major galaxy-galaxy merging can play an important role in the history of massive galaxies (stellar masses > 2E10 Msun) over cosmic time. An important way to measure the impact of major merging is to study close pairs of galaxies with stellar mass or flux ratios between 1 and 4. We improve on the best recent efforts by probing merging of lower mass galaxies, anchoring evolutionary trends from five Hubble Space Telescope Legacy fields in the Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) to the nearby universe using the Sloan Digital Sky Survey (SDSS) to measure the fraction of massive galaxies in such pairs during six epochs spanning 01.5. This implies that major merging may not be as important at high redshifts as previously thought, merger timescales may not be fully understood, or we may be missing evidence of mergers at z~2-3 owing to CANDELS selection effects. Next, we will analyze pair fractions and merging timescales within realistic mocks of CANDELS from a state-of-the-art Semi-Analytic Model (SAM) to better understand and calibrate our empirical results.

  19. Multi-path variational transition state theory for chemical reaction rates of complex polyatomic species: ethanol + OH reactions.

    PubMed

    Zheng, Jingjing; Truhlar, Donald G

    2012-01-01

    Complex molecules often have many structures (conformations) of the reactants and the transition states, and these structures may be connected by coupled-mode torsions and pseudorotations; some but not all structures may have hydrogen bonds in the transition state or reagents. A quantitative theory of the reaction rates of complex molecules must take account of these structures, their coupled-mode nature, their qualitatively different character, and the possibility of merging reaction paths at high temperature. We have recently developed a coupled-mode theory called multi-structural variational transition state theory (MS-VTST) and an extension, called multi-path variational transition state theory (MP-VTST), that includes a treatment of the differences in the multi-dimensional tunneling paths and their contributions to the reaction rate. The MP-VTST method was presented for unimolecular reactions in the original paper and has now been extended to bimolecular reactions. The MS-VTST and MP-VTST formulations of variational transition state theory include multi-faceted configuration-space dividing surfaces to define the variational transition state. They occupy an intermediate position between single-conformation variational transition state theory (VTST), which has been used successfully for small molecules, and ensemble-averaged variational transition state theory (EA-VTST), which has been used successfully for enzyme kinetics. The theories are illustrated and compared here by application to three thermal rate constants for reactions of ethanol with hydroxyl radical--reactions with 4, 6, and 14 saddle points.
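
    To make the role of multiple structures concrete, a schematic multi-structural TST expression is shown below. This is a simplified sketch only: it omits the variational optimization of the dividing surface and the multi-dimensional tunneling transmission coefficients that distinguish MS-VTST and MP-VTST.

```latex
% Schematic multi-structural TST rate constant (simplified):
k^{\mathrm{MS\text{-}TST}}(T) \;=\;
  \frac{k_{B}T}{h}\,
  \frac{\sum_{j} Q_{j}^{\ddagger}(T)\, e^{-V_{j}^{\ddagger}/k_{B}T}}
       {\Phi^{\mathrm{R}}(T)},
% where the sum runs over transition-state structures and Phi^R is the
% reactants' partition function (per unit volume for a bimolecular reaction).
```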

  20. Transition path time distributions

    NASA Astrophysics Data System (ADS)

    Laleman, M.; Carlon, E.; Orland, H.

    2017-12-01

    Biomolecular folding, at least in simple systems, can be described as a two state transition in a free energy landscape with two deep wells separated by a high barrier. Transition paths are the short part of the trajectories that cross the barrier. Average transition path times and, recently, their full probability distribution have been measured for several biomolecular systems, e.g., in the folding of nucleic acids or proteins. Motivated by these experiments, we have calculated the full transition path time distribution for a single stochastic particle crossing a parabolic barrier, including inertial terms which were neglected in previous studies. These terms influence the short time scale dynamics of a stochastic system and can be of experimental relevance in view of the short duration of transition paths. We derive the full transition path time distribution as well as the average transition path times and discuss the similarities and differences with the high friction limit.

  1. Slow secondary relaxation in a free-energy landscape model for relaxation in glass-forming liquids

    NASA Astrophysics Data System (ADS)

    Diezemann, Gregor; Mohanty, Udayan; Oppenheim, Irwin

    1999-02-01

    Within the framework of a free-energy landscape model for the relaxation in supercooled liquids the primary (α) relaxation is modeled by transitions among different free-energy minima. The secondary (β) relaxation then corresponds to intraminima relaxation. We consider a simple model for the reorientational motions of the molecules associated with both processes and calculate the dielectric susceptibility as well as the spin-lattice relaxation times. The parameters of the model can be chosen in a way that both quantities show a behavior similar to that observed in experimental studies on supercooled liquids. In particular we find that it is not possible to obtain a crossing of the time scales associated with α and β relaxation. In our model these processes always merge at high temperatures and the α process remains above the merging temperature. The relation to other models is discussed.

  2. Modeling and numerical analysis of a magneto-inertial fusion concept with the target created through FRC merging

    NASA Astrophysics Data System (ADS)

    Li, Chenguang; Yang, Xianjun

    2016-10-01

    The Magnetized Plasma Fusion Reactor concept is proposed as a magneto-inertial fusion approach based on the target plasma created through the collision merging of two oppositely translating field reversed configuration plasmas, which is then compressed by the imploding liner driven by the pulsed-power driver. The target creation process is described by a two-dimensional magnetohydrodynamics model, resulting in the typical target parameters. The implosion process and the fusion reaction are modeled by a simple zero-dimensional model, taking into account the alpha particle heating and the bremsstrahlung radiation loss. The compression on the target can be 2D cylindrical or 2.4D with the additive axial contraction taken into account. The dynamics of the liner compression and fusion burning are simulated and the optimum fusion gain and the associated target parameters are predicted. The scientific breakeven could be achieved at the optimized conditions.

  3. Calculation of smooth potential energy surfaces using local electron correlation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-14

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl− with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  4. Neutron Capture Energies for Flux Normalization and Approximate Model for Gamma-Smeared Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Liu, Yuxuan

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) Virtual Environment for Reactor Applications (VERA) neutronics simulator MPACT has used a single recoverable fission energy for each fissionable nuclide, assuming that all recoverable energies come only from the fission reaction, for which capture energy is merged with fission energy. This approach includes approximations and requires improvement by separating capture energy from the merged effective recoverable energy. This report documents the procedure to generate recoverable neutron capture energies and the development of a program called CapKappa to generate capture energies. Recoverable neutron capture energies have been generated by using CapKappa with the evaluated nuclear data file (ENDF)/B-7.0 and 7.1 cross section and decay libraries. The new capture kappas were compared to the current SCALE-6.2 and the CASMO-5 capture kappas. These new capture kappas have been incorporated into the Simplified AMPX 51- and 252-group libraries, and they can be used for the AMPX multigroup (MG) libraries and the SCALE code package. The CASL VERA neutronics simulator MPACT does not include a gamma transport capability, which prevents it from explicitly estimating local energy deposition from fission, neutron and gamma slowing down, and capture. Since the mean free path of gamma rays is typically much longer than that of neutrons, and the total gamma energy is about 10% of the total energy, the gamma-smeared power distribution is different from the fission power distribution. Explicit local energy deposition through neutron and gamma transport calculation is of significant importance in multi-physics whole core simulation with thermal-hydraulic feedback. Therefore, the gamma transport capability should be incorporated into the CASL neutronics simulator MPACT. However, this task will be time-consuming, since it requires developing the neutron-induced gamma production and gamma cross section libraries. This study investigates an approximate model to estimate the gamma-smeared power distribution without performing any gamma transport calculation. A simple approximate gamma smearing model has been investigated based on the facts that pinwise gamma energy depositions are almost flat over a fuel assembly, and assembly-wise gamma energy deposition is proportional to kappa-fission energy deposition. The approximate gamma smearing model works well for single assembly cases, and can partly improve the gamma-smeared power distribution for the whole core model. Although the power distributions can be improved by the approximate gamma smearing model, there is still an issue in explicitly obtaining local energy deposition. A new simple approach or a gamma transport/diffusion capability may need to be incorporated into MPACT to estimate local energy deposition for more robust multi-physics simulation.

  5. THE DYNAMICS OF MERGING CLUSTERS: A MONTE CARLO SOLUTION APPLIED TO THE BULLET AND MUSKET BALL CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, William A., E-mail: wadawson@ucdavis.edu

    2013-08-01

    Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by observed parameter uncertainty. Simple analytic models exist but the assumptions of these methods invalidate their results near the collision time, plus error propagation of the highly correlated merger parameters is unfeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamic parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (~6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.
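
    A heavily simplified, hypothetical sketch of the Monte Carlo idea described above: draw the inputs (subcluster masses and a maximum separation) from assumed distributions and propagate each draw through a point-mass, energy-conservation model to a collision speed. The function name, numbers, and the point-mass dynamics are illustrative assumptions, not the paper's actual implementation (which uses extended halo profiles and further constraints).

```python
import numpy as np

G = 4.30091e-9  # gravitational constant in Mpc (km/s)^2 / Msun

def collision_speed(m1, m2, d_max_mpc, d_coll_mpc=0.1):
    """Relative speed at separation d_coll for two point masses falling
    from rest at separation d_max (simple energy conservation)."""
    return np.sqrt(2.0 * G * (m1 + m2) * (1.0 / d_coll_mpc - 1.0 / d_max_mpc))

rng = np.random.default_rng(0)
n = 100_000
m1 = rng.normal(1.5e14, 0.3e14, n)     # Msun, made-up observational uncertainty
m2 = rng.normal(3.0e14, 0.5e14, n)     # Msun
d_max = rng.uniform(1.0, 3.0, n)       # Mpc, separation at turnaround (assumed prior)

v = collision_speed(m1, m2, d_max)
lo, med, hi = np.percentile(v, [16, 50, 84])
print(f"collision speed ~ {med:.0f} (+{hi - med:.0f} / -{med - lo:.0f}) km/s")
```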

  6. Genesis of magnetic fields in isolated white dwarfs

    NASA Astrophysics Data System (ADS)

    Briggs, Gordon P.; Ferrario, Lilia; Tout, Christopher A.; Wickramasinghe, Dayal T.

    2018-05-01

    A dynamo mechanism driven by differential rotation when stars merge has been proposed to explain the presence of strong fields in certain classes of magnetic stars. In the case of the high field magnetic white dwarfs (HFMWDs), the site of the differential rotation has been variously thought to be the common envelope, the hot outer regions of a merged degenerate core or an accretion disc formed by a tidally disrupted companion that is subsequently accreted by a degenerate core. We have shown previously that the observed incidence of magnetism and the mass distribution in HFMWDs are consistent with the hypothesis that they are the result of merging binaries during common envelope evolution. Here we calculate the magnetic field strengths generated by common envelope interactions for synthetic populations using a simple prescription for the generation of fields and find that the observed magnetic field distribution is also consistent with the stellar merging hypothesis. We use the Kolmogorov-Smirnov test to study the correlation between the calculated and the observed field strengths and find that it is consistent for low envelope ejection efficiency. We also suggest that field generation by the plunging of a giant gaseous planet on to a white dwarf may explain why magnetism among cool white dwarfs (including DZ white dwarfs) is higher than among hot white dwarfs. In this picture a super-Jupiter residing in the outer regions of the white dwarf's planetary system is perturbed into a highly eccentric orbit by a close stellar encounter and is later accreted by the white dwarf.

  7. Genesis of magnetic fields in isolated white dwarfs

    NASA Astrophysics Data System (ADS)

    Briggs, Gordon P.; Ferrario, Lilia; Tout, Christopher A.; Wickramasinghe, Dayal T.

    2018-07-01

    A dynamo mechanism driven by differential rotation when stars merge has been proposed to explain the presence of strong fields in certain classes of magnetic stars. In the case of the high-field magnetic white dwarfs (HFMWDs), the site of the differential rotation has been variously thought to be the common envelope, the hot outer regions of a merged degenerate core or an accretion disc formed by a tidally disrupted companion that is subsequently accreted by a degenerate core. We have shown previously that the observed incidence of magnetism and the mass distribution in HFMWDs are consistent with the hypothesis that they are the result of merging binaries during common envelope evolution. Here, we calculate the magnetic field strengths generated by common envelope interactions for synthetic populations using a simple prescription for the generation of fields and find that the observed magnetic field distribution is also consistent with the stellar merging hypothesis. We use the Kolmogorov-Smirnov test to study the correlation between the calculated and the observed field strengths and find that it is consistent for low envelope ejection efficiency. We also suggest that field generation by the plunging of a giant gaseous planet on to a white dwarf may explain why magnetism among cool white dwarfs (including DZ white dwarfs) is higher than among hot white dwarfs. In this picture, a super-Jupiter residing in the outer regions of the white dwarf's planetary system is perturbed into a highly eccentric orbit by a close stellar encounter and is later accreted by the white dwarf.

  8. Measures of health sciences journal use: a comparison of vendor, link-resolver, and local citation statistics.

    PubMed

    De Groote, Sandra L; Blecic, Deborah D; Martin, Kristin

    2013-04-01

    Libraries require efficient and reliable methods to assess journal use. Vendors provide complete counts of articles retrieved from their platforms. However, if a journal is available on multiple platforms, several sets of statistics must be merged. Link-resolver reports merge data from all platforms into one report but only record partial use because users can access library subscriptions from other paths. Citation data are limited to publication use. Vendor, link-resolver, and local citation data were examined to determine correlation. Because link-resolver statistics are easy to obtain, the study library especially wanted to know if they correlate highly with the other measures. Vendor, link-resolver, and local citation statistics for the study institution were gathered for health sciences journals. Spearman rank-order correlation coefficients were calculated. There was a high positive correlation between all three data sets, with vendor data commonly showing the highest use. However, a small percentage of titles showed anomalous results. Link-resolver data correlate well with vendor and citation data, but due to anomalies, low link-resolver data would best be used to suggest titles for further evaluation using vendor data. Citation data may not be needed as it correlates highly with other measures.

  9. An open-source software package for multivariate modeling and clustering: applications to air quality management.

    PubMed

    Wang, Xiuquan; Huang, Guohe; Zhao, Shan; Guo, Junhong

    2015-09-01

    This paper presents an open-source software package, rSCA, which is developed based upon a stepwise cluster analysis method and serves as a statistical tool for modeling the relationships between multiple dependent and independent variables. The rSCA package is efficient in dealing with both continuous and discrete variables, as well as nonlinear relationships between the variables. It divides the sample sets of dependent variables into different subsets (or subclusters) through a series of cutting and merging operations based upon the theory of multivariate analysis of variance (MANOVA). The modeling results are given by a cluster tree, which includes both intermediate and leaf subclusters as well as the flow paths from the root of the tree to each leaf subcluster specified by a series of cutting and merging actions. The rSCA package is a handy and easy-to-use tool and is freely available at http://cran.r-project.org/package=rSCA . By applying the developed package to air quality management in an urban environment, we demonstrate its effectiveness in dealing with the complicated relationships among multiple variables in real-world problems.

  10. Implications of path tolerance and path characteristics on critical vehicle manoeuvres

    NASA Astrophysics Data System (ADS)

    Lundahl, K.; Frisk, E.; Nielsen, L.

    2017-12-01

    Path planning and path following are core components in safe autonomous driving. Typically, a path planner provides a path with some tolerance on how tightly the path should be followed. Based on that, and other path characteristics, for example, sharpness of curves, a speed profile needs to be assigned so that the vehicle can stay within the given tolerance without going unnecessarily slow. Here, such trajectory planning is based on optimal control formulations where critical cases arise as on-the-limit solutions. The study focuses on heavy commercial vehicles, causing rollover to be of a major concern, due to the relatively high centre of gravity. Several results are obtained on required model complexity depending on path characteristics, for example, quantification of required path tolerance for a simple model to be sufficient, quantification of when yaw inertia needs to be considered in more detail, and how the curvature rate of change interplays with available friction. Overall, in situations where the vehicle is subject to a wide range of driving conditions, from good transport roads to more tricky avoidance manoeuvres, the requirements on the path following will vary. For this, the provided results form a basis for real-time path following.

  11. Automated flight path planning for virtual endoscopy.

    PubMed

    Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S

    1998-05-01

    In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.
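
    To illustrate the idea of iteratively correcting a path toward the medial axis, here is a minimal, hypothetical 2-D sketch: each path point is nudged uphill on the distance-transform field of the segmented lumen, which pushes it toward the locally most central position. This is only a toy version; the paper's algorithm works on 3-D volumes and includes further safeguards.

```python
import numpy as np
from scipy import ndimage

def center_path(path_rc, lumen_mask, n_iter=50, step=0.5):
    """Nudge (row, col) path points toward the medial axis of a binary mask
    by gradient ascent on its Euclidean distance transform."""
    dist = ndimage.distance_transform_edt(lumen_mask)
    grad_r, grad_c = np.gradient(dist)            # gradient of the distance field
    pts = np.asarray(path_rc, dtype=float)
    for _ in range(n_iter):
        r = np.clip(pts[:, 0], 0, lumen_mask.shape[0] - 1).astype(int)
        c = np.clip(pts[:, 1], 0, lumen_mask.shape[1] - 1).astype(int)
        pts[:, 0] += step * grad_r[r, c]          # uphill -> toward the centerline
        pts[:, 1] += step * grad_c[r, c]
    return pts

# Toy example: a horizontal tube; an off-center straight path drifts toward the middle row.
mask = np.zeros((41, 200), dtype=bool)
mask[10:31, :] = True                             # tube centered on row 20
path = [(12.0, x) for x in range(0, 200, 10)]     # start near the tube wall
print(center_path(path, mask)[:3])
```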

  12. The War Film: Historical Perspective or Simple Entertainment

    DTIC Science & Technology

    2001-06-01

    published in 1978, the effect of the image repair campaign was not evident.25 Dr. Suid is an extremely helpful source of DOD assistance and filmmaker ...single speech at any one particular venue.32 The filmmakers merged several snippets of Patton speeches into dialogue to provide dramatic effect and a... filmmakers could support it and that the historical 135 accurate scene was still a good story. If the story faltered, dramatic effect was inserted to

  13. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine how well resolution limit problems are mitigated, using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
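
    For reference, the modularity score used as one of the evaluation criteria above is the standard Newman–Girvan quantity (generic definition, not specific to the HAM paper): for an undirected graph with adjacency matrix A, node degrees k_i, m edges and community labels c_i,

```latex
Q \;=\; \frac{1}{2m}\sum_{i,j}\left(A_{ij}-\frac{k_{i}k_{j}}{2m}\right)\delta(c_{i},c_{j}).
```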

  14. The Use of Digital Technology in Finding Multiple Paths to Solve and Extend an Equilateral Triangle Task

    ERIC Educational Resources Information Center

    Santos-Trigo, Manuel; Reyes-Rodriguez, Aaron

    2016-01-01

    Mathematical tasks are crucial elements for teachers to orient, foster and assess students' processes to comprehend and develop mathematical knowledge. During the process of working and solving a task, searching for or discussing multiple solution paths becomes a powerful strategy for students to engage in mathematical thinking. A simple task that…

  15. Simplified, inverse, ejector design tool

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1993-01-01

    A simple lumped parameter based inverse design tool has been developed which provides flow path geometry and entrainment estimates subject to operational, acoustic, and design constraints. These constraints are manifested through specification of primary mass flow rate or ejector thrust, fully-mixed exit velocity, and static pressure matching. Fundamentally, integral forms of the conservation equations coupled with the specified design constraints are combined to yield an easily invertible linear system in terms of the flow path cross-sectional areas. Entrainment is computed by back substitution. Initial comparison with experimental and analogous one-dimensional methods shows good agreement. Thus, this simple inverse design code provides an analytically based, preliminary design tool with direct application to High Speed Civil Transport (HSCT) design studies.
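
    A sketch of how the integral conservation laws reduce to a linear system in the flow-path areas, under one-dimensional mixing with the stated constraints (schematic notation; subscripts p, s, m denote the primary, secondary, and mixed streams, and the specific closure used in the tool may differ).

```latex
% 1-D mass and momentum balances over the mixing section (schematic):
\rho_{p}u_{p}A_{p} + \rho_{s}u_{s}A_{s} \;=\; \rho_{m}u_{m}A_{m},
\qquad
\big(p_{p}+\rho_{p}u_{p}^{2}\big)A_{p} + \big(p_{s}+\rho_{s}u_{s}^{2}\big)A_{s}
  \;=\; \big(p_{m}+\rho_{m}u_{m}^{2}\big)A_{m}.
% With the velocities and static pressures fixed by the design constraints,
% these relations are linear in A_p, A_s, A_m; entrainment follows by back substitution.
```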

  16. A hybrid quantum eraser scheme for characterization of free-space and fiber communication channels

    NASA Astrophysics Data System (ADS)

    Nape, Isaac; Kyeremah, Charlotte; Vallés, Adam; Rosales-Guzmán, Carmelo; Buah-Bassuah, Paul K.; Forbes, Andrew

    2018-02-01

    We demonstrate a simple projective measurement based on the quantum eraser concept that can be used to characterize the disturbances of any communication channel. Quantum erasers are commonly implemented as spatially separated path interferometric schemes. Here we exploit the advantages of redefining the which-path information in terms of spatial modes, replacing physical paths with abstract paths of orbital angular momentum (OAM). Remarkably, vector modes (natural modes of free-space and fiber) have a non-separable feature of spin-orbit coupled states, equivalent to the description of two independently marked paths. We explore the effects of fiber perturbations by probing a step-index optical fiber channel with a vector mode, relevant to high-order spatial mode encoding of information for ultra-fast fiber communications.

  17. From the physics of interacting polymers to optimizing routes on the London Underground

    PubMed Central

    Yeung, Chi Ho; Saad, David; Wong, K. Y. Michael

    2013-01-01

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise. PMID:23898198

  18. From the physics of interacting polymers to optimizing routes on the London Underground.

    PubMed

    Yeung, Chi Ho; Saad, David; Wong, K Y Michael

    2013-08-20

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise.

  19. Predictor laws for pictorial flight displays

    NASA Technical Reports Server (NTRS)

    Grunwald, A. J.

    1985-01-01

    Two predictor laws are formulated and analyzed: (1) a circular path law based on constant accelerations perpendicular to the path and (2) a predictor law based on state transition matrix computations. It is shown that for both methods the predictor provides the essential lead zeros for the path-following task. However, in contrast to the circular path law, the state transition matrix law furnishes the system with additional zeros that entirely cancel out the higher-frequency poles of the vehicle dynamics. On the other hand, the circular path law yields a zero steady-state error in following a curved trajectory with a constant radius. A combined predictor law is suggested that utilizes the advantages of both methods. A simple analysis shows that the optimal prediction time mainly depends on the level of precision required in the path-following task, and guidelines for determining the optimal prediction time are given.

  20. SU-F-R-33: Can CT and CBCT Be Used Simultaneously for Radiomics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, R; Wang, J; Zhong, H

    2016-06-15

    Purpose: To investigate whether CBCT and CT can be used in radiomics analysis simultaneously, and to establish a batch correction method for radiomics in two similar image modalities. Methods: Four sites including rectum, bladder, femoral head and lung were considered as regions of interest (ROI) in this study. For each site, 10 treatment planning CT images were collected, and 10 CBCT images from the same site of the same patient were acquired at the first radiotherapy fraction. 253 radiomics features, which were selected by our test-retest study on rectum cancer CT (ICC>0.8), were calculated for both CBCT and CT images in MATLAB. Simple scaling (z-score) and nonlinear correction methods were applied to the CBCT radiomics features. The Pearson correlation coefficient was calculated to analyze the correlation between radiomics features of CT and CBCT images before and after correction. Cluster analysis of mixed data (for each site, 5 CT and 5 CBCT data sets randomly selected) was implemented to validate the feasibility of merging radiomics data from CBCT and CT. The consistency of the clustering result and the site grouping was verified by a chi-square test for the different datasets, respectively. Results: For simple scaling, 234 of the 253 features have correlation coefficient ρ > 0.8, among which 154 features have ρ > 0.9. For radiomics data after nonlinear correction, 240 of the 253 features have ρ > 0.8, among which 220 features have ρ > 0.9. Cluster analysis of mixed data shows that data of the four sites were almost precisely separated for simple scaling (p = 1.29 × 10⁻⁷, χ² test) and nonlinear correction (p = 5.98 × 10⁻⁷, χ² test), which is similar to the cluster result of the CT data (p = 4.52 × 10⁻⁸, χ² test). Conclusion: Radiomics data from CBCT can be merged with those from CT by simple scaling or nonlinear correction for radiomics analysis.
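
    A minimal sketch of the "simple scaling (z-score)" correction mentioned above: standardize each CBCT feature and re-express it on the scale of the corresponding CT feature. Variable names and the stand-in data are illustrative; the study's nonlinear correction is not reproduced here.

```python
import numpy as np

def zscore_harmonize(cbct_features, ct_features):
    """Rescale CBCT radiomics features (n_samples x n_features) so that each
    feature has the mean and standard deviation observed in the CT data."""
    cbct = np.asarray(cbct_features, dtype=float)
    ct = np.asarray(ct_features, dtype=float)
    z = (cbct - cbct.mean(axis=0)) / cbct.std(axis=0, ddof=1)   # per-feature z-score
    return z * ct.std(axis=0, ddof=1) + ct.mean(axis=0)         # map onto the CT scale

# Stand-in data: 10 samples x 253 features per modality.
rng = np.random.default_rng(1)
ct = rng.normal(size=(10, 253))
cbct = 2.0 * ct + 5.0 + rng.normal(scale=0.1, size=ct.shape)    # shifted/scaled copy
print(np.allclose(zscore_harmonize(cbct, ct).mean(axis=0), ct.mean(axis=0)))
```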

  1. Application of transmission infrared spectroscopy and partial least squares regression to predict immunoglobulin G concentration in dairy and beef cow colostrum.

    PubMed

    Elsohaby, Ibrahim; Windeyer, M Claire; Haines, Deborah M; Homerosky, Elizabeth R; Pearson, Jennifer M; McClure, J Trenton; Keefe, Greg P

    2018-03-06

    The objective of this study was to explore the potential of transmission infrared (TIR) spectroscopy in combination with partial least squares regression (PLSR) for quantification of dairy and beef cow colostral immunoglobulin G (IgG) concentration and assessment of colostrum quality. A total of 430 colostrum samples were collected from dairy (n = 235) and beef (n = 195) cows and tested by a radial immunodiffusion (RID) assay and TIR spectroscopy. Colostral IgG concentrations obtained by the RID assay were linked to the preprocessed spectra and divided into combined and prediction data sets. Three PLSR calibration models were built: one for the dairy cow colostrum only, the second for beef cow colostrum only, and the third for the merged dairy and beef cow colostrum. The predictive performance of each model was evaluated separately using the independent prediction data set. The Pearson correlation coefficients between IgG concentrations as determined by the TIR-based assay and the RID assay were 0.84 for dairy cow colostrum, 0.88 for beef cow colostrum, and 0.92 for the merged set of dairy and beef cow colostrum. The average differences between colostral IgG concentrations obtained by the RID- and TIR-based assays were -3.5, 2.7, and 1.4 g/L for dairy, beef, and merged colostrum samples, respectively. Further, the average relative error of the colostral IgG predicted by the TIR spectroscopy from the RID assay was 5% for dairy cow, 1.2% for beef cow, and 0.8% for the merged data set. The average intra-assay CV% of the IgG concentration predicted by the TIR-based method were 3.2%, 2.5%, and 6.9% for dairy cow, beef cow, and merged data set, respectively. The utility of the TIR method for assessment of colostrum quality was evaluated using the entire data set and showed that TIR spectroscopy accurately identified the quality status of 91% of dairy cow colostrum, 95% of beef cow colostrum, and 89% and 93% of the merged dairy and beef cow colostrum samples, respectively. The results showed that TIR spectroscopy demonstrates potential as a simple, rapid, and cost-efficient method for use as an estimate of IgG concentration in dairy and beef cow colostrum samples and assessment of colostrum quality. The results also showed that merging the dairy and beef cow colostrum sample data sets improved the predictive ability of the TIR spectroscopy.
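
    A minimal, generic sketch of a PLSR calibration of the kind described above, using scikit-learn; the spectra, IgG values, and number of latent variables below are placeholders, and the preprocessing steps from the study are omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Placeholder data: rows = colostrum samples, columns = absorbances at IR wavenumbers.
rng = np.random.default_rng(42)
spectra = rng.normal(size=(430, 1000))            # stand-in TIR spectra
igg = rng.uniform(10, 150, size=430)              # stand-in RID IgG values (g/L)

X_cal, X_pred, y_cal, y_true = train_test_split(spectra, igg, test_size=0.3,
                                                random_state=0)
model = PLSRegression(n_components=10)            # number of latent variables to tune
model.fit(X_cal, y_cal)
igg_pred = model.predict(X_pred).ravel()

r = np.corrcoef(igg_pred, y_true)[0, 1]           # Pearson r, as reported in the study
print(f"Pearson r on held-out samples: {r:.2f}")
```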

  2. Viewing strategies for simple and chimeric faces: an investigation of perceptual bias in normals and schizophrenic patients using visual scan paths.

    PubMed

    Phillips, M L; David, A S

    1997-11-01

    Left hemi-face (LHF) perceptual bias of chimeric faces in normal right-handers is well-documented. We investigated mechanisms underlying this by measuring visual scan paths in right-handed normal controls (n = 9) and schizophrenics (n = 8) for simple, full-face photographs and schematic, happy-sad chimeric faces over 5 s. Normals viewed the left side/ LHF first, more so than the right of all stimuli. Schizophrenics viewed the LHF first more than the right of stimuli for which there was a LHF choice of predominant affect. Neither group demonstrated an overall LHF perceptual bias for the chimeric stimuli. Readjustment of the initial LHF bias in controls was probably a result of increased attention to stimulus detail with scanning, whereas the schizophrenics demonstrated difficulty in redirection of the initial focus of attention. The study highlights the role of visual scan paths as a marker of normal and abnormal attentional processes. Copyright 1997 Academic Press.

  3. A simple method for estimating frequency response corrections for eddy covariance systems

    Treesearch

    W. J. Massman

    2000-01-01

    A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...

  4. Quantization of Simple Parametrized Systems

    NASA Astrophysics Data System (ADS)

    Ruffini, Giulio

    1995-01-01

    I study the canonical formulation and quantization of some simple parametrized systems using Dirac's formalism and the Becchi-Rouet-Stora-Tyutin (BRST) extended phase space method. These systems include the parametrized particle and minisuperspace. Using Dirac's formalism I first analyze for each case the construction of the classical reduced phase space. There are two separate features of these systems that may make this construction difficult: (a) Because of the boundary conditions used, the actions are not gauge invariant at the boundaries. (b) The constraints may have a disconnected solution space. The relativistic particle and minisuperspace have such complicated constraints, while the non-relativistic particle displays only the first feature. I first show that a change of gauge fixing is equivalent to a canonical transformation in the reduced phase space, thus resolving the problems associated with the first feature above. Then I consider the quantization of these systems using several approaches: Dirac's method, Dirac-Fock quantization, and the BRST formalism. In the cases of the relativistic particle and minisuperspace I consider first the quantization of one branch of the constraint at the time and then discuss the backgrounds in which it is possible to quantize simultaneously both branches. I motivate and define the inner product, and obtain, for example, the Klein-Gordon inner product for the relativistic case. Then I show how to construct phase space path integral representations for amplitudes in these approaches--the Batalin-Fradkin-Vilkovisky (BFV) and the Faddeev path integrals --from which one can then derive the path integrals in coordinate space--the Faddeev-Popov path integral and the geometric path integral. In particular I establish the connection between the Hilbert space representation and the range of the lapse in the path integrals. I also examine the class of paths that contribute in the path integrals and how they affect space-time covariance, concluding that it is consistent to take paths that move forward in time only when there is no electric field. The key elements in this analysis are the space-like paths and the behavior of the action under the non-trivial ( Z_2) element of the reparametrization group.

  5. Ever-Widening Horizons: Hemingway’s War Literature, 1923 to 1940

    DTIC Science & Technology

    2005-03-09

    the erroneous merging of his literature and his carefully cultivated public machismo that leads to accusations that he celebrates violence or...experience: "The things he had come to know in this war were not so simple" (FWBT 248). War was a topic that Hemingway considered a "persistent dimension...Great War, these authors seemed quaint; one needed Hemingway, Joyce, Stein, and Dos Passos to give voice to the nihilism that the war had engendered

  6. The Productive Merger of Iodonium Salts and Organocatalysis. A Non-Photolytic Approach to the Enantioselective α-Trifluoromethylation of Aldehydes

    PubMed Central

    Allen, Anna E.; MacMillan, David W. C.

    2010-01-01

    An enantioselective organocatalytic α-trifluoromethylation of aldehydes has been accomplished using a commercially available, electrophilic trifluoromethyl source. The merging of Lewis acid and organocatalysis provides a new strategy for the enantioselective construction of trifluoromethyl stereogenicity, an important chiral synthon for pharmaceutical, material, and agrochemical applications. This mild and operationally simple protocol allows rapid access to enantioenriched α-trifluoromethylated aldehydes through a non-photolytic pathway. PMID:20297822

  7. Of Ivory and Smurfs: Loxodontan MapReduce Experiments for Web Search

    DTIC Science & Technology

    2009-11-01

    i.e., index construction may involve multiple flushes to local disk and on-disk merge sorts outside of MapReduce). Once the local indexes have been...contained 198 cores, which, with current dual-processor quad-core configurations, could fit into 25 machines—a far more modest cluster with today’s...significant impact on effectiveness. Our simple pruning technique was performed at query time and hence could be adapted to query-dependent

  8. Continuous quantum measurements and the action uncertainty principle

    NASA Astrophysics Data System (ADS)

    Mensky, Michael B.

    1992-09-01

    The path-integral approach to quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach the measurement amplitude determining probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral “in finite limits”). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weak) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for measurability of the gravitational field. A stronger (and more widely applicable) form of the AUP (for ideal measurements performed in the quantum regime) is |∫_{t′}^{t″} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand, respectively, for the measurement output and for the measurement error. It can also be presented in symbolic form as Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is inversely proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.
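
    For context, the "restricted path integral" measurement amplitude referred to above has the schematic form below: the integral runs only over the paths compatible with the measurement output α (a corridor of width Δq around the measured path). The notation is adapted from the abstract, not quoted from the paper.

```latex
% Measurement amplitude as a path integral restricted to the output corridor alpha:
A_{\alpha} \;=\; \int_{[q]\in\alpha} \mathcal{D}q \;\, e^{\,i S[q]/\hbar},
\qquad
P_{\alpha} \;=\; \big|A_{\alpha}\big|^{2}.
```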

  9. Thermal imaging application for behavior study of chosen nocturnal animals

    NASA Astrophysics Data System (ADS)

    Pregowski, Piotr; Owadowska, Edyta; Pietrzak, Jan

    2004-04-01

This paper presents preliminary results of a project undertaken to verify the hypothesis that small, nocturnal rodents use common paths which form a shared, rather stable system for fast movement. The report concentrates on the results of merging the uniquely good detection capabilities of modern IR thermal cameras with newly developed software. The final results offered by this method include both thermal movies and single synthetic graphic images of the paths traced during a few minutes or hours of observation, as well as detailed numerical data (".txt" files) on chosen detected events. Although it is too early to say that the elaborated method will allow us to answer all ecological questions, we can say that we have worked out a new, valuable tool for the next steps of our project. We expect that this method will enable us to solve important ecological problems in the study of nocturnal animals. The monitored area can be enlarged by the simultaneous use of several thermal imagers or IR thermographic cameras. The presented method can also be applied to quite different uses, e.g., the detection of ecological corridors.

  10. 3D characterization of trans- and inter-lamellar fatigue crack in (α + β) Ti alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babout, Laurent, E-mail: Laurent.babout@p.lodz.pl; Jopek, Łukasz; Preuss, Michael

    2014-12-15

This paper presents a three-dimensional image processing strategy that has been developed to quantitatively analyze and correlate the path of a fatigue crack with the lamellar microstructure found in Ti-6246. The analysis is carried out on X-ray microtomography images acquired in situ during uniaxial fatigue testing. The crack, the primary β-grain boundaries and the α lamellae have been segmented separately and merged for the first time to allow a better characterization and understanding of their mutual interaction. This has particularly emphasized the role of translamellar crack growth at a very high propagation angle with regard to the lamellar orientation, supporting the central role of colonies favorably oriented for basal 〈a〉 slip to guide the crack in the fully lamellar microstructure of Ti alloy. - Highlights: • 3D tomography images reveal strong short fatigue crack interaction with α lamellae. • Proposed 3D image processing methodology makes their segmentation possible. • Crack-lamellae orientation maps show prevalence of translamellar cracking. • Angle study supports the influence of basal/prismatic slip on crack path.

  11. Merging with the path not taken: Wilhelm Wundt's work as a precursor to the embedded-processes approach to memory, attention, and consciousness.

    PubMed

    Cowan, Nelson; Rachev, Nikolay R

    2018-06-04

    Early research on memory was dominated by two researchers forging different paths: Hermann Ebbinghaus, interested in principles of learning and recall, and Wilhelm Wundt, founder of the first formal laboratory of experimental psychology, who was interested in empirical evidence to interpret conscious experience. Whereas the work of Ebbinghaus is a much-heralded precursor of modern research on long-term memory, the work of Wundt appears to be a mostly-forgotten precursor to research on working memory. We show how his scientific perspective is germane to more recent investigations, with emphasis on the embedded-processes approaches of Nelson Cowan and Klaus Oberauer, and how it is in contrast with most other recent theoretical approaches. This investigation is important because the embedded-process theorists, apparently like most modern researchers, have recognized few of Wundt's specific contributions. We explore commonalities between the approaches and suggest that an appreciation of these commonalities might enrich the field going forward. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Virtually assisted optical colonoscopy

    NASA Astrophysics Data System (ADS)

    Marino, Joseph; Qiu, Feng; Kaufman, Arie

    2008-03-01

    We present a set of tools used to enhance the optical colonoscopy procedure in a novel manner with the aim of improving both the accuracy and efficiency of this procedure. In order to better present the colon information to the gastroenterologist performing a conventional (optical) colonoscopy, we undistort the radial distortion of the fisheye view of the colonoscope. The radial distortion is modeled with a function that converts the fisheye view to the perspective view, where the shape and size of polyps can be more readily observed. The conversion, accelerated on the graphics processing unit and running in real-time, calculates the corresponding position in the fisheye view of each pixel on the perspective image. We also merge our previous work in computer-aided polyp detection for virtual colonoscopy into the optical colonoscopy environment. The physical colonoscope path in the optical colonoscopy is approximated with the hugging corner shortest path, which is correlated with the centerline in the virtual colonoscopy. With the estimated distance that the colonoscope has been inserted, we are able to provide the gastroenterologist with visual cues along the observation path as to the location of possible polyps found by the detection process. In order to present the information to the gastroenterologist in a non-intrusive manner, we have developed a friendly user interface to enhance the optical colonoscopy without being cumbersome, distracting, or resulting in a more lackadaisical inspection by the gastroenterologist.
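
    The fisheye-to-perspective conversion described above is essentially a per-pixel backward mapping computed for the whole image. The sketch below illustrates that idea under a simple equidistant fisheye model; the function name, focal lengths, and projection model are illustrative assumptions, not the authors' actual colonoscope calibration.

```python
import numpy as np

def perspective_to_fisheye_map(width, height, f_persp, f_fish):
    """For each pixel of the output perspective image, compute the
    corresponding (x, y) sample position in the fisheye image.
    Assumes an equidistant fisheye model (r_fish = f_fish * theta) and a
    pinhole perspective model (r_persp = f_persp * tan(theta)); both
    images are centered on the optical axis (hypothetical calibration)."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(width) - cx, np.arange(height) - cy)
    r_persp = np.hypot(xs, ys)
    theta = np.arctan2(r_persp, f_persp)       # angle from the optical axis
    r_fish = f_fish * theta                    # equidistant projection radius
    scale = np.where(r_persp > 0, r_fish / r_persp, 0.0)
    map_x = cx + xs * scale
    map_y = cy + ys * scale
    return map_x, map_y                        # lookup table for resampling

# Example: build a lookup table for a 640x480 perspective view
mx, my = perspective_to_fisheye_map(640, 480, f_persp=300.0, f_fish=260.0)
```

    Because the mapping depends only on the image geometry, the lookup table can be computed once and reused for every frame, which is what makes a real-time GPU implementation straightforward.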

  13. Quantum Mechanics, Path Integrals and Option Pricing:. Reducing the Complexity of Finance

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani

    2003-04-01

    Quantum Finance represents the synthesis of the techniques of quantum theory (quantum mechanics and quantum field theory) to theoretical and applied finance. After a brief overview of the connection between these fields, we illustrate some of the methods of lattice simulations of path integrals for the pricing of options. The ideas are sketched out for simple models, such as the Black-Scholes model, where analytical and numerical results are compared. Application of the method to nonlinear systems is also briefly overviewed. More general models, for exotic or path-dependent options are discussed.
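
    As a minimal illustration of the path-sampling idea for the Black-Scholes case mentioned above, the sketch below compares the closed-form call price with an average over discretized log-price paths; the parameter values and function names are arbitrary choices, not taken from the paper.

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call_analytic(S0, K, r, sigma, T):
    """Closed-form Black-Scholes European call price."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def bs_call_paths(S0, K, r, sigma, T, n_steps=50, n_paths=100_000, seed=0):
    """Price the same option by averaging the discounted payoff over
    discretized log-price paths (a path-sampling analogue of the
    lattice path-integral approach)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * z, axis=1)
    ST = S0 * np.exp(log_paths[:, -1])          # terminal prices
    payoff = np.maximum(ST - K, 0.0)
    return exp(-r * T) * payoff.mean()

print(bs_call_analytic(100, 105, 0.05, 0.2, 1.0))   # ~8.02
print(bs_call_paths(100, 105, 0.05, 0.2, 1.0))      # close to the analytic value
```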

  14. Optimization of magnet end-winding geometry

    NASA Astrophysics Data System (ADS)

    Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.

    1994-03-01

    A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.

  15. Mars PathFinder Rover Traverse Image

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This figure contains an azimuth-elevation projection of the 'Gallery Panorama.' The original Simple Cylindrical mosaic has been reprojected to the inside of a sphere so that lines of constant azimuth radiate from the center and lines of constant elevation are concentric circles. This projection preserves the resolution of the original panorama. Overlaid onto the projected Martian surface is a delineation of the Sojourner rover traverse path during the 83 Sols (Martian days) of Pathfinder surface operations. The rover path was reproduced using IMP camera 'end of day' and 'Rover movie' image sequences and rover vehicle telemetry data as references.

  16. Development of an Implantable WBAN Path-Loss Model for Capsule Endoscopy

    NASA Astrophysics Data System (ADS)

    Aoyagi, Takahiro; Takizawa, Kenichi; Kobayashi, Takehiko; Takada, Jun-Ichi; Hamaguchi, Kiyoshi; Kohno, Ryuji

An implantable WBAN path-loss model for capsule endoscopy, which is used for examining digestive organs, is developed by conducting simulations and experiments. First, we performed FDTD simulations of implant WBAN propagation by using a numerical human model. Second, we performed FDTD simulations on a vessel that represents the human body. Third, we performed experiments using a vessel of the same dimensions as that used in the simulations. On the basis of the results of these simulations and experiments, we proposed the gradient and intercept parameters of the simple path-loss in-body propagation model.
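
    A gradient-and-intercept path-loss model of the kind proposed here is commonly written as PL(d) = PL0 + 10·n·log10(d/d0). The sketch below fits those two parameters to hypothetical in-body measurements by least squares; the sample values and reference distance are illustrative only, not data from the paper.

```python
import numpy as np

def fit_path_loss(d, pl_db, d0=0.01):
    """Fit the log-distance path-loss model
        PL(d) = PL0 + 10 * n * log10(d / d0)
    returning the intercept PL0 (dB at reference distance d0) and the
    gradient n.  d is distance in metres, pl_db the measured loss in dB."""
    x = 10.0 * np.log10(np.asarray(d, float) / d0)
    A = np.column_stack([np.ones_like(x), x])      # design matrix [1, 10*log10(d/d0)]
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(pl_db, float), rcond=None)
    return pl0, n

# Hypothetical in-body measurements (distances in metres, loss in dB)
d = [0.02, 0.05, 0.08, 0.12, 0.15]
pl = [35.0, 52.0, 61.0, 68.0, 72.0]
print(fit_path_loss(d, pl))
```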

  17. Oscillations of a Simple Pendulum with Extremely Large Amplitudes

    ERIC Educational Resources Information Center

    Butikov, Eugene I.

    2012-01-01

Large oscillations of a simple rigid pendulum with amplitudes close to 180° are treated on the basis of a physically justified approach in which the cycle of oscillation is divided into several stages. The major part of the almost closed circular path of the pendulum is approximated by the limiting motion, while the motion in the vicinity…

  18. Nonvisual Route Following with Guidance from a Simple Haptic or Auditory Display

    ERIC Educational Resources Information Center

    Marston, James R.; Loomis, Jack M.; Klatzky, Roberta L.; Golledge, Reginald G.

    2007-01-01

    A path-following experiment, using a global positioning system, was conducted with participants who were legally blind. On- and off-course confirmations were delivered by either a vibrotactile or an audio stimulus. These simple binary cues were sufficient for guidance and point to the need to offer output options for guidance systems for people…

  19. Study of the optical crosstalk in a heterodyne displacement gauge with cancelable circuit

    NASA Astrophysics Data System (ADS)

    Donazzan, Alberto; Naletto, Giampiero; Pelizzo, Maria G.

    2017-06-01

A main focus of high-precision heterodyne displacement interferometers is the means of splitting and merging the reference (R) and measurement (M) beams when a cancelable circuit is implemented. Optical mixing of R and M gives rise to a systematic error called the cyclic error, which appears as a periodic offset between the detected displacement and the actual one. A simple derivation of the cyclic error due to optical mixing is proposed for the cancelable circuit design. R and M beatings are collected by two photodiodes and conveniently converted by transimpedance amplifiers, such that the output signals are turned into ac-coupled voltages. The detected phase can be calculated as a function of the real phase (a change in optical path difference) in the case of zero-crossing detection. The result is a cyclic non-linearity which depends on the actual phase and on the amount of optical power leakage from the R channel into the M channel and vice versa. We then applied this result to the prototype displacement gauge we are developing, which implements the cancelable circuit design with wavefront division. The splitting between R and M is done with a double-coated mirror with a central hole, tilted by 45° with respect to the surface normal. The interferometer features two removable diffraction masks, respectively located before the merging point (a circular obscuration) and before the recombination point (a ring obscuration). In order to predict the extent of optical mixing between R and M, the whole layout was simulated by means of the Zemax® Physical Optics Propagation (POP) tool. After the model of our setup was built and qualitatively verified, we calculated the amount of optical leakage in various configurations: with and without the diffraction masks, as well as for different sizes of both the holey mirror and the diffraction masks. The corresponding maximum displacement error was then calculated for every configuration using the previously derived formula. The insertion and optimization of the diffraction masks greatly improved the expected optical isolation inside the system. Data acquisition from our displacement gauge has just started. We plan to experimentally verify these results as soon as our prototype gauge reaches the desired sub-nanometer sensitivity.

  20. Ground-based atmospheric water vapor monitoring system with spectroscopy of radiation in 20-30 GHz and 50-60 GHz bands

    NASA Astrophysics Data System (ADS)

    Nagasaki, Takeo; Tajima, Osamu; Araki, Kentaro; Ishimoto, Hiroshi

    2016-07-01

We propose a novel ground-based meteorological monitoring system. In the 20–30 GHz band, our system simultaneously measures a broad absorption peak of water vapor and cloud liquid water. Additional observation in the 50–60 GHz band obtains the radiation of oxygen. Spectral results contain vertical profiles of the physical temperature of atmospheric molecules. We designed a simple method for placing the system atop high buildings and mountains and on decks of ships. There is a simple optical system in front of horn antennas for each frequency band. A focused signal from a reflector is separated into two polarized optical paths by a wire grid. Each signal received by the horn antenna is amplified by low-noise amplifiers. Spectra of each signal are measured as a function of frequency using two analyzers. A blackbody calibration source is maintained at 50 K in a cryostat. The calibration signal is led to each receiver via the wire grid. The input path of the signal is selected by rotation of the wire grid by 90°, because the polarization axis of the reflected path and axis of the transparent path are orthogonal. We developed a prototype receiver and demonstrated its performance using monitoring at the zenith.

  1. Measures of health sciences journal use: a comparison of vendor, link-resolver, and local citation statistics*

    PubMed Central

    De Groote, Sandra L.; Blecic, Deborah D.; Martin, Kristin

    2013-01-01

    Objective: Libraries require efficient and reliable methods to assess journal use. Vendors provide complete counts of articles retrieved from their platforms. However, if a journal is available on multiple platforms, several sets of statistics must be merged. Link-resolver reports merge data from all platforms into one report but only record partial use because users can access library subscriptions from other paths. Citation data are limited to publication use. Vendor, link-resolver, and local citation data were examined to determine correlation. Because link-resolver statistics are easy to obtain, the study library especially wanted to know if they correlate highly with the other measures. Methods: Vendor, link-resolver, and local citation statistics for the study institution were gathered for health sciences journals. Spearman rank-order correlation coefficients were calculated. Results: There was a high positive correlation between all three data sets, with vendor data commonly showing the highest use. However, a small percentage of titles showed anomalous results. Discussion and Conclusions: Link-resolver data correlate well with vendor and citation data, but due to anomalies, low link-resolver data would best be used to suggest titles for further evaluation using vendor data. Citation data may not be needed as it correlates highly with other measures. PMID:23646026
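
    The comparison above rests on Spearman rank-order correlations between the three usage measures. A minimal sketch of that computation, using invented per-title counts rather than the study's data, is shown below.

```python
from scipy.stats import spearmanr

# Hypothetical per-title use counts from the three sources
vendor        = [1520, 890, 430, 2210, 75, 640, 1330]
link_resolver = [310, 180, 95, 470, 20, 160, 260]
citations     = [42, 25, 9, 58, 1, 18, 33]

rho_vl, p_vl = spearmanr(vendor, link_resolver)
rho_vc, p_vc = spearmanr(vendor, citations)
rho_lc, p_lc = spearmanr(link_resolver, citations)
print(f"vendor vs link-resolver: rho={rho_vl:.2f} (p={p_vl:.3f})")
print(f"vendor vs citations:     rho={rho_vc:.2f} (p={p_vc:.3f})")
print(f"link-resolver vs cites:  rho={rho_lc:.2f} (p={p_lc:.3f})")
```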

  2. Off the beaten path: a new approach to realistically model the orbital decay of supermassive black holes in galaxy formation simulations

    NASA Astrophysics Data System (ADS)

    Tremmel, M.; Governato, F.; Volonteri, M.; Quinn, T. R.

    2015-08-01

    We introduce a sub-grid force correction term to better model the dynamical friction experienced by a supermassive black hole (SMBH) as it orbits within its host galaxy. This new approach accurately follows an SMBH's orbital decay and drastically improves over commonly used `advection' methods. The force correction introduced here naturally scales with the force resolution of the simulation and converges as resolution is increased. In controlled experiments, we show how the orbital decay of the SMBH closely follows analytical predictions when particle masses are significantly smaller than that of the SMBH. In a cosmological simulation of the assembly of a small galaxy, we show how our method allows for realistic black hole orbits. This approach overcomes the limitations of the advection scheme, where black holes are rapidly and artificially pushed towards the halo centre and then forced to merge, regardless of their orbits. We find that SMBHs from merging dwarf galaxies can spend significant time away from the centre of the remnant galaxy. Improving the modelling of SMBH orbital decay will help in making robust predictions of the growth, detectability and merger rates of SMBHs, especially at low galaxy masses or at high redshift.

  3. Compensation of high order harmonic long quantum-path attosecond chirp

    NASA Astrophysics Data System (ADS)

    Guichard, R.; Caillat, J.; Lévêque, C.; Risoud, F.; Maquet, A.; Taïeb, R.; Zaïr, A.

    2017-12-01

We propose a method to compensate for the extreme ultraviolet (XUV) attosecond chirp associated with the long quantum-path in the high harmonic generation process. Our method employs an isolated attosecond pulse (IAP) issued from the short trajectory contribution in a primary target to assist the infrared driving field to produce high harmonics from the long trajectory in a secondary target. In our simulations based on the resolution of the time-dependent Schrödinger equation, the resulting high harmonics present a clear phase compensation of the long quantum-path contribution, close to a Fourier-transform-limited attosecond XUV pulse. Employing time-frequency analysis of the high harmonic dipole, we found that the compensation is not a simple far-field photonic interference between the IAP and the long-path harmonic emission, but a coherent phase transfer from the weak IAP to the long quantum-path electronic wavepacket. Our approach opens the route to utilizing the long quantum-path for the production and applications of attosecond pulses.

  4. Assessment of the Performance of a Dual-Frequency Surface Reference Technique

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Liao, Liang; Tanelli, Simone; Durden, Stephen

    2013-01-01

    The high correlation of the rain-free surface cross sections at two frequencies implies that the estimate of differential path integrated attenuation (PIA) caused by precipitation along the radar beam can be obtained to a higher degree of accuracy than the path-attenuation at either frequency. We explore this finding first analytically and then by examining data from the JPL dual-frequency airborne radar using measurements from the TC4 experiment obtained during July-August 2007. Despite this improvement in the accuracy of the differential path attenuation, solving the constrained dual-wavelength radar equations for parameters of the particle size distribution requires not only this quantity but the single-wavelength path attenuation as well. We investigate a simple method of estimating the single-frequency path attenuation from the differential attenuation and compare this with the estimate derived directly from the surface return.
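
    One simple way to recover a single-frequency path attenuation from the differential PIA, assuming the ratio of specific attenuations at the two frequencies is roughly constant along the path, is sketched below; the ratio value is hypothetical and this is not necessarily the estimator evaluated in the paper.

```python
def single_freq_pia_from_differential(dpia_db, p_ratio=5.5):
    """Estimate the single-frequency (lower-band) path-integrated attenuation
    from the differential PIA, assuming the high/low-band specific-attenuation
    ratio p is approximately constant along the path (p_ratio is illustrative):
        dPIA = PIA_hi - PIA_lo = (p - 1) * PIA_lo
    """
    pia_lo = dpia_db / (p_ratio - 1.0)
    pia_hi = p_ratio * pia_lo
    return pia_lo, pia_hi

print(single_freq_pia_from_differential(9.0))  # dPIA of 9 dB -> (2.0, 11.0)
```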

  5. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The issues to efficiently perform such analyses on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
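
    A concrete instance of a Hamiltonian path on a multi-dimensional grid is the reflected (boustrophedon) ordering sketched below, in which consecutive grid points differ in exactly one coordinate by one step. It illustrates the mixed-radix Gray-code idea mentioned above; it is not the authors' quasi-spiral path, and the function name is invented.

```python
def snake_order(shape):
    """Yield the points of an n-dimensional grid so that consecutive points
    differ by one step in exactly one coordinate (a Hamiltonian path on the
    grid graph, in reflected/boustrophedon order)."""
    if not shape:
        yield ()
        return
    head, rest = shape[0], shape[1:]
    forward = True
    for sub in snake_order(rest):
        rng = range(head) if forward else range(head - 1, -1, -1)
        for i in rng:
            yield (i,) + sub
        forward = not forward          # reverse direction for the next slice

# 3 x 2 grid: each step changes a single coordinate by one
print(list(snake_order((3, 2))))
# [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```

    Visiting the grid in such an order means each new model evaluation starts from a neighboring, already-solved point, which is exactly what reduces iteration counts and solution failures.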

  6. Conceptual models of the evolution of transgressive dune field systems

    NASA Astrophysics Data System (ADS)

Hesp, Patrick A.

    2013-10-01

This paper examines the evolutionary paths of some transgressive dune fields that have formed on different coasts of the world, and presents some initial conceptual models of system dynamics for transgressive dune sheets and dune fields. Various evolutionary pathways are conceptualized based on a visual examination of dune fields from around the world. On coasts with high sediment supply, dune sheets and dune fields tend to accumulate as large scale barrier systems with little colonization of vegetation in hyper-arid to arid climate regimes, and as multiple, active discrete phases of dune field and deflation plain couplets in temperate to tropical environments. Active dune fields tend to be singular entities on coasts with low to moderate sediment supply. Landscape complexity and vegetation richness and diversity increase as dune fields evolve from simple active sheets and dunes to single and multiple deflation plains and basins, precipitation ridges, nebkha fields and a host of other dune types associated with vegetation (e.g. trailing ridges, slacks, remnant knobs, gegenwalle ridges and dune track ridges, 'tree islands' and 'bush pockets'). Three principal scenarios of transgressive dune sheet and dune field development are discussed, including dune sheets or dune fields evolving directly from the backshore, development following foredune and/or dune field erosion, and development from the breakdown or merging of parabolic dunes. Various stages of evolution are outlined for each scenario. Knowledge of evolutionary patterns and stages in coastal dune fields is very limited and caution is urged in attempts to reverse, change and/or modify dune fields to 'restore' some perceived loss of ecosystem or dune functioning.

  7. Conceptual models of the evolution of transgressive dune field systems

    NASA Astrophysics Data System (ADS)

    Hesp, Patrick A.

    2013-10-01

This paper examines the evolutionary paths of some transgressive dune fields that have formed on different coasts of the world, and presents some initial conceptual models of system dynamics for transgressive dune sheets and dune fields. Various evolutionary pathways are conceptualized based on a visual examination of dune fields from around the world. On coasts with high sediment supply, dune sheets and dune fields tend to accumulate as large scale barrier systems with little colonization of vegetation in hyper-arid to arid climate regimes, and as multiple, active discrete phases of dune field and deflation plain couplets in temperate to tropical environments. Active dune fields tend to be singular entities on coasts with low to moderate sediment supply. Landscape complexity and vegetation richness and diversity increase as dune fields evolve from simple active sheets and dunes to single and multiple deflation plains and basins, precipitation ridges, nebkha fields and a host of other dune types associated with vegetation (e.g. trailing ridges, slacks, remnant knobs, gegenwalle ridges and dune track ridges, 'tree islands' and 'bush pockets'). Three principal scenarios of transgressive dune sheet and dune field development are discussed, including dune sheets or dune fields evolving directly from the backshore, development following foredune and/or dune field erosion, and development from the breakdown or merging of parabolic dunes. Various stages of evolution are outlined for each scenario. Knowledge of evolutionary patterns and stages in coastal dune fields is very limited and caution is urged in attempts to reverse, change and/or modify dune fields to 'restore' some perceived loss of ecosystem or dune functioning.

  8. Self-Organized Mantle Layering After the Magma-Ocean Period

    NASA Astrophysics Data System (ADS)

    Hansen, U.; Dude, S.

    2017-12-01

The thermal history of the Earth, its chemical differentiation, and the interaction of the interior with the atmosphere are largely determined by convective processes within the Earth's mantle. A simple physical model, resembling the situation shortly after core formation, consists of a compositionally stable, stratified mantle, as resulting from fractional crystallization of the magma ocean. The early mantle is subject to heating from below by the Earth's core and cooling from the top through the atmosphere. Additionally, internal heat sources serve to power the mantle dynamics. Under such circumstances double diffusive convection will eventually lead to self-organized layer formation, even without preexisting jumps in material properties. We have conducted 2D and 3D numerical experiments in Cartesian and spherical geometry, taking into account realistic mantle values, especially a strongly temperature-dependent viscosity and a pressure-dependent thermal expansivity. The experiments show that, over a wide parameter range, distinct convective layers evolve in this scenario. The layering strongly controls the heat loss from the core and decouples the dynamics in the lower mantle from the upper part. With time, individual layers grow at the expense of others and merging of layers does occur. We observe several events of intermittent breakdown of individual layers. Altogether an evolution emerges, characterized by continuous but also spontaneous changes in the mantle structure, ranging from multiple to single layer flow. Such an evolutionary path of mantle convection allows us to interpret phenomena ranging from stagnation of slabs at various depths to variations in the chemical signature of mantle upwellings in a new framework.

  9. Merged Real Time GNSS Solutions for the READI System

    NASA Astrophysics Data System (ADS)

    Santillan, V. M.; Geng, J.

    2014-12-01

Real-time measurements from increasingly dense Global Navigation Satellite Systems (GNSS) networks located throughout the western US offer a substantial, albeit largely untapped, contribution towards the mitigation of seismic and other natural hazards. Analyzed continuously in real-time, currently over 600 instruments blanket the San Andreas and Cascadia fault systems of the North American plate boundary and can provide on-the-fly characterization of transient ground displacements highly complementary to traditional seismic strong-motion monitoring. However, the utility of GNSS systems depends on their resolution, and merged solutions of two or more independent estimation strategies have been shown to offer lower scatter and higher resolution. Towards this end, independent real-time GNSS solutions produced by Scripps Inst. of Oceanography and Central Washington University (PANGA) are now being formally combined in pursuit of NASA's Real-Time Earthquake Analysis for Disaster Mitigation (READI) positioning goals. CWU produces precise point positioning (PPP) solutions while SIO produces ambiguity-resolved PPP solutions (PPP-AR). The PPP-AR solutions have a ~5 mm RMS scatter in the horizontal and ~10 mm in the vertical; however, PPP-AR solutions can take tens of minutes to re-converge in case of data gaps. The PPP solutions produced by CWU use pre-cleaned data in which biases are estimated as non-integer ambiguities prior to formal positioning with GIPSY 6.2 using a real-time stream editor developed at CWU. These solutions show ~20 mm RMS scatter in the horizontal and ~50 mm RMS scatter in the vertical but re-converge within 2 min or less following cycle-slips or data outages. We have implemented the formal combination of the CWU and SCRIPPS ENU displacements using the independent solutions as input measurements to a simple 3-element state Kalman filter plus white noise. We are now merging solutions from 90 stations, including 30 in Cascadia, 39 in the Bay Area, and 21 from S. California. Six months of merged time series demonstrate that the combined solution is more reliable and can take advantage of the strengths of the individual solutions while mitigating their weaknesses. The merging can be easily extended to three or more independent analysis strategies, which may be considered in the future.
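
    The combination step amounts to running a small Kalman filter over the per-epoch ENU displacements from the two solutions. The sketch below shows one possible implementation with a random-walk state and white measurement noise; the noise settings and function name are illustrative, not the operational READI configuration.

```python
import numpy as np

def merge_enu(ppp_ar, ppp, sig_ar=(0.005, 0.005, 0.010),
              sig_ppp=(0.020, 0.020, 0.050), q=1e-4):
    """Combine two per-epoch ENU displacement series (arrays of shape
    (n_epochs, 3)) with a minimal 3-state Kalman filter: random-walk state
    plus white measurement noise.  All noise values are illustrative."""
    n = len(ppp_ar)
    x = np.zeros(3)                      # state: E, N, U displacement
    P = np.eye(3) * 1.0                  # large initial uncertainty
    Q = np.eye(3) * q                    # process (random-walk) noise
    out = np.empty((n, 3))
    for k in range(n):
        P = P + Q                        # prediction step
        for z, sig in ((ppp_ar[k], sig_ar), (ppp[k], sig_ppp)):
            R = np.diag(np.square(sig))
            K = P @ np.linalg.inv(P + R)         # Kalman gain (H = I)
            x = x + K @ (z - x)
            P = (np.eye(3) - K) @ P
        out[k] = x
    return out

# Two noisy versions of the same synthetic 3 cm eastward step
rng = np.random.default_rng(1)
truth = np.zeros((100, 3)); truth[50:, 0] = 0.03
merged = merge_enu(truth + rng.normal(0, 0.005, truth.shape),
                   truth + rng.normal(0, 0.02, truth.shape))
```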

  10. Linear Optical Quantum Metrology with Single Photons: Exploiting Spontaneously Generated Entanglement to Beat the Shot-Noise Limit

    NASA Astrophysics Data System (ADS)

    Motes, Keith R.; Olson, Jonathan P.; Rabeaux, Evan J.; Dowling, Jonathan P.; Olson, S. Jay; Rohde, Peter P.

    2015-05-01

    Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place—typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feedforward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multiphoton walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer—fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection—is capable of significantly beating the shot-noise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.

  11. Linear optical quantum metrology with single photons: exploiting spontaneously generated entanglement to beat the shot-noise limit.

    PubMed

    Motes, Keith R; Olson, Jonathan P; Rabeaux, Evan J; Dowling, Jonathan P; Olson, S Jay; Rohde, Peter P

    2015-05-01

    Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place--typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feedforward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multiphoton walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer--fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection--is capable of significantly beating the shot-noise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.

  12. Merging OLTP and OLAP - Back to the Future

    NASA Astrophysics Data System (ADS)

    Lehner, Wolfgang

When the terms "Data Warehousing" and "Online Analytical Processing" were coined in the 1990s by Kimball, Codd, and others, there was an obvious need for separating data and workload for operational transactional-style processing and decision-making implying complex analytical queries over large and historic data sets. Large data warehouse infrastructures have been set up to cope with the special requirements of analytical query answering for multiple reasons: for example, analytical thinking heavily relies on predefined navigation paths to guide the user through the data set and to provide different views on different aggregation levels. Multi-dimensional queries exploiting hierarchically structured dimensions lead to complex star queries at a relational backend, which could hardly be handled by classical relational systems.

  13. Rodeo and Chediski Fires in Arizona

    NASA Technical Reports Server (NTRS)

    2002-01-01

Over the weekend, the Rodeo and Chediski Fires in Arizona grew explosively, and the two large fires are now beginning to merge. Smoke from the fires is stretching hundreds of kilometers northeast, where it may be mingling with smoke from the Missionary Ridge Fire in Colorado. The Rodeo Fire is now 205,000 acres, and the Chediski is over 100,000. More than 200 structures have been lost in the two blazes, but many more hundreds have been saved by firefighters. This Moderate Resolution Imaging Spectroradiometer (MODIS) image was acquired Sunday, June 23, 2002. The data were collected via MODIS' Direct Broadcast capability, in which real time data are continuously broadcast, and can be received by ground stations directly in the path of the Terra satellite.

  14. An on-board near-optimal climb-dash energy management

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Cliff, E. M.; Kelley, H. J.

    1982-01-01

    On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Feedback coefficients are found as functions of the remaining energy to go (dash energy less current energy) along the nominal path.

  15. In-depth analysis of drivers' merging behavior and rear-end crash risks in work zone merging areas.

    PubMed

    Weng, Jinxian; Xue, Shan; Yang, Ying; Yan, Xuedong; Qu, Xiaobo

    2015-04-01

This study investigates drivers' merging behavior and the rear-end crash risk in work zone merging areas during the entire merging implementation period, from the time a merging maneuver is started to the time it is completed. With merging traffic data from a work zone site in Singapore, a mixed probit model is developed to describe the merging behavior, and two surrogate safety measures, the time to collision (TTC) and the deceleration rate to avoid the crash (DRAC), are adopted to compute the rear-end crash risk between the merging vehicle and its neighboring vehicles. Results show that the merging vehicle has a higher probability of completing a merging maneuver quickly under one of the following situations: (i) the merging vehicle moves relatively fast; (ii) the merging lead vehicle is a heavy vehicle; and (iii) there is a sizable gap in the adjacent through lane. Results indicate that the rear-end crash risk does not monotonically increase as the merging vehicle speed increases. The merging vehicle's rear-end crash risk is also affected by the vehicle type, with the largest increase in rear-end crash risk occurring when the merging lead vehicle is a heavy vehicle. Although the reduced remaining distance to the work zone could urge the merging vehicle to complete a merging maneuver quickly, it might lead to an increased rear-end crash risk. Interestingly, it is found that the rear-end crash risk generally increases with the elapsed time after the merging maneuver is triggered. Copyright © 2015 Elsevier Ltd. All rights reserved.
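
    The two surrogate safety measures used in the study have simple closing-speed definitions; a minimal sketch of their computation is given below, with illustrative numbers rather than the Singapore work-zone data.

```python
def ttc_and_drac(gap_m, v_follow_ms, v_lead_ms):
    """Surrogate safety measures for a follower-leader pair:
    time to collision (TTC) and deceleration rate to avoid the crash (DRAC).
    Both are defined only when the follower is closing on the leader."""
    dv = v_follow_ms - v_lead_ms         # closing speed
    if dv <= 0:
        return float('inf'), 0.0         # no rear-end conflict
    ttc = gap_m / dv                     # seconds until collision if nothing changes
    drac = dv ** 2 / (2.0 * gap_m)       # braking (m/s^2) needed to just avoid impact
    return ttc, drac

# Merging vehicle at 15 m/s, 20 m behind a through vehicle doing 12 m/s
print(ttc_and_drac(20.0, 15.0, 12.0))    # (6.67 s, 0.225 m/s^2)
```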

  16. An automatic alignment system for measuring optical path of transmissometer based on light beam scanning

    NASA Astrophysics Data System (ADS)

    Zhou, Shudao; Ma, Zhongliang; Wang, Min; Peng, Shuling

    2018-05-01

    This paper proposes a novel alignment system based on the measurement of optical path using a light beam scanning mode in a transmissometer. The system controls both the probe beam and the receiving field of view while scanning in two vertical directions. The system then calculates the azimuth angle of the transmitter and the receiver to determine the precise alignment of the optical path. Experiments show that this method can determine the alignment angles in less than 10 min with errors smaller than 66 μrad in the azimuth. This system also features high collimation precision, process automation and simple installation.

  17. Analytic solution of the lifeguard problem

    NASA Astrophysics Data System (ADS)

    De Luca, Roberto; Di Mauro, Marco; Naddeo, Adele

    2018-03-01

A simple version, due to Feynman, of Fermat’s principle is analyzed. It deals with the path a lifeguard on a beach must follow to reach a drowning swimmer. The solution for the exact point, P(x, 0), at the beach-sea boundary, corresponding to the fastest path to the swimmer, is worked out in detail and the analogy with light traveling across the air-water boundary is described. The results agree with the known conclusion that the shortest path does not coincide with the fastest one. The relevance of the subject for a basic physics course, at an advanced high school level, is pointed out.
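
    The fastest entry point can also be found numerically: minimize the total travel time over the entry position and check that the Snell-type condition sin(θ_sand)/v_sand = sin(θ_water)/v_water holds at the optimum. The geometry and speeds in the sketch below are hypothetical, chosen only for illustration.

```python
from math import hypot
from scipy.optimize import minimize_scalar

# Hypothetical geometry: lifeguard at (0, a) on the sand, swimmer at (d, -b)
# in the water, shoreline along y = 0; speeds on sand and in water in m/s.
a, b, d = 30.0, 20.0, 50.0
v_sand, v_water = 7.0, 2.0

def travel_time(x):
    """Total time if the lifeguard enters the water at P(x, 0)."""
    return hypot(x, a) / v_sand + hypot(d - x, b) / v_water

res = minimize_scalar(travel_time, bounds=(0.0, d), method='bounded')
x_star = res.x
# Snell-type check: the two ratios below agree at the optimum
sin_sand = x_star / hypot(x_star, a)
sin_water = (d - x_star) / hypot(d - x_star, b)
print(x_star, sin_sand / v_sand, sin_water / v_water)
```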

  18. Low profile, highly configurable, current sharing paralleled wide band gap power device power module

    DOEpatents

    McPherson, Brice; Killeen, Peter D.; Lostetter, Alex; Shaw, Robert; Passmore, Brandon; Hornberger, Jared; Berry, Tony M

    2016-08-23

A power module with multiple equalized parallel power paths supporting multiple parallel bare die power devices constructed with low inductance equalized current paths for even current sharing and clean switching events. Wide low profile power contacts provide low inductance, short current paths, and large conductor cross section area provides for massive current carrying. An internal gate & source kelvin interconnection substrate is provided with individual ballast resistors and simple bolted construction. Gate drive connectors are provided on either the left or right side of the module. The module is configurable as half bridge, full bridge, common source, and common drain topologies.

  19. Attention trees and semantic paths

    NASA Astrophysics Data System (ADS)

    Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura

    2007-02-01

In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others and, generally, the image main meaning appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; subsequently, the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure that generates an image decomposition based on the detection of visual attention regions. A new clustering algorithm taking advantage of Delaunay-Voronoi diagrams for achieving the decomposition target is proposed. By applying that algorithm recursively, starting from the whole image, a transformation of the image into a tree of related meaningful regions is obtained (Attention Tree). Subsequently, a semantic interpretation of the leaf nodes is carried out by using a structure of Neural Networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from the leaf nodes, paths toward the root node across the Attention Tree are attempted. The task along each path consists of relating the semantics of each child-parent node pair and, consequently, of merging the corresponding image regions. The relationship detected in this way between two tree nodes results in the extension of the interpreted image area through each step of the path. The construction of several Attention Trees has been performed and partial results will be shown.

  20. Fast exploration of an optimal path on the multidimensional free energy surface

    PubMed Central

    Chen, Changjun

    2017-01-01

In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can build an initial path in the collective variable space by interpolation first and then update the whole path constantly in the optimization. However, such an interpolation method can be risky in the high-dimensional space of large molecules. On the path, steric clashes between neighboring atoms could cause extremely high energy barriers and thus cause the optimization to fail. Moreover, performing simulations for all the snapshots on the path is also time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy allows the path to avoid high-energy states in the growing process and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
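
    A toy version of the growing strategy, on an invented two-dimensional surface, is sketched below: each extension step mixes the downhill direction of the free energy with the direction toward the product. The surface, weights, and step size are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def free_energy(p):
    """Toy 2D surface with two minima separated by a barrier (hypothetical)."""
    x, y = p
    return (x**2 - 1.0)**2 + 2.0 * (y - 0.3 * x)**2

def grad(f, p, h=1e-5):
    """Central-difference gradient of f at point p."""
    p = np.asarray(p, float)
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return g

def grow_path(f, reactant, product, step=0.05, w_grad=0.3, max_steps=200):
    """Grow a path from reactant toward product: each extension mixes the
    downhill direction (-grad f) with the direction pointing at the product.
    Weights and step size are ad hoc choices for this sketch."""
    path = [np.asarray(reactant, float)]
    target = np.asarray(product, float)
    for _ in range(max_steps):
        cur = path[-1]
        to_prod = target - cur
        if np.linalg.norm(to_prod) < step:
            path.append(target)
            break
        g = grad(f, cur)
        d = -w_grad * g / (np.linalg.norm(g) + 1e-12) \
            + (1.0 - w_grad) * to_prod / np.linalg.norm(to_prod)
        path.append(cur + step * d / (np.linalg.norm(d) + 1e-12))
    return np.array(path)

path = grow_path(free_energy, reactant=(-1.0, -0.3), product=(1.0, 0.3))
print(len(path), path[-1])
```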

  1. Extended charge banking model of dual path shocks for implantable cardioverter defibrillators

    PubMed Central

    Dosdall, Derek J; Sweeney, James D

    2008-01-01

    Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving dual path defibrillation techniques. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criteria to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
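
    The RC picture underlying the Charge Banking Model can be illustrated by integrating a single membrane time constant driven by a truncated-exponential shock, as in the sketch below; all time constants are illustrative, not fitted defibrillation parameters.

```python
import numpy as np

def membrane_response(tau_m_ms=3.0, tau_shock_ms=7.0, duration_ms=10.0, dt_ms=0.01):
    """Minimal sketch of the RC ('charge banking') picture: the tissue is a
    single RC circuit with time constant tau_m, driven by a truncated
    exponential shock whose decay constant is tau_shock (values illustrative)."""
    t = np.arange(0.0, duration_ms, dt_ms)
    v_shock = np.exp(-t / tau_shock_ms)          # normalized shock voltage
    v_m = np.zeros_like(t)
    for i in range(1, t.size):                   # forward-Euler integration of dVm/dt
        dv = (v_shock[i - 1] - v_m[i - 1]) / tau_m_ms
        v_m[i] = v_m[i - 1] + dv * dt_ms
    return t, v_m

t, vm = membrane_response()
print(f"peak membrane response {vm.max():.3f} at t = {t[vm.argmax()]:.2f} ms")
```

    The Extended Charge Banking Model described above generalizes this single RC element to many such elements, each weighted by the local shock field gradient, which the sketch does not attempt to reproduce.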

  2. Artificial pheromone for path selection by a foraging swarm of robots.

    PubMed

    Campo, Alexandre; Gutiérrez, Alvaro; Nouyan, Shervin; Pinciroli, Carlo; Longchamp, Valentin; Garnier, Simon; Dorigo, Marco

    2010-11-01

Foraging robots involved in a search and retrieval task may create paths to navigate faster in their environment. In this context, a swarm of robots that has found several resources and created different paths may benefit strongly from path selection. Path selection enhances the foraging behavior by allowing the swarm to focus on the most profitable resource, with the possibility for unused robots to stop participating in the path maintenance and to switch to another task. In order to achieve path selection, we implement virtual ants that lay artificial pheromone inside a network of robots. Virtual ants are local messages transmitted by robots; they travel along chains of robots and deposit artificial pheromone on the robots that are literally forming the chain and indicating the path. The concentration of artificial pheromone on the robots allows them to decide whether they are part of a selected path. We parameterize the mechanism with a mathematical model and provide an experimental validation using a swarm of 20 real robots. We show that our mechanism favors the selection of the closest resource, is able to select a new path if a selected resource becomes unavailable, and selects a newly detected and better resource when possible. As robots use very simple messages and behaviors, the system would be particularly well suited for swarms of microrobots with minimal abilities.

  3. In-to-Out Body Antenna-Independent Path Loss Model for Multilayered Tissues and Heterogeneous Medium

    PubMed Central

    Kurup, Divya; Vermeeren, Günter; Tanghe, Emmeric; Joseph, Wout; Martens, Luc

    2015-01-01

In this paper, we investigate multilayered lossy and heterogeneous media for wireless body area networks (WBAN) to develop a simple, fast and efficient analytical in-to-out body path loss (PL) model at 2.45 GHz and, thus, avoid time-consuming simulations. The PL model is an antenna-independent model and is validated with simulations in a layered medium, as well as in a 3D human model, using electromagnetic solvers. PMID:25551483

  4. Quantization of simple parametrized systems

    NASA Astrophysics Data System (ADS)

    Ruffini, G.

    2005-11-01

I study the canonical formulation and quantization of some simple parametrized systems, including the non-relativistic parametrized particle and the relativistic parametrized particle. Using Dirac's formalism I construct for each case the classical reduced phase space and study the dependence on the gauge fixing used. Two separate features of these systems can make this construction difficult: the actions are not invariant at the boundaries, and the constraints may have disconnected solution spaces. The relativistic particle is affected by both, while the non-relativistic particle is affected only by the first. Analyzing the role of canonical transformations in the reduced phase space, I show that a change of gauge fixing is equivalent to a canonical transformation. In the relativistic case, quantization of one branch of the constraint at a time is applied, and I analyze the electromagnetic backgrounds in which it is possible to quantize both branches simultaneously and still obtain a covariant unitary quantum theory. To preserve unitarity and space-time covariance, second quantization is needed unless there is no electric field. I motivate a definition of the inner product in all these cases and derive the Klein-Gordon inner product for the relativistic case. I construct phase space path integral representations for amplitudes for the BFV and the Faddeev path integrals, from which the path integrals in coordinate space (Faddeev-Popov and geometric path integrals) are derived.

  5. Changes in thunderstorm characteristics due to feeder cloud merging

    NASA Astrophysics Data System (ADS)

    Sinkevich, Andrei A.; Krauss, Terrence W.

    2014-06-01

    Cumulus cloud merging is a complex dynamical and microphysical process in which two convective cells merge into a single cell. Previous radar observations and numerical simulations have shown a substantial increase in the maximum area, maximum echo top and maximum reflectivity as a result of the merging process. Although the qualitative aspects of merging have been well documented, the quantitative effects on storm properties remain less defined. Therefore, a statistical assessment of changes in storm characteristics due to merging is of importance. Further investigation into the effects of cloud merging on precipitation flux (Pflux) in a statistical manner provided the motivation for this study in the Asir region of Saudi Arabia. It was confirmed that merging has a strong effect on storm development in this region. The data analysis shows that an increase in the median of the distribution of maximum reflectivity was observed just after merging and was equal to 3.9 dBZ. A detailed analysis of the individual merge cases compared the merged storm Pflux and mass to the sum of the individual Feeder and Storm portions just before merging for each case. The merged storm Pflux increased an average of 106% over the 20-min period after merging, and the mass increased on average 143%. The merged storm clearly became larger and more severe than the sum of the two parts prior to merging. One consequence of this study is that any attempts to evaluate the precipitation enhancement effects of cloud seeding must also include the issue of cloud mergers because merging can have a significant effect on the results.

  6. Ecological control line: A decade of exploration and an innovative path of ecological land management for megacities in China.

    PubMed

    Hong, Wuyang; Yang, Chengyun; Chen, Liuxin; Zhang, Fangfang; Shen, Shaoqing; Guo, Renzhong

    2017-04-15

The ecological control line is a system innovation in the field of ecological environment protection in China, and it has become an important strategy of national ecological protection. Ten years have passed since the first ecological control line in Shenzhen was delimited in 2005. This study examines the connotations of the ecological control line and the current state of research in China and abroad, and then briefly describes the delimitation background and existing problems of the ecological control line in Shenzhen. The problem-solving strategy is gradually transforming from extensive management to refined management. This study proposes a differential ecological space management model that merges the space system, management system, and support system. The implementation paths include the following five aspects: delimiting ecological bottom lines to protect core ecological resources; formulating access systems for new construction projects to strictly control new construction; implementing construction land inventory reclamation assisted by market means; regulating boundary adjusting procedures and processes; and constructing ecological equity products by using multiple means to implement rights relief. Finally, this study illustrates the progress of the implementation, discusses the rigorousness and flexibility problems of the ecological control line, and calls for the promotion of the legislation. The management model and implementation paths proposed in this study can serve as a reference for developing countries and megacities seeking to achieve ecological protection and sustainable development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Sessile multidroplets and salt droplets under high tangential electric fields

    PubMed Central

    Xie, Guoxin; He, Feng; Liu, Xiang; Si, Lina; Guo, Dan

    2016-01-01

Understanding the interaction behaviors between sessile droplets under imposed high voltages is very important in many practical situations, e.g., microfluidic devices and the degradation/aging problems of outdoor high-power applications. In the present work, the droplet coalescence, the discharge activity and the surface thermal distribution response between sessile multidroplets and chloride salt droplets under high tangential electric fields have been investigated with infrared thermography, high-speed photography and pulse current measurement. Obvious polarity effects on the discharge path direction and the temperature change in the droplets in the initial stage after discharge initiation were observed due to the anodic dissolution of metal ions from the electrode. In the case of sessile aligned multidroplets, the discharge path direction could affect the location of initial droplet coalescence. The smaller unmerged droplet would be drained into the merged large droplet as a result of the pressure difference inside the droplets rather than of the asymmetric temperature change due to discharge. The discharge inception voltages and the temperature variations for two salt droplets closely correlated with the ionization degree of the salt, as well as the interfacial electrochemical reactions near the electrodes. Mechanisms of these observed phenomena were discussed. PMID:27121926

  8. Delineating wetland catchments and modeling hydrologic ...

    EPA Pesticide Factsheets

In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In reality, however, many depressions in the DEM are actual wetland landscape features with seasonal to permanent inundation patterning characterized by nested hierarchical structures and dynamic filling–spilling–merging surface-water hydrological processes. Differentiating and appropriately processing such ecohydrologically meaningful features remains a major technical terrain-processing challenge, particularly as high-resolution spatial data are increasingly used to support modeling and geographic analysis needs. The objectives of this study were to delineate hierarchical wetland catchments and model their hydrologic connectivity using high-resolution lidar data and aerial imagery. The graph-theory-based contour tree method was used to delineate the hierarchical wetland catchments and characterize their geometric and topological properties. Potential hydrologic connectivity between wetlands and streams was simulated using the least-cost-path algorithm. The resulting flow network delineated potential flow paths connecting wetland depressions to each other or to the river network on scales finer than those available through the National Hydrography Dataset. The results demonstrated that
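
    The least-cost-path step can be illustrated with a standard Dijkstra search over a cost raster, as sketched below; the cost surface and grid are invented, and this is not the specific implementation used in the study.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a 2D cost raster (4-connected).
    cost[i, j] is the per-cell traversal cost; returns the path as a list
    of (row, col) cells from start to goal."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist[cell]:
            continue                         # stale queue entry
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:                     # walk predecessors back to start
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# Hypothetical cost surface: a high-cost ridge separating two depressions
cost = np.ones((5, 5)); cost[2, 1:4] = 10.0
print(least_cost_path(cost, (0, 0), (4, 4)))
```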

  9. Looking Down Through the Clouds – Optical Attenuation through Real-Time Clouds

    NASA Astrophysics Data System (ADS)

    Burley, J.; Lazarewicz, A.; Dean, D.; Heath, N.

    Detecting and identifying nuclear explosions in the atmosphere and on the surface of the Earth is critical for the Air Force Technical Applications Center (AFTAC) treaty monitoring mission. Optical signals, from surface or atmospheric nuclear explosions detected by satellite sensors, are attenuated by the atmosphere and clouds. Clouds present a particularly complex challenge as they cover up to seventy percent of the earth's surface. Moreover, their highly variable and diverse nature requires physics-based modeling. Determining the attenuation for each optical ray-path is uniquely dependent on the source geolocation, the specific optical transmission characteristics along that ray path, and sensor detection capabilities. This research details a collaborative AFTAC and AFIT effort to fuse worldwide weather data, from a variety of sources, to provide near-real-time profiles of atmospheric and cloud conditions and the resulting radiative transfer analysis for virtually any wavelength(s) of interest from source to satellite. AFIT has developed a means to model global clouds using the U.S. Air Force’s World Wide Merged Cloud Analysis (WWMCA) cloud data in a new toolset that enables radiance calculations through clouds from UV to RF wavelengths.

  10. A novel algorithm for delineating wetland depressions and ...

    EPA Pesticide Factsheets

    In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In reality, however, many depressions in the DEM are actual wetland landscape features that are seldom fully filled with water. For instance, wetland depressions in the Prairie Pothole Region (PPR) are seasonally to permanently flooded wetlands characterized by nested hierarchical structures with dynamic filling-spilling-merging surface-water hydrological processes. The objectives of this study were to delineate hierarchical wetland catchments and model their hydrologic connectivity using high-resolution LiDAR data and aerial imagery. We proposed a novel algorithm to delineate the hierarchical wetland catchments and characterize their geometric and topological properties. Potential hydrologic connectivity between wetlands and streams was simulated using the least-cost path algorithm. The resulting flow network delineated putative temporary or seasonal flow paths connecting wetland depressions to each other or to the river network at scales finer than available through the National Hydrography Dataset. The results demonstrated that our proposed framework is promising for improving overland flow modeling and hydrologic connectivity analysis. Presentation at AWRA Spring Specialty Conference in Sn

  11. Proposal for automated transformations on single-photon multipath qudits

    NASA Astrophysics Data System (ADS)

    Baldijão, R. D.; Borges, G. F.; Marques, B.; Solís-Prosser, M. A.; Neves, L.; Pádua, S.

    2017-09-01

    We propose a method for implementing automated state transformations on single-photon multipath qudits encoded in a one-dimensional transverse spatial domain. It relies on transferring the encoding from this domain to the orthogonal one by applying a spatial phase modulation with diffraction gratings, merging all the initial propagation paths by using a stable interferometric network, and filtering out the unwanted diffraction orders. The automation feature is attained by utilizing a programmable phase-only spatial light modulator (SLM) where properly designed diffraction gratings displayed on its screen will implement the desired transformations, including, among others, projections, permutations, and random operations. We discuss the losses in the process, which is, in general, inherently nonunitary. Some examples of transformations are presented and, considering a realistic scenario, we analyze how they will be affected by the pixelated structure of the SLM screen. The method proposed here enables one to implement much more general transformations on multipath qudits than is possible with an SLM alone operating in the diagonal basis of which-path states. Therefore, it will extend the range of applicability for this encoding in high-dimensional quantum information and computing protocols as well as fundamental studies in quantum theory.

  12. Circular common-path point diffraction interferometer.

    PubMed

    Du, Yongzhao; Feng, Guoying; Li, Hongru; Vargas, J; Zhou, Shouhuan

    2012-10-01

    A simple and compact point-diffraction interferometer with circular common-path geometry configuration is developed. The interferometer is constructed from a beam-splitter, two reflection mirrors, and a telescope system composed of two lenses. The signal and reference waves travel along the same path. Furthermore, an opaque mask containing a reference pinhole and a test object holder or test window is positioned in the common focal plane of the telescope system. The object wave is divided into two beams that take opposite paths along the interferometer. The reference wave is filtered by the reference pinhole, while the signal wave is transmitted through the object holder. The reference and signal waves are combined again in the beam-splitter and their interference is imaged onto the CCD. The new design is compact, vibration insensitive, and suitable for the measurement of moving objects or dynamic processes.

  13. Merging Photoredox and Nickel Catalysis: The Direct Synthesis of Ketones via the Decarboxylative Arylation of α-Oxo Acids**

    PubMed Central

    Chu, Lingling; Lipshultz, Jeffrey M.

    2015-01-01

    The direct decarboxylative arylation of α-oxo acids has been achieved via synergistic visible light-mediated photoredox and nickel catalyses. This method offers rapid entry to aryl and alkyl ketone architectures from simple α-oxo acid precursors via an acyl radical intermediate. Significant substrate scope is observed with respect to both the oxo acid and arene coupling partners. This mild decarboxylative arylation can also be utilized to efficiently access medicinal agents, as demonstrated by the rapid synthesis of fenofibrate. PMID:26014029

  14. An Introduction to Magnetospheric Physics by Means of Simple Models

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1981-01-01

    The large scale structure and behavior of the Earth's magnetosphere is discussed. The model is suitable for inclusion in courses on space physics, plasmas, astrophysics or the Earth's environment, as well as for self-study. Nine quantitative problems, dealing with properties of linear superpositions of a dipole and a constant field are presented. Topics covered include: open and closed models of the magnetosphere; field line motion; the role of magnetic merging (reconnection); magnetospheric convection; and the origin of the magnetopause, polar cusps, and high latitude lobes.

  15. Remodeling a tissue: subtraction adds insight.

    PubMed

    Axelrod, Jeffrey D

    2012-11-27

    Sculpting a body plan requires both patterning of gene expression and translating that pattern into morphogenesis. Developmental biologists have made remarkable strides in understanding gene expression patterning, but despite a long history of fascination with the mechanics of morphogenesis, knowledge of how patterned gene expression drives the emergence of even simple shapes and forms has grown at a slower pace. The successful merging of approaches from cell biology, developmental biology, imaging, engineering, and mathematical and computational sciences is now accelerating progress toward a fuller and better integrated understanding of the forces shaping morphogenesis.

  16. Eccentric connectivity index of chemical trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haoer, R. S., E-mail: raadsehen@gmail.com; Department of Mathematics, Faculty of Computer Sciences and Mathematics, University Of Kufa, Najaf; Atan, K. A., E-mail: kamel@upm.edu.my

    Let G = (V, E) be a simple connected molecular graph. In such a graph, vertices and edges depict atoms and chemical bonds respectively; we refer to the set of vertices by V(G) and the set of edges by E(G). Let d(u, v) be the distance between two vertices u, v ∈ V(G), defined as the length of a shortest path joining them. Then the eccentric connectivity index (ECI) of a molecular graph G is ξ(G) = ∑_{v∈V(G)} d(v)·ec(v), where d(v) is the degree of a vertex v ∈ V(G) and ec(v) is the eccentricity of v, i.e., the length of a greatest shortest path linking v to another vertex. In this study, we focus on the general formula for the eccentric connectivity index (ECI) of some chemical trees such as alkenes.
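
    As a minimal illustration of the definition above, the eccentric connectivity index can be computed directly from vertex degrees and eccentricities. The sketch below uses the networkx library; the small tree in the example is a made-up graph, not one from the study.

        # Sketch: eccentric connectivity index ECI(G) = sum over v of deg(v) * ec(v),
        # where ec(v) is the greatest shortest-path distance from v to any other vertex.
        # Illustrative only; the example graph is hypothetical.
        import networkx as nx

        def eccentric_connectivity_index(G):
            ecc = nx.eccentricity(G)  # length of a greatest shortest path from each vertex
            return sum(deg * ecc[v] for v, deg in G.degree())

        if __name__ == "__main__":
            # A small chemical-tree-like example: a four-vertex path with one branch
            T = nx.Graph([(1, 2), (2, 3), (3, 4), (2, 5)])
            print(eccentric_connectivity_index(T))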

  17. Langevin Dynamics, Large Deviations and Instantons for the Quasi-Geostrophic Model and Two-Dimensional Euler Equations

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2014-09-01

    We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time reversed trajectories of the relaxation paths for a corresponding dual dynamics, which are also within the framework of quasi-geostrophic Langevin dynamics. Cases with or without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.

  18. The half-wave rectifier response of the magnetosphere and antiparallel merging

    NASA Technical Reports Server (NTRS)

    Crooker, N. U.

    1980-01-01

    In some ways the magnetosphere behaves as if merging occurs only when the interplanetary magnetic field (IMF) is southward, and in other ways it behaves as if merging occurs for all IMF orientations. An explanation of this duality is offered in terms of a geometrical antiparallel merging model which predicts merging for all IMF orientations but magnetic flux transfer to the tail only for southward IMF. This is in contrast to previous models of component merging, where merging and flux transfer occur together for nearly all IMF orientations. That the problematic duality can be explained by the model is compelling evidence that antiparallel merging should be seriously considered in constructing theories of the merging process.

  19. Gain degradation and amplitude scintillation due to tropospheric turbulence

    NASA Technical Reports Server (NTRS)

    Theobold, D. M.; Hodge, D. B.

    1978-01-01

    It is shown that a simple physical model is adequate for the prediction of the long term statistics of both the reduced signal levels and increased peak-to-peak fluctuations. The model is based on conventional atmospheric turbulence theory and incorporates both amplitude and angle of arrival fluctuations. This model predicts the average variance of signals observed under clear air conditions at low elevation angles on earth-space paths at 2, 7.3, 20 and 30 GHz. Design curves based on this model for gain degradation, realizable gain, amplitude fluctuation as a function of antenna aperture size, frequency, and either terrestrial path length or earth-space path elevation angle are presented.

  20. Wave propagation in a random medium

    NASA Technical Reports Server (NTRS)

    Lee, R. W.; Harp, J. C.

    1969-01-01

    A simple technique is used to derive statistical characterizations of the perturbations imposed upon a wave (plane, spherical or beamed) propagating through a random medium. The method is essentially physical rather than mathematical, and is probably equivalent to the Rytov method. The limitations of the method are discussed in some detail; in general they are restrictive only for optical paths longer than a few hundred meters, and for paths at the lower microwave frequencies. Situations treated include arbitrary path geometries, finite transmitting and receiving apertures, and anisotropic media. Results include, in addition to the usual statistical quantities, time-lagged functions, mixed functions involving amplitude and phase fluctuations, angle-of-arrival covariances, frequency covariances, and other higher-order quantities.

  1. Pickup protons and pressure-balanced structures: Voyager 2 observations in merged interaction regions near 35 AU

    NASA Astrophysics Data System (ADS)

    Burlaga, L. F.; Ness, N. F.; Belcher, J. W.; Szabo, A.; Isenberg, P. A.; Lee, M. A.

    1994-11-01

    Five pressure-balanced structures, each with a scale of the order of a few hundredths of an astronomical unit (AU), were identified in two merged interaction regions (MIRs) near 35 AU in the Voyager 2 plasma and magnetic field data. They include a tangential discontinuity, simple and complex magnetic holes, slow correlated variations among the plasma and magnetic field parameters, and complex uncorrelated variations among the parameters. The changes in the magnetic pressure in these events are balanced by changes in the pressure of interstellar pickup protons. Thus the pickup protons probably play a major role in the dynamics of the MIRs. The solar wind proton and electron pressures are relatively unimportant in the MIRs at 35 AU and beyond. The region near 35 AU is a transition region: the Sun is the source of the magnetic field, but the interstellar medium is the source of pickup protons. Relative to the solar wind proton gyroradius, the thicknesses of the discontinuities and simple magnetic holes observed near 35 AU are at least an order of magnitude greater than those observed at 1 AU. However, the thicknesses of the tangential discontinuity and simple magnetic holes observed near 35 AU (in units of the pickup proton Larmor radius) are comparable to those observed at 1 AU (in units of the solar wind proton gyroradius). Thus the gyroradius of interstellar pickup protons controls the thickness of current sheets near 35 AU. We determine the interstellar pickup proton pressure in the PBSs. Using a model for the pickup proton temperature, we estimate that the average interstellar pickup proton pressure, temperature, and density in the MIRs at 35 AU are (0.53 ± 0.14) × 10⁻¹² erg/cu cm, (5.8 ± 0.4) × 10⁶ K, and (7 ± 2) × 10⁻⁴/cu cm.

  2. Time-varying mixed logit model for vehicle merging behavior in work zone merging areas.

    PubMed

    Weng, Jinxian; Du, Gang; Li, Dan; Yu, Yao

    2018-08-01

    This study aims to develop a time-varying mixed logit model for the vehicle merging behavior in work zone merging areas during the merging implementation period from the time of starting a merging maneuver to that of completing the maneuver. From the safety perspective, vehicle crash probability and severity between the merging vehicle and its surrounding vehicles are regarded as major factors influencing vehicle merging decisions. Model results show that the model with the use of vehicle crash risk probability and severity could provide higher prediction accuracy than previous models with the use of vehicle speeds and gap sizes. It is found that lead vehicle type, through lead vehicle type, through lag vehicle type, crash probability of the merging vehicle with respect to the through lag vehicle, crash severities of the merging vehicle with respect to the through lead and lag vehicles could exhibit time-varying effects on the merging behavior. One important finding is that the merging vehicle could become more and more aggressive in order to complete the merging maneuver as quickly as possible over the elapsed time, even if it has high vehicle crash risk with respect to the through lead and lag vehicles. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Event Rates of Gravitational Waves from merging Intermediate mass Black Holes: based on a Runaway Path to a SMBH

    NASA Astrophysics Data System (ADS)

    Shinkai, Hisaaki

    2018-01-01

    Based on a dynamical formation model of a supermassive black hole (SMBH), we estimate the expected observational profile of gravitational waves at ground-based detectors, such as KAGRA or advanced LIGO/VIRGO. Noting that the second generation of detectors have enough sensitivity from 10 Hz and up, we are able to detect the ring-down gravitational wave of a BH with the mass M < 2 × 10³ M⊙. This enables us to check the sequence of BH mergers to SMBHs via intermediate-mass BHs. We estimate the number density of galaxies from the halo formation model and estimate the number of BH mergers from the giant molecular cloud model assuming hierarchical growth of merged cores. At the designed KAGRA (and/or advanced LIGO/VIRGO), we find that a BH merger with total mass M ˜ 60M⊙ is at the peak of the expected mass distribution. With its signal-to-noise ratio ρ = 10 (30), we estimate the event rate R ˜ 200 (20) per year in the most optimistic case, and we also find that BH mergers in the range M < 150M⊙ occur at a rate R > 1 per year for ρ = 10. Thus, if we observe a BH with more than 100M⊙ in future gravitational-wave observations, our model naturally explains its source.

  4. Modeling Events in the Lower Imperial Valley Basin

    NASA Astrophysics Data System (ADS)

    Tian, X.; Wei, S.; Zhan, Z.; Fielding, E. J.; Helmberger, D. V.

    2010-12-01

    The Imperial Valley below the US-Mexican border has few seismic stations but many significant earthquakes. Many of these events, such as the recent El Mayor-Cucapah event, have complex mechanisms involving a mixture of strike-slip and normal slip patterns, with now over 30 aftershocks with magnitude over 4.5. Unfortunately, many earthquake records from the Southern Imperial Valley display a great deal of complexity, i.e., strong Rayleigh wave multipathing and extended codas. In short, regional recordings in the US are too complex to easily separate source properties from complex propagation. Fortunately, the Dec 30 foreshock (Mw=5.9) has excellent recordings teleseismically and regionally, and moreover is observed with InSAR. We use this simple strike-slip event to calibrate paths. In particular, we are finding record segments involving Pnl (including depth phases) and some surface waves (mostly Love waves) that appear well behaved, i.e., can be approximated by synthetics from 1D local models and events modeled with the Cut-and-Paste (CAP) routine. Simple events can then be identified along with path calibration. Modeling the more complicated paths can be started with known mechanisms. We will report on both the aftershocks and historic events.

  5. Cellular Gauge Symmetry and the Li Organization Principle: A Mathematical Addendum. Quantifying energetic dynamics in physical and biological systems through a simple geometric tool and geodetic curves.

    PubMed

    Yurkin, Alexander; Tozzi, Arturo; Peters, James F; Marijuán, Pedro C

    2017-12-01

    The present Addendum complements the accompanying paper "Cellular Gauge Symmetry and the Li Organization Principle"; it illustrates a recently-developed geometrical physical model able to assess electronic movements and energetic paths in atomic shells. The model describes a multi-level system of circular, wavy and zigzag paths which can be projected onto a horizontal tape. This model ushers in a visual interpretation of the distribution of atomic electrons' energy levels and the corresponding quantum numbers through rather simple tools, such as compasses, rulers and straightforward calculations. Here we show how this geometrical model, with the due corrections, among them the use of geodetic curves, might be able to describe and quantify the structure and the temporal development of countless physical and biological systems, from Langevin equations for random paths, to symmetry breaks occurring ubiquitously in physical and biological phenomena, to the relationships among different frequencies of EEG electric spikes. Therefore, in our work we explore the possible association of binomial distribution and geodetic curves configuring a uniform approach for the research of natural phenomena, in biology, medicine or the neurosciences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. NASA Tech Briefs, August 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics include: Program Merges SAR Data on Terrain and Vegetation Heights; Using G(exp 4)FETs as a Data Router for In-Plane Crossing of Signal Paths; Two Algorithms for Processing Electronic Nose Data; Radiation-Tolerant Dual Data Bus; General-Purpose Front End for Real-Time Data Processing; Nanocomposite Photoelectrochemical Cells; Ultracapacitor-Powered Cordless Drill; Cumulative Timers for Microprocessors; Photocatalytic/Magnetic Composite Particles; Separation and Sealing of a Sample Container Using Brazing; Automated Aerial Refueling Hitches a Ride on AFF; Cobra Probes Containing Replaceable Thermocouples; High-Speed Noninvasive Eye-Tracking System; Detergent-Specific Membrane Protein Crystallization Screens; Evaporation-Cooled Protective Suits for Firefighters; Plasmonic Antenna Coupling for QWIPs; Electronic Tongue Containing Redox and Conductivity Sensors; Improved Heat-Stress Algorithm; A Method of Partly Automated Testing of Software; Rover Wheel-Actuated Tool Interface; and Second-Generation Electronic Nose.

  7. Enhanced selectivity for the hydrolysis of block copoly(2-oxazoline)s in ethanol-water resulting in linear poly(ethylene imine) copolymers.

    PubMed

    van Kuringen, Huub P C; de la Rosa, Victor R; Fijten, Martin W M; Heuts, Johan P A; Hoogenboom, Richard

    2012-05-14

    The ability to merge the properties of poly(2-oxazoline)s and poly(ethylene imine) is of high interest for various biomedical applications, including gene delivery, biosensors, and switchable surfaces and nanoparticles. In the present research, a methodology for the controlled and selective hydrolysis of (co)poly(2-oxazoline)s is developed in an ethanol-water solvent mixture, opening the path toward a wide range of block poly(2-oxazoline-co-ethylene imine) (POx-PEI) copolymers with tunable properties. The unexpected influence of the selected ethanol-water binary solvent mixture on the hydrolysis kinetics and selectivity is highlighted in the pursuit of well-defined POx-PEI block copolymers. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Exploration of a Dynamic Merging Scheme for Precipitation Estimation over a Small Urban Catchment

    NASA Astrophysics Data System (ADS)

    Al-Azerji, Sherien; Rico-Ramirez, Miguel, ,, Dr.; Han, Dawei, ,, Prof.

    2016-04-01

    The accuracy of quantitative precipitation estimation is of significant importance for urban areas due to the potentially damaging consequences that can result from pluvial flooding. Improved accuracy could be accomplished by merging rain gauge measurements with weather radar data through different merging methods. Several factors may affect the accuracy of the merged data, and the gauge density used for merging is one of the most important. However, if there are no gauges inside the research area, then a gauge network outside the research area can be used for the merging. Generally speaking, the denser the rain gauge network is, the better the merging results that can be achieved. However, in practice, the rain gauge network around the research area is fixed, and the research question is about the optimal merging area. The hypothesis is that if the merging area is too small, there are fewer gauges for merging and thus the result would be poor. If the merging area is too large, gauges far away from the research area can be included in merging. However, due to their large distances, those gauges far away from the research area provide little relevant information to the study and may even introduce noise in merging. Therefore, an optimal merging area that produces the best merged rainfall estimation in the research area could exist. To test this hypothesis, the distance from the centre of the research area and the number of merging gauges around the research area were gradually increased and merging with a new domain of radar data was then performed. The performance of the new merging scheme was compared with a gridded interpolated rainfall from four experimental rain gauges installed inside the research area for validation. The result of this analysis shows that there is indeed an optimum distance from the centre of the research area and consequently an optimum number of rain gauges that produce the best merged rainfall data inside the research area. This study is of important practical value for estimating rainfall in an urban catchment (when there are no gauges available inside the catchment) by merging weather radar with rain gauge data from outside of the catchment. This has not been reported in the literature before now.

  9. Traffic-engineering-aware shortest-path routing and its application in IP-over-WDM networks [Invited

    NASA Astrophysics Data System (ADS)

    Lee, Youngseok; Mukherjee, Biswanath

    2004-03-01

    Single shortest-path routing is known to perform poorly for Internet traffic engineering (TE) where the typical optimization objective is to minimize the maximum link load. Splitting traffic uniformly over equal-cost multiple shortest paths in open shortest path first and intermediate system-intermediate system protocols does not always minimize the maximum link load when multiple paths are not carefully selected for the global traffic demand matrix. However, a TE-aware shortest path among all the equal-cost multiple shortest paths between each ingress-egress pair can be selected such that the maximum link load is significantly reduced. IP routers can use the globally optimal TE-aware shortest path without any change to existing routing protocols and without any serious configuration overhead. While calculating TE-aware shortest paths, the destination-based forwarding constraint at a node should be satisfied, because an IP router will forward a packet to the next hop toward the destination by looking up the destination prefix. We present a mathematical problem formulation for finding a set of TE-aware shortest paths for the given network as an integer linear program, and we propose a simple heuristic for solving large instances of the problem. Then we explore the usage of our proposed algorithm for the integrated TE method in IP-over-WDM networks. The proposed algorithm is evaluated through simulations in IP networks as well as in IP-over-WDM networks.
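
    The path-selection step described above, choosing one of the equal-cost shortest paths per ingress-egress pair so that the maximum link load stays low, can be approximated by a simple greedy pass over the demands. The sketch below is an illustrative stand-in for the paper's ILP and heuristic, not a reimplementation of them; the graph, link weights and traffic demands are hypothetical, and the destination-based forwarding constraint discussed in the abstract is not enforced here.

        # Greedy sketch: for each demand, enumerate its equal-cost shortest paths and pick
        # the one whose most-loaded link (after adding the demand) is smallest.
        # Illustrative only; does not enforce destination-based forwarding consistency.
        import networkx as nx

        def te_aware_paths(G, demands):
            load = {tuple(sorted(e)): 0.0 for e in G.edges()}
            chosen = {}
            for (src, dst), volume in demands.items():
                best_path, best_worst = None, float("inf")
                for path in nx.all_shortest_paths(G, src, dst, weight="weight"):
                    edges = [tuple(sorted(pair)) for pair in zip(path, path[1:])]
                    worst = max(load[e] + volume for e in edges)
                    if worst < best_worst:
                        best_path, best_worst = path, worst
                for pair in zip(best_path, best_path[1:]):
                    load[tuple(sorted(pair))] += volume
                chosen[(src, dst)] = best_path
            return chosen, max(load.values())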

  10. Continuity equation for probability as a requirement of inference over paths

    NASA Astrophysics Data System (ADS)

    González, Diego; Díaz, Daniela; Davis, Sergio

    2016-09-01

    Local conservation of probability, expressed as the continuity equation, is a central feature of non-equilibrium Statistical Mechanics. In the existing literature, the continuity equation is always motivated by heuristic arguments with no derivation from first principles. In this work we show that the continuity equation is a logical consequence of the laws of probability and the application of the formalism of inference over paths for dynamical systems. That is, the simple postulate that a system moves continuously through time following paths implies the continuity equation. The translation between the language of dynamical paths to the usual representation in terms of probability densities of states is performed by means of an identity derived from Bayes' theorem. The formalism presented here is valid independently of the nature of the system studied: it is applicable to physical systems and also to more abstract dynamics such as financial indicators, population dynamics in ecology among others.
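
    For reference, the local conservation law referred to above is the standard continuity equation for a probability density ρ(x, t) transported by a velocity field v(x, t); this textbook form is included only for orientation and is not a derivation from the record:

        \frac{\partial \rho(\mathbf{x},t)}{\partial t} + \nabla \cdot \left[ \rho(\mathbf{x},t)\, \mathbf{v}(\mathbf{x},t) \right] = 0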

  11. A marker-based watershed method for X-ray image segmentation.

    PubMed

    Zhang, Xiaodong; Jia, Fucang; Luo, Suhuai; Liu, Guiying; Hu, Qingmao

    2014-03-01

    Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer aided diagnosis (CAD), it is desirable to exclude the image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consisted of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging and background extraction. One hundred clinical direct radiograph X-ray images were used to validate the method. Manual thresholding and a multiscale gradient based watershed method were implemented for comparison. The proposed method yielded a dice coefficient of 0.964±0.069, which was better than that of the manual thresholding (0.937±0.119) and that of the multiscale gradient based watershed method (0.942±0.098). Special measures were adopted to decrease the computational cost, including discarding the few pixels with the highest grayscale values via a percentile cut-off, calculating the gradient magnitude through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072×3072 image on a Pentium 4 PC with 2.4 GHz CPU (4 cores) and 2 GB RAM, which was more than twice as fast as the multiscale gradient based watershed method. The proposed method could be a potential tool for diagnosis and quantification of X-ray images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
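
    A compressed sketch of the module pipeline summarized above (preprocessing, gradient computation, marker extraction, watershed from markers, background extraction) can be written with scikit-image. This is an illustrative reconstruction rather than the authors' implementation; the smoothing, quantile thresholds and the assumption that the darker seed region is background are hypothetical placeholders.

        # Sketch of marker-based watershed background extraction; thresholds are placeholders.
        import numpy as np
        from skimage import filters, morphology, segmentation

        def extract_background(xray, bg_frac=0.2, body_frac=0.6):
            img = filters.gaussian(xray.astype(float), sigma=2)    # simple preprocessing
            grad = filters.sobel(img)                              # gradient magnitude
            lo, hi = np.quantile(img, [bg_frac, body_frac])
            markers = np.zeros(img.shape, dtype=np.int32)
            markers[img < lo] = 1        # seed label 1: assumed background (convention-dependent)
            markers[img > hi] = 2        # seed label 2: assumed foreground
            labels = segmentation.watershed(grad, markers)         # flood the gradient from the seeds
            return morphology.remove_small_holes(labels == 1)      # tidied background mask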

  12. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former yields the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
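
    The n-step prediction step described above, row-normalizing a transition-count matrix and raising it to the n-th power under a stationary Markov-chain assumption, can be sketched in a few lines of numpy; the count matrix in the example is made up for illustration.

        # Sketch: row-normalize transition counts, then compute n-step transition probabilities.
        import numpy as np

        def n_step_transition(counts, n):
            counts = np.asarray(counts, dtype=float)
            rows = counts.sum(axis=1, keepdims=True)
            P = np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
            return np.linalg.matrix_power(P, n)

        if __name__ == "__main__":
            counts = [[5, 3, 0], [2, 0, 4], [1, 1, 2]]   # hypothetical location-to-location counts
            print(n_step_transition(counts, 3))          # probabilities of reaching each location in 3 steps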

  13. Simulations of Jovian Vortices: Sensitivity to Vertical Shear below the Cloud Tops.

    NASA Astrophysics Data System (ADS)

    Morales-Juberías, R.; Dowling, T. E.; Palotai, Cs. J.

    2003-05-01

    We have multiple, detailed observations of individual spots drifting with different velocities at different latitudes within a given shear zone. We also have indications from modeling Jupiter's White Ovals that the drift rates of anticyclones and cyclones are influenced by the structure of the atmosphere below the cloud tops. Therefore, it should be possible to combine such observations with modeling to learn about the abyssal circulation. We are investigating the influence the vertical wind shear has on jovian vortices with two versions of the EPIC atmospheric model, the original pure-isentropic-coordinate model and the new hybrid-coordinate model that transitions smoothly to a pressure-based coordinate where the atmosphere becomes nearly neutrally stratified and the potential temperature ceases to be a useful coordinate. The hybrid model allows us to achieve significantly greater depth and vertical resolution, and so gain more sensitivity to the baroclinic effects of the vertical shear of the zonal wind. There are technical issues that arise with the hybrid model; in particular, it is more challenging to introduce a balanced vortex since there is no simple streamfunction like the Montgomery streamfunction used in the isentropic-coordinate case. Our scientific goal is to reproduce the observed interactions of cyclones with anticyclones. For example, prior to the final merger of White Ovals BE and FA, the cyclonic vortex between them, which appeared to act as a merger inhibitor (a common occurrence when two White Ovals drifted close to each other), was pulled out of the triple system by the nearby transit of the Great Red Spot, thereby leaving a free path for BE and FA to merge. We will present our latest results on anticyclone-cyclone interactions and the influence of the abyssal circulation. This research is funded by NASA's Planetary Atmospheres and EPSCoR programs.

  14. Rapid-X - An FPGA Development Toolset Using a Custom Simulink Library for MTCA.4 Modules

    NASA Astrophysics Data System (ADS)

    Prędki, Paweł; Heuer, Michael; Butkowski, Łukasz; Przygoda, Konrad; Schlarb, Holger; Napieralski, Andrzej

    2015-06-01

    The recent introduction of advanced hardware architectures such as the Micro Telecommunications Computing Architecture (MTCA) caused a change in the approach to implementation of control schemes in many fields. The development has been moving away from traditional programming languages (C/C++) to hardware description languages (VHDL, Verilog), which are used in FPGA development. With MATLAB/Simulink it is possible to describe complex systems with block diagrams and simulate their behavior. Those diagrams are then used by the HDL experts to implement exactly the required functionality in hardware. Both the porting of existing applications and the adaptation of new ones require a lot of development time from them. To solve this, Xilinx System Generator, a toolbox for MATLAB/Simulink, allows rapid prototyping of those block diagrams using hardware modelling. It is still up to the firmware developer to merge this structure with the hardware-dependent HDL project. This prevents the application engineer from quickly verifying the proposed schemes in real hardware. The framework described in this article overcomes these challenges, offering a hardware-independent library of components that can be used in Simulink/System Generator models. The components are subsequently translated into VHDL entities and integrated with a pre-prepared VHDL project template. Furthermore, the entire implementation process is run in the background, giving the user an almost one-click path from control scheme modelling and simulation to bit-file generation. This approach allows the application engineers to quickly develop new schemes and test them in a real hardware environment. The applications may range from simple data logging or signal generation to very advanced controllers. Taking advantage of the Simulink simulation capabilities and user-friendly hardware implementation routines, the framework significantly decreases the development time of FPGA-based applications.

  15. Semi-empirical formulation of multiple scattering for the Gaussian beam model of heavy charged particles stopping in tissue-like matter.

    PubMed

    Kanematsu, Nobuyuki

    2009-03-07

    Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving the scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between RMS end-point displacement and range of incident particles in water, empirically found in measurements and detailed calculations. This fact was translated into a simple linear formula, from which a scattering power that is simply inversely proportional to the residual range was derived. The simplicity enabled an analytical formulation for ions stopping in water, which was designed to be equivalent to the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.
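
    As a back-of-the-envelope consistency check (not the paper's own derivation), the quoted linearity follows from the standard Fermi-Eyges expression for the end-point displacement variance if the scattering power is taken as inversely proportional to the residual range, T(s) ≈ k/(R − s) with k a material constant:

        \sigma_y^2(R) = \int_0^R (R-s)^2\, T(s)\, \mathrm{d}s \approx k \int_0^R (R-s)\, \mathrm{d}s = \tfrac{1}{2} k R^2 ,
        \qquad\text{so}\qquad \sigma_y(R) \approx \sqrt{k/2}\; R ,

    i.e., the RMS end-point displacement grows linearly with the range R, as stated above.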

  16. Applicability of Zipper Merge Versus Early Merge in Kentucky Work Zones

    DOT National Transportation Integrated Search

    2017-12-24

    In an effort to improve work zone safety and streamline traffic flows, a number of state transportation agencies (STAs) have experimented with the zipper merge. The zipper merge differs from a conventional, or early, merge in that vehicles do not mer...

  17. Modified Hitschfeld-Bordan Equations for Attenuation-Corrected Radar Rain Reflectivity: Application to Nonuniform Beamfilling at Off-Nadir Incidence

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Liao, Liang

    2013-01-01

    As shown by Takahashi et al., multiple path attenuation estimates over the field of view of an airborne or spaceborne weather radar are feasible for off-nadir incidence angles. This follows from the fact that the surface reference technique, which provides path attenuation estimates, can be applied to each radar range gate that intersects the surface. This study builds on this result by showing that three of the modified Hitschfeld-Bordan estimates for the attenuation-corrected radar reflectivity factor can be generalized to the case where multiple path attenuation estimates are available, thereby providing a correction to the effects of nonuniform beamfilling. A simple simulation is presented showing some strengths and weaknesses of the approach.
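
    For background, the classical single-path Hitschfeld-Bordan solution that such modified estimates build on can be stated as follows; this is the standard textbook form under a power-law relation k = αZ_e^β between specific attenuation k (in dB/km) and effective reflectivity Z_e, not the modified off-nadir estimator of this record:

        Z_e(r) = \frac{Z_m(r)}{\left[\, 1 - 0.2\,\ln(10)\, \beta \int_0^r \alpha(s)\, Z_m^{\beta}(s)\, \mathrm{d}s \,\right]^{1/\beta}}

    where Z_m(r) is the measured (attenuated) reflectivity at range r; the correction becomes unstable as the bracketed term approaches zero, which is the divergence that constraints such as the surface reference technique help to control.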

  18. Investigating decoherence in a simple system

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1991-01-01

    The results of some simple calculations designed to study quantum decoherence are presented. The physics of quantum decoherence are briefly reviewed, and a very simple 'toy' model is analyzed. Exact solutions are found using numerical techniques. The type of incoherence exhibited by the model can be changed by varying a coupling strength. The author explains why the conventional approach to studying decoherence by checking the diagonality of the density matrix is not always adequate. Two other approaches, the decoherence functional and the Schmidt paths approach, are applied to the toy model and contrasted to each other. Possible problems with each are discussed.

  19. The action uncertainty principle and quantum gravity

    NASA Astrophysics Data System (ADS)

    Mensky, Michael B.

    1992-02-01

    Results of the path-integral approach to the quantum theory of continuous measurements have been formulated in a preceding paper in the form of an inequality of the type of the uncertainty principle. The new inequality was called the action uncertainty principle, AUP. It was shown that the AUP allows one to find, in a simple way, what outputs of the continuous measurements will occur with high probability. Here a simpler form of the AUP will be formulated, δS ≳ ħ. When applied to quantum gravity, it leads in a very simple way to the Rosenfeld inequality for measurability of the average curvature.

  20. An Application of Self-Organizing Map for Multirobot Multigoal Path Planning with Minmax Objective.

    PubMed

    Faigl, Jan

    2016-01-01

    In this paper, Self-Organizing Map (SOM) for the Multiple Traveling Salesman Problem (MTSP) with minmax objective is applied to the robotic problem of multigoal path planning in the polygonal domain. The main difficulty of such SOM deployment is determination of collision-free paths among obstacles that is required to evaluate the neuron-city distances in the winner selection phase of unsupervised learning. Moreover, a collision-free path is also needed in the adaptation phase, where neurons are adapted towards the presented input signal (city) to the network. Simple approximations of the shortest path are utilized to address this issue and solve the robotic MTSP by SOM. Suitability of the proposed approximations is verified in the context of cooperative inspection, where cities represent sensing locations that guarantee to "see" the whole robots' workspace. The inspection task formulated as the MTSP-Minmax is solved by the proposed SOM approach and compared with the combinatorial heuristic GENIUS. The results indicate that the proposed approach provides competitive results to GENIUS and support applicability of SOM for robotic multigoal path planning with a group of cooperating mobile robots. The proposed combination of approximate shortest paths with unsupervised learning opens further applications of SOM in the field of robotic planning.

  1. An Application of Self-Organizing Map for Multirobot Multigoal Path Planning with Minmax Objective

    PubMed Central

    Faigl, Jan

    2016-01-01

    In this paper, Self-Organizing Map (SOM) for the Multiple Traveling Salesman Problem (MTSP) with minmax objective is applied to the robotic problem of multigoal path planning in the polygonal domain. The main difficulty of such SOM deployment is determination of collision-free paths among obstacles that is required to evaluate the neuron-city distances in the winner selection phase of unsupervised learning. Moreover, a collision-free path is also needed in the adaptation phase, where neurons are adapted towards the presented input signal (city) to the network. Simple approximations of the shortest path are utilized to address this issue and solve the robotic MTSP by SOM. Suitability of the proposed approximations is verified in the context of cooperative inspection, where cities represent sensing locations that guarantee to “see” the whole robots' workspace. The inspection task formulated as the MTSP-Minmax is solved by the proposed SOM approach and compared with the combinatorial heuristic GENIUS. The results indicate that the proposed approach provides competitive results to GENIUS and support applicability of SOM for robotic multigoal path planning with a group of cooperating mobile robots. The proposed combination of approximate shortest paths with unsupervised learning opens further applications of SOM in the field of robotic planning. PMID:27340395
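
    As a rough illustration of the adaptation loop underlying the approach above, the sketch below applies a self-organizing ring of neurons to a plain Euclidean single-salesman tour; it omits the multirobot minmax objective and the collision-free shortest-path approximations that are the substance of the paper, and all parameters (ring size, learning rate, decay) are ad hoc.

        # Toy SOM for a Euclidean TSP tour: a neuron ring is pulled toward randomly
        # presented cities; the final ring order induces the visiting order.
        import numpy as np

        def som_tour(cities, n_iter=5000, seed=0):
            rng = np.random.default_rng(seed)
            cities = np.asarray(cities, dtype=float)
            m = 3 * len(cities)                                   # ring size (heuristic)
            neurons = cities.mean(0) + 0.1 * rng.standard_normal((m, 2))
            lr, radius = 0.8, m / 4.0
            idx = np.arange(m)
            for _ in range(n_iter):
                city = cities[rng.integers(len(cities))]
                winner = int(np.argmin(np.linalg.norm(neurons - city, axis=1)))
                ring_dist = np.minimum(np.abs(idx - winner), m - np.abs(idx - winner))
                h = np.exp(-(ring_dist ** 2) / (2.0 * max(radius, 1e-6) ** 2))
                neurons += lr * h[:, None] * (city - neurons)     # pull neighborhood toward the city
                lr *= 0.9997
                radius *= 0.9997                                  # ad hoc decay schedules
            return sorted(range(len(cities)),
                          key=lambda i: int(np.argmin(np.linalg.norm(neurons - cities[i], axis=1))))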

  2. BBMerge – Accurate paired shotgun read merging via overlap

    DOE PAGES

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    2017-10-26

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
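
    As a toy illustration of overlap-based pair merging (this is not BBMerge's algorithm, and it ignores quality scores, adapters and the k-mer gap-filling mode mentioned in the record above), the sketch below scans candidate overlaps between a forward read and the reverse complement of its mate and merges at the longest acceptable overlap; the sequences and thresholds are invented.

        # Toy read-pair merging by overlap; illustrative only, not BBMerge's method.
        def revcomp(seq):
            return seq.translate(str.maketrans("ACGTN", "TGCAN"))[::-1]

        def merge_pair(r1, r2, min_overlap=12, max_mismatch_frac=0.1):
            r2 = revcomp(r2)                                   # orient the mate onto the same strand
            for ov in range(min(len(r1), len(r2)), min_overlap - 1, -1):
                a, b = r1[-ov:], r2[:ov]
                mismatches = sum(x != y for x, y in zip(a, b))
                if mismatches <= max_mismatch_frac * ov:
                    return r1 + r2[ov:]                        # merge at the longest acceptable overlap
            return None                                        # pair does not overlap confidently

        if __name__ == "__main__":
            # Invented 18 bp reads from a 24 bp fragment; expected merge: ACGTACGTACGTAAACCCGGGTTT
            print(merge_pair("ACGTACGTACGTAAACCC", "AAACCCGGGTTTACGTAC"))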

  3. BBMerge – Accurate paired shotgun read merging via overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.

  4. Path Network Recovery Using Remote Sensing Data and Geospatial-Temporal Semantic Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William C. McLendon III; Brost, Randy C.

    Remote sensing systems produce large volumes of high-resolution images that are difficult to search. The GeoGraphy (pronounced Geo-Graph-y) framework [2, 20] encodes remote sensing imagery into a geospatial-temporal semantic graph representation to enable high level semantic searches to be performed. Typically scene objects such as buildings and trees tend to be shaped like blocks with few holes, but other shapes generated from path networks tend to have a large number of holes and can span a large geographic region due to their connectedness. For example, we have a dataset covering the city of Philadelphia in which there is a single road network node spanning a 6 mile x 8 mile region. Even a simple question such as "find two houses near the same street" might give unexpected results. More generally, nodes arising from networks of paths (roads, sidewalks, trails, etc.) require additional processing to make them useful for searches in GeoGraphy. We have assigned the term Path Network Recovery to this process. Path Network Recovery is a three-step process involving (1) partitioning the network node into segments, (2) repairing broken path segments interrupted by occlusions or sensor noise, and (3) adding path-aware search semantics into GeoQuestions. This report covers the path network recovery process, how it is used, and some example use cases of the current capabilities.

  5. Distinct roles of hippocampus and medial prefrontal cortex in spatial and nonspatial memory.

    PubMed

    Sapiurka, Maya; Squire, Larry R; Clark, Robert E

    2016-12-01

    In earlier work, patients with hippocampal damage successfully path integrated, apparently by maintaining spatial information in working memory. In contrast, rats with hippocampal damage were unable to path integrate, even when the paths were simple and working memory might have been expected to support performance. We considered possible ways to understand these findings. We tested rats with either hippocampal lesions or lesions of medial prefrontal cortex (mPFC) on three tasks of spatial or nonspatial memory: path integration, spatial alternation, and a nonspatial alternation task. Rats with mPFC lesions were impaired on both spatial and nonspatial alternation but performed normally on path integration. By contrast, rats with hippocampal lesions were impaired on path integration and spatial alternation but performed normally on nonspatial alternation. We propose that rodent neocortex is limited in its ability to construct a coherent spatial working memory of complex environments. Accordingly, in tasks such as path integration and spatial alternation, working memory cannot depend on neocortex alone. Rats may accomplish many spatial memory tasks by relying on long-term memory. Alternatively, they may accomplish these tasks within working memory through sustained coordination between hippocampus and other cortical brain regions such as mPFC, in the case of spatial alternation, or parietal cortex in the case of path integration. © 2016 Wiley Periodicals, Inc.

  6. Measuring phonon mean free path distributions by probing quasiballistic phonon transport in grating nanostructures

    DOE PAGES

    Zeng, Lingping; Collins, Kimberlee C.; Hu, Yongjie; ...

    2015-11-27

    Heat conduction in semiconductors and dielectrics depends upon their phonon mean free paths that describe the average travelling distance between two consecutive phonon scattering events. Nondiffusive phonon transport is being exploited to extract phonon mean free path distributions. Here, we describe an implementation of a nanoscale thermal conductivity spectroscopy technique that allows for the study of mean free path distributions in optically absorbing materials with relatively simple fabrication and a straightforward analysis scheme. We pattern 1D metallic grating of various line widths but fixed gap size on sample surfaces. The metal lines serve as both heaters and thermometers in time-domain thermoreflectance measurements and simultaneously act as wiregrid polarizers that protect the underlying substrate from direct optical excitation and heating. We demonstrate the viability of this technique by studying length-dependent thermal conductivities of silicon at various temperatures. The thermal conductivities measured with different metal line widths are analyzed using suppression functions calculated from the Boltzmann transport equation to extract the phonon mean free path distributions with no calibration required. Furthermore, this table-top ultrafast thermal transport spectroscopy technique enables the study of mean free path spectra in a wide range of technologically important materials.

  7. Anomalous optical scattering from intersecting fine particles

    NASA Astrophysics Data System (ADS)

    Paley, Alina V.; Radchik, Alex V.; Smith, Geoffrey B.

    1995-09-01

    There are many areas of science and technology where the scattering of electromagnetic waves by clusters or merging particles is of interest. The merging particles under study might be inclusions in high-density composites, liquid drops, biological cells, macroscopic ceramic particles, etc. As intersecting particles are bounded by a complex physical surface, the problem of scattering from these particles, valid for any degree of merging, including touching, and for arbitrary materials of the constituents, has received limited attention. Here we present solutions which are valid and exact in the long wavelength limit compared with the size of intersecting spherical particles and cardioidal particles of similar dimensions. Both shapes are almost coincident everywhere except in the region of intersection. We treat the case when the waves are polarized along the common axis (longitudinal field). The solutions of Laplace's equation are integrals (spheres) or sums (cardioids) over continuous or discrete eigenvalue spectra respectively. The spectral dependencies of the resulting extinction coefficients and the scattering for the spherical and cardioidal particles are quite distinct. There is an enormous difference in the magnitude of absorption responses. Overall the cardioidal particle behaves as if it is almost invisible in terms of effects on the external field for a very broad band of optical frequencies. The latter result was checked for a number of dielectric permittivities and seems to be universal. It scatters far more weakly than the isolated sphere. In contrast, the intersecting sphere has an extinction band which is broad and is much enhanced at longer wavelengths relative to the simple sphere. This result has significant implications for the design of surfaces with minimum scattering.

  8. Merging of long-term memories in an insect.

    PubMed

    Hunt, Kathryn L; Chittka, Lars

    2015-03-16

    Research on comparative cognition has largely focused on successes and failures of animals to solve certain cognitive tasks, but in humans, memory errors can be more complex than simple failures to retrieve information [1, 2]. The existence of various types of "false memories," in which individuals remember events that they have never actually encountered, are now well established in humans [3, 4]. We hypothesize that such systematic memory errors may be widespread in animals whose natural lifestyle involves the processing and recollection of memories for multiple stimuli [5]. We predict that memory traces for various stimuli may "merge," such that features acquired in distinct bouts of training are combined in an animal's mind, so that stimuli that have never been viewed before, but are a combination of the features presented in training, may be chosen during recall. We tested this using bumblebees, Bombus terrestris. When individuals were first trained to a solid single-colored stimulus followed by a black and white (b/w)-patterned stimulus, a subsequent preference for the last entrained stimulus was found in both short-term- and long-term-memory tests. However, when bees were first trained to b/w-patterned stimuli followed by solid single-colored stimuli and were tested in long-term-memory tests 1 or 3 days later, they only initially preferred the most recently rewarded stimulus, and then switched their preference to stimuli that combined features from the previous color and pattern stimuli. The observed merging of long-term memories is thus similar to the memory conjunction error found in humans [6]. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Optimisation of tool path for improved formability of commercial pure aluminium sheets during the incremental forming process

    NASA Astrophysics Data System (ADS)

    Prasad, Moyye Devi; Nagarajan, D.

    2018-05-01

    An axisymmetric dome of 70 mm diameter and 35 mm depth was formed by the ISF process using varying proportions (25, 50 and 75%) of spiral (S) and helical (H) tool path combinations as a single tool path strategy, on 2 mm thick commercially pure aluminium sheets. A maximum forming depth of ˜30 mm was observed on all the components, irrespective of the different tool path combinations employed. None of the components fractured for the different tool path combinations used. The springback was also the same and uniform for all the tool path combinations employed, except for 75S25H, which showed slightly larger springback. The wall thickness reduced drastically up to a certain forming depth and increased with further increase in forming depth for all the tool path combinations. The maximum thinning occurred near the maximum wall angle region for all the components. The wall thickness improved significantly (around 10-15%) near the maximum wall angle region for the 25S75H combination compared with the complete spiral and other tool path strategies. It is speculated that this improvement in wall thickness may be mainly due to the combined contribution of the simple shear and uniaxial dilatation deformation modes of the helical tool path strategy in the 25S75H combination. This increase in wall thickness will greatly help in reducing plastic instability and postponing the early failure of the component.

  10. Simple techniques for forecasting bicycle and pedestrian demand.

    DOT National Transportation Integrated Search

    2009-01-01

    Bicycle lanes, sidewalks, and shared-use paths are some of the most : commonly requested transportation improvements in many parts of : the country. Increased fuel costs, desire to fit exercise into personal : routines, and land-use changes all are d...

  11. Path integral Liouville dynamics: Applications to infrared spectra of OH, water, ammonia, and methane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jian, E-mail: jianliupku@pku.edu.cn; State Key Joint Laboratory of Environmental Simulation and Pollution Control, College of Environmental Sciences and Engineering, Peking University, Beijing 100871; Zhang, Zhijun

    Path integral Liouville dynamics (PILD) is applied to vibrational dynamics of several simple but representative realistic molecular systems (OH, water, ammonia, and methane). The dipole-derivative autocorrelation function is employed to obtain the infrared spectrum as a function of temperature and isotopic substitution. Comparison to the exact vibrational frequency shows that PILD produces a reasonably accurate peak position with a relatively small full width at half maximum. PILD offers a potentially useful trajectory-based quantum dynamics approach to compute vibrational spectra of molecular systems.
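
    As a sketch of the general procedure described above (the post-processing step, not the PILD propagation itself), the snippet below turns a dipole-derivative time series into an infrared lineshape by forming its autocorrelation function and Fourier transforming it; the trajectory array, time step and window choice are placeholder assumptions.

        import numpy as np

        def ir_spectrum_from_dipole_derivative(dmu_dt, dt):
            """Estimate an IR lineshape from a dipole-derivative time series.

            dmu_dt : (n_steps, 3) array of d(mu)/dt along a trajectory
            dt     : time step between samples [s]
            Returns (frequencies, spectrum); intensities are not calibrated.
            """
            n = len(dmu_dt)
            # Autocorrelation C(t) = <dmu/dt(0) . dmu/dt(t)>, averaged over time origins.
            corr = np.zeros(n)
            for lag in range(n):
                prods = np.einsum('ij,ij->i', dmu_dt[: n - lag], dmu_dt[lag:])
                corr[lag] = prods.mean()
            # Hann window to reduce truncation ripples, then Fourier transform.
            window = np.hanning(2 * n)[n:]
            spectrum = np.abs(np.fft.rfft(corr * window))
            freqs = np.fft.rfftfreq(n, d=dt)
            return freqs, spectrum

        # Example with a synthetic damped-oscillator dipole derivative.
        t = np.arange(0, 2000) * 0.5e-15          # 0.5 fs time step
        signal = np.cos(2 * np.pi * 1.0e14 * t) * np.exp(-t / 5e-13)
        dmu_dt = np.column_stack([signal, 0 * t, 0 * t])
        freqs, spec = ir_spectrum_from_dipole_derivative(dmu_dt, dt=0.5e-15)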

  12. Nearly deterministic quantum Fredkin gate based on weak cross-Kerr nonlinearity

    NASA Astrophysics Data System (ADS)

    Wu, Yun-xiang; Zhu, Chang-hua; Pei, Chang-xing

    2016-09-01

    A scheme of an optical quantum Fredkin gate is presented based on weak cross-Kerr nonlinearity. By an auxiliary coherent state with the cross-Kerr nonlinearity effect, photons can interact with each other indirectly, and a non-demolition measurement for photons can be implemented. Combined with the homodyne detection, classical feedforward, polarization beam splitters and Pauli-X operations, a controlled-path gate is constructed. Furthermore, a quantum Fredkin gate is built based on the controlled-path gate. The proposed Fredkin gate is simple in structure and feasible by current experimental technology.
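
    For reference, the Fredkin gate realized by such schemes is the standard controlled-SWAP operation (this is the textbook definition, not a detail specific to this proposal); acting on a control qubit c and two target qubits a, b it gives

    \[
    |0\rangle_c\,|a\rangle|b\rangle \;\mapsto\; |0\rangle_c\,|a\rangle|b\rangle,
    \qquad
    |1\rangle_c\,|a\rangle|b\rangle \;\mapsto\; |1\rangle_c\,|b\rangle|a\rangle,
    \]

    so the two target states are exchanged only when the control is \(|1\rangle\).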

  13. A path model for Whittaker vectors

    NASA Astrophysics Data System (ADS)

    Di Francesco, Philippe; Kedem, Rinat; Turmunkh, Bolor

    2017-06-01

    In this paper we construct weighted path models to compute Whittaker vectors in the completion of Verma modules, as well as Whittaker functions of fundamental type, for all finite-dimensional simple Lie algebras, affine Lie algebras, and the quantum algebra U_q(sl_{r+1}). This leads to series expressions for the Whittaker functions. We show how this construction leads directly to the quantum Toda equations satisfied by these functions, and to the q-difference equations in the quantum case. We investigate the critical limit of affine Whittaker functions computed in this way.

  14. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  15. Phase-and-amplitude recovery from a single phase-contrast image using partially spatially coherent x-ray radiation

    NASA Astrophysics Data System (ADS)

    Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele

    2018-05-01

    A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields and assumes the object under study is composed of a single-material. A priori knowledge of the object’s complex refractive index and information obtained by characterizing the spatial coherence of the source is required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm and is also numerically stable in the presence of noise.
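
    The coherence correction term itself is specific to the paper, but the single-material filter it modifies is the standard Paganin-type retrieval, which can be sketched as below; plane-wave illumination and unit magnification are assumed, and the parameter values are placeholders. The source-blur correction described above would enter as an extra factor in the Fourier-space denominator.

        import numpy as np

        def single_material_phase_retrieval(intensity, pixel_size, dist, delta, mu):
            """Paganin-type single-image phase retrieval for a single-material object.

            intensity  : flat-field-corrected image I/I0 (2D array)
            pixel_size : detector pixel size [m]
            dist       : object-to-detector propagation distance [m]
            delta, mu  : refractive index decrement and linear attenuation coefficient
            Returns the projected thickness map [m].
            """
            ny, nx = intensity.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
            k2 = kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2
            # Low-pass filter that undoes free-space propagation for a single material.
            filt = 1.0 / (1.0 + dist * delta * k2 / mu)
            filtered = np.real(np.fft.ifft2(np.fft.fft2(intensity) * filt))
            thickness = -np.log(np.clip(filtered, 1e-12, None)) / mu
            return thickness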

  16. Software architecture of the Magdalena Ridge Observatory Interferometer

    NASA Astrophysics Data System (ADS)

    Farris, Allen; Klinglesmith, Dan; Seamons, John; Torres, Nicolas; Buscher, David; Young, John

    2010-07-01

    Merging software from 36 independent work packages into a coherent, unified software system with a lifespan of twenty years is the challenge faced by the Magdalena Ridge Observatory Interferometer (MROI). We solve this problem by using standardized interface software automatically generated from simple high-level descriptions of these systems, relying only on Linux, GNU, and POSIX without complex software such as CORBA. This approach, based on gigabit Ethernet with a TCP/IP protocol, provides the flexibility to integrate and manage diverse, independent systems using a centralized supervisory system that provides a database manager, data collectors, fault handling, and an operator interface.

  17. Research of real-time video processing system based on 6678 multi-core DSP

    NASA Astrophysics Data System (ADS)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly in the direction of intelligent applications, and the complexity of the algorithms places heavy demands on processor performance. In this article, an FPGA + TMS320C6678 architecture is used to combine image defogging, image stabilization and image enhancement into one system with good real-time behaviour and superior performance. This overcomes the limitations of traditional video processing systems, whose functions are simple and whose products are single-purpose, and addresses video applications such as security monitoring, so that video surveillance can be used to full effect and enterprise economic benefits improved.

  18. Advanced digital modulation: Communication techniques and monolithic GaAs technology

    NASA Technical Reports Server (NTRS)

    Wilson, S. G.; Oliver, J. D., Jr.; Kot, R. C.; Richards, C. R.

    1983-01-01

    Communications theory and practice are merged with state-of-the-art technology in IC fabrication, especially monolithic GaAs technology, to examine the general feasibility of a number of advanced technology digital transmission systems. Satellite-channel models with (1) superior throughput, perhaps 2 Gbps; (2) attractive weight and cost; and (3) high RF power and spectrum efficiency are discussed. Transmission techniques possessing reasonably simple architectures capable of monolithic fabrication at high speeds were surveyed. This included a review of amplitude/phase shift keying (APSK) techniques and the continuous-phase-modulation (CPM) methods, of which MSK represents the simplest case.

  19. Determination of chloride in admixtures and aggregates for cement by a simple flow injection potentiometric system.

    PubMed

    Junsomboon, Jaroon; Jakmunee, Jaroon

    2008-07-15

    A simple flow injection system has been developed that uses three 3-way solenoid valves as an electrically controlled injection valve and a simple home-made chloride ion-selective electrode based on Ag/AgCl wire as the sensor for the determination of water-soluble chloride in admixtures and aggregates for cement. A liquid sample or an extract was injected into a water carrier stream, which was then merged with a 0.1 M KNO₃ stream and flowed through a flow cell where the solution contacts the sensor, producing a potential change recorded as a peak. A calibration graph in the range of 10-100 mg L⁻¹ was obtained with a detection limit of 2 mg L⁻¹. Relative standard deviations for 7 replicate injections of 20, 60 and 90 mg L⁻¹ chloride solutions were 1.0, 1.2 and 0.6%, respectively. A sample throughput of 60 h⁻¹ was achieved with the consumption of 1 mL each of electrolyte solution and water carrier. The developed method was validated against the British Standard methods.

  20. Subscription merging in filter-based publish/subscribe systems

    NASA Astrophysics Data System (ADS)

    Zhang, Shengdong; Shen, Rui

    2013-03-01

    Filter-based publish/subscribe systems suffer from high subscription maintenance costs because each broker in the system stores a large number of subscriptions. Advertisement and covering are not sufficient to overcome this problem, so subscription merging has been proposed. However, current research lacks an efficient and practical merging mechanism. In this paper, we propose a novel subscription merging mechanism. The mechanism is both time and space efficient, and can flexibly control the merging granularity. The merging mechanism has been verified through both theoretical and simulation-based evaluation.
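
    As a toy illustration of the general idea (not the specific mechanism proposed in the paper), the sketch below merges attribute-range subscriptions only when the covering filter does not widen them too much; the max_slack parameter used to control the merging granularity is a placeholder assumption.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class RangeSub:
            """A subscription expressed as a closed numeric range on one attribute."""
            attr: str
            low: float
            high: float

            def width(self):
                return self.high - self.low

        def merge_pair(a, b):
            """Smallest range covering both subscriptions (same attribute only)."""
            assert a.attr == b.attr
            return RangeSub(a.attr, min(a.low, b.low), max(a.high, b.high))

        def merge_subscriptions(subs, max_slack=5.0):
            """Greedily merge same-attribute subscriptions.

            Two subscriptions are merged when the covering range is at most
            max_slack wider than the ranges it replaces, which bounds the extra
            (unwanted) traffic the broker may forward after merging.
            """
            merged = list(subs)
            changed = True
            while changed:
                changed = False
                for i in range(len(merged)):
                    for j in range(i + 1, len(merged)):
                        a, b = merged[i], merged[j]
                        if a.attr != b.attr:
                            continue
                        m = merge_pair(a, b)
                        if m.width() - (a.width() + b.width()) <= max_slack:
                            merged[j] = m
                            del merged[i]
                            changed = True
                            break
                    if changed:
                        break
            return merged

        # Example: price subscriptions [10, 20], [18, 30] and [90, 95].
        subs = [RangeSub("price", 10, 20), RangeSub("price", 18, 30), RangeSub("price", 90, 95)]
        print(merge_subscriptions(subs))   # the first two merge; the distant one is kept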

  1. Effectively infinite optical path-length created using a simple cubic photonic crystal for extreme light trapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frey, Brian J.; Kuang, Ping; Hsieh, Mei-Li

    A 900 nm thick TiO2 simple cubic photonic crystal with lattice constant 450 nm was fabricated and used to experimentally validate a newly-discovered mechanism for extreme light-bending. Absorption enhancement was observed extending 1–2 orders of magnitude over that of a reference TiO2 film. Several enhancement peaks in the region from 600–950 nm were identified, which far exceed both the ergodic fundamental limit and the limit based on surface-gratings, with some peaks exceeding 100 times enhancement. These results are attributed to radically sharp refraction where the optical path length approaches infinity due to the Poynting vector lying nearly parallel to the photonic crystal interface. The observed phenomena follow directly from the simple cubic symmetry of the photonic crystal, and can be achieved by integrating the light-trapping architecture into the absorbing volume. These results are not dependent on the material used, and can be applied to any future light trapping applications such as phosphor-converted white light generation, water-splitting, or thin-film solar cells, where increased response in areas of weak absorption is desired.

  2. Effectively infinite optical path-length created using a simple cubic photonic crystal for extreme light trapping

    DOE PAGES

    Frey, Brian J.; Kuang, Ping; Hsieh, Mei-Li; ...

    2017-06-23

    A 900 nm thick TiO2 simple cubic photonic crystal with lattice constant 450 nm was fabricated and used to experimentally validate a newly-discovered mechanism for extreme light-bending. Absorption enhancement was observed extending 1–2 orders of magnitude over that of a reference TiO2 film. Several enhancement peaks in the region from 600–950 nm were identified, which far exceed both the ergodic fundamental limit and the limit based on surface-gratings, with some peaks exceeding 100 times enhancement. These results are attributed to radically sharp refraction where the optical path length approaches infinity due to the Poynting vector lying nearly parallel to the photonic crystal interface. The observed phenomena follow directly from the simple cubic symmetry of the photonic crystal, and can be achieved by integrating the light-trapping architecture into the absorbing volume. These results are not dependent on the material used, and can be applied to any future light trapping applications such as phosphor-converted white light generation, water-splitting, or thin-film solar cells, where increased response in areas of weak absorption is desired.

  3. The Threshold Shortest Path Interdiction Problem for Critical Infrastructure Resilience Analysis

    DTIC Science & Technology

    2017-09-01

    being pushed over the minimum designated threshold. ... A simple setting to motivate this research is the "30 minutes or it's free" guarantee ... parallel network structure in Fig. 4.4 is simple in design, yet shows a relatively high resilience when compared to the other networks in general. The high ... United States Naval Academy, 2002. Submitted in partial fulfillment of the requirements for the degree of Master of Science in Operations Research.

  4. A Graph-Embedding Approach to Hierarchical Visual Word Mergence.

    PubMed

    Wang, Lei; Liu, Lingqiao; Zhou, Luping

    2017-02-01

    Appropriately merging visual words is an effective dimension reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed, because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario where a preferred structure and an undesired structure are defined, and can therefore effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and thus maintain state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first one that addresses hierarchical visual word mergence in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and outperform all the existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.

  5. Magnetized Target Fusion Collaboration. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slough, John

    Nuclear fusion has the potential to satisfy the prodigious power that the world will demand in the future, but it has yet to be harnessed as a practical energy source. The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. It is the contention here that a simpler path to fusion can be achieved by creating fusion conditions in a different regime at small scale (~ a few cm). One such program now under study, referred to as Magnetized Target Fusion (MTF), is directed at obtaining fusion in this high energy density regime by rapidly compressing a compact toroidal plasmoid commonly referred to as a Field Reversed Configuration (FRC). To make fusion practical at this smaller scale, an efficient method for compressing the FRC to fusion gain conditions is required. In one variant of MTF a conducting metal shell is imploded electrically. This radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target plasmoid suppresses the thermal transport to the confining shell, thus lowering the imploding power needed to compress the target. The undertaking described in this report was to provide a suitable target FRC, as well as a simple and robust method for inserting and stopping the FRC within the imploding liner. The FRC must also survive during the time it takes for the metal liner to compress the FRC target. The initial work at the UW was focused on developing adequate preionization and flux trapping that were found to be essential in past experiments for obtaining the density, flux and most critically, FRC lifetime required for MTF. The timescale for testing and development of such a source can be rapidly accelerated by taking advantage of a new facility funded by the Department of Energy. At this facility, two inductive plasma accelerators (IPA) were constructed and tested. Recent experiments with these IPAs have demonstrated the ability to rapidly form, accelerate and merge two hypervelocity FRCs into a compression chamber. The resultant FRC that was formed was hot (T_ion ~ 400 eV), stationary, and stable with a configuration lifetime several times that necessary for the MTF liner experiments. The accelerator length was less than 1 meter, and the time from the initiation of formation to the establishment of the final equilibrium was less than 10 microseconds. With some modification, each accelerator can be made capable of producing FRCs suitable for the production of the target plasma for the MTF liner experiment. Based on the initial FRC merging/compression results, the design and methodology for an experimental realization of the target plasma for the MTF liner experiment can now be defined. The construction and testing of the key components for the formation of the target plasma at the Air Force Research Laboratory (AFRL) will be performed on the IPA experiment, now at MSNW. A high density FRC plasmoid will be formed and accelerated out of each IPA into a merging/compression chamber similar to the imploding liner at AFRL. The properties of the resultant FRC plasma (size, temperature, density, flux, lifetime) will be obtained. The process will be optimized, and a final design for implementation at AFRL will be carried out. When implemented at AFRL it is anticipated that the colliding/merging FRCs will then be compressed by the liner. In this manner it is hoped that ultimately a plasma with ion temperatures reaching the 10 keV range and fusion gain near unity can be obtained.

  6. The transcultural diabetes nutrition algorithm toolkit: survey and content validation in the United States, Mexico, and Taiwan.

    PubMed

    Hamdy, Osama; Marchetti, Albert; Hegazi, Refaat A; Mechanick, Jeffrey I

    2014-06-01

    Evidence demonstrates that medical nutrition therapy (MNT) in prediabetes and type 2 diabetes (T2D) improves glycemic control and reduces diabetes risks and complications. Consequently, MNT is included in current clinical practice guidelines. Guideline recommendations, however, are frequently limited by their complexity, contradictions, personal and cultural rigidity, and compromised portability. The transcultural Diabetes Nutrition Algorithm (tDNA) was developed to overcome these limitations. To facilitate tDNA uptake and usage, an instructional Patient Algorithm Therapy (PATh) toolkit was created. Content validation of tDNA-PATh is needed before widespread implementation. Healthcare providers (n=837) in Mexico (n=261), Taiwan (n=250), and the United States (n=326) were questioned about challenges implementing MNT in clinical practice and the projected utilization and impact of tDNA-PATh. To assess the international portability and applicability of tDNA-PATh, the survey was conducted in countries with distinct ethnic and cultural attributes. Potential respondents were screened for professional and practice demographics related to diabetes. The questionnaire was administered electronically after respondents were exposed to core tDNA-PATh components. Overall, 61% of respondents thought that tDNA-PATh could help overcome MNT implementation challenges, 91% indicated positive impressions, 83% believed they would adopt tDNA-PATh, and 80% thought tDNA-PATh would be fairly easy to implement. tDNA-PATh appears to be an effective culturally sensitive tool to foster MNT in clinical practice. By providing simple culturally specific instructions, tDNA-PATh may help to overcome current impediments to implementing recommended lifestyle modifications. Specific guidance provided by tDNA-PATh, together with included patient education materials, may increase healthcare provider efficiency.

  7. Computational path planner for product assembly in complex environments

    NASA Astrophysics Data System (ADS)

    Shang, Wei; Liu, Jianhua; Ning, Ruxin; Liu, Mi

    2013-03-01

    Assembly path planning is a crucial problem in assembly-related design and manufacturing processes. Sampling-based motion planning algorithms are used for computational assembly path planning. However, the performance of such algorithms may degrade substantially in environments with complex product structure, narrow passages or other challenging scenarios. A computational path planner for automatic assembly path planning in complex 3D environments is presented. The global planning process is divided into three phases based on the environment, and specific algorithms are proposed and utilized in each phase to solve the challenging issues. A novel ray-test-based stochastic collision detection method is proposed to evaluate the intersection between two polyhedral objects. This method avoids the false collisions reported by conventional methods and relaxes the geometric constraint when a part has to be removed while in surface contact with other parts. A refined history-based rapidly-exploring random tree (RRT) algorithm, which biases the growth of the tree based on its planning history, is proposed and employed in the planning phase where the path is simple but the space is highly constrained. A novel adaptive RRT algorithm is developed for the path planning problem with challenging scenarios and uncertain environments. With extension values assigned to each tree node and extension schemes applied, the tree can adapt its growth to explore complex environments more efficiently. Experiments on the key algorithms are carried out and comparisons are made between the conventional path planning algorithms and the presented ones. The comparison results show that, based on the proposed algorithms, the path planner can compute assembly paths in challenging complex environments more efficiently and with a higher success rate. This research provides a reference for the study of computational assembly path planning in complex environments.
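
    To make the baseline concrete, here is a minimal 2-D RRT sketch in the spirit of the planners discussed above (not the authors' refined history-based or adaptive variants); the workspace bounds, obstacle test, step size and goal bias are placeholder assumptions.

        import math
        import random

        def rrt_plan(start, goal, collides, bounds=(0.0, 100.0), step=2.0,
                     max_iter=5000, goal_tol=3.0):
            """Basic rapidly-exploring random tree in 2-D.

            start, goal : (x, y) tuples
            collides    : function (x, y) -> True if the point lies inside an obstacle
            Returns the list of waypoints from start to goal, or None on failure.
            """
            nodes = [start]
            parent = {0: None}
            for _ in range(max_iter):
                # Sample a random point, with a small bias toward the goal.
                sample = goal if random.random() < 0.05 else (
                    random.uniform(*bounds), random.uniform(*bounds))
                # Find the nearest existing tree node and step toward the sample.
                i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
                near = nodes[i_near]
                d = math.dist(near, sample)
                if d == 0:
                    continue
                new = (near[0] + step * (sample[0] - near[0]) / d,
                       near[1] + step * (sample[1] - near[1]) / d)
                if collides(*new):
                    continue
                parent[len(nodes)] = i_near
                nodes.append(new)
                if math.dist(new, goal) < goal_tol:
                    # Walk back up the tree to recover the path.
                    path, i = [goal], len(nodes) - 1
                    while i is not None:
                        path.append(nodes[i])
                        i = parent[i]
                    return path[::-1]
            return None

        # Example: a single circular obstacle between start and goal.
        blocked = lambda x, y: math.dist((x, y), (50, 50)) < 15
        print(rrt_plan((5, 5), (95, 95), blocked))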

  8. Delineating wetland catchments and modeling hydrologic connectivity using lidar data and aerial imagery

    NASA Astrophysics Data System (ADS)

    Wu, Qiusheng; Lane, Charles R.

    2017-07-01

    In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In reality, however, many depressions in the DEM are actual wetland landscape features with seasonal to permanent inundation patterning characterized by nested hierarchical structures and dynamic filling-spilling-merging surface-water hydrological processes. Differentiating and appropriately processing such ecohydrologically meaningful features remains a major technical terrain-processing challenge, particularly as high-resolution spatial data are increasingly used to support modeling and geographic analysis needs. The objectives of this study were to delineate hierarchical wetland catchments and model their hydrologic connectivity using high-resolution lidar data and aerial imagery. The graph-theory-based contour tree method was used to delineate the hierarchical wetland catchments and characterize their geometric and topological properties. Potential hydrologic connectivity between wetlands and streams was simulated using the least-cost-path algorithm. The resulting flow network delineated potential flow paths connecting wetland depressions to each other or to the river network on scales finer than those available through the National Hydrography Dataset. The results demonstrated that our proposed framework is promising for improving overland flow simulation and hydrologic connectivity analysis.
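
    A generic least-cost-path computation of the kind referred to above can be sketched with Dijkstra's algorithm over a cost raster; the 4-neighbour connectivity and the synthetic cost grid below are placeholder assumptions, not the study's actual cost surface.

        import heapq

        def least_cost_path(cost, start, end):
            """Dijkstra least-cost path on a 2-D cost raster (4-connected cells).

            cost       : list of lists of per-cell traversal costs (> 0)
            start, end : (row, col) tuples
            Returns (total_cost, path_as_list_of_cells).
            """
            rows, cols = len(cost), len(cost[0])
            best = {start: 0.0}
            prev = {}
            heap = [(0.0, start)]
            while heap:
                d, cell = heapq.heappop(heap)
                if cell == end:
                    path = [cell]
                    while cell in prev:
                        cell = prev[cell]
                        path.append(cell)
                    return d, path[::-1]
                if d > best.get(cell, float("inf")):
                    continue
                r, c = cell
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nd = d + cost[nr][nc]
                        if nd < best.get((nr, nc), float("inf")):
                            best[(nr, nc)] = nd
                            prev[(nr, nc)] = cell
                            heapq.heappush(heap, (nd, (nr, nc)))
            return float("inf"), []

        # Example: low-cost cells mimic a wet flow path through higher 'upland' costs.
        grid = [[1, 5, 5, 5],
                [1, 1, 5, 5],
                [5, 1, 1, 1],
                [5, 5, 5, 1]]
        print(least_cost_path(grid, (0, 0), (3, 3)))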

  9. Identification of anomalous motion of thunderstorms using daily rainfall fields

    NASA Astrophysics Data System (ADS)

    Moral, Anna del; Llasat, María del Carmen; Rigo, Tomeu

    2017-03-01

    Most of the adverse weather phenomena in Catalonia (northeast Iberian Peninsula) are caused by convective events, which can produce heavy rains, large hailstones, strong winds, lightning and/or tornadoes. These thunderstorms usually have marked paths. However, their trajectories can vary sharply at any given time, completely changing direction from the path they have previously followed. Furthermore, some thunderstorms split or merge with each other, creating new formations with different behaviour. In order to identify the potentially anomalous movements that some thunderstorms make, this paper presents a two-step methodology using a database with 8 years of daily rainfall fields data for the Catalonia region (2008-2015). First, it classifies daily rainfall fields between days with "no rain", "non-potentially convective rain" and "potentially convective rain", based on daily accumulated precipitation and extension thresholds. Second, it categorises convective structures within rainfall fields and briefly identifies their main features, distinguishing whether there were any anomalous thunderstorm movements in each case. This methodology has been applied to the 2008-2015 period, and the main climatic features of convective and non-convective days were obtained. The methodology can be exported to other regions that do not have the necessary radar-based algorithms to detect convective cells, but where there is a good rain gauge network in place.

  10. Interaction, coalescence, and collapse of localized patterns in a quasi-one-dimensional system of interacting particles

    NASA Astrophysics Data System (ADS)

    Dessup, Tommy; Coste, Christophe; Saint Jean, Michel

    2017-01-01

    We study the path toward equilibrium of pairs of solitary wave envelopes (bubbles) that modulate a regular zigzag pattern in an annular channel. We evidence that bubble pairs are metastable states, which spontaneously evolve toward a stable single bubble. We exhibit the concept of topological frustration of a bubble pair. A configuration is frustrated when the particles between the two bubbles are not organized in a modulated staggered row. For a nonfrustrated (NF) bubble pair configuration, the bubbles interaction is attractive, whereas it is repulsive for a frustrated (F) configuration. We describe a model of interacting solitary wave that provides all qualitative characteristics of the interaction force: It is attractive for NF systems and repulsive for F systems and decreases exponentially with the bubbles distance. Moreover, for NF systems, the bubbles come closer and eventually merge as a single bubble, in a coalescence process. We also evidence a collapse process, in which one bubble shrinks in favor of the other one, overcoming an energetic barrier in phase space. This process is relevant for both NF systems and F systems. In NF systems, the coalescence prevails at low temperature, whereas thermally activated jumps make the collapse prevail at high temperature. In F systems, the path toward equilibrium involves a collapse process regardless of the temperature.

  11. Vision-based mapping with cooperative robots

    NASA Astrophysics Data System (ADS)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.

  12. Merging paleobiology with conservation biology to guide the future of terrestrial ecosystems.

    PubMed

    Barnosky, Anthony D; Hadly, Elizabeth A; Gonzalez, Patrick; Head, Jason; Polly, P David; Lawing, A Michelle; Eronen, Jussi T; Ackerly, David D; Alex, Ken; Biber, Eric; Blois, Jessica; Brashares, Justin; Ceballos, Gerardo; Davis, Edward; Dietl, Gregory P; Dirzo, Rodolfo; Doremus, Holly; Fortelius, Mikael; Greene, Harry W; Hellmann, Jessica; Hickler, Thomas; Jackson, Stephen T; Kemp, Melissa; Koch, Paul L; Kremen, Claire; Lindsey, Emily L; Looy, Cindy; Marshall, Charles R; Mendenhall, Chase; Mulch, Andreas; Mychajliw, Alexis M; Nowak, Carsten; Ramakrishnan, Uma; Schnitzler, Jan; Das Shrestha, Kashish; Solari, Katherine; Stegner, Lynn; Stegner, M Allison; Stenseth, Nils Chr; Wake, Marvalee H; Zhang, Zhibin

    2017-02-10

    Conservation of species and ecosystems is increasingly difficult because anthropogenic impacts are pervasive and accelerating. Under this rapid global change, maximizing conservation success requires a paradigm shift from maintaining ecosystems in idealized past states toward facilitating their adaptive and functional capacities, even as species ebb and flow individually. Developing effective strategies under this new paradigm will require deeper understanding of the long-term dynamics that govern ecosystem persistence and reconciliation of conflicts among approaches to conserving historical versus novel ecosystems. Integrating emerging information from conservation biology, paleobiology, and the Earth sciences is an important step forward on the path to success. Maintaining nature in all its aspects will also entail immediately addressing the overarching threats of growing human population, overconsumption, pollution, and climate change. Copyright © 2017, American Association for the Advancement of Science.

  13. A path-oriented matrix-based knowledge representation system

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan; Karamouzis, Stamos T.

    1993-01-01

    Experience has shown that designing a good representation is often the key to turning hard problems into simple ones. Most AI (Artificial Intelligence) search/representation techniques are oriented toward an infinite domain of objects and arbitrary relations among them. In reality much of what needs to be represented in AI can be expressed using a finite domain and unary or binary predicates. Well-known vector- and matrix-based representations can efficiently represent finite domains and unary/binary predicates, and allow effective extraction of path information by generalized transitive closure/path matrix computations. In order to avoid space limitations a set of abstract sparse matrix data types was developed along with a set of operations on them. This representation forms the basis of an intelligent information system for representing and manipulating relational data.
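
    The path-information extraction mentioned above amounts to computing the transitive closure of a boolean relation matrix; a minimal dense sketch (the system's sparse abstract data types are not reproduced here) looks like this.

        import numpy as np

        def transitive_closure(adj):
            """Warshall-style transitive closure of a boolean relation matrix.

            adj[i][j] is True when the binary relation holds between objects i and j.
            The result marks every pair connected by a path of any length.
            """
            reach = np.array(adj, dtype=bool)
            n = reach.shape[0]
            for k in range(n):
                # Allow paths that pass through intermediate object k.
                reach |= np.outer(reach[:, k], reach[k, :])
            return reach

        # Example: a -> b -> c yields the derived fact a -> c.
        relation = [[False, True, False],
                    [False, False, True],
                    [False, False, False]]
        print(transitive_closure(relation).astype(int))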

  14. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  15. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  16. Fifty year canon of solar eclipses: 1986 - 2035

    NASA Technical Reports Server (NTRS)

    Espenak, Fred

    1987-01-01

    A complete catalog is presented, listing the general characteristics of every solar eclipse from 1901 through 2100. To complement this catalog, a detailed set of cylindrical projection world maps shows the umbral paths of every solar eclipse over the 200 year interval. Focusing on the next 50 years, accurate geodetic path coordinates and local circumstances for the 71 central eclipses from 1987 through 2035 are tabulated. The geodetic paths of the umbral and penumbral shadows of all 109 solar eclipses in this period are plotted on orthographic projection maps of the Earth. Appendices are included which discuss eclipse geometry, eclipse frequency and occurrence, modern eclipse prediction and time determination. Finally, code for a simple Fortran program is given to predict the occurrence and characteristics of solar eclipses.

  17. Detection of silver nanoparticles in seawater at ppb levels using UV-visible spectrophotometry with long path cells.

    PubMed

    Lodeiro, Pablo; Achterberg, Eric P; El-Shahawi, Mohammad S

    2017-03-01

    Silver nanoparticles (AgNPs) are emerging contaminants that are difficult to detect in natural waters. UV-visible spectrophotometry is a simple technique that allows detection of AgNPs through analysis of their characteristic surface plasmon resonance band. The detection limit for nanoparticles using up to 10 cm path length cuvettes with UV-visible spectrophotometry is in the 0.1-10 ppm range. This detection limit is insufficiently low to observe AgNPs in natural environments. Here we show how the use of capillary cells with an optical path length up to 200 cm forms an excellent technique for rapid detection and quantification of non-aggregated AgNPs at ppb concentrations in complex natural matrices such as seawater. Copyright © 2016 Elsevier B.V. All rights reserved.
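
    The gain from the long-path cells follows directly from the Beer-Lambert law; as a rough illustration (nominal numbers, not the paper's calibration), the minimum detectable concentration scales inversely with path length,

    \[
    A = \varepsilon\, c\, \ell
    \quad\Rightarrow\quad
    c_{\min} = \frac{A_{\min}}{\varepsilon\, \ell},
    \qquad
    \frac{c_{\min}(10\ \mathrm{cm})}{c_{\min}(200\ \mathrm{cm})} = \frac{200}{10} = 20,
    \]

    so moving from a 10 cm cuvette to a 200 cm capillary cell lowers the detection limit by roughly a factor of 20 for the same minimum resolvable absorbance, consistent with the shift from ppm to ppb levels.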

  18. Simple measures of channel habitat complexity predict transient hydraulic storage in streams

    EPA Science Inventory

    Stream thalweg depth profiles (along path of greatest channel depth) and woody debris tallies have recently become components of routine field procedures for quantifying physical habitat in national stream monitoring efforts. Mean residual depth, standard deviation of thalweg dep...

  19. Post processing for offline Chinese handwritten character string recognition

    NASA Astrophysics Data System (ADS)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong

    2012-01-01

    Offline Chinese handwritten character string recognition is one of the most important research fields in pattern recognition. Due to the free writing style, large variability in character shapes and different geometric characteristics, Chinese handwritten character string recognition is a challenging problem to deal with. However, among current methods, the over-segmentation and merging method, which integrates geometric, character recognition and contextual information, shows promising results. It is found experimentally that a large part of the errors are segmentation errors and mainly occur around non-Chinese characters. In a Chinese character string, there are not only wide characters, namely Chinese characters, but also narrow characters such as digits and letters of the alphabet. The segmentation error is mainly caused by the uniform geometric model imposed on all segmented candidate characters. To solve this problem, post processing is employed to improve the recognition accuracy of narrow characters. On one hand, multi-geometric models are established for wide characters and narrow characters respectively; under these models, narrow characters are less prone to being merged incorrectly. On the other hand, top-ranked recognition results of candidate paths are integrated to boost the final recognition of narrow characters. The post processing method is investigated on two datasets, in total 1405 handwritten address strings. The wide character recognition accuracy improved slightly, and the narrow character recognition accuracy increased by 10.41% and 10.03% on the two datasets, respectively. This indicates that the post processing method is effective in improving the recognition accuracy of narrow characters.
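
    The over-segmentation-and-merging search itself can be pictured as a best-path problem over candidate cut points; the toy dynamic program below scores candidate merges with a placeholder scoring function (the multi-geometric models and contextual language model of the paper are not reproduced).

        def best_segmentation(n_cuts, segment_score, max_span=4):
            """Find the best way to merge primitive segments into characters.

            n_cuts        : number of primitive segments, delimited by cut points 0..n_cuts
            segment_score : function (i, j) -> combined recognition + geometry score
                            for merging primitives i..j-1 into one candidate character
            Returns (total_score, list of (i, j) character spans).
            """
            NEG = float("-inf")
            best = [NEG] * (n_cuts + 1)
            back = [None] * (n_cuts + 1)
            best[0] = 0.0
            for j in range(1, n_cuts + 1):
                for i in range(max(0, j - max_span), j):
                    s = best[i] + segment_score(i, j)
                    if s > best[j]:
                        best[j], back[j] = s, i
            spans, j = [], n_cuts
            while j > 0:
                spans.append((back[j], j))
                j = back[j]
            return best[n_cuts], spans[::-1]

        # Example: pretend narrow characters span one primitive and wide ones span two.
        def toy_score(i, j):
            return 1.0 if (j - i) in (1, 2) else -5.0

        print(best_segmentation(5, toy_score))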

  20. U.S. Navy/U.S. Marine Corps Command and Control in the 21st Century

    DTIC Science & Technology

    1993-04-01

    proceeding toward C4I for the Warrior? The simple answer is Copernicus. Nicholas Copernicus was a famous Polish intellectual who, in 1543, published The...not the earth, was the center of the solar system. Pre-Copernican astronomy could not figure the paths of the planets because they thought the Earth...was the center of the solar system. Copernicus' brilliant conclusion was to look for a simple answer, a different perspective. Today the Navy is

  1. New Perspectives on Southern Ocean Frontal Variability

    NASA Astrophysics Data System (ADS)

    Chapman, Christopher

    2017-04-01

    The frontal structure of the Southern Ocean is investigated using the Wavelet/Higher Order Statistics Enhancement (WHOSE) frontal detection method, introduced in Chapman (2014). This methodology is applied to 21 years of daily gridded sea-surface height (SSH) data to obtain daily maps of the locations of the fronts. By forming frontal occurrence frequency maps and then approximating these occurrence maps by a superposition of simple functions, the time-mean locations of the fronts, as well as a measure of their capacity to meander, are obtained and related to the frontal locations found by previous studies. The spatial and temporal variability of the frontal structure is then considered. The number of fronts is found to be highly variable throughout the Southern Ocean, increasing (`splitting') downstream of large bathymetric features and decreasing (`merging') in regions where the fronts are tightly controlled by the underlying topography. In contrast, frontal meandering remains relatively constant. Contrary to many previous studies, little to no southward migration of the fronts over the 1993-2014 time period is found, and there is only weak sensitivity to atmospheric forcing related to SAM or ENSO. Finally, the implications of splitting and merging for the flux of tracers will be discussed.

  2. Knowledge evolution in physics research: An analysis of bibliographic coupling networks.

    PubMed

    Liu, Wenyuan; Nanetti, Andrea; Cheong, Siew Ann

    2017-01-01

    Even as we advance the frontiers of physics knowledge, our understanding of how this knowledge evolves remains at the descriptive levels of Popper and Kuhn. Using the American Physical Society (APS) publications data sets, we ask in this paper how new knowledge is built upon old knowledge. We do so by constructing year-to-year bibliographic coupling networks, and identify in them validated communities that represent different research fields. We then visualize their evolutionary relationships in the form of alluvial diagrams, and show how they remain intact through APS journal splits. Quantitatively, we see that most fields undergo weak Popperian mixing, and it is rare for a field to remain isolated/undergo strong mixing. The sizes of fields obey a simple linear growth with recombination. We can also reliably predict the merging between two fields, but not for the considerably more complex splitting. Finally, we report a case study of two fields that underwent repeated merging and splitting around 1995, and how these Kuhnian events are correlated with breakthroughs on Bose-Einstein condensation (BEC), quantum teleportation, and slow light. This impact showed up quantitatively in the citations of the BEC field as a larger proportion of references from during and shortly after these events.

  3. Distance comparisons in virtual reality: effects of path, context, and age

    PubMed Central

    van der Ham, Ineke J. M.; Baalbergen, Heleen; van der Heijden, Peter G. M.; Postma, Albert; Braspenning, Merel; van der Kuil, Milan N. A.

    2015-01-01

    In this large scale, individual differences study (N = 521), the effects of cardinal axes of an environment and the path taken between locations on distance comparisons were assessed. The main goal was to identify if and to what extent previous findings in simple 2D tasks can be generalized to a more dynamic, three-dimensional virtual reality environment. Moreover, effects of age and gender were assessed. After memorizing the locations of six objects in a circular environment, participants were asked to judge the distance between objects they encountered. Results indicate that categorization (based on the cardinal axes) was present, as distances within one quadrant were judged as being closer together, even when no visual indication of the cardinal axes was given. Moreover, strong effects of the path taken between object locations were found; objects that were near on the path taken were perceived as being closer together than objects that were further apart on this path, regardless of the metric distance between the objects. Males outperformed females in distance comparison, but did not differ in the extent of the categorization and path effects. Age also affected performance; the categorization and path effects were highly similar across the age range tested, but the general ability to estimate distances does show a clear pattern increase during development and decrease with aging. PMID:26321968

  4. Bim-Based Indoor Path Planning Considering Obstacles

    NASA Astrophysics Data System (ADS)

    Xu, M.; Wei, S.; Zlatanova, S.; Zhang, R.

    2017-09-01

    At present, 87 % of people's activities take place in indoor environments, and indoor navigation has become a research issue. As the building structures of daily life become more and more complex, many obstacles impede human movement; it is therefore essential to provide accurate and efficient indoor path planning. Nowadays there are many challenges and problems in indoor navigation. Most existing path planning approaches are based on 2D plans, pay more attention to the geometric configuration of indoor space, often ignore the rich semantic information of building components, and mostly consider simple indoor layouts without taking furniture into account. Addressing the above shortcomings, this paper uses BIM (IFC) as the input data and concentrates on indoor navigation considering obstacles in multi-floor buildings. After geometric and semantic information is extracted, 2D and 3D space subdivision methods are adopted to build the indoor navigation network and to realize path planning that avoids obstacles. The 3D space subdivision is based on triangular prisms. The two approaches are verified by experiments.

  5. Monitoring trace gases in downtown Toronto using open-path Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Byrne, B.; Strong, K.; Colebatch, O.; Fogal, P.; Mittermeier, R. L.; Wunch, D.; Jones, D. B. A.

    2017-12-01

    Emissions of greenhouse gases (GHGs) in urban environments can be highly heterogeneous. For example, vehicles produce point source emissions which can result in heterogeneous GHG concentrations on scales <10 m. The highly localized scale of these emissions can make it difficult to measure mean GHG concentrations on scales of 100-1000 m. Open-Path Fourier Transform Infrared Spectroscopy (OP-FTIR) measurements offer spatial averaging and continuous measurements of several trace gases simultaneously in the same airmass. We have set up an open-path system in downtown Toronto to monitor trace gases in the urban boundary layer. Concentrations of CO2, CO, CH4, and N2O are derived from atmospheric absorption spectra recorded over a two-way atmospheric open path of 320 m using non-linear least squares fitting. Using a simple box model and co-located boundary layer height measurements, we estimate surface fluxes of these gases in downtown Toronto from our OP-FTIR observations.
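
    A simple box-model flux estimate of the kind mentioned above treats the urban boundary layer as a well-mixed box of height H; under the (strong) assumptions of negligible advection and entrainment, the surface flux follows from the rate of change of the path-averaged concentration,

    \[
    F \;\approx\; H\,\frac{d\bar{C}}{dt},
    \]

    where \(\bar{C}\) is the path-averaged mixing-layer concentration (e.g. in mol m\(^{-3}\)) from the OP-FTIR retrievals and \(H\) is the co-measured boundary layer height, giving \(F\) in mol m\(^{-2}\) s\(^{-1}\); advection and entrainment terms would need to be added for a quantitative budget.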

  6. A rule-based shell to hierarchically organize HST observations

    NASA Technical Reports Server (NTRS)

    Bose, Ashim; Gerb, Andrew

    1995-01-01

    An observing program on the Hubble Space Telescope (HST) is described in terms of exposures that are obtained by one or more of the instruments onboard the HST. These exposures are organized into a hierarchy of structures for purposes of efficient scheduling of observations. The process by which exposures get organized into the higher-level structures is called merging. This process relies on rules to determine which observations can be 'merged' into the same higher level structure, and which cannot. The TRANSformation expert system converts proposals for astronomical observations with HST into detailed observing plans. The conversion process includes the task of merging. Within TRANS, we have implemented a declarative shell to facilitate merging. This shell offers the following features: (1) an easy way of specifying rules on when to merge and when not to merge, (2) a straightforward priority mechanism for resolving conflicts among rules, (3) an explanation facility for recording the merging history, (4) a report generating mechanism to help users understand the reasons for merging, and (5) a self-documenting mechanism that documents all the merging rules that have been defined in the shell, ordered by priority. The merging shell is implemented using an object-oriented paradigm in CLOS. It has been a part of operational TRANS (after extensive testing) since July 1993. It has fulfilled all performance expectations, and has considerably simplified the process of implementing new or changed requirements for merging. The users are pleased with its report-generating and self-documenting features.
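
    The declarative flavour of such a shell can be suggested with a tiny priority-ordered rule table; this is a generic sketch, not the CLOS implementation used in TRANS, and the rule names, exposure fields and instrument value are illustrative only.

        # Each rule says whether two exposures may share a higher-level structure.
        # Rules are evaluated in priority order; the first one that fires decides,
        # and its name is recorded so the decision can be explained and documented.
        RULES = [
            # (priority, name, predicate(a, b), verdict)
            (10, "different-instruments-never-merge",
                 lambda a, b: a["instrument"] != b["instrument"], False),
            (20, "same-target-and-orient-merge",
                 lambda a, b: a["target"] == b["target"] and a["orient"] == b["orient"], True),
            (90, "default-no-merge", lambda a, b: True, False),
        ]

        def can_merge(a, b, rules=RULES):
            """Return (verdict, rule_name) from the highest-priority applicable rule."""
            for _prio, name, predicate, verdict in sorted(rules, key=lambda r: r[0]):
                if predicate(a, b):
                    return verdict, name
            return False, "no-rule-applied"

        exp1 = {"instrument": "WFPC2", "target": "NGC 1234", "orient": 30}
        exp2 = {"instrument": "WFPC2", "target": "NGC 1234", "orient": 30}
        print(can_merge(exp1, exp2))   # (True, 'same-target-and-orient-merge')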

  7. An optimal merging technique for high-resolution precipitation products: OPTIMAL MERGING OF PRECIPITATION METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.

    2011-04-01

    Precipitation products are currently available from various sources at higher spatial and temporal resolution than any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve its information content by minimizing these issues. However, precipitation data merging poses challenges of scale-mismatch, and accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging weight optimization, involving performance-tracing based on Bayesian statistics and trend-analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying a better data source and allocating a higher priority for them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.
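
    The core of any such scheme is a weighted combination of the candidate products, with weights driven by each product's recent skill; the sketch below is only a generic inverse-error weighting on a common grid, not the multiscale Bayesian weight optimization of the OMP method itself, and the product values are synthetic.

        import numpy as np

        def merge_precip(fields, recent_errors):
            """Weighted merge of co-registered precipitation fields.

            fields        : list of 2-D arrays, already regridded to a common grid
            recent_errors : list of recent RMSE values, one per product (lower = better)
            Weights are proportional to inverse squared error and sum to one.
            """
            w = np.array([1.0 / max(e, 1e-6) ** 2 for e in recent_errors])
            w /= w.sum()
            merged = np.zeros_like(fields[0], dtype=float)
            for weight, field in zip(w, fields):
                merged += weight * field
            return merged, w

        # Example: three products with different recent skill.
        nldas  = np.full((4, 4), 5.0)   # mm
        cmorph = np.full((4, 4), 7.0)
        stage4 = np.full((4, 4), 6.0)
        merged, weights = merge_precip([nldas, cmorph, stage4], recent_errors=[2.0, 1.0, 0.5])
        print(weights, merged[0, 0])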

  8. Evaluation of the late merge work zone traffic control strategy.

    DOT National Transportation Integrated Search

    2004-01-01

    Several alternative lane merge strategies have been proposed in recent years to process vehicles through work zone lane closures more safely and efficiently. Among these is the late merge. With the late merge, drivers are instructed to use all lanes ...

  9. Physics Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1982

    1982-01-01

    Discusses determination of elliptical path of a satellite caught into orbit by the sun or earth; using microcomputer as signal generator (includes program listing); collision process; simple hysteresis loop using double beam CRO; method of demonstrating parallelogram of forces; measuring radius of electron beam curvature; and half-life of thorium…

  10. Innovative Science Experiments Using Phoenix

    ERIC Educational Resources Information Center

    Kumar, B. P. Ajith; Satyanarayana, V. V. V.; Singh, Kundan; Singh, Parmanand

    2009-01-01

    A simple, flexible and very low cost hardware plus software framework for developing computer-interfaced science experiments is presented. It can be used for developing computer-interfaced science experiments without getting into the details of electronics or computer programming. For developing experiments this is a middle path between…

  11. Spatial-spectral characterization of focused spatially chirped broadband laser beams.

    PubMed

    Greco, Michael J; Block, Erica; Meier, Amanda K; Beaman, Alex; Cooper, Samuel; Iliev, Marin; Squier, Jeff A; Durfee, Charles G

    2015-11-20

    Proper alignment is critical to obtain the desired performance from focused spatially chirped beams, for example in simultaneous spatial and temporal focusing (SSTF). We present a simple technique for inspecting the beam paths and focusing conditions for the spectral components of a broadband beam. We spectrally resolve the light transmitted past a knife edge as it is scanned across the beam at several axial positions. The measurement yields information about spot size, M², and the propagation paths of different frequency components. We also present calculations to illustrate the effects of defocus aberration on SSTF beams.
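
    For a Gaussian spectral component, the knife-edge scan described above has a standard closed form that makes the spot-size extraction explicit; with total power \(P_0\), \(1/e^2\) radius \(w\) and centre \(x_0\), the transmitted power as the knife edge sits at position \(x\) is

    \[
    P(x) \;=\; \frac{P_0}{2}\left[1 - \operatorname{erf}\!\left(\frac{\sqrt{2}\,(x - x_0)}{w}\right)\right]
    \]

    (the sign inside the bracket depends on the scan direction), so fitting the measured edge profile at each axial position and wavelength yields \(w(z,\lambda)\), from which spot size, \(M^2\) and the propagation path of each spectral slice follow.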

  12. What Makes the Foucault Pendulum Move among the Stars?

    NASA Astrophysics Data System (ADS)

    Phillips, Norman

    2004-11-01

    Foucault's pendulum exhibition in 1851 occurred in an era now known for the development of the theorems of Coriolis and the formulation of dynamical meteorology by Ferrel. Yet today the behavior of the pendulum is often misunderstood. The existence of a horizontal component of Newtonian gravitation is essential for understanding the behavior with respect to the stars. Two simple mechanical principles describe why the path of oscillation is fixed only at the poles: the principle of centripetal acceleration and the principle of conservation of angular momentum. A sky map is used to describe the elegant path among the stars produced by these principles.

  13. Hardware development process for Human Research facility applications

    NASA Astrophysics Data System (ADS)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths.

  14. An Open-path Laser Transmissometer for Atmospheric Extinction Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandran, P. M. Satheesh; Krishnakumar, C. P.; Varma, Ravi

    2011-10-20

    A transmissometer is an optical instrument which measures the transmitted intensity of monochromatic light over a fixed pathlength. A prototype of a simple laser transmissometer has been developed for transmission (or extinction) measurements through suspended absorbers and scatterers in the atmosphere over tens of meters. The instrument consists of a continuous-wave green diode-pumped solid-state laser, transmission optics, photodiode detectors and A/D data acquisition components. A modulated laser beam is transmitted and subsequently reflected and returned to the unit by a retroreflecting mirror assembly placed several tens of meters away. Results from an open-path field measurement of the instrument are described.
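
    The retrieval behind such an instrument is a direct application of the Beer-Lambert law over the folded path; with one-way path length L (so 2L in total after the retroreflector), the atmospheric extinction coefficient follows from the measured and reference intensities as

    \[
    \frac{I}{I_0} = e^{-2\,\sigma_{\mathrm{ext}} L}
    \quad\Longrightarrow\quad
    \sigma_{\mathrm{ext}} = -\frac{1}{2L}\,\ln\!\frac{I}{I_0},
    \]

    where \(I_0\) is the intensity measured over a clean or very short reference path; modulating the laser helps separate the returned beam from background light before this ratio is formed.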

  15. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local-global reference frames.

    PubMed

    Knierim, James J; Neunuebel, Joshua P; Deshmukh, Sachin S

    2014-02-05

    The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between 'where' versus 'what' needs revision. We propose a refinement of this model, which is more complex than the simple spatial-non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience.

  16. A comparison of the structureborne and airborne paths for propfan interior noise

    NASA Technical Reports Server (NTRS)

    Eversman, W.; Koval, L. R.; Ramakrishnan, J. V.

    1986-01-01

    A comparison is made between the relative levels of aircraft interior noise related to structureborne and airborne paths for the same propeller source. A simple, but physically meaningful, model of the structure treats the fuselage interior as a rectangular cavity with five rigid walls. The sixth wall, the fuselage sidewall, is a stiffened panel. The wing is modeled as a simple beam carried into the fuselage by a large discrete stiffener representing the carry-through structure. The fuselage interior is represented by analytically-derived acoustic cavity modes and the entire structure is represented by structural modes derived from a finite element model. The noise source for structureborne noise is the unsteady lift generation on the wing due to the rotating trailing vortex system of the propeller. The airborne noise source is the acoustic field created by a propeller model consistent with the vortex representation. Comparisons are made on the basis of interior noise over a range of propeller rotational frequencies at a fixed thrust.

  17. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local–global reference frames

    PubMed Central

    Knierim, James J.; Neunuebel, Joshua P.; Deshmukh, Sachin S.

    2014-01-01

    The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between ‘where’ versus ‘what’ needs revision. We propose a refinement of this model, which is more complex than the simple spatial–non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience. PMID:24366146

  18. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free-particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternative nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
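    As a minimal, hedged illustration of the core sampling idea described above (not of the fixed-node electron-gas machinery in the thesis), the sketch below runs path integral Monte Carlo for a single particle in a 1D harmonic well; all parameter values and function names are illustrative assumptions.

```python
import numpy as np

# Minimal path integral Monte Carlo sketch for one particle in a 1D harmonic
# well (hbar = m = omega = 1). Illustrative only.

def pimc_harmonic(beta=10.0, n_slices=64, n_sweeps=5000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    tau = beta / n_slices              # imaginary-time step
    path = np.zeros(n_slices)          # closed path x_0 ... x_{M-1}, periodic
    samples = []

    def local_action(x_prev, x, x_next):
        # spring (kinetic) terms to both neighbours plus the potential term
        kinetic = ((x - x_prev) ** 2 + (x_next - x) ** 2) / (2.0 * tau)
        potential = tau * 0.5 * x ** 2
        return kinetic + potential

    for sweep in range(n_sweeps):
        for i in range(n_slices):
            x_prev, x_next = path[i - 1], path[(i + 1) % n_slices]
            x_old, x_new = path[i], path[i] + rng.uniform(-step, step)
            dS = local_action(x_prev, x_new, x_next) - local_action(x_prev, x_old, x_next)
            if dS < 0 or rng.random() < np.exp(-dS):   # Metropolis acceptance
                path[i] = x_new
        if sweep > n_sweeps // 5:                      # discard equilibration
            samples.append(np.mean(path ** 2))

    # estimator of <x^2>; the exact value 0.5*coth(beta/2) is ~0.5 at beta = 10
    return np.mean(samples)

print(pimc_harmonic())
```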

  19. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic region growing.
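    The following sketch illustrates the hierarchical stepwise optimization (HSWO) idea that HSEG builds on, merging the globally most similar pair of adjacent regions at each step on a 1D signal; it omits HSEG's spectral clustering and convergence-point logic, and the merge criterion and names are illustrative assumptions.

```python
import numpy as np

# HSWO-style best-merge region growing on a 1D signal (illustrative sketch).

def hswo_1d(signal, n_regions_target):
    # each region: (list of pixel indices, mean value)
    regions = [([i], float(v)) for i, v in enumerate(signal)]

    def merge_cost(a, b):
        # increase in squared error when merging two regions (mean-based criterion)
        na, nb = len(a[0]), len(b[0])
        return na * nb / (na + nb) * (a[1] - b[1]) ** 2

    hierarchy = []
    while len(regions) > n_regions_target:
        # best adjacent pair; adjacency in 1D means consecutive regions
        costs = [merge_cost(regions[k], regions[k + 1]) for k in range(len(regions) - 1)]
        k = int(np.argmin(costs))
        a, b = regions[k], regions[k + 1]
        merged_idx = a[0] + b[0]
        merged_mean = (a[1] * len(a[0]) + b[1] * len(b[0])) / len(merged_idx)
        regions[k:k + 2] = [(merged_idx, merged_mean)]       # merge the pair
        hierarchy.append((costs[k], [r[0] for r in regions]))  # record this level
    return hierarchy

# Example: a step-like signal collapses to its three plateaus.
levels = hswo_1d(np.array([1.0, 1.1, 0.9, 5.0, 5.2, 9.8, 10.1, 10.0]), 3)
```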

  20. Robust detection of heart beats in multimodal records using slope- and peak-sensitive band-pass filters.

    PubMed

    Pangerc, Urška; Jager, Franc

    2015-08-01

    In this work, we present the development, architecture and evaluation of a new and robust heart beat detector in multimodal records. The detector uses electrocardiogram (ECG) signals, and/or pulsatile (P) signals, such as: blood pressure, artery blood pressure and pulmonary artery pressure, if present. The base approach behind the architecture of the detector is collecting signal energy (differentiating and low-pass filtering, squaring, integrating). To calculate the detection and noise functions, simple and fast slope- and peak-sensitive band-pass digital filters were designed. By using morphological smoothing, the detection functions were further improved and noise intervals were estimated. The detector looks for possible pacemaker heart rate patterns and repairs the ECG signals and detection functions. Heart beats are detected in each of the ECG and P signals in two steps: a repetitive learning phase and a follow-up detecting phase. The detected heart beat positions from the ECG signals are merged into a single stream of detected ECG heart beat positions. The merged ECG heart beat positions and detected heart beat positions from the P signals are verified for their regularity regarding the expected heart rate. The detected heart beat positions of a P signal with the best match to the merged ECG heart beat positions are selected for mapping into the noise and no-signal intervals of the record. The overall evaluation scores in terms of average sensitivity and positive predictive values obtained on databases that are freely available on the Physionet website were as follows: the MIT-BIH Arrhythmia database (99.91%), the MGH/MF Waveform database (95.14%), the augmented training set of the follow-up phase of the PhysioNet/Computing in Cardiology Challenge 2014 (97.67%), and the Challenge test set (93.64%).
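    A hedged sketch of the "collect signal energy" idea (differentiate, square, integrate, then threshold) is shown below; it does not reproduce the paper's slope- and peak-sensitive band-pass filters or its multimodal merging logic, and the parameter values are illustrative assumptions.

```python
import numpy as np

# Energy-based beat detection in the spirit of classic QRS detectors.
# fs is the sampling frequency in Hz; thresholds and windows are illustrative.

def detect_beats(ecg, fs):
    # 1. crude band-pass: a difference filter removes baseline and emphasises slopes
    diff = np.diff(ecg, prepend=ecg[0])
    # 2. square to make the energy positive and emphasise large slopes
    energy = diff ** 2
    # 3. moving-window integration (~150 ms window)
    win = max(1, int(0.15 * fs))
    detection_fn = np.convolve(energy, np.ones(win) / win, mode="same")
    # 4. adaptive threshold and refractory period (~250 ms)
    threshold = 0.3 * np.max(detection_fn[:2 * fs])   # learned from the first 2 s
    refractory = int(0.25 * fs)
    beats, last = [], -refractory
    for i in range(1, len(detection_fn) - 1):
        is_peak = detection_fn[i] >= detection_fn[i - 1] and detection_fn[i] > detection_fn[i + 1]
        if is_peak and detection_fn[i] > threshold and i - last > refractory:
            beats.append(i)
            last = i
            threshold = 0.5 * threshold + 0.15 * detection_fn[i]   # slow adaptation
    return np.array(beats)
```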

  1. Performance of Optimally Merged Multisatellite Precipitation Products Using the Dynamic Bayesian Model Averaging Scheme Over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua

    2018-01-01

Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation data using the dynamic Bayesian model averaging (BMA) algorithm. The blended experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibrated sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. Thus, the merged data were produced by weighted sums of the individuals over the plateau. The dynamic BMA approach showed better performance with a smaller root-mean-square error (RMSE) of 6.77 mm/day, higher correlation coefficient of 0.592, and closer Euclid value of 0.833, compared to the individuals at 15 validated sites. Moreover, BMA has proven to be more robust in terms of seasonality, topography, and other parameters than traditional ensemble methods including simple model averaging (SMA) and one-outlier removed (OOR). Error analysis between BMA and the state-of-the-art IMERG in the summer of 2014 further proved that the performance of BMA was superior with respect to multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data in regions with limited gauges.
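    The sketch below illustrates the basic BMA building blocks assumed from the description above: EM estimation of member weights against gauge observations (with Gaussian kernels and a shared variance, a simplifying assumption) followed by the weighted-sum merge; the dynamic daily weighting and kriging of the weights are not shown.

```python
import numpy as np
from scipy.stats import norm

# BMA weight estimation by EM and weighted-sum merging (illustrative sketch).

def bma_em(obs, members, n_iter=200):
    # obs: (n,) gauge values; members: (n, K) collocated satellite estimates
    n, K = members.shape
    w = np.full(K, 1.0 / K)
    sigma2 = np.var(obs - members.mean(axis=1))
    for _ in range(n_iter):
        # E-step: responsibility of member k for observation i
        lik = norm.pdf(obs[:, None], loc=members, scale=np.sqrt(sigma2))
        z = w * lik
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and the shared kernel variance
        w = z.mean(axis=0)
        sigma2 = np.sum(z * (obs[:, None] - members) ** 2) / n
    return w, sigma2

def bma_merge(members, w):
    return members @ w   # weighted sum of the individual products

# Usage sketch with synthetic collocated data (three members of varying skill).
rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 3.0, size=500)
members = np.column_stack([truth + rng.normal(0, s, 500) for s in (1.0, 2.0, 4.0)])
w, s2 = bma_em(truth, members)
merged = bma_merge(members, w)
```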

  2. Automated separation of merged Langerhans islets

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2016-03-01

This paper deals with the separation of merged Langerhans islets in segmentations in order to evaluate a correct histogram of islet diameters. A distribution of islet diameters is useful for determining the feasibility of islet transplantation in diabetes. First, the merged islets in the training segmentations are manually separated by medical experts. Based on the single islets, the merged islets are identified and an SVM classifier is trained on both classes (merged/single islets). The testing segmentations are over-segmented using the watershed transform, and the most probable re-merging of islets is found using the trained SVM classifier. Finally, the optimized segmentation is compared with the ground truth segmentation (correctly separated islets).
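    As a rough illustration of the merged/single decision step, the sketch below trains an SVM on simple shape features of candidate region pairs and uses it to decide whether two watershed fragments should be re-merged; the features, synthetic training data, and thresholds are purely illustrative assumptions, and the watershed step itself is omitted.

```python
import numpy as np
from sklearn.svm import SVC

# SVM-based merge/keep decision for pairs of over-segmented fragments (sketch).

def circularity(area, perimeter):
    return 4.0 * np.pi * area / (perimeter ** 2 + 1e-9)   # 1.0 for a perfect disk

# Synthetic training data standing in for expert-labelled region pairs:
# label 1 = the two fragments belong to one islet and should be merged.
rng = np.random.default_rng(0)
n = 200
feat_merged = np.column_stack([rng.normal(0.85, 0.05, n), rng.normal(1.0, 0.2, n)])
feat_single = np.column_stack([rng.normal(0.55, 0.08, n), rng.normal(3.0, 0.8, n)])
X = np.vstack([feat_merged, feat_single])   # [circularity of union, size ratio]
y = np.array([1] * n + [0] * n)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

def should_merge(union_area, union_perimeter, area_a, area_b):
    features = [[circularity(union_area, union_perimeter),
                 max(area_a, area_b) / min(area_a, area_b)]]
    return bool(clf.predict(features)[0])
```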

  3. MetaGenomic Assembly by Merging (MeGAMerge)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Scholz, Matthew B.; Lo, Chien-Chi

    2015-08-03

    "MetaGenomic Assembly by Merging" (MeGAMerge)Is a novel method of merging of multiple genomic assembly or long read data sources for assembly by use of internal trimming/filtering of data, followed by use of two 3rd party tools to merge data by overlap based assembly.

  4. BRST Exactness of Stress-Energy Tensors

    NASA Astrophysics Data System (ADS)

    Miyata, Hideo; Sugimoto, Hiroshi

BRST commutators in the topological conformal field theories obtained by twisting N=2 theories are evaluated explicitly. By our systematic calculations of the multiple integrals which contain screening operators, the BRST exactness of the twisted stress-energy tensors is deduced for classical simple Lie algebras and general level k. We can see that the paths of integration do not affect the result, and further, the N=2 coset theories are obtained by deleting two simple roots with Kac-label 1 from the extended Dynkin diagram; in other words, by not performing the integrations over the variables corresponding to the two simple roots of Kac-Moody algebras. It is also shown that a series of N=1 theories are generated in the same way by deleting one simple root with Kac-label 2.

  5. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

Simple cells in primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model combined with non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model is better able to imitate the physiological structure of simple cells by taking into account the facilitation and suppression of non-classical receptive fields. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  6. Simulation based optimized beam velocity in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Vignat, Frédéric; Béraud, Nicolas; Villeneuve, François

    2017-08-01

Manufacturing good parts with additive technologies relies on melt pool dimensions and temperature, which are controlled by manufacturing strategies often decided on the machine side. Strategies are built on the beam path and a variable energy input. Beam paths are often a mix of contour and hatching strategies filling the contours at each slice. Energy input depends on beam intensity and speed and is determined from simple thermal models to control melt pool dimensions and temperature and ensure porosity-free material. These models take into account variations in the thermal environment such as overhanging surfaces or back-and-forth hatching paths. However, not all situations are correctly handled, and precision is limited. This paper proposes a new method to determine the energy input from a full build-chamber 3D thermal simulation. Using the results of the simulation, the energy is modified to keep the melt pool temperature in a predetermined range. The paper first presents an experimental method to determine the optimal temperature range. In a second part, the method to optimize the beam speed from the simulation results is presented. Finally, the optimized beam path is tested in the EBM machine and the built parts are compared with parts built with an ordinary beam path.

  7. Analysis and Simulation of the Simplified Aircraft-Based Paired Approach Concept With the ALAS Alerting Algorithm in Conjunction With Echelon and Offset Strategies

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Madden, Michael M.; Butler, Rickey W.; Perry, Raleigh B.

    2014-01-01

This report presents analytical and simulation results of an investigation into proposed operational concepts for closely spaced parallel runways, including the Simplified Aircraft-based Paired Approach (SAPA) with alerting and an escape maneuver, MITRE's echelon spacing and no escape maneuver, and a hybrid concept aimed at lowering the visibility minima. We found that the SAPA procedure can be used at 950 ft separations or higher with next-generation avionics and that 1150 ft separations or higher are feasible with current-rule compliant ADS-B OUT. An additional 50 ft reduction in runway separation for the SAPA procedure is possible if different glideslopes are used. For the echelon concept we determined that current generation aircraft cannot conduct paired approaches on parallel paths using echelon spacing on runways less than 1400 ft apart and next-generation aircraft will not be able to conduct paired approaches on runways less than 1050 ft apart. The hybrid concept added alerting and an escape maneuver starting 1 NM from the threshold when flying the echelon concept. This combination was found to be effective, but the probability of a collision can be seriously impacted if the turn component of the escape maneuver has to be disengaged near the ground (e.g. 300 ft or below) due to airport buildings and surrounding terrain. We also found that stabilizing the approach path in the straight-in segment was only possible if the merge point was at least 1.5 to 2 NM from the threshold unless the total system error can be sufficiently constrained on the offset path and final turn.

  8. On Edge Exchangeable Random Graphs

    NASA Astrophysics Data System (ADS)

    Janson, Svante

    2017-06-01

    We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).

  9. Statistical science: a grammar for research.

    PubMed

    Cox, David R

    2017-06-01

I greatly appreciate the invitation to give this lecture with its century-long history. The title is a warning that the lecture is rather discursive and not highly focused and technical. The theme is simple: that statistical thinking provides a unifying set of general ideas and specific methods relevant whenever appreciable natural variation is present. To be most fruitful these ideas should merge seamlessly with subject-matter considerations. By contrast, there is sometimes a temptation to regard formal statistical analysis as a ritual to be added after the serious work has been done, a ritual to satisfy convention, referees, and regulatory agencies. I want implicitly to refute that idea.

  10. A simple analytical model for dynamics of time-varying target leverage ratios

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.

  11. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments

    USDA-ARS?s Scientific Manuscript database

    Measures of animal movement versus consumption rates can provide valuable, ecologically relevant information on feeding preference, specifically estimates of attraction rate, leaving rate, tenure time, or measures of flight/walking path. Here, we develop a simple biostatistical model to analyze repe...

  12. A Combined Soil Moisture Product of the Tibetan Plateau using Different Sensors Simultaneously

    NASA Astrophysics Data System (ADS)

    Zeng, Y.; Dente, L.; Su, B.; Wang, L.

    2012-12-01

It is always challenging to find a single satellite-derived soil moisture product that has complete coverage of the Tibetan Plateau for a long time period and is suitable for climate change studies at sub-continental scale. Meanwhile, having a number of independent satellite-derived soil moisture data sets does not mean that it is straightforward to create long-term consistent time series, due to the differences among the data sets related to the different retrieval approaches. Therefore, this study is focused on the development and validation of a simple Bayesian based method to merge/blend different satellite-derived soil moisture data. The merging method was first tested over the Maqu region (north-eastern fringe of the Tibetan Plateau), where in situ soil moisture data were collected, for the period from May 2008 to December 2010. The in situ data provided by the 20 monitoring stations in the Maqu region were compared to the AMSR-E soil moisture products by VUA-NASA and the ASCAT soil moisture products by TU Wien, in order to determine bias and standard deviation. It was found that the bias between the satellite and the in situ data varies with season; this bias is generally caused by notable differences in the represented depth, spatial extent and so on, and is affected by the spatial variability and the temporal stability (Dente et al. 2012). The dependence of the bias on season was investigated separately for the monsoon season (May-September), winter (December-February), and the transition season between them (March-April and October-November) (Dente et al. 2012, Su et al. 2011). The satellite-derived products were first corrected for the seasonal bias and then merged. After the data merging procedure, the standard deviations between the satellite and the in situ data reduced from 0.0839 to 0.0622 for the ASCAT data, and from 0.0682 to 0.0593 for the AMSR-E data. The developed merging method is therefore suitable to provide a more accurate soil moisture product than the AMSR-E and ASCAT products. As the merging method was shown to be promising over the Maqu region, it will be extended to the entire Tibetan Plateau. Then, the combined soil moisture product will be validated over the monitored sites located in the Ngari and Naqu regions. References: Dente, L.; Vekerdy, Z.; Wen, J.; Su, Z. (2012) Maqu network for validation of satellite-derived soil moisture products. International Journal of Applied Earth Observation and Geoinformation, 17, 55-65. Su, Z.; Wen, J.; Dente, L.; van der Velde, R.; ... [et al.] (2011) The Tibetan Plateau observatory of plateau scale soil moisture and soil temperature, Tibet-Obs, for quantifying uncertainties in coarse resolution satellite and model products. Hydrology and Earth System Sciences (HESS), 15(7), 2303-2316.
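    A compact sketch of the two-step idea described above is given below: a season-dependent bias correction against in situ data, followed by a simple Bayesian (inverse-error-variance weighted) merge of the corrected products; the season definitions follow the text, while the function names and weighting details are illustrative assumptions.

```python
import numpy as np

# Seasonal bias correction against in situ data, then inverse-variance merging
# of two (or more) satellite soil moisture products. Illustrative sketch only.

SEASONS = {"monsoon": (5, 6, 7, 8, 9), "winter": (12, 1, 2), "transition": (3, 4, 10, 11)}

def bias_correct(sat, insitu, months):
    # sat, insitu: (n,) collocated time series; months: (n,) calendar months
    corrected = sat.copy()
    for season_months in SEASONS.values():
        mask = np.isin(months, season_months)
        if mask.any():
            corrected[mask] -= np.nanmean(sat[mask] - insitu[mask])   # seasonal bias
    return corrected

def bayesian_merge(products, error_vars):
    # products: list of bias-corrected arrays; error_vars: their error variances
    w = np.array([1.0 / v for v in error_vars])
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, products))

# Usage sketch (data not defined here):
# ascat_c = bias_correct(ascat, insitu, months)
# amsre_c = bias_correct(amsre, insitu, months)
# merged = bayesian_merge([ascat_c, amsre_c], [np.nanvar(ascat_c - insitu),
#                                              np.nanvar(amsre_c - insitu)])
```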

  13. Coordinative Alignment of Chiral Molecules to Control over the Chirality Transfer in Spontaneous Resolution and Asymmetric Catalysis.

    PubMed

    Xia, Zhengqiang; Jing, Xu; He, Cheng; Wang, Xiaoge; Duan, Chunying

    2017-11-13

The production and availability of enantiomerically pure compounds that spurred the development of chiral technologies and materials are very important to the fine chemicals and pharmaceutical industries. By the coordinative alignment of enantiopure guests in metal-organic frameworks, we report an approach to control the chirality of homochiral crystallization and asymmetric transformation. In silver frameworks synthesized from achiral triphenylamine derivatives, the chirality was determined by the encapsulated enantiopure azomethine ylides, from which clear interaction patterns were observed to explore the chiral induction principles. By changing the addition sequence of substrates, the enantioselectivity of the asymmetric cycloaddition was controlled, verifying the determinant of the chirality of the bulk MOF materials. The economical chirality amplification that merges a series of complicated self-inductions, bulk homochiral crystallization and enantioselective catalysis opens new avenues for enantiopure chemical synthesis and provides a promising path for the directional design and development of homochiral materials.

  14. Multiple-stripe lithiation mechanism of individual SnO2 nanowires in a flooding geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Li; Liu, Xiao H.; Wang, G. F.

    2011-06-17

The atomic-scale lithiation mechanism of individual SnO2 nanowires in a flooding geometry, with the entire wires immersed in the electrolyte, was revealed by in-situ transmission electron microscopy. The lithiation initiated multiple stripes with widths of a few nanometers parallel to {020} planes traversing the entire wires, serving as multiple reaction fronts for the late stage of lithiation. Inside the stripes, we identified a high density of dislocations and enlarged inter-planar spacing, which provide effective paths for lithium ion transport. The density of the stripes increased with further lithiation, and eventually they merged with one another, causing a large elongation and volume expansion and a crystalline-to-amorphous phase transformation. This multiple-stripe, multiple-reaction-front lithiation mechanism is unexpected and differs completely from the expected core-shell lithiation mechanism.

  15. Robustness of cluster synchronous patterns in small-world networks with inter-cluster co-competition balance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jianbao; Ma, Zhongjun, E-mail: mzj1234402@163.com; Chen, Guanrong

All edges in the classical Watts and Strogatz's small-world network model are unweighted and cooperative (positive). By introducing competitive (negative) inter-cluster edges and assigning edge weights to mimic more realistic networks, this paper develops a modified model which possesses co-competitive weighted couplings and cluster structures while maintaining the common small-world network properties of small average shortest path lengths and large clustering coefficients. Based on theoretical analysis, it is proved that the new model with inter-cluster co-competition balance has an important dynamical property of robust cluster synchronous pattern formation. More precisely, clusters will neither merge nor split regardless of adding or deleting nodes and edges, under the condition of inter-cluster co-competition balance. Numerical simulations demonstrate the robustness of the model against the increase of the coupling strength and several topological variations.

  16. Response of North Atlantic Ocean Chlorophyll a to the Change of Atlantic Meridional Overturning Circulation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhang, Yuanling; Shu, Qi; Zhao, Chang; Wang, Gang; Wu, Zhaohua; Qiao, Fangli

    2017-04-01

Changes in marine phytoplankton are a vital component in global carbon cycling. Despite this far-reaching importance, the variable trend in phytoplankton and its response to climate variability remain unclear. This work presents the spatiotemporal evolution of the chlorophyll a trend in the North Atlantic Ocean by using merged ocean color products for the period 1997-2016. We find a dipole pattern between the subpolar gyre and the Gulf Stream path, and the chlorophyll a trend signal propagated along the opposite direction of the North Atlantic Current. Such a dipole pattern and opposite propagation of the chlorophyll a signal are consistent with the recent distinctive signature of the slowdown of the Atlantic Meridional Overturning Circulation (AMOC). It is suggested that the spatiotemporal evolution of chlorophyll a during the two most recent decades is a part of the multidecadal variation and regulated by AMOC, which could be used as an indicator of AMOC variations.

  17. Persistent grief in the aftermath of mass violence: the predictive roles of posttraumatic stress symptoms, self-efficacy, and disrupted worldview.

    PubMed

    Smith, Andrew J; Abeyta, Andrew A; Hughes, Michael; Jones, Russell T

    2015-03-01

    This study tested a conceptual model merging anxiety buffer disruption and social-cognitive theories to predict persistent grief severity among students who lost a close friend, significant other, and/or professor/teacher in tragic university campus shootings. A regression-based path model tested posttraumatic stress (PTS) symptom severity 3 to 4 months postshooting (Time 1) as a predictor of grief severity 1 year postshootings (Time 2), both directly and indirectly through cognitive processes (self-efficacy and disrupted worldview). Results revealed a model that predicted 61% of the variance in Time 2 grief severity. Hypotheses were supported, demonstrating that Time 1 PTS severity indirectly, positively predicted Time 2 grief severity through undermining self-efficacy and more severely disrupting worldview. Findings and theoretical interpretation yield important insights for future research and clinical application. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  18. Robustness of cluster synchronous patterns in small-world networks with inter-cluster co-competition balance

    NASA Astrophysics Data System (ADS)

    Zhang, Jianbao; Ma, Zhongjun; Chen, Guanrong

    2014-06-01

    All edges in the classical Watts and Strogatz's small-world network model are unweighted and cooperative (positive). By introducing competitive (negative) inter-cluster edges and assigning edge weights to mimic more realistic networks, this paper develops a modified model which possesses co-competitive weighted couplings and cluster structures while maintaining the common small-world network properties of small average shortest path lengths and large clustering coefficients. Based on theoretical analysis, it is proved that the new model with inter-cluster co-competition balance has an important dynamical property of robust cluster synchronous pattern formation. More precisely, clusters will neither merge nor split regardless of adding or deleting nodes and edges, under the condition of inter-cluster co-competition balance. Numerical simulations demonstrate the robustness of the model against the increase of the coupling strength and several topological variations.

  19. A New Narrowbeam, Multi-Frequency Scanning Radiometer and Its Application to In-Flight Icing Detection

    NASA Technical Reports Server (NTRS)

    Serke, David J.; Solheim, Frederick; Ware, Randolph; Politovich, Marcia K.; Brunkow, David; Bowie, Robert

    2010-01-01

A narrow-beam (1 degree beamwidth), multi-channel (20 to 30 and 89 GHz), polarized (89 vertical and horizontal) radiometer with full azimuth and elevation scanning capabilities has been built with the purpose of improving the detection of in-flight icing hazards to aircraft in the near airport environment. This goal was achieved by co-locating the radiometer with Colorado State University's CHILL polarized Doppler radar and taking advantage of similar beamwidth and volume scan regimes. In this way, the liquid water path and water vapor measurements derived from the radiometer were merged with CHILL's moment fields to provide diagnoses of water phase and microphysics aloft. The radiometer was field tested at Colorado State University's CHILL radar site near Greeley, Colorado, during the summer of 2009. Instrument design, calibration and initial field testing results are discussed in this paper.

  20. The Mpi-M Aerosol Climatology (MAC)

    NASA Astrophysics Data System (ADS)

    Kinne, S.

    2014-12-01

Monthly gridded global data-sets for aerosol optical properties (AOD, SSA and g) and for aerosol microphysical properties (CCN and IN) offer a (less complex) alternate path to include aerosol radiative effects and aerosol impacts on cloud-microphysics in global simulations. Based on merging AERONET sun-/sky-photometer data onto background maps provided by AeroCom phase 1 modeling output, the MPI-M Aerosol Climatology (MAC) version 1 was developed and applied in IPCC simulations with ECHAM and as an ancillary data-set in satellite-based global data-sets. An updated version 2 of this climatology will be presented now applying central values from the more recent AeroCom phase 2 modeling and utilizing the better global coverage of trusted sun-photometer data - including statistics from the Maritime Aerosol Network (MAN). Applications include spatial distributions of estimates for aerosol direct and aerosol indirect radiative effects.

  1. Current process and future path for health economic assessment of pharmaceuticals in France

    PubMed Central

    Toumi, Mondher; Rémuzat, Cécile; El Hammi, Emna; Millier, Aurélie; Aballéa, Samuel; Chouaid, Christos; Falissard, Bruno

    2015-01-01

    The Social Security Funding Law for 2012 introduced the Economic and Public Health Assessment Committee (Commission Evaluation Economique et de Santé Publique, or CEESP) in the Social Security Code as a specialised committee affiliated with the Haute Autorité de Santé in charge of providing recommendations and health economic opinions. This article provides an in-depth description of the CEESP's structure and working methods, and analyses the impact of health economic assessment on market access of drugs in France. It also points out the areas of uncertainty and the conflicting rules following the introduction of the health economic assessment in France. The authors also provide their personal opinion on the likely future of health economic assessment of drugs in France, including the possible merge of the CEESP and the Transparency Committee, the implementation of a French threshold, and the extension of health economic assessment to a larger number of products. PMID:27123173

  2. Degree-Pruning Dynamic Programming Approaches to Central Time Series Minimizing Dynamic Time Warping Distance.

    PubMed

    Sun, Tao; Liu, Hongbo; Yu, Hong; Chen, C L Philip

    2016-06-28

The central time series crystallizes the common patterns of the set it represents. In this paper, we propose a global constrained degree-pruning dynamic programming (g(dp)²) approach to obtain the central time series through minimizing the dynamic time warping (DTW) distance between two time series. The DTW matching path theory with global constraints is proved theoretically for our degree-pruning strategy, which is helpful to reduce the time complexity and computational cost. Our approach can achieve the optimal solution between two time series. An approximate method for the central time series of multiple time series [called m_g(dp)²] is presented based on DTW barycenter averaging and our g(dp)² approach by considering a hierarchical merging strategy. As illustrated by the experimental results, our approaches provide better within-group sum of squares and robustness than other relevant algorithms.
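    For orientation, the sketch below shows plain dynamic time warping with a Sakoe-Chiba global window, the kind of globally constrained DTW that the degree-pruning approach builds on; it is not the g(dp)² algorithm itself, and the window radius is an illustrative parameter.

```python
import numpy as np

# Textbook DTW with a Sakoe-Chiba global constraint of radius r (sketch).

def dtw_distance(x, y, r):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - r), min(m, i + r)          # global window
        for j in range(lo, hi + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

# A DTW-barycenter-style average would repeatedly warp all series onto a
# current estimate and update it from the aligned points.
print(dtw_distance(np.array([0., 1., 2., 1., 0.]), np.array([0., 0., 1., 2., 1.]), r=2))
```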

  3. Integrating Quality Assurance Systems in a Merged Higher Education Institution

    ERIC Educational Resources Information Center

    Kistan, Chandru

    2005-01-01

    Purpose: This article seeks to highlight the challenges and issues that face merging higher education institutions and also to outline some of the challenges in integrating the quality assurance systems during the pre-, interim and post-merger phases in a merged university. Design/methodology/approach: Case studies of merged and merging…

  4. Common-path digital holographic microscopy based on a beam displacer unit

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Zhang, Jiwei; Song, Yu; Wang, Kaiqiang; Wei, Kun; Zhao, Jianlin

    2018-02-01

Digital holographic microscopy (DHM) has become a novel tool with the advantages of full-field, non-destructive, high-resolution 3D imaging, capturing the quantitative amplitude and phase information of microscopic specimens. It is a well-established method for digitally recording and numerically reconstructing the full complex wavefront of the samples, with a diffraction-limited lateral resolution down to 0.3 μm depending on the numerical aperture of the microscope objective. Meanwhile, its axial resolution is less than 10 nm due to the interferometric nature of phase imaging. Compared with typical optical configurations such as the Mach-Zehnder and Michelson interferometers, common-path DHM has the advantages of a simple and compact configuration, high stability, and so on. Here, a simple, compact, and low-cost common-path DHM based on a beam displacer unit is proposed for quantitative phase imaging of biological cells. The beam displacer unit is completely compatible with a commercial microscope and can be easily set up at the output port of the microscope as a compact independent device. This technique can be used to achieve quantitative phase measurement of biological cells with an excellent temporal stability of 0.51 nm, which gives it good prospects in the fields of biological and medical science. Living mouse osteoblastic cells are quantitatively measured with the system to demonstrate its capability and applicability.

  5. Quantum robots and environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, P.

    1998-08-01

Quantum robots and their interactions with environments of quantum systems are described, and their study justified. A quantum robot is a mobile quantum system that includes an on-board quantum computer and needed ancillary systems. Quantum robots carry out tasks whose goals include specified changes in the state of the environment, or carrying out measurements on the environment. Each task is a sequence of alternating computation and action phases. Computation phase activities include determination of the action to be carried out in the next phase, and recording of information on neighborhood environmental system states. Action phase activities include motion of the quantum robot and changes in the neighborhood environment system states. Models of quantum robots and their interactions with environments are described using discrete space and time. A unitary step operator T that gives the single time step dynamics is associated with each task. T = T_a + T_c is a sum of action phase and computation phase step operators. Conditions that T_a and T_c should satisfy are given along with a description of the evolution as a sum over paths of completed phase input and output states. A simple example of a task, carrying out a measurement on a very simple environment, is analyzed in detail. A decision tree for the task is presented and discussed in terms of the sums over phase paths. It is seen that no definite times or durations are associated with the phase steps in the tree, and that the tree describes the successive phase steps in each path in the sum over phase paths. © 1998 The American Physical Society.

  6. Development and Demonstration of an Ada Test Generation System

    NASA Technical Reports Server (NTRS)

    1996-01-01

In this project we have built a prototype system that performs Feasible Path Analysis on Ada programs: given a description of a set of control flow paths through a procedure, and a predicate at a program point, feasible path analysis determines if there is input data which causes execution to flow down some path in the collection reaching the point so that the predicate is true. Feasible path analysis can be applied to program testing, program slicing, array bounds checking, and other forms of anomaly checking. FPA is central to most applications of program analysis. But, because this problem is formally unsolvable, syntactic-based approximations are used in its place. For example, in dead-code analysis the problem is to determine if there are any input values which cause execution to reach a specified program point. Instead, an approximation to this problem is computed: determine whether there is a control flow path from the start of the program to the point. This syntactic approximation is efficiently computable and conservative: if there is no such path the program point is clearly unreachable, but if there is such a path, the analysis is inconclusive, and the code is assumed to be live. Such conservative analysis too often yields unsatisfactory results because the approximation is too weak. As another example, consider data flow analysis. A du-pair is a pair of program points such that the first point is a definition of a variable and the second point a use, and for which there exists a definition-free path from the definition to the use. The sharper, semantic definition of a du-pair requires that there be a feasible definition-free path from the definition to the use. A compiler using du-pairs for detecting dead variables may miss optimizations by not considering feasibility. Similarly, a program analyzer computing program slices to merge parallel versions may report conflicts where none exist. In the context of software testing, feasibility analysis plays an important role in identifying testing requirements which are infeasible. This is especially true for data flow testing and modified condition/decision coverage. Our system uses in an essential way symbolic analysis and theorem proving technology, and we believe this work represents one of the few successful uses of a theorem prover working in a completely automatic fashion to solve a problem of practical interest. We believe this work anticipates an important trend away from purely syntactic-based methods for program analysis to semantic methods based on symbolic processing and inference technology. Other results demonstrating the practical use of automatic inference are being reported in hardware verification, although there are significant differences between the hardware work and ours. However, what is common and important is that general purpose theorem provers are being integrated with more special-purpose decision procedures to solve problems in analysis and verification. We are pursuing commercial opportunities for this work, and will use and extend the work in other projects we are engaged in. Ultimately we would like to rework the system to analyze C, C++, or Java as a key step toward commercialization.

  7. Global paths of time-periodic solutions of the Benjamin-Ono equation connecting arbitrary traveling waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrose, David M.; Wilkening, Jon

    2008-12-11

We classify all bifurcations from traveling waves to non-trivial time-periodic solutions of the Benjamin-Ono equation that are predicted by linearization. We use a spectrally accurate numerical continuation method to study several paths of non-trivial solutions beyond the realm of linear theory. These paths are found to either re-connect with a different traveling wave or to blow up. In the latter case, as the bifurcation parameter approaches a critical value, the amplitude of the initial condition grows without bound and the period approaches zero. We propose a conjecture that gives the mapping from one bifurcation to its counterpart on the other side of the path of non-trivial solutions. By experimentation with data fitting, we identify the form of the exact solutions on the path connecting two traveling waves, which represents the Fourier coefficients of the solution as power sums of a finite number of particle positions whose elementary symmetric functions execute simple orbits in the complex plane (circles or epicycles). We then solve a system of algebraic equations to express the unknown constants in the new representation in terms of the mean, a spatial phase, a temporal phase, four integers (enumerating the bifurcation at each end of the path) and one additional bifurcation parameter. We also find examples of interior bifurcations from these paths of already non-trivial solutions, but we do not attempt to analyze their algebraic structure.

  8. Hydrodynamical Evolution of Merging Carbon-Oxygen White Dwarfs: Their Pre-supernova Structure and Observational Counterparts

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Nakasato, Naohito; Sato, Yushi; Nomoto, Ken'ichi; Maeda, Keiichi; Hachisu, Izumi

    2015-07-01

We perform smoothed particle hydrodynamics simulations for merging binary carbon-oxygen (CO) WDs with masses of 1.1 and 1.0 M⊙, until the merger remnant reaches a dynamically steady state. Using these results, we assess whether the binary could induce a thermonuclear explosion, and whether the explosion could be observed as a type Ia supernova (SN Ia). We investigate three explosion mechanisms: a helium-ignition following the dynamical merger ("helium-ignited violent merger model"), a carbon-ignition ("carbon-ignited violent merger model"), and an explosion following the formation of the Chandrasekhar mass WD ("Chandrasekhar mass model"). An explosion of the helium-ignited violent merger model is possible, while we predict that the resulting SN ejecta are highly asymmetric since its companion star is fully intact at the time of the explosion. The carbon-ignited violent merger model can also lead to an explosion. However, the envelope of the exploding WD spreads out to ~0.1 R⊙; it is much larger than that inferred for SN 2011fe (<0.1 R⊙) while much smaller than that for SN 2014J (~1 R⊙). For the particular combination of the WD masses studied in this work, the Chandrasekhar mass model does not successfully lead to an SN Ia explosion. Besides these assessments, we investigate the evolution of unbound materials ejected through the merging process ("merger ejecta"), assuming a case where the SN Ia explosion is not triggered by the helium- or carbon-ignition during the merger. The merger ejecta interact with the surrounding interstellar medium and form a shell. The shell has a bolometric luminosity of more than 2×10^35 erg s^-1, lasting for ~2×10^4 years. If this is the case, the Milky Way should harbor about 10 such shells at any given time. The detection of the shell(s) can therefore rule out the helium-ignited and carbon-ignited violent merger models as major paths to SN Ia explosions.

  9. A Note on Verification of Computer Simulation Models

    ERIC Educational Resources Information Center

    Aigner, Dennis J.

    1972-01-01

Establishes an argument that questions the validity of one "test" of goodness-of-fit (the extent to which a series of obtained measures agrees with a series of theoretical measures) for the simulated time path of a simple endogenous (internally developed) variable in a simultaneous, perhaps dynamic econometric model. (Author)

  10. The Path to Equal Rights in Michigan

    ERIC Educational Resources Information Center

    Gratz, Jennifer

    2007-01-01

    The litigant in a historic reverse-discrimination case against the University of Michigan, and subsequently the leader of a Michigan ballot initiative that carried the day against long odds, recounts how her simple call for equal treatment under the law persuaded the people of her state that color-conscious preferences are wrong.

  11. Counterfactual quantum erasure: spooky action without entanglement

    NASA Astrophysics Data System (ADS)

    Salih, Hatim

    2018-02-01

We combine the eyebrow-raising quantum phenomena of erasure and counterfactuality for the first time, proposing a simple yet unusual quantum eraser: A distant Bob can decide to erase which-path information from Alice's photon, dramatically restoring interference, without previously shared entanglement and without Alice's photon ever leaving her laboratory.

  12. A Simple Exposition of the Social Security Trust Fund.

    ERIC Educational Resources Information Center

    Holahan, William L.; Schug, Mark C.

    2000-01-01

    Discusses a strategy for teaching students about how the Social Security Trust Fund works. Explains that a flow chart is presented to the students; four terms are defined (deficit, surplus, debt, and reserve); and a new graph is prepared to show the paths of these four variables. (CMK)

  13. Counterfactual quantum erasure: spooky action without entanglement.

    PubMed

    Salih, Hatim

    2018-02-01

We combine the eyebrow-raising quantum phenomena of erasure and counterfactuality for the first time, proposing a simple yet unusual quantum eraser: A distant Bob can decide to erase which-path information from Alice's photon, dramatically restoring interference, without previously shared entanglement and without Alice's photon ever leaving her laboratory.

  14. The Shining Path and the Future of Peru

    DTIC Science & Technology

    1990-03-01

prospect of a steady income. Most are Quechua speakers, often exclusively so, and have the same cultural and economic backgrounds as Sendero's rural...speak Quechua. This gulf, which often makes even simple communication difficult, is not bridged by a professional corps of noncommissioned officers

  15. Drone based structural mapping at Holuhraun indicates fault reactivation and complexity

    NASA Astrophysics Data System (ADS)

    Mueller, Daniel; Walter, Thomas R.; Steinke, Bastian; Witt, Tanja; Schoepa, Anne; Duerig, Tobi; Gudmundsson, Magnus T.

    2016-04-01

Accompanied by an intense seismic swarm in August 2014, a dike formed laterally, starting under Iceland's Vatnajökull glacier and propagating over a distance of more than 45 km within only two weeks, leading to the largest eruption by volume since the 1783-84 Laki eruption. Along its propagation path, the dike caused intense surface displacements of up to meters. Based on seismicity, GPS and InSAR, the propagation has already been analysed and described as segmented lateral dike growth. We now focus on a few smaller regions of the dike. We consider the TerraSAR-X tandem digital elevation map and aerial photos and find localized zones where structural fissures formed and curved. At these localized regions we performed a field campaign in summer 2015, applying the close-range remote sensing techniques Structure from Motion (SfM) and Terrestrial Laser Scanning (TLS). Over 4 TLS scans were collected, along with over 5,000 aerial images. Point clouds from SfM and TLS are merged and compared, and local structural lineaments analysed. As a result, we obtained an unprecedentedly high-resolution digital elevation map. With this map, we analyse the structural expression of the fissure eruption at the surface and improve understanding of the conditions that influenced the magma propagation path. We elaborate scenarios that lead to the complexities of the surface structures and their link to the underlying dike intrusion.

  16. Bohman-Frieze-Wormald model on the lattice, yielding a discontinuous percolation transition

    NASA Astrophysics Data System (ADS)

    Schrenk, K. J.; Felder, A.; Deflorin, S.; Araújo, N. A. M.; D'Souza, R. M.; Herrmann, H. J.

    2012-03-01

The BFW model introduced by Bohman, Frieze, and Wormald [Random Struct. Algorithms 25, 432 (2004)], and recently investigated in the framework of discontinuous percolation by Chen and D'Souza [Phys. Rev. Lett. 106, 115701 (2011)], is studied on the square and simple-cubic lattices. In two and three dimensions, we find numerical evidence for a strongly discontinuous transition. In two dimensions, the clusters at the threshold are compact with a fractal surface of fractal dimension df = 1.49 ± 0.02. On the simple-cubic lattice, distinct jumps in the size of the largest cluster are observed. We proceed to analyze the tree-like version of the model, where only merging bonds are sampled, for dimensions two to seven. The transition is again discontinuous in any considered dimension. Finally, the dependence of the cluster-size distribution at the threshold on the spatial dimension is also investigated.
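    A simplified sketch of the BFW edge-acceptance rule is shown below, implemented with union-find on a random graph rather than on the lattice studied in the paper; the stage function g(k) follows the usual formulation, but the bookkeeping details are simplified assumptions.

```python
import random

# Simplified BFW-style bond acceptance with a cluster-size cap k (sketch).

def bfw(n, n_candidates, seed=0):
    random.seed(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def merged_size(u, v):
        ru, rv = find(u), find(v)
        return size[ru] if ru == rv else size[ru] + size[rv]

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            size[rv] += size[ru]

    k, accepted, examined = 2, 0, 0
    g = lambda k: 0.5 + (2.0 * k) ** -0.5
    while examined < n_candidates:
        u, v = random.randrange(n), random.randrange(n)
        examined += 1
        while True:
            if merged_size(u, v) <= k:        # keeps the largest cluster within the cap
                union(u, v)
                accepted += 1
                break
            if accepted / examined < g(k):    # cannot afford another rejection
                k += 1                        # raise the cap and re-check the edge
            else:
                break                         # reject the edge
    return max(size[find(i)] for i in range(n)), k

largest, final_k = bfw(n=10000, n_candidates=20000)
```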

  17. A sophisticated cad tool for the creation of complex models for electromagnetic interaction analysis

    NASA Astrophysics Data System (ADS)

    Dion, Marc; Kashyap, Satish; Louie, Aloisius

    1991-06-01

This report describes the essential features of the MS-DOS version of DIDEC-DREO, an interactive program for creating wire grid, surface patch, and cell models of complex structures for electromagnetic interaction analysis. It uses the device-independent graphics library DIGRAF and the graphics kernel system HALO, and can be executed on systems with various graphics devices. Complicated structures can be created by direct alphanumeric keyboard entry, digitization of blueprints, conversion from existing geometric structure files, and merging of simple geometric shapes. A completed DIDEC geometric file may then be converted to the format required for input to a variety of time domain and frequency domain electromagnetic interaction codes. This report gives a detailed description of the program DIDEC-DREO, its installation, and its theoretical background. Each available interactive command is described. The associated program HEDRON which generates simple geometric shapes, and other programs that extract the current amplitude data from electromagnetic interaction code outputs, are also discussed.

  18. A study of the electric field in an open magnetospheric model

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1972-01-01

    The qualitative properties of an open magnetosphere and its electric field are examined and compared to a simple model of a dipole in a constant field and to actual observations. Many of these properties are found to depend on the separatrix, a curve connecting neutral points and separating different field-line regimes. In the simple model, the electric field in the central polar cap tends to point from dawn to dusk for a wide choice of external fields. Near the boundary of the polar cap electric equipotentials curve and become crescent-shaped, which may explain the correlation of polar magnetic variations with the azimuthal component of the interplanetary magnetic field, reported by Svalgaard. Modifications expected to occur in the actual magnetosphere are also investigated: in particular, it appears that bending of equipotentials may be reduced by cross-field flow during the merging of field lines and that open field lines connected to the polar caps emerge from a long and narrow slot extending along the tail.

  19. Marr's levels and the minimalist program.

    PubMed

    Johnson, Mark

    2017-02-01

    A simple change to a cognitive system at Marr's computational level may entail complex changes at the other levels of description of the system. The implementational level complexity of a change, rather than its computational level complexity, may be more closely related to the plausibility of a discrete evolutionary event causing that change. Thus the formal complexity of a change at the computational level may not be a good guide to the plausibility of an evolutionary event introducing that change. For example, while the Minimalist Program's Merge is a simple formal operation (Berwick & Chomsky, 2016), the computational mechanisms required to implement the language it generates (e.g., to parse the language) may be considerably more complex. This has implications for the theory of grammar: theories of grammar which involve several kinds of syntactic operations may be no less evolutionarily plausible than a theory of grammar that involves only one. A deeper understanding of human language at the algorithmic and implementational levels could strengthen Minimalist Program's account of the evolution of language.

  20. Control of electrochemical signals from quantum dots conjugated to organic materials by using DNA structure in an analog logic gate.

    PubMed

    Chen, Qi; Yoo, Si-Youl; Chung, Yong-Ho; Lee, Ji-Young; Min, Junhong; Choi, Jeong-Woo

    2016-10-01

    Various bio-logic gates have been studied intensively to overcome the rigidity of single-function silicon-based logic devices arising from combinations of various gates. Here, a simple control tool using electrochemical signals from quantum dots (QDs) was constructed using DNA and organic materials for multiple logic functions. The electrochemical redox current generated from QDs was controlled by the DNA structure. DNA structure, in turn, was dependent on the components (organic materials) and the input signal (pH). Independent electrochemical signals from two different logic units containing QDs were merged into a single analog-type logic gate, which was controlled by two inputs. We applied this electrochemical biodevice to a simple logic system and achieved various logic functions from the controlled pH input sets. This could be further improved by choosing QDs, ionic conditions, or DNA sequences. This research provides a feasible method for fabricating an artificial intelligence system. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. A CAI System for Visually Impaired Children to Improve Abilities of Orientation and Mobility

    NASA Astrophysics Data System (ADS)

    Yoneda, Takahiro; Kudo, Hiroaki; Minagawa, Hiroki; Ohnishi, Noboru; Matsubara, Shizuya

    Some visually impaired children have difficulty with simple locomotion and need orientation and mobility training. We developed a computer-assisted instruction system that supports this training. A user receives a task through a tactile map and synthesized speech, then walks around a room according to the task. After the task ends, the system reports the deviation of the walked path from the target path through both auditory and tactile feedback, so the user can understand how well he or she walked. We describe the details of the proposed system and task, and report experimental results with three visually impaired children.

  2. The Container Problem in Bubble-Sort Graphs

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasuto; Kaneko, Keiichi

    Bubble-sort graphs are variants of Cayley graphs. A bubble-sort graph is suitable as a topology for massively parallel systems because of its simple and regular structure. Therefore, in this study, we focus on n-bubble-sort graphs and propose an algorithm to obtain n-1 disjoint paths between two arbitrary nodes in time bounded by a polynomial in n, the degree of the graph plus one. We estimate the time complexity of the algorithm and the sum of the path lengths after proving the correctness of the algorithm. In addition, we report the results of computer experiments evaluating the average performance of the algorithm.
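
    A minimal sketch of the structure involved (not the authors' disjoint-path algorithm): in the n-bubble-sort graph the vertices are the permutations of 1..n, two permutations are adjacent when they differ by a swap of neighbouring positions, and a plain breadth-first search already yields one shortest path between two nodes. All names below are illustrative.

      from collections import deque

      def neighbors(perm):
          """Adjacent-transposition neighbours of a permutation in the bubble-sort graph."""
          out = []
          for i in range(len(perm) - 1):
              p = list(perm)
              p[i], p[i + 1] = p[i + 1], p[i]
              out.append(tuple(p))
          return out

      def shortest_path(src, dst):
          """Plain BFS; returns one shortest path between two vertices."""
          queue, prev = deque([src]), {src: None}
          while queue:
              v = queue.popleft()
              if v == dst:
                  break
              for w in neighbors(v):
                  if w not in prev:
                      prev[w] = v
                      queue.append(w)
          path, v = [], dst
          while v is not None:
              path.append(v)
              v = prev[v]
          return path[::-1]

      print(shortest_path((3, 2, 1, 4), (1, 2, 3, 4)))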

  3. Fermat's principle of least time predicts refraction of ant trails at substrate borders.

    PubMed

    Oettler, Jan; Schmid, Volker S; Zankl, Niko; Rey, Olivier; Dress, Andreas; Heinze, Jürgen

    2013-01-01

    Fermat's principle of least time states that light rays passing through different media follow the fastest (and not the most direct) path between two points, leading to refraction at medium borders. Humans intuitively employ this rule, e.g., when a lifeguard has to infer the fastest way to traverse both beach and water to reach a swimmer in need. Here, we tested whether foraging ants also follow Fermat's principle when forced to travel on two surfaces that differentially affected the ants' walking speed. Workers of the little fire ant, Wasmannia auropunctata, established "refracted" pheromone trails to a food source. These trails deviated from the most direct path, but were not different to paths predicted by Fermat's principle. Our results demonstrate a new aspect of decentralized optimization and underline the versatility of the simple yet robust rules governing the self-organization of group-living animals.
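
    As a hedged illustration of the prediction being tested (not the authors' analysis code), the least-time crossing point between two media can be found by minimising the total travel time over the position where the path crosses the border; the speeds and coordinates below are arbitrary placeholders.

      import numpy as np

      # Start in medium 1 (speed v1), goal in medium 2 (speed v2); the border lies at y = 0.
      v1, v2 = 1.0, 0.5                      # assumed walking speeds on the two substrates
      start, goal = (0.0, 2.0), (3.0, -2.0)

      def travel_time(x):
          """Time to walk start -> (x, 0) -> goal at the two speeds."""
          d1 = np.hypot(x - start[0], start[1])
          d2 = np.hypot(goal[0] - x, goal[1])
          return d1 / v1 + d2 / v2

      xs = np.linspace(start[0], goal[0], 10001)
      x_best = xs[np.argmin([travel_time(x) for x in xs])]
      # At the optimum, sin(theta1)/v1 == sin(theta2)/v2, i.e. Snell's law for walking speeds.
      print(f"least-time border crossing at x = {x_best:.3f}")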

  4. Regular paths in SparQL: querying the NCI Thesaurus.

    PubMed

    Detwiler, Landon T; Suciu, Dan; Brinkley, James F

    2008-11-06

    OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, it has come at a cognitive cost. OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
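
    GLEEN itself is not reproduced here; as a rough sketch of what a regular path expression over an RDF ontology looks like, standard SPARQL 1.1 property paths (run through the rdflib Python library) can express a simple transitive view such as "all ancestors via one or more subClassOf edges". The file name and class IRI are placeholders.

      import rdflib

      g = rdflib.Graph()
      g.parse("thesaurus.owl", format="xml")   # placeholder file name, RDF/XML serialization assumed

      # Transitive closure of rdfs:subClassOf from a given class -- a simple
      # regular path expression ("one or more subClassOf edges").
      query = """
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?ancestor WHERE {
          <http://example.org/SomeClass> rdfs:subClassOf+ ?ancestor .
      }
      """
      for row in g.query(query):
          print(row.ancestor)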

  5. Predicting diffusion paths and interface motion in gamma/gamma + beta, Ni-Cr-Al diffusion couples

    NASA Technical Reports Server (NTRS)

    Nesbitt, J. A.; Heckel, R. W.

    1987-01-01

    A simplified model has been developed to predict Beta recession and diffusion paths in ternary gamma/gamma + beta diffusion couples (gamma:fcc, beta: NiAl structure). The model was tested by predicting beta recession and diffusion paths for four gamma/gamma + beta, Ni-Cr-Al couples annealed for 100 hours at 1200 C. The model predicted beta recession within 20 percent of that measured for each of the couples. The model also predicted shifts in the concentration of the gamma phase at the gamma/gamma + beta interface within 2 at. pct Al and 6 at. pct Cr of that measured in each of the couples. A qualitative explanation based on simple kinetic and mass balance arguments has been given which demonstrates the necessity for diffusion in the two-phase region of certain gamma/gamma + beta, Ni-Cr-Al couples.

  6. Procedure Enabling Simulation and In-Depth Analysis of Optical Effects in Camera-Based Time-of-Flight Sensors

    NASA Astrophysics Data System (ADS)

    Baumgart, M.; Druml, N.; Consani, M.

    2018-05-01

    This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help in understanding experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We take a raytracing-based approach and use the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. Especially multi-object reflection/scattering ray-paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
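
    A minimal sketch of the stated master-parameter idea, not the Zemax OpticStudio/Python implementation described in the paper: the apparent depth reported by a ToF pixel can be taken as half the optical path length accumulated along an emitter-to-object-to-pixel ray. The ray coordinates below are made up.

      import numpy as np

      def optical_path_length(points, refractive_indices=None):
          """Sum of segment lengths along a ray, optionally weighted by each medium's index."""
          points = np.asarray(points, dtype=float)
          seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
          if refractive_indices is not None:
              seg = seg * np.asarray(refractive_indices, dtype=float)
          return seg.sum()

      # Illustrative emitter -> object -> lens -> pixel ray (coordinates are invented).
      ray = [(0.0, 0.0, 0.0), (0.1, 0.0, 2.0), (0.0, 0.0, 0.05), (0.0, 0.002, 0.0)]
      opl = optical_path_length(ray)
      print(f"apparent ToF depth = {opl / 2:.3f} (half the round-trip optical path)")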

  7. Kinefold web server for RNA/DNA folding path and structure prediction including pseudoknots and knots

    PubMed Central

    Xayaphoummine, A.; Bucher, T.; Isambert, H.

    2005-01-01

    The Kinefold web server provides a web interface for stochastic folding simulations of nucleic acids on second to minute molecular time scales. Renaturation or co-transcriptional folding paths are simulated at the level of helix formation and dissociation in agreement with the seminal experimental results. Pseudoknots and topologically ‘entangled’ helices (i.e. knots) are efficiently predicted taking into account simple geometrical and topological constraints. To encourage interactivity, simulations launched as immediate jobs are automatically stopped after a few seconds and return adapted recommendations. Users can then choose to continue incomplete simulations using the batch queuing system or go back and modify suggested options in their initial query. Detailed output provides (i) a series of low free energy structures, (ii) an online animated folding path and (iii) a programmable trajectory plot focusing on a few helices of interest to each user. The service can be accessed at . PMID:15980546

  8. Multistructural microiteration technique for geometry optimization and reaction path calculation in large systems.

    PubMed

    Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi

    2017-10-05

    We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized by fixing atoms in the reaction-center part before displacements of the reaction-center atoms. In MSM, by contrast, the surrounding part is described as the weighted sum of multiple surrounding structures that are independently optimized. Then, geometric displacements of the reaction-center atoms are performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or in an enzyme. In all three cases, MSM gave lower reaction energy profiles than the QM/MM-ONIOM-microiteration method over the entire reaction paths with comparable computational costs. © 2017 Wiley Periodicals, Inc.

  9. Limb Sensing, on the Path to Better Weather Forecasting.

    NASA Astrophysics Data System (ADS)

    Gordley, L. L.; Marshall, B. T.; Lachance, R. L.; Fritts, D. C.; Fisher, J.

    2017-12-01

    Earth limb observations from orbiting sensors have a rich history. The cold space background, long optical paths, and limb geometry provide formidable advantages for calibration, sensitivity and retrieval of vertically well-resolved geophysical parameters. The measurement of limb ray refraction now provides temperature and pressure profiles unburdened by requirements of spectral calibration or gas concentration knowledge, leading to reliable long-term trends. This talk discusses those advantages and our relevant achievements with data from the SOFIE instrument on the AIM satellite. We then describe a path to advances in calibration, sensitivity, profile fidelity, and synergy between limb sensors and nadir sounders. These advances also include small-sat compatible size, elimination of on-board calibration systems and simple static designs, dramatically reducing risk, complexity and cost. Finally, we show how these advances, made possible by modern ADCS, FPA and GPS capabilities, will lead to improvements in weather forecasting and climate observation.

  10. Predicting Deformation Limits of Dual-Phase Steels Under Complex Loading Paths

    DOE PAGES

    Cheng, G.; Choi, K. S.; Hu, X.; ...

    2017-04-05

    Here in this study, the deformation limits of various DP980 steels are examined with the deformation instability theory. Under uniaxial tension, overall stress–strain curves of the material are estimated based on a simple rule of mixture (ROM) with both iso-strain and iso-stress assumptions. Under complex loading paths, an actual microstructure-based finite element (FE) method is used to resolve the deformation incompatibilities explicitly between the soft ferrite and hard martensite phases. The results show that, for uniaxial tension, the deformation instability theory with iso-strain-based ROM can be used to provide the lower bound estimate of the uniform elongation (UE) for the various DP980 considered. Under complex loading paths, the deformation instability theory with microstructure-based FE method can be used in examining the effects of various microstructural features on the deformation limits of DP980 steels.
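
    As a hedged sketch of the rule of mixture under the two stated assumptions (not the paper's model or data), the composite response can be formed either by summing phase stresses at equal strain (iso-strain) or by summing phase strains at equal stress (iso-stress); the flow curves and martensite fraction below are invented for illustration.

      import numpy as np

      # Hypothetical Hollomon-type flow curves for the two phases (not the paper's data).
      def sigma_ferrite(eps):    return 500.0 * (0.002 + eps) ** 0.20    # MPa
      def sigma_martensite(eps): return 1800.0 * (0.002 + eps) ** 0.05   # MPa

      f_m = 0.4                                   # assumed martensite volume fraction
      eps = np.linspace(0.0, 0.15, 300)

      # Iso-strain (Voigt): both phases carry the same strain; stresses add by volume fraction.
      sigma_iso_strain = (1 - f_m) * sigma_ferrite(eps) + f_m * sigma_martensite(eps)

      # Iso-stress (Reuss): both phases carry the same stress; strains add by volume fraction.
      # (In this rigid-plastic toy, martensite barely strains below its flow stress.)
      stress = np.linspace(300.0, 340.0, 200)
      eps_iso_stress = (1 - f_m) * np.interp(stress, sigma_ferrite(eps), eps) \
                     + f_m * np.interp(stress, sigma_martensite(eps), eps)

      print("iso-strain stress at 10% strain:", round(float(np.interp(0.10, eps, sigma_iso_strain)), 1), "MPa")
      print("iso-stress strain at 330 MPa:   ", round(float(np.interp(330.0, stress, eps_iso_stress)), 4))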

  11. Fermat’s Principle of Least Time Predicts Refraction of Ant Trails at Substrate Borders

    PubMed Central

    Zankl, Niko; Rey, Olivier; Dress, Andreas; Heinze, Jürgen

    2013-01-01

    Fermat’s principle of least time states that light rays passing through different media follow the fastest (and not the most direct) path between two points, leading to refraction at medium borders. Humans intuitively employ this rule, e.g., when a lifeguard has to infer the fastest way to traverse both beach and water to reach a swimmer in need. Here, we tested whether foraging ants also follow Fermat’s principle when forced to travel on two surfaces that differentially affected the ants’ walking speed. Workers of the little fire ant, Wasmannia auropunctata, established “refracted” pheromone trails to a food source. These trails deviated from the most direct path, but were not different to paths predicted by Fermat’s principle. Our results demonstrate a new aspect of decentralized optimization and underline the versatility of the simple yet robust rules governing the self-organization of group-living animals. PMID:23527263

  12. Predicting Deformation Limits of Dual-Phase Steels Under Complex Loading Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, G.; Choi, K. S.; Hu, X.

    The deformation limits of various DP980 steels are examined in this study with deformation instability theory. Under uniaxial tension, overall stress-strain curves of the material are estimated based on a simple rule of mixture (ROM) with both iso-strain and iso-stress assumptions. Under complex loading paths, an actual microstructure-based finite element (FE) method is used to explicitly resolve the deformation incompatibilities between the soft ferrite and hard martensite phases. The results show that, for uniaxial tension, the deformation instability theory with iso-strain-based ROM can be used to provide the lower bound estimate of the uniform elongation (UE) for the various DP980 considered. Under complex loading paths, the deformation instability theory with microstructure-based FE method can be used in examining the effects of various microstructural features on the deformation limits of DP980 steels.

  13. Predicting Deformation Limits of Dual-Phase Steels Under Complex Loading Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, G.; Choi, K. S.; Hu, X.

    Here in this study, the deformation limits of various DP980 steels are examined with the deformation instability theory. Under uniaxial tension, overall stress–strain curves of the material are estimated based on a simple rule of mixture (ROM) with both iso-strain and iso-stress assumptions. Under complex loading paths, an actual microstructure-based finite element (FE) method is used to resolve the deformation incompatibilities explicitly between the soft ferrite and hard martensite phases. The results show that, for uniaxial tension, the deformation instability theory with iso-strain-based ROM can be used to provide the lower bound estimate of the uniform elongation (UE) for the various DP980 considered. Under complex loading paths, the deformation instability theory with microstructure-based FE method can be used in examining the effects of various microstructural features on the deformation limits of DP980 steels.

  14. Accelerated Path-following Iterative Shrinkage Thresholding Algorithm with Application to Semiparametric Graph Estimation

    PubMed Central

    Zhao, Tuo; Liu, Han

    2016-01-01

    We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
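
    APISTA itself is not reproduced here; as a hedged sketch of the proximal step shared by (A)PISTA-type methods, one plain iterative shrinkage-thresholding loop for an l1-penalised least-squares problem looks like the following, with synthetic data.

      import numpy as np

      def soft_threshold(z, t):
          """Proximal operator of the l1 norm."""
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def ista(X, y, lam, n_iter=500):
          """Basic ISTA for min 0.5*||y - Xb||^2 + lam*||b||_1 with a fixed step size."""
          L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth gradient
          b = np.zeros(X.shape[1])
          for _ in range(n_iter):
              grad = X.T @ (X @ b - y)
              b = soft_threshold(b - grad / L, lam / L)
          return b

      rng = np.random.default_rng(0)
      X = rng.standard_normal((50, 20))
      beta_true = np.zeros(20); beta_true[:3] = [2.0, -1.5, 1.0]
      y = X @ beta_true + 0.1 * rng.standard_normal(50)
      print(np.round(ista(X, y, lam=2.0), 2))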

  15. Virtual hybrid test control of sinuous crack

    NASA Astrophysics Data System (ADS)

    Jailin, Clément; Carpiuc, Andreea; Kazymyrenko, Kyrylo; Poncelet, Martin; Leclerc, Hugo; Hild, François; Roux, Stéphane

    2017-05-01

    The present study aims at proposing a new generation of experimental protocol for analysing crack propagation in quasi brittle materials. The boundary conditions are controlled in real-time to conform to a predefined crack path. Servo-control is achieved through a full-field measurement technique to determine the pre-set fracture path and a simple predictor model based on linear elastic fracture mechanics to prescribe the boundary conditions on the fly so that the actual crack path follows at best the predefined trajectory. The final goal is to identify, for instance, non-local damage models involving internal lengths. The validation of this novel procedure is performed via a virtual test-case based on an enriched damage model with an internal length scale, a prior chosen sinusoidal crack path and a concrete sample. Notwithstanding the fact that the predictor model selected for monitoring the test is a highly simplified picture of the targeted constitutive law, the proposed protocol exhibits a much improved sensitivity to the sought parameters such as internal lengths as assessed from the comparison with other available experimental tests.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.

    Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly-selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
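
    The report's specific seed-selection strategies are not reproduced here; the sketch below only illustrates the general sampling idea, contrasting an arbitrary seed subset with a high-degree subset on a synthetic scale-free graph using networkx's subset betweenness routine.

      import networkx as nx

      def estimated_betweenness(G, seeds):
          """Approximate betweenness from shortest paths rooted at a chosen seed subset."""
          return nx.betweenness_centrality_subset(G, sources=list(seeds),
                                                  targets=list(G.nodes))

      G = nx.barabasi_albert_graph(500, 3, seed=1)          # a scale-free test graph

      random_seeds = list(G.nodes)[:25]                      # stand-in for random seed selection
      degree_seeds = sorted(G.nodes, key=G.degree, reverse=True)[:25]   # an alternate strategy

      exact = nx.betweenness_centrality(G)
      top = lambda d: sorted(d, key=d.get, reverse=True)[:5]
      print("exact top-5:       ", top(exact))
      print("random seeds top-5:", top(estimated_betweenness(G, random_seeds)))
      print("degree seeds top-5:", top(estimated_betweenness(G, degree_seeds)))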

  17. Evolved atmospheric entry corridor with safety factor

    NASA Astrophysics Data System (ADS)

    Liang, Zixuan; Ren, Zhang; Li, Qingdong

    2018-02-01

    Atmospheric entry corridors are established in previous research based on the equilibrium glide condition which assumes the flight-path angle to be zero. To get a better understanding of the highly constrained entry flight, an evolved entry corridor that considers the exact flight-path angle is developed in this study. Firstly, the conventional corridor in the altitude vs. velocity plane is extended into a three-dimensional one in the space of altitude, velocity, and flight-path angle. The three-dimensional corridor is generated by a series of constraint boxes. Then, based on a simple mapping method, an evolved two-dimensional entry corridor with safety factor is obtained. The safety factor is defined to describe the flexibility of the flight-path angle for a state within the corridor. Finally, the evolved entry corridor is simulated for the Space Shuttle and the Common Aero Vehicle (CAV) to demonstrate the effectiveness of the corridor generation approach. Compared with the conventional corridor, the evolved corridor is much wider and provides additional information. Therefore, the evolved corridor would benefit more to the entry trajectory design and analysis.

  18. An experimental and theoretical model of children’s search behavior in relation to target conspicuity and spatial distribution

    NASA Astrophysics Data System (ADS)

    Rosetti, Marcos Francisco; Pacheco-Cobos, Luis; Larralde, Hernán; Hudson, Robyn

    2010-11-01

    This work explores search trajectories of children attempting to find targets distributed on a playing field. This task, of ludic nature, was developed to test the effect of conspicuity and spatial distribution of targets on the searcher’s performance. The searcher’s path was recorded by a Global Positioning System (GPS) device attached to the child’s waist. Participants were not rewarded nor their performance rated. Variation in the conspicuity of the targets influenced search performance as expected; cryptic targets resulted in slower searches and longer, more tortuous paths. Extracting the main features of the paths showed that the children: (1) paid little attention to the spatial distribution and at least in the conspicuous condition approximately followed a nearest neighbor pattern of target collection, (2) were strongly influenced by the conspicuity of the targets. We implemented a simple statistical model for the search rules mimicking the children’s behavior at the level of individual (coarsened) steps. The model reproduced the main features of the children’s paths without the participation of memory or planning.
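
    A minimal sketch of the nearest-neighbour collection rule that the children approximately followed (not the authors' statistical step model): repeatedly walk to the closest remaining target. Target coordinates are made up.

      import numpy as np

      def nearest_neighbour_path(start, targets):
          """Repeatedly walk to the closest remaining target; returns the visit order."""
          pos, remaining, order = np.asarray(start, float), list(map(tuple, targets)), []
          while remaining:
              dists = [np.linalg.norm(np.asarray(t) - pos) for t in remaining]
              nxt = remaining.pop(int(np.argmin(dists)))
              order.append(nxt)
              pos = np.asarray(nxt, float)
          return order

      rng = np.random.default_rng(3)
      targets = rng.uniform(0, 20, size=(10, 2))      # invented target positions on the field
      print(nearest_neighbour_path((0.0, 0.0), targets))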

  19. Two-Phase Flow in Microchannels with Non-Circular Cross Section

    NASA Astrophysics Data System (ADS)

    Eckett, Chris A.; Strumpf, Hal J.

    2002-11-01

    Two-phase flow in microchannels is of practical importance in several microgravity space technology applications. These include evaporative and condensing heat exchangers for thermal management systems and vapor cycle systems, phase separators, and bioreactors. The flow passages in these devices typically have a rectangular cross-section or some other non-circular cross-section; may include complex flow paths with branches, merges and bends; and may involve channel walls of different wettability. However, previous experimental and analytical investigations of two-phase flow in reduced gravity have focussed on straight, circular tubes. This study is an effort to determine two-phase flow behavior, both with and without heat transfer, in microchannel configurations other than straight, circular tubes. The goals are to investigate the geometrical effects on flow pattern, pressure drop and liquid holdup, as well as to determine the relative importance of capillary, surface tension, inertial, and gravitational forces in such geometries. An evaporative heat exchanger for microgravity thermal management systems has been selected as the target technology in this investigation. Although such a heat exchanger has never been developed at Honeywell, a preliminary sizing has been performed based on knowledge of such devices in normal gravity environments. Fin shapes considered include plain rectangular, offset rectangular, and wavy fin configurations. Each of these fin passages represents a microchannel of non-circular cross section. The pans at the inlet and outlet of the heat exchanger are flow branches and merges, with up to 90-deg bends. R-134a has been used as the refrigerant fluid, although ammonia may well be used in the eventual application.

  20. Aerosol correction for remotely sensed sea surface temperatures from the National Oceanic and Atmospheric Administration advanced very high resolution radiometer

    NASA Astrophysics Data System (ADS)

    Nalli, Nicholas R.; Stowe, Larry L.

    2002-10-01

    This research presents the first-phase derivation and implementation of daytime aerosol correction algorithms for remotely sensed sea surface temperature (SST) from the advanced very high resolution radiometer (AVHRR) instrument flown onboard NOAA polar orbiting satellites. To accomplish this, a long-term (1990-1998), global AVHRR-buoy match-up database was created by merging the NOAA/NASA Pathfinder Atmospheres and Pathfinder Oceans data sets. The merged data set is unique in that it includes daytime estimates of aerosol optical depth (AOD) derived from AVHRR channel 1 (0.63 μm) under global conditions of significant aerosol loading. Histograms of retrieved AOD reveal monomodal, lognormal distributions for both tropospheric and stratospheric aerosol modes. It is then shown empirically that the SST depression caused under each aerosol mode can be expressed as a linear function in two predictors, these being the slant path AOD retrieved from AVHRR channel 1 along with the ratio of channels 1 and 2 normalized reflectances. On the basis of these relationships, parametric equations are derived to provide an aerosol correction for retrievals from the daytime NOAA operational multichannel and nonlinear SST algorithms. Separate sets of coefficients are utilized for two aerosol modes: tropospheric (i.e., dust, smoke, haze) and stratospheric/tropospheric (i.e., following a major volcanic eruption). The equations are shown to significantly reduce retrieved SST bias using an independent set of match-ups. Eliminating aerosol-induced bias in both real-time and retrospective processing will enhance the utility of the AVHRR SST for the general user community and in climate research.

  1. A simple prescription for simulating and characterizing gravitational arcs

    NASA Astrophysics Data System (ADS)

    Furlanetto, C.; Santiago, B. X.; Makler, M.; de Bom, C.; Brandt, C. H.; Neto, A. F.; Ferreira, P. C.; da Costa, L. N.; Maia, M. A. G.

    2013-01-01

    Simple models of gravitational arcs are crucial for simulating large samples of these objects with full control of the input parameters. These models also provide approximate and automated estimates of the shape and structure of the arcs, which are necessary for detecting and characterizing these objects on massive wide-area imaging surveys. We here present and explore the ArcEllipse, a simple prescription for creating objects with a shape similar to gravitational arcs. We also present PaintArcs, which is a code that couples this geometrical form with a brightness distribution and adds the resulting object to images. Finally, we introduce ArcFitting, which is a tool that fits ArcEllipses to images of real gravitational arcs. We validate this fitting technique using simulated arcs and apply it to CFHTLS and HST images of tangential arcs around clusters of galaxies. Our simple ArcEllipse model for the arc, associated to a Sérsic profile for the source, recovers the total signal in real images typically within 10%-30%. The ArcEllipse+Sérsic models also automatically recover visual estimates of length-to-width ratios of real arcs. Residual maps between data and model images reveal the incidence of arc substructure. They may thus be used as a diagnostic for arcs formed by the merging of multiple images. The incidence of these substructures is the main factor that prevents ArcEllipse models from accurately describing real lensed systems.

  2. Simultaneous electrical and optical study of spoke rotation, merging and splitting in HiPIMS plasma

    NASA Astrophysics Data System (ADS)

    Klein, P.; Lockwood Estrin, F.; Hnilica, J.; Vašina, P.; Bradley, J. W.

    2017-01-01

    To gain more information on the temporal and spatial behaviour of self-organized spoke structures in HiPIMS plasmas, a correlation between the broadband optical image of an individual spoke (taken over 200 ns) and the current it delivers to the target has been made for a range of magnetron operating conditions. As a spoke passes over a set of embedded probes in the niobium cathode target, a distinct modulation in the local current density is observed, (typically up to twice the average value), matching very well the radially integrated optical emission intensities (obtained remotely with an ICCD camera). The dual diagnostic system allows the merging and splitting of a set of spokes to be studied as they rotate. It is observed that in the merger of two spokes, the trailing spoke maintains its velocity while the leading spoke either decreases its velocity or increases its azimuthal length. In the spoke splitting process, the total charge collected by an embedded probe is conserved. A simple phenomenological model is developed that relates the spoke mode number m to the spoke dimensions, spoke velocity and gas atom velocity. The results are discussed in the context of the observations of spoke dynamics made by Hecimovic et al (2015 Plasma Sources Sci. Technol. 24 045005)

  3. Comparative study of joint analysis of microarray gene expression data in survival prediction and risk assessment of breast cancer patients

    PubMed Central

    2016-01-01

    Microarray gene expression data sets are jointly analyzed to increase statistical power. They could either be merged together or analyzed by meta-analysis. For a given ensemble of data sets, it cannot be foreseen which of these paradigms, merging or meta-analysis, works better. In this article, three joint analysis methods, Z-score normalization, ComBat and the inverse normal method (meta-analysis) were selected for survival prognosis and risk assessment of breast cancer patients. The methods were applied to eight microarray gene expression data sets, totaling 1324 patients with two clinical endpoints, overall survival and relapse-free survival. The performance derived from the joint analysis methods was evaluated using Cox regression for survival analysis and independent validation used as bias estimation. Overall, Z-score normalization had a better performance than ComBat and meta-analysis. Higher Area Under the Receiver Operating Characteristic curve and hazard ratio were also obtained when independent validation was used as bias estimation. With a lower time and memory complexity, Z-score normalization is a simple method for joint analysis of microarray gene expression data sets. The derived findings suggest further assessment of this method in future survival prediction and cancer classification applications. PMID:26504096
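
    As a hedged sketch (not the study's pipeline), gene-wise Z-score normalization applied to each data set separately before joining samples on the shared genes can be written in a few lines of pandas; the two tiny "studies" below are invented.

      import pandas as pd

      def zscore_by_gene(expr):
          """Standardise each gene (row) within one data set: mean 0, unit variance."""
          return expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)

      def merge_datasets(datasets):
          """Z-score each study separately, then join samples (columns) on shared genes."""
          normalised = [zscore_by_gene(d) for d in datasets]
          return pd.concat(normalised, axis=1, join="inner")

      # Tiny made-up example: two studies, genes g1-g3, different sample counts.
      study_a = pd.DataFrame([[5, 6, 7], [2, 2, 3], [9, 8, 10]],
                             index=["g1", "g2", "g3"], columns=["a1", "a2", "a3"])
      study_b = pd.DataFrame([[50, 70], [20, 25], [80, 95]],
                             index=["g1", "g2", "g3"], columns=["b1", "b2"])
      print(merge_datasets([study_a, study_b]).round(2))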

  4. Knowledge evolution in physics research: An analysis of bibliographic coupling networks

    PubMed Central

    Nanetti, Andrea; Cheong, Siew Ann

    2017-01-01

    Even as we advance the frontiers of physics knowledge, our understanding of how this knowledge evolves remains at the descriptive levels of Popper and Kuhn. Using the American Physical Society (APS) publications data sets, we ask in this paper how new knowledge is built upon old knowledge. We do so by constructing year-to-year bibliographic coupling networks, and identify in them validated communities that represent different research fields. We then visualize their evolutionary relationships in the form of alluvial diagrams, and show how they remain intact through APS journal splits. Quantitatively, we see that most fields undergo weak Popperian mixing, and it is rare for a field to remain isolated/undergo strong mixing. The sizes of fields obey a simple linear growth with recombination. We can also reliably predict the merging between two fields, but not for the considerably more complex splitting. Finally, we report a case study of two fields that underwent repeated merging and splitting around 1995, and how these Kuhnian events are correlated with breakthroughs on Bose-Einstein condensation (BEC), quantum teleportation, and slow light. This impact showed up quantitatively in the citations of the BEC field as a larger proportion of references from during and shortly after these events. PMID:28922427

  5. Black Hole Mergers in Galactic Nuclei Induced by the Eccentric Kozai–Lidov Effect

    NASA Astrophysics Data System (ADS)

    Hoang, Bao-Minh; Naoz, Smadar; Kocsis, Bence; Rasio, Frederic A.; Dosopoulou, Fani

    2018-04-01

    Nuclear star clusters around a central massive black hole (MBH) are expected to be abundant in stellar black hole (BH) remnants and BH–BH binaries. These binaries form a hierarchical triple system with the central MBH, and gravitational perturbations from the MBH can cause high-eccentricity excitation in the BH–BH binary orbit. During this process, the eccentricity may approach unity, and the pericenter distance may become sufficiently small so that gravitational-wave emission drives the BH–BH binary to merge. In this work, we construct a simple proof-of-concept model for this process, and specifically, we study the eccentric Kozai–Lidov mechanism in unequal-mass, soft BH–BH binaries. Our model is based on a set of Monte Carlo simulations for BH–BH binaries in galactic nuclei, taking into account quadrupole- and octupole-level secular perturbations, general relativistic precession, and gravitational-wave emission. For a typical steady-state number of BH–BH binaries, our model predicts a total merger rate of ∼1–3 {Gpc} ‑3 {yr} ‑1, depending on the assumed density profile in the nucleus. Thus, our mechanism could potentially compete with other dynamical formation processes for merging BH–BH binaries, such as the interactions of stellar BHs in globular clusters or in nuclear star clusters without an MBH.

  6. Changing cluster composition in cluster randomised controlled trials: design and analysis considerations

    PubMed Central

    2014-01-01

    Background There are many methodological challenges in the conduct and analysis of cluster randomised controlled trials, but one that has received little attention is that of post-randomisation changes to cluster composition. To illustrate this, we focus on the issue of cluster merging, considering the impact on the design, analysis and interpretation of trial outcomes. Methods We explored the effects of merging clusters on study power using standard methods of power calculation. We assessed the potential impacts on study findings of both homogeneous cluster merges (involving clusters randomised to the same arm of a trial) and heterogeneous merges (involving clusters randomised to different arms of a trial) by simulation. To determine the impact on bias and precision of treatment effect estimates, we applied standard methods of analysis to different populations under analysis. Results Cluster merging produced a systematic reduction in study power. This effect depended on the number of merges and was most pronounced when variability in cluster size was at its greatest. Simulations demonstrate that the impact on analysis was minimal when cluster merges were homogeneous, with impact on study power being balanced by a change in observed intracluster correlation coefficient (ICC). We found a decrease in study power when cluster merges were heterogeneous, and the estimate of treatment effect was attenuated. Conclusions Examples of cluster merges found in previously published reports of cluster randomised trials were typically homogeneous rather than heterogeneous. Simulations demonstrated that trial findings in such cases would be unbiased. However, simulations also showed that any heterogeneous cluster merges would introduce bias that would be hard to quantify, as well as having negative impacts on the precision of estimates obtained. Further methodological development is warranted to better determine how to analyse such trials appropriately. Interim recommendations include avoidance of cluster merges where possible, discontinuation of clusters following heterogeneous merges, allowance for potential loss of clusters and additional variability in cluster size in the original sample size calculation, and use of appropriate ICC estimates that reflect cluster size. PMID:24884591

  7. Penetration of the interplanetary magnetic field B_y and magnetosheath plasma into the magnetosphere: Implications for the predominant magnetopause merging site

    NASA Technical Reports Server (NTRS)

    Newell, Patrick T.; Sibeck, David G.; Meng, Ching-I

    1995-01-01

    Magnetosheath plasma penetrates into the magnetosphere, creating the particle cusp, and similarly the interplanetary magnetic field (IMF) B_y component penetrates the magnetopause. We reexamine the phenomenology of such penetration to investigate implications for the magnetopause merging site. Three models are popular: (1) the 'antiparallel' model, in which merging occurs where the local magnetic shear is largest (usually high magnetic latitude); (2) a tilted merging line passing through the subsolar point but extending to very high latitudes; or (3) a tilted merging line passing through the subsolar point in which most merging occurs within a few Earth radii of the equatorial plane and local noon (subsolar merging). It is difficult to distinguish between the first two models, but the third implies some very different predictions. We show that properties of the particle cusp imply that plasma injection into the magnetosphere occurs most often at high magnetic latitudes. In particular, we note the following: (1) The altitude of the merging site inferred from midaltitude cusp ion pitch angle dispersion is typically 8-12 R_E. (2) The highest ion energy observable when moving poleward through the cusp drops long before the bulk of the cusp plasma is reached, implying that ions are swimming upstream against the sheath flow shortly after merging. (3) Low-energy ions are less able to enter the winter cusp than the summer cusp. (4) The local time behavior of the cusp as a function of B_y and B_z corroborates predictions of the high-latitude merging models. We also reconsider the penetration of the IMF B_y component onto closed dayside field lines. Our approach, in which closed field lines move to fill in flux voids created by asymmetric magnetopause flux erosion, shows that strict subsolar merging cannot account for the observations.

  8. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

    Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level 2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). By comparing the merged AOD to the Aerosol Robotic Network AOD records, the results show that the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those (the correlation coefficient (0.82), root-mean-square error (0.19), and mean bias (0.059)) of the MODIS AOD. In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is also close to the accuracy of the regions where both MODIS and SeaWiFS have valid observations.

  9. Coupling the Solar-Wind/IMF to the Ionosphere through the High Latitude Cusps

    NASA Technical Reports Server (NTRS)

    Maynard, Nelson C.

    2003-01-01

    Magnetic merging is a primary means for coupling energy from the solar wind into the magnetosphere-ionosphere system. The location and nature of the process remain as open questions. By correlating measurements from diverse locations and using large-scale MHD models to put the measurements in context, it is possible to constrain our interpretations of the global and meso-scale dynamics of magnetic merging. Recent evidence demonstrates that merging often occurs at high latitudes in the vicinity of the cusps. The location is in part controlled by the clock angle in the interplanetary magnetic field (IMF) Y-Z plane. In fact, B_Y bifurcates the cusp relative to source regions. The newly opened field lines may couple to the ionosphere at MLT locations as much as 3 hr away from local noon. On the other side of noon the cusp may be connected to merging sites in the opposite hemisphere. In fact, the small convection cell is generally driven by opposite hemisphere merging. B_X controls the timing of the interaction and merging sites in each hemisphere, which may respond to planar features in the IMF at different times. Correlation times are variable and are controlled by the dynamics of the tilt of the interplanetary electric field phase plane. The orientation of the phase plane may change significantly on time scales of tens of minutes. Merging is temporally variable and may be occurring at multiple sites simultaneously. Accelerated electrons from the merging process excite optical signatures at the foot of the newly opened field lines. All-sky photometer observations of 557.7 nm emissions in the cusp region provide a "television picture" of the merging process and may be used to infer the temporal and spatial variability of merging, tied to variations in the IMF.

  10. Metabolic PathFinding: inferring relevant pathways in biochemical networks.

    PubMed

    Croes, Didier; Couche, Fabian; Wodak, Shoshana J; van Helden, Jacques

    2005-07-01

    Our knowledge of metabolism can be represented as a network comprising several thousands of nodes (compounds and reactions). Several groups applied graph theory to analyse the topological properties of this network and to infer metabolic pathways by path finding. This is, however, not straightforward, with a major problem caused by traversing irrelevant shortcuts through highly connected nodes, which correspond to pool metabolites and co-factors (e.g. H2O, NADP and H+). In this study, we present a web server implementing two simple approaches, which circumvent this problem, thereby improving the relevance of the inferred pathways. In the simplest approach, the shortest path is computed, while filtering out the selection of highly connected compounds. In the second approach, the shortest path is computed on the weighted metabolic graph where each compound is assigned a weight equal to its connectivity in the network. This approach significantly increases the accuracy of the inferred pathways, enabling the correct inference of relatively long pathways (e.g. with as many as eight intermediate reactions). Available options include the calculation of the k-shortest paths between two specified seed nodes (either compounds or reactions). Multiple requests can be submitted in a queue. Results are returned by email, in textual as well as graphical formats (available in http://www.scmbb.ulb.ac.be/pathfinding/).
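
    The web server's implementation is not shown here; a minimal sketch of the second approach, in which each compound is weighted by its connectivity so that hub metabolites such as H2O are avoided, can be built on networkx by turning the node weights into edge weights. The toy graph below is invented.

      import networkx as nx

      def degree_weighted_shortest_path(G, source, target):
          """Shortest path where entering a compound/reaction node costs its connectivity."""
          H = nx.DiGraph()
          for u, v in G.edges():
              H.add_edge(u, v, weight=G.degree(v))   # cost of traversing into node v
          return nx.dijkstra_path(H, source, target, weight="weight")

      # Toy metabolic graph; 'H2O' plays the role of a highly connected pool metabolite.
      G = nx.DiGraph([("glc", "r1"), ("r1", "g6p"), ("g6p", "r2"), ("r2", "f6p"),
                      ("glc", "r9"), ("r9", "H2O"), ("H2O", "r10"), ("r10", "f6p"),
                      ("H2O", "r3"), ("r3", "pyr"), ("atp", "r9"), ("adp", "H2O")])
      print(degree_weighted_shortest_path(G, "glc", "f6p"))   # avoids the H2O hub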

  11. Quantifying selective elbow movements during an exergame in children with neurological disorders: a pilot study.

    PubMed

    van Hedel, Hubertus J A; Häfliger, Nadine; Gerber, Corinna N

    2016-10-21

    It is difficult to distinguish between restorative and compensatory mechanisms underlying (pediatric) neurorehabilitation, as objective measures assessing selective voluntary motor control (SVMC) are scarce. We aimed to quantify SVMC of elbow movements in children with brain lesions. Children played an airplane game with the glove-based YouGrabber system. Participants were instructed to steer an airplane on a screen through a cloud-free path by correctly applying bilateral elbow flexion and extension movements. Game performance measures were (i) % time on the correct path and (ii) similarity between the ideal flight path and the actually flown path. SVMC was quantified by calculating a correlation coefficient between the derivative of the ideal path and elbow movements. A therapist scored whether the child had used compensatory movements. Thirty-three children with brain lesions (11 girls; 12.6 ± 3.6 years) participated. Clinical motor and cognitive scores correlated moderately with SVMC (0.50-0.74). Receiver Operating Characteristics analyses showed that SVMC could differentiate well and better than clinical and game performance measures between compensatory and physiological movements. We conclude that a simple measure assessed while playing a game appears promising in quantifying SVMC. We propose how to improve the methodology, and how this approach can be easily extended to other joints.

  12. Modeling cooperative driving behavior in freeway merges.

    DOT National Transportation Integrated Search

    2011-11-01

    Merging locations are major sources of freeway bottlenecks and are therefore important for freeway operations analysis. Microscopic simulation tools have been successfully used to analyze merging bottlenecks and to design optimum geometric configurat...

  13. Counterfactual quantum erasure: spooky action without entanglement

    PubMed Central

    2018-01-01

    We combine the eyebrow-raising quantum phenomena of erasure and counterfactuality for the first time, proposing a simple yet unusual quantum eraser: A distant Bob can decide to erase which-path information from Alice’s photon, dramatically restoring interference—without previously shared entanglement, and without Alice’s photon ever leaving her laboratory. PMID:29515845

  14. Simple Minds

    ERIC Educational Resources Information Center

    Kelleher, John

    2013-01-01

    This article describes John Kelleher's experience in observing the creations of his preschool daughter. Both he and his wife are formally trained in the arts, and looked forward to guiding their daughter down an artistic path. In his mind, what makes a great artist usually involves a great deal of technical ability and commitment to a complex…

  15. Making Light Rays Visible in 3-D

    ERIC Educational Resources Information Center

    Logiurato, F.; Gratton, L. M.; Oss, S.

    2007-01-01

    Students become deeply involved in physics classes when spectacular demonstrations take over from abstract and formal presentations. In this paper we propose a simple experimental setup in which the wave behavior of light can be made spectacularly evident along the whole path of the light beam in a practically unlimited number of configurations.…

  16. Easily Testable PLA-Based Finite State Machines

    DTIC Science & Technology

    1989-03-01

    …PLATYPUS [20]. Then, justification paths are obtained from the STG using simple logic…; faults of type 1, 4 and 5 can be guaranteed to be testable. A vector pair that is generated by the first corrupted next state lines is found, if such a pair exists, using PLATYPUS [20].

  17. A Psychophysical Test of the Visual Pathway of Children with Autism

    ERIC Educational Resources Information Center

    Sanchez-Marin, Francisco J.; Padilla-Medina, Jose A.

    2008-01-01

    Signal detection psychophysical experiments were conducted to investigate the visual path of children with autism. Computer generated images with Gaussian noise were used. Simple signals, still and in motion were embedded in the background noise. The computer monitor was linearized to properly display the contrast changes. To our knowledge, this…

  18. Validation of a simple distributed sediment delivery approach in selected sub-basins of the River Inn catchment area

    NASA Astrophysics Data System (ADS)

    Reid, Lucas; Kittlaus, Steffen; Scherer, Ulrike

    2015-04-01

    For large areas without highly detailed data, the empirical Universal Soil Loss Equation (USLE) is widely used to quantify soil loss. The difficulty, however, usually lies in quantifying the actual sediment influx into the rivers. Because the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR), but such lumped approaches become problematic in large catchment areas whose geographical properties vary widely. In this study we developed a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in the catchments. The sediment delivery ratio was determined using an empirical approach that takes the slope, morphology and land use properties along the flow path as an estimate of the travel time of the eroded particles. The model was tested against suspended solids measurements in selected sub-basins of the River Inn catchment area in Germany and Austria, ranging from the high alpine south to the Molasse basin in the northern part.
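
    The study's calibrated delivery formulation is not reproduced here; the sketch below only illustrates the general shape of the approach, a USLE gross-erosion estimate scaled by a crude travel-time-style sediment delivery ratio along the flow path, with all factor values invented.

      import numpy as np

      def usle(R, K, LS, C, P):
          """Gross soil loss per cell (t/ha/yr) from the Universal Soil Loss Equation."""
          return R * K * LS * C * P

      def sediment_delivery(gross, slope_along_path, beta=0.3):
          """Crude SDR: delivery decays with a travel-time proxy along the flow path
          (flatter, longer paths deliver less). Not the study's calibrated formulation."""
          travel_proxy = np.sum(1.0 / np.sqrt(np.clip(slope_along_path, 1e-3, None)))
          sdr = np.exp(-beta * travel_proxy / len(slope_along_path))
          return gross * sdr

      gross = usle(R=70.0, K=0.3, LS=1.8, C=0.15, P=1.0)      # invented factor values
      print(round(gross, 2), round(sediment_delivery(gross, [0.08, 0.05, 0.02, 0.01]), 2))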

  19. Using a Family of Dividing Surfaces Normal to the Minimum Energy Path for Quantum Instanton Rate Constants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yimin; Miller, William H.

    2006-02-22

    One of the outstanding issues in the quantum instanton (QI) theory (or any transition state-type theory) for thermal rate constants of chemical reactions is the choice of an appropriate ''dividing surface'' (DS) that separates reactants and products. (In the general version of the QI theory, there are actually two dividing surfaces involved.) This paper shows one simple and general way for choosing DS's for use in QI theory, namely using the family of (hyper) planes normal to the minimum energy path (MEP) on the potential energy surface at various distances s along it. Here the reaction coordinate is not one of the dynamical coordinates of the system (which will in general be the Cartesian coordinates of the atoms), but rather simply a parameter which specifies the DS. It is also shown how this idea can be implemented for an N-atom system in 3d space in a way that preserves overall translational and rotational invariance. Numerical application to a simple system (the collinear H + H2 reaction) is presented to illustrate the procedure.

  20. Multiparallel Three-Dimensional Optical Microscopy

    NASA Technical Reports Server (NTRS)

    Nguyen, Lam K.; Price, Jeffrey H.; Kellner, Albert L.; Bravo-Zanoquera, Miguel

    2010-01-01

    Multiparallel three-dimensional optical microscopy is a method of forming an approximate three-dimensional image of a microscope sample as a collection of images from different depths through the sample. The imaging apparatus includes a single microscope plus an assembly of beam splitters and mirrors that divide the output of the microscope into multiple channels. An imaging array of photodetectors in each channel is located at a different distance along the optical path from the microscope, corresponding to a focal plane at a different depth within the sample. The optical path leading to each photodetector array also includes lenses to compensate for the variation of magnification with distance so that the images ultimately formed on all the photodetector arrays are of the same magnification. The use of optical components common to multiple channels in a simple geometry makes it possible to obtain high light-transmission efficiency with an optically and mechanically simple assembly. In addition, because images can be read out simultaneously from all the photodetector arrays, the apparatus can support three-dimensional imaging at a high scanning rate.

  1. Microbiological and ecological responses to global environmental changes in polar regions (MERGE): An IPY core coordinating project

    NASA Astrophysics Data System (ADS)

    Naganuma, Takeshi; Wilmotte, Annick

    2009-11-01

    An integrated program, “Microbiological and ecological responses to global environmental changes in polar regions” (MERGE), was proposed in the International Polar Year (IPY) 2007-2008 and endorsed by the IPY committee as a coordinating proposal. MERGE hosts original proposals to the IPY and facilitates their funding. MERGE selected three key questions to produce scientific achievements. Prokaryotic and eukaryotic organisms in terrestrial, lacustrine, and supraglacial habitats were targeted according to diversity and biogeography; food webs and ecosystem evolution; and linkages between biological, chemical, and physical processes in the supraglacial biome. MERGE hosted 13 original and seven additional proposals, with two full proposals. It respected the priorities and achievements of the individual proposals and aimed to unify their significant results. Ideas and projects followed a bottom-up rather than a top-down approach. We intend to inform the MERGE community of the initial results and encourage ongoing collaboration. Scientists from non-polar regions have also participated and are encouraged to remain involved in MERGE. MERGE is formed by scientists from Argentina, Australia, Austria, Belgium, Brazil, Bulgaria, Canada, Egypt, Finland, France, Germany, Italy, Japan, Korea, Malaysia, New Zealand, Philippines, Poland, Russia, Spain, UK, Uruguay, USA, and Vietnam, and associates from Chile, Denmark, Netherlands, and Norway.

  2. Galaxy mergers and gravitational lens statistics

    NASA Technical Reports Server (NTRS)

    Rix, Hans-Walter; Maoz, Dan; Turner, Edwin L.; Fukugita, Masataka

    1994-01-01

    We investigate the impact of hierarchical galaxy merging on the statistics of gravitational lensing of distant sources. Since no definite theoretical predictions for the merging history of luminous galaxies exist, we adopt a parameterized prescription, which allows us to adjust the expected number of pieces comprising a typical present galaxy at z approximately 0.65. The existence of global parameter relations for elliptical galaxies and constraints on the evolution of the phase space density in dissipationless mergers allow us to limit the possible evolution of galaxy lens properties under merging. We draw two lessons from implementing this lens evolution into statistical lens calculations: (1) The total optical depth to multiple imaging (e.g., of quasars) is quite insensitive to merging. (2) Merging leads to a smaller mean separation of observed multiple images. Because merging does not reduce drastically the expected lensing frequency, it cannot make lambda-dominated cosmologies compatible with the existing lensing observations. A comparison with the data from the Hubble Space Telescope (HST) Snapshot Survey shows that models with little or no evolution of the lens population are statistically favored over strong merging scenarios. A specific merging scenario proposed by Toomre can be rejected (95% level) by such a comparison. Some versions of the scenario proposed by Broadhurst, Ellis, & Glazebrook are statistically acceptable.

  3. Simulation and validation of concentrated subsurface lateral flow paths in an agricultural landscape

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Lin, H. S.

    2009-08-01

    The importance of soil water flow paths to the transport of nutrients and contaminants has long been recognized. However, effective means of detecting concentrated subsurface flow paths in a large landscape are still lacking. The flow direction and accumulation algorithm based on single-direction flow algorithm (D8) in GIS hydrologic modeling is a cost-effective way to simulate potential concentrated flow paths over a large area once relevant data are collected. This study tested the D8 algorithm for simulating concentrated lateral flow paths at three interfaces in soil profiles in a 19.5-ha agricultural landscape in central Pennsylvania, USA. These interfaces were (1) the interface between surface plowed layers of Ap1 and Ap2 horizons, (2) the interface with subsoil water-restricting clay layer where clay content increased to over 40%, and (3) the soil-bedrock interface. The simulated flow paths were validated through soil hydrologic monitoring, geophysical surveys, and observable soil morphological features. The results confirmed that concentrated subsurface lateral flow occurred at the interfaces with the clay layer and the underlying bedrock. At these two interfaces, the soils on the simulated flow paths were closer to saturation and showed more temporally unstable moisture dynamics than those off the simulated flow paths. Apparent electrical conductivity in the soil on the simulated flow paths was elevated and temporally unstable as compared to those outside the simulated paths. The soil cores collected from the simulated flow paths showed significantly higher Mn content at these interfaces than those away from the simulated paths. These results suggest that (1) the D8 algorithm is useful in simulating possible concentrated subsurface lateral flow paths if used with appropriate threshold value of contributing area and sufficiently detailed digital elevation model (DEM); (2) repeated electromagnetic surveys can reflect the temporal change of soil water storage and thus is a useful indicator of possible subsurface flow path over a large area; and (3) observable Mn distribution in soil profiles can be used as a simple indicator of water flow paths in soils and over the landscape; however, it does require sufficient soil sampling (by excavation or augering) to possibly infer landscape-scale subsurface flow paths. In areas where subsurface interface topography varies similarly with surface topography, surface DEM can be used to simulate potential subsurface lateral flow path reasonably so the cost associated with obtaining depth to subsurface water-restricting layer can be minimized.
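
    A minimal sketch of the D8 rule underlying the flow direction and accumulation algorithm (not the GIS toolchain used in the study): each cell drains to whichever of its eight neighbours gives the steepest descent. The elevation grid is synthetic.

      import numpy as np

      # Eight neighbour offsets and their distances (diagonals are sqrt(2) apart).
      OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
      DIST = [np.sqrt(2), 1, np.sqrt(2), 1, 1, np.sqrt(2), 1, np.sqrt(2)]

      def d8_direction(dem, r, c):
          """Index (0-7) of the steepest-descent neighbour of cell (r, c), or None for a pit."""
          best, best_slope = None, 0.0
          for k, (dr, dc) in enumerate(OFFSETS):
              rr, cc = r + dr, c + dc
              if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                  slope = (dem[r, c] - dem[rr, cc]) / DIST[k]
                  if slope > best_slope:
                      best, best_slope = k, slope
          return best

      dem = np.array([[9.0, 8.0, 7.0],
                      [8.0, 6.0, 5.0],
                      [7.0, 5.0, 3.0]])        # tiny synthetic elevation grid
      print(d8_direction(dem, 1, 1))            # centre cell drains toward the lowest corner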

  4. Development of Three-Dimensional Dental Scanning Apparatus Using Structured Illumination

    PubMed Central

    Park, Anjin; Lee, Byeong Ha; Eom, Joo Beom

    2017-01-01

    We demonstrated a three-dimensional (3D) dental scanning apparatus based on structured illumination. A liquid lens was used for tuning focus and a piezomotor stage was used for the shift of structured light. A simple algorithm, which detects intensity modulation, was used to perform optical sectioning with structured illumination. We reconstructed a 3D point cloud, which represents the 3D coordinates of the digitized surface of a dental gypsum cast, by piling up sectioned images. We performed 3D registration of the individual 3D point clouds, which includes aligning and merging the 3D point clouds to produce a 3D model of the dental cast. PMID:28714897
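
    The abstract describes optical sectioning by detecting the intensity modulation of the structured pattern. A commonly used demodulation for three phase-shifted frames is the root-sum-of-squared-differences formula sketched below; it is one plausible realization of such a simple algorithm, not necessarily the exact one used in the apparatus, and the synthetic frames are illustrative:

        import numpy as np

        def sectioned_image(i1, i2, i3):
            """Demodulate three frames taken with the structured pattern shifted by thirds of
            a period; only in-focus (modulated) content survives.  Overall scaling is arbitrary."""
            return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

        # Synthetic phase-shifted frames standing in for camera images of the gypsum cast
        x = np.arange(64)
        base = np.random.default_rng(0).random((64, 64))
        pattern = lambda phase: 0.5 * (1.0 + np.cos(2 * np.pi * x / 8.0 + phase))
        frames = [base * pattern(p) for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
        section = sectioned_image(*frames)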

  5. Pure shear and simple shear calcite textures. Comparison of experimental, theoretical and natural data

    USGS Publications Warehouse

    Wenk, H.-R.; Takeshita, T.; Bechler, E.; Erskine, B.G.; Matthies, S.

    1987-01-01

    The pattern of lattice preferred orientation (texture) in deformed rocks is an expression of the strain path and the acting deformation mechanisms. A first indication about the strain path is given by the symmetry of pole figures: coaxial deformation produces orthorhombic pole figures, while non-coaxial deformation yields monoclinic or triclinic pole figures. More quantitative information about the strain history can be obtained by comparing natural textures with experimental ones and with theoretical models. For this comparison, a representation in the sensitive three-dimensional orientation distribution space is extremely important and efforts are made to explain this concept. We have been investigating differences between pure shear and simple shear deformation in carbonate rocks and have found considerable agreement between textures produced in plane strain experiments and predictions based on the Taylor model. We were able to simulate the observed changes with strain history (coaxial vs non-coaxial) and the profound texture transition which occurs with increasing temperature. Two natural calcite textures were then selected which we interpreted by comparing them with the experimental and theoretical results. A marble from the Santa Rosa mylonite zone in southern California displays orthorhombic pole figures with patterns consistent with low temperature deformation in pure shear. A limestone from the Tanque Verde detachment fault in Arizona has a monoclinic fabric from which we can interpret that 60% of the deformation occurred by simple shear. © 1987.

  6. Modeling merging behavior at lane drops : [tech transfer summary].

    DOT National Transportation Integrated Search

    2015-02-01

    A better understanding of the merging behavior of drivers will lead to the development of better lane-drop traffic-control plans and strategies, which will provide better guidance to drivers for safer merging.

  7. Unravelling merging behaviors and electrostatic properties of CVD-grown monolayer MoS{sub 2} domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Song; Yang, Bingchu, E-mail: bingchuyang@csu.edu.cn; Hunan Key Laboratory for Super-Microstructure and Ultrafast Process, Central South University, 932 South Lushan Road, Changsha 410012

    The presence of grain boundaries is inevitable for chemical vapor deposition (CVD)-grown MoS{sub 2} domains owing to various merging behaviors, which greatly limits its potential applications in novel electronic and optoelectronic devices. It is therefore of great significance to unravel the merging behaviors of the synthesized polygon shape MoS{sub 2} domains. Here we provide systematic investigations of merging behaviors and electrostatic properties of CVD-grown polycrystalline MoS{sub 2} crystals by multiple means. Morphological results exhibit various polygon shape features, ascribed to polycrystalline crystals merged with triangle shape MoS{sub 2} single crystals. The thickness of triangle and polygon shape MoS{sub 2} crystals is identical, as manifested by Raman intensity and peak position mappings. Three merging behaviors are proposed to illustrate the formation mechanisms of the various observed polygon-shaped MoS{sub 2} crystals. The combined photoemission electron microscopy and Kelvin probe force microscopy results reveal that the surface potential of perfectly merged crystals is identical, which has an important implication for fabricating MoS{sub 2}-based devices.

  8. Simulation of the target creation through FRC merging for a magneto-inertial fusion concept

    NASA Astrophysics Data System (ADS)

    Li, Chenguang; Yang, Xianjun

    2017-04-01

    A two-dimensional magnetohydrodynamics model has been used to simulate the target creation process in a magneto-inertial fusion concept named Magnetized Plasma Fusion Reactor (MPFR) [C. Li and X. Yang, Phys. Plasmas 23, 102702 (2016)], where the target plasma created through field-reversed configuration (FRC) merging was compressed by an imploding liner driven by the pulsed-power driver. In the scheme, two initial FRCs (field-reversed configurations) are translated into the region where FRC merging occurs, yielding a target plasma ready for compression. The simulations cover the three stages of the target creation process: formation, translation, and merging. The factors affecting the achieved target are analyzed numerically. The magnetic field gradient produced by the conical coils is found to determine how fast the FRC is accelerated to peak velocity and how quickly collision merging occurs. Moreover, it is demonstrated that FRC merging can be realized by real coils with gaps, showing nearly identical performance, and the target optimized by FRC merging shows larger internal energy and retained flux, which is more suitable for the MPFR concept.

  9. Actin dynamics provides membrane tension to merge fusing vesicles into the plasma membrane

    PubMed Central

    Wen, Peter J.; Grenklo, Staffan; Arpino, Gianvito; Tan, Xinyu; Liao, Hsien-Shun; Heureaux, Johanna; Peng, Shi-Yong; Chiang, Hsueh-Cheng; Hamid, Edaeni; Zhao, Wei-Dong; Shin, Wonchul; Näreoja, Tuomas; Evergren, Emma; Jin, Yinghui; Karlsson, Roger; Ebert, Steven N.; Jin, Albert; Liu, Allen P.; Shupliakov, Oleg; Wu, Ling-Gang

    2016-01-01

    Vesicle fusion is executed via formation of an Ω-shaped structure (Ω-profile), followed by closure (kiss-and-run) or merging of the Ω-profile into the plasma membrane (full fusion). Although Ω-profile closure limits release but recycles vesicles economically, Ω-profile merging facilitates release but couples to classical endocytosis for recycling. Despite its crucial role in determining exocytosis/endocytosis modes, how Ω-profile merging is mediated is poorly understood in endocrine cells and neurons containing small ∼30–300 nm vesicles. Here, using confocal and super-resolution STED imaging, force measurements, pharmacology and gene knockout, we show that dynamic assembly of filamentous actin, involving ATP hydrolysis, N-WASP and formin, mediates Ω-profile merging by providing sufficient plasma membrane tension to shrink the Ω-profile in neuroendocrine chromaffin cells containing ∼300 nm vesicles. Actin-directed compounds also induce Ω-profile accumulation at lamprey synaptic active zones, suggesting that actin may mediate Ω-profile merging at synapses. These results uncover molecular and biophysical mechanisms underlying Ω-profile merging. PMID:27576662

  10. Merged SAGE II / MIPAS / OMPS Ozone Record : Impact of Transfer Standard on Ozone Trends.

    NASA Astrophysics Data System (ADS)

    Kramarova, N. A.; Laeng, A.; von Clarmann, T.; Stiller, G. P.; Walker, K. A.; Zawodny, J. M.; Plieninger, J.

    2017-12-01

    The deseasonalized ozone anomalies from the SAGE II, MIPAS and OMPS-LP datasets are merged into one long record. Two versions of the dataset will be presented: in one the ACE-FTS instrument is used as the transfer standard, and in the other the MLS instrument. The data are provided in 10 degree latitude bins, going from 60N to 60S, for the period from October 1984 to March 2017. The main differences between the merged ozone record presented in this study and the merged SAGE II / Ozone_CCI / OMPS-Saskatoon dataset by V. Sofieva are: (1) the OMPS-LP data are from the NASA GSFC version 2 processor; (2) the MIPAS 2002-2004 data are taken into the record; and (3) the data are merged using a transfer standard. In overlapping periods data are merged as weighted means, where the weights are inversely proportional to the standard errors of the means (SEM) of the corresponding individual monthly means. The merged dataset comes with uncertainty estimates. Ozone trends are calculated from both versions of the dataset. The impact of the transfer standard on the obtained trends is discussed.
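
    The overlap-period combination described above is a weighted mean with weights inversely proportional to the standard error of each monthly mean. A minimal sketch of that step, with illustrative numbers; the propagated uncertainty formula assumes independent errors:

        import numpy as np

        def merge_monthly_anomalies(values, sems):
            """Weighted mean of overlapping monthly-mean anomalies with weights inversely
            proportional to the standard error of each mean, plus a propagated uncertainty."""
            values, sems = np.asarray(values, float), np.asarray(sems, float)
            w = 1.0 / sems
            merged = np.sum(w * values) / np.sum(w)
            merged_sem = np.sqrt(np.sum((w * sems) ** 2)) / np.sum(w)   # independent errors assumed
            return merged, merged_sem

        # The same month and latitude bin seen by two instruments during an overlap period
        print(merge_monthly_anomalies(values=[1.8, 2.4], sems=[0.3, 0.5]))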

  11. Comparing the effect on the AGS longitudinal emittance of gold ions from the BtA stripping foil with and without a Booster Bunch Merge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeno, K.

    The aim of this note is to better understand the effect of merging the Gold bunches in the Booster into one on the resulting AGS longitudinal emittance, as compared to not merging them. The reason it matters whether they are merged or not is that they pass through a stripping foil in the BtA line. Data were taken last run (Run 17) for the case where the bunches are not merged, and they will be compared with data from cases where the bunches are merged. Previous data from Tandem operation will also be considered. There are two main pieces to this puzzle. The first is the ε growth associated with the energy spread due to ‘energy straggling’ in the BtA stripping foil and the second is the effective ε growth associated with the energy loss that occurs while passing through the foil. Both of these effects depend on whether or not the Booster bunches have been merged into one.

  12. Investigation of the Direct Charge Transfer in Low Energy D2+ + H Collisions using Merged-Beams Technique

    NASA Astrophysics Data System (ADS)

    Romano, S. L.; Guillen, C. I.; Andrianarijaona, V. M.; Havener, C. C.

    2011-10-01

    The hydrogen-hydrogen (deuterium) molecular ion is the most fundamental ion-molecule two-electron system. Charge transfer (CT) for H2+ on H, which is one of the possible reaction paths for the (H-H2)+ system, is of special interest because of its contribution to H2 formation in the early universe, its exoergicity, and rich collision dynamics. Due to the technical difficulty of making an atomic H target, the direct experimental investigations of CT for H2+ on H are sparse and generally limited to higher collision energies. The measurements of the absolute cross section of different CT paths for H2+ on H over a large range of collision energy are needed to benchmark theoretical calculations, especially the ones at low energies. The rate coefficient of CT at low energy is not known but may be comparable to other reaction rate coefficients in cold plasmas with H, H+, H2+, and H3+ as constituents. For instance, CT for H2+ on H and the following H3+ formation reaction H2+ + H2 → H + H3+ are clearly rate interdependent although it was always assumed that every ionization of H2 will lead to the formation of H3+. CT proceeds through dynamically coupled electronic, vibrational and rotational degrees of freedom. One can depict three paths: electronic CT, CT with nuclear substitution, and CT with dissociation. Electronic CT and CT with nuclear substitution in the H2+ on H collisions are not distinguishable by any quantum theory. Here we use the isotopic system (D2+ - H) to measure without ambiguity the electronic CT cross section by observing the H+ products. Using the ion-atom merged-beam apparatus at Oak Ridge National Laboratory, the absolute direct CT cross sections for D2+ + H from keV/u to meV/u collision energies have been measured. The molecular ions are extracted from an Electron-Cyclotron Resonance (ECR) ion source with a vibrational state distribution which is most likely determined by Franck-Condon transitions between ground state D2 and D2+. A ground-state H beam is obtained by photo-detachment of H-. Our first measurements are presented in Fig. 1 along with the theories and previous experiments. The collision is rovibrationally frozen at high energy where our measurements are seen to be in good agreement with the high energy theory. Both measurements and low energy theory increase toward low energies where the collision times are long enough to sample vibrational and rotational modes. This research is supported by the National Science Foundation through grant PHY-1068877 and by the Office of Fusion Energy Sciences and the Office of Basic Energy Sciences, U.S. DOE, Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC.

  13. On the merging rates of envelope-deprived components of binary systems which can give rise to supernova events

    NASA Astrophysics Data System (ADS)

    Tornambe, Amedeo

    1989-08-01

    Theoretical rates of mergings of envelope-deprived components of binary systems, which can give rise to supernova events, are described. The effects of the various assumptions on the physical properties of the progenitor system and of its evolutionary behavior through common envelope phases are discussed. Four cases have been analyzed: CO-CO, He-CO, and He-He double degenerate mergings, and He star-CO dwarf merging. It is found that, above a critical efficiency of the common envelope action in shrinking the system, the rate of CO-CO mergings is not strongly sensitive to the efficiency. Below this critical value, no CO-CO systems will survive for times larger than a few Gyr. In contrast, He-CO dwarf systems will continue to merge at a reasonable rate up to 20 Gyr and beyond, even under extreme conditions.

  14. Comparing Individual Tree Segmentation Based on High Resolution Multispectral Image and Lidar Data

    NASA Astrophysics Data System (ADS)

    Xiao, P.; Kelly, M.; Guo, Q.

    2014-12-01

    This study compares the use of high-resolution multispectral WorldView images and high density Lidar data for individual tree segmentation. The application focuses on coniferous and deciduous forests in the Sierra Nevada Mountains. The tree objects are obtained in two ways: a hybrid region-merging segmentation method with multispectral images, and a top-down and bottom-up region-growing method with Lidar data. The hybrid region-merging method is used to segment individual trees from multispectral images. It integrates the advantages of global-oriented and local-oriented region-merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region. The merging iterations are constrained within the local vicinity; thus the segmentation is accelerated and can reflect the local context. The top-down region-growing method is adopted in coniferous forest to delineate individual trees from Lidar data. It exploits the spacing between the tops of trees to identify and group points into a single tree based on simple rules of proximity and likely tree shape. The bottom-up region-growing method based on the intensity and 3D structure of Lidar data is applied in deciduous forest. It segments tree trunks based on the intensity and topological relationships of the points, and then allocates other points to exact tree crowns according to distance. The accuracies for each method are evaluated with field survey data in several test sites, covering dense and sparse canopy. Three types of segmentation results are produced: a true positive represents a correctly segmented individual tree, a false negative represents a tree that is not detected and is assigned to a nearby tree, and a false positive represents a point or pixel cluster segmented as a tree that does not in fact exist. They respectively represent correct-, under-, and over-segmentation. Three indices are compared for segmenting individual trees from multispectral imagery and Lidar data: recall, precision, and F-score. This work explores the tradeoff between the expensive Lidar data and the inexpensive multispectral imagery. The conclusions will guide optimal data selection in areas of different canopy density for individual tree segmentation and contribute to the field of forest remote sensing.
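
    Given the true-positive, false-negative, and false-positive counts defined above, the three indices follow directly; a minimal sketch with hypothetical counts:

        def segmentation_scores(tp, fn, fp):
            """Recall, precision and F-score from the correct-, under- and over-segmentation counts."""
            recall = tp / (tp + fn)            # fraction of reference trees that were detected
            precision = tp / (tp + fp)         # fraction of detected trees that are real
            f_score = 2 * precision * recall / (precision + recall)
            return recall, precision, f_score

        # Hypothetical counts for one test site
        print(segmentation_scores(tp=85, fn=10, fp=7))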

  15. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    NASA Technical Reports Server (NTRS)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
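
    A minimal sketch of the kind of closed-form sizing the abstract refers to: the ideal rocket equation applied per stage, with finite-burn, steering, and drag losses lumped into a single delta-v allowance. The specific impulses, structural fractions, and delta-v split below are illustrative assumptions, not the paper's values:

        import math

        G0 = 9.80665  # standard gravity, m/s^2

        def stage_propellant(m_above, dv, isp, structural_fraction):
            """Propellant and dry mass of one stage that must give `m_above` kg a delta-v `dv` (m/s)."""
            r = math.exp(dv / (isp * G0))                      # ideal rocket-equation mass ratio
            m_prop = m_above * (r - 1.0) / (1.0 - structural_fraction * (r - 1.0))
            return m_prop, structural_fraction * m_prop

        def mav_liftoff_mass(payload, dv_to_orbit, dv_losses, split,
                             isp=(280.0, 290.0), structural_fraction=(0.15, 0.12)):
            """Lift-off mass of a two-stage ascent vehicle with a lumped loss allowance."""
            dv = dv_to_orbit + dv_losses
            mass = payload
            # Size the upper stage first, then the stage below it
            for frac, isp_i, sf_i in zip(split[::-1], isp[::-1], structural_fraction[::-1]):
                m_prop, m_dry = stage_propellant(mass, frac * dv, isp_i, sf_i)
                mass += m_prop + m_dry
            return mass

        # Illustrative case: ~4 km/s to orbit plus ~0.4 km/s of losses, 55/45 delta-v split
        print(mav_liftoff_mass(payload=14.0, dv_to_orbit=4000.0, dv_losses=400.0, split=(0.55, 0.45)))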

  16. VizieR Online Data Catalog: Comet ion acoustic waves code (Gunell+, 2017)

    NASA Astrophysics Data System (ADS)

    Gunell, H.; Nilsson, H.; Hamrin, M.; Eriksson, A.; Odelstad, E.; Maggiolo, R.; Henri, P.; Vallieres, X.; Altwegg, K.; Tzou, C.-Y.; Rubin, M.; Glassmeier, K.-H.; Stenberg Wieser, G.; Simon Wedlund, C.; de Keyser, J.; Dhooghe, F.; Cessateur, G.; Gibbons, A.

    2017-01-01

    The general package for dispersion relations and fluctuation calculations using simple pole expansions is in the directory named simple. The directory ThisPaper contains files that are specific to the present paper. ThisPaper/startup.m sets up paths and physical constants. ThisPaper/aa16appendix.m plots the figure in the appendix. ThisPaper/aa16figs7to9.m performs the computations behind Figs. 7-9 and plots those figures. ThisPaper/aa16fig6.m performs the computations behind Fig. 6 and plots it. (2 data files).

  17. Simulation of 6 to 3 to 1 merge and squeeze of Au77+ bunches in AGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, C. J.

    2016-05-09

    In order to increase the intensity per Au77+ bunch at AGS extraction, a 6 to 3 to 1 merge scheme was developed and implemented by K. Zeno during the 2016 RHIC run. For this scheme, 12 Booster loads, each consisting of a single bunch, are delivered to AGS per AGS magnetic cycle. The bunch from Booster is itself the result of a 4 to 2 to 1 merge which is carried out on a flat porch during the Booster magnetic cycle. Each Booster bunch is injected into a harmonic 24 bucket on the AGS injection porch. In order to fit into the buckets and allow for the AGS injection kicker rise time, the bunch width must be reduced by exciting quadrupole oscillations just before extraction from Booster. The bunches are injected into two groups of six adjacent harmonic 24 buckets. In each group the 6 bunches are merged into 3 by bringing on RF harmonic 12 while reducing harmonic 24. This is a straightforward 2 to 1 merge (in which two adjacent bunches are merged into one). One ends up with two groups of three adjacent bunches sitting in harmonic 12 buckets. These bunches are accelerated to an intermediate porch for further merging. Doing the merge on a porch that sits above injection energy helps reduce losses that are believed to be due to the space-charge force acting on the bunched particles. (The 6 to 3 merge is done on the injection porch because the harmonic 24 frequency on the intermediate porch would be too high for the AGS RF cavities.) On the intermediate porch each group of 3 bunches is merged into one by bringing on RF harmonics 8 and 4 and then reducing harmonics 12 and 8. One ends up with 2 bunches, each the result of a 6 to 3 to 1 merge and each sitting in a harmonic 4 bucket. This puts 6 Booster loads into each bunch. Each merged bunch needs to be squeezed into a harmonic 12 bucket for subsequent acceleration. This is done by again bringing on harmonic 8 and then harmonic 12. Results of simulations of the 6 to 3 to 1 merge and the subsequent squeeze into harmonic 12 buckets are presented in this note. In particular, they provide a benchmark for what can be achieved with the available RF voltages.

  18. IIR filtering based adaptive active vibration control methodology with online secondary path modeling using PZT actuators

    NASA Astrophysics Data System (ADS)

    Boz, Utku; Basdogan, Ipek

    2015-12-01

    Structural vibration is a major cause of noise problems, discomfort and mechanical failures in aerospace, automotive and marine systems, which are mainly composed of plate-like structures. In order to reduce structural vibrations on these structures, active vibration control (AVC) is an effective approach. Adaptive filtering methodologies are preferred in AVC due to their ability to adjust themselves for varying dynamics of the structure during the operation. The filtered-X LMS (FXLMS) algorithm is a simple adaptive filtering algorithm widely implemented in active control applications. Proper implementation of FXLMS requires the availability of a reference signal that mimics the disturbance and a model of the dynamics between the control actuator and the error sensor, namely the secondary path. However, the controller output could interfere with the reference signal, and the secondary path dynamics may change during the operation. This interference problem can be resolved by using an infinite impulse response (IIR) filter, which feeds one or more previous control signals back to the controller output, and the changing secondary path dynamics can be updated using an online modeling technique. In this paper, an IIR-filtering-based filtered-U LMS (FULMS) controller is combined with an online secondary path modeling algorithm to suppress the vibrations of a plate-like structure. The results are validated through numerical and experimental studies. The results show that the FULMS with online secondary path modeling approach has greater vibration rejection capability and a higher convergence rate than the FXLMS counterpart.
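
    A minimal single-channel sketch of the filtered-x LMS update that the FULMS controller builds on (the IIR, filtered-U variant adds a second weight vector driven by past control outputs and updated in the same way). The 8-tap secondary-path model and the tonal disturbance are illustrative assumptions, not the plate dynamics of the paper:

        import numpy as np

        def fxlms(reference, disturbance, s_hat, n_taps=32, mu=0.01):
            """Single-channel filtered-x LMS: adapt FIR control weights w so that the actuator
            output, after passing through the secondary path, cancels the disturbance."""
            w = np.zeros(n_taps)                   # adaptive control filter
            x_buf = np.zeros(n_taps)               # recent reference samples
            fx_buf = np.zeros(n_taps)              # reference filtered through the secondary-path model
            y_buf = np.zeros(len(s_hat))           # recent control outputs
            errors = np.zeros(len(reference))
            for n in range(len(reference)):
                x_buf = np.roll(x_buf, 1); x_buf[0] = reference[n]
                y = w @ x_buf                                     # control signal to the actuator
                y_buf = np.roll(y_buf, 1); y_buf[0] = y
                e = disturbance[n] + s_hat @ y_buf                # residual at the error sensor
                fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ x_buf[:len(s_hat)]
                w -= mu * e * fx_buf                              # LMS gradient step
                errors[n] = e
            return w, errors

        # Tonal disturbance example with an assumed 8-tap secondary-path model
        fs, n = 1000, 4000
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 60 * t)                            # reference correlated with the disturbance
        d = 0.8 * np.sin(2 * np.pi * 60 * t + 0.7)                # disturbance at the error sensor
        s_hat = np.array([0.0, 0.6, 0.3, 0.1, 0.05, 0.0, 0.0, 0.0])
        w, e = fxlms(x, d, s_hat)
        print(np.mean(e[:200] ** 2), np.mean(e[-200:] ** 2))      # residual power should drop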

  19. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation

    PubMed Central

    Yu, Hongyi

    2018-01-01

    A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)” is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid the high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition to overcome the limitation of MP-MUSIC in the sense of a time-sensitive application. An iterative algorithm and an approach of initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performances improvement of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML. PMID:29562601

  20. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.

    PubMed

    Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi

    2018-03-17

    A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)" is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid the high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition to overcome the limitation of MP-MUSIC in the sense of a time-sensitive application. An iterative algorithm and an approach of initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performances improvement of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML.

  1. Comparison of online and offline based merging methods for high resolution rainfall intensities

    NASA Astrophysics Data System (ADS)

    Shehu, Bora; Haberlandt, Uwe

    2016-04-01

    Accurate rainfall intensities with high spatial and temporal resolution are crucial for urban flow prediction. Commonly, raw or bias corrected radar fields are used for forecasting, while different merging products are employed for simulation. The merging products have proven adequate for rainfall intensity estimation; however, their application in forecasting is limited because they were developed for offline mode. This study aims at adapting and refining the offline merging techniques for online implementation, and at comparing the performance of these methods for high resolution rainfall data. Radar bias corrections based on mean fields and on quantile mapping are analyzed individually and are also implemented in conditional merging. Special attention is given to the impact of different spatial and temporal filters on the predictive skill of all methods. Raw radar data and kriging interpolation of station data are considered as a reference to check the benefit of the merged products. The methods are applied to several extreme events in the time period 2006-2012 caused by different meteorological conditions, and their performance is evaluated by split sampling. The study area is located within the 112 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations in 5 min time steps. The results of this study reveal how the performance of the methods is affected by the adjustment of radar data, the choice of merging method and the selected event. Merging techniques can be used to improve the performance of online rainfall estimation, which gives way to the application of merging products in forecasting.
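
    One of the two bias corrections named above, the mean-field correction, reduces to a single multiplicative factor per time step computed from the gauges and the radar values at the gauge locations. A minimal sketch (conditional merging would additionally interpolate the gauge-radar residuals, which is omitted here); the arrays are illustrative:

        import numpy as np

        def mean_field_bias_correction(radar_field, radar_at_gauges, gauge_values, eps=1e-6):
            """Scale the whole radar field by the ratio of mean gauge rainfall to mean radar
            rainfall at the gauge locations (one multiplicative factor per time step)."""
            factor = float(np.sum(gauge_values)) / max(float(np.sum(radar_at_gauges)), eps)
            return factor * np.asarray(radar_field, dtype=float), factor

        # Illustrative 5-min radar field and three gauge/radar value pairs
        radar = [[0.4, 1.2], [0.9, 2.1]]
        corrected, factor = mean_field_bias_correction(radar, radar_at_gauges=[0.5, 1.0, 2.0],
                                                       gauge_values=[0.7, 1.3, 2.4])
        print(factor)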

  2. Nearshore Hydroacoustic Seafloor Mapping In The German Bight (North Sea): Hydroacoustic Interpretation With And Without Classification

    NASA Astrophysics Data System (ADS)

    Hass, H. C.; Mielck, F.; Papenmeier, S.

    2016-12-01

    Nearshore habitats are in constant dynamic change. They need regular assessment and appropriate monitoring of areas of special interest. To accomplish this, hydroacoustic seabed characterization tools are applied to allow for cost-effective and efficient mapping of the seafloor. In this context single-beam echosounder (SBES) systems provide a comprehensive view by analyzing the hardness and roughness characteristics of the seafloor. Interpolation between transect lines becomes necessary when gapless maps are needed. This study presents a simple method to process and visualize data recorded with RoxAnn (Sonavision, Edinburgh, UK) and similar SBES. Both hardness and roughness data are merged into one combined parameter that receives a color code (RGB) according to the acoustic properties of the seafloor. This color information is then interpolated to obtain an area-wide map that provides unclassified and thus unbiased seabed information. The RGB color data can subsequently be used for classification and modeling purposes. Four surveys are shown from a morphologically complex nearshore area west of the island of Helgoland (SE North Sea). The area has complex textural and dynamic characteristics reaching from outcropping bedrock via sandy to muddy areas with mostly gradual transitions. The RoxAnn data allow discrimination of all seafloor types suggested by ground-truth information (seafloor samples, video). The area appears to be fluctuating within certain limits. Sediment import (sand and fluid mud) paths can be reconstructed. Manually, six RoxAnn zones (RZ) were identified and left without hard boundaries to better match the seafloor types of the study site. The k-means fuzzy cluster analysis employed yields the best results with three classes. We show that interpretations on the basis of largely non-classified, color-coded and interpolated data provide the greatest gain in information at the highest possible resolution. Classification with hard boundaries is necessary for stakeholders but may cause a loss of information important to science. It becomes apparent that the type of classification addressing stakeholder issues is not always compatible with scientific objectives.

  3. The collaborative effect of ram pressure and merging on star formation and stripping fraction

    NASA Astrophysics Data System (ADS)

    Bischko, J. C.; Steinhauser, D.; Schindler, S.

    2015-04-01

    Aims: We investigate the effect of ram pressure stripping (RPS) on several simulations of merging pairs of gas-rich spiral galaxies. We are concerned with the changes in stripping efficiency and the time evolution of the star formation rate. Our goal is to provide an estimate of the combined effect of merging and RPS compared to the influence of the individual processes. Methods: We make use of the combined N-body/hydrodynamic code GADGET-2. The code features a threshold-based statistical recipe for star formation, as well as radiative cooling and modeling of galactic winds. In our simulations, we vary mass ratios between 1:4 and 1:8 in a binary merger. We sample different geometric configurations of the merging systems (edge-on and face-on mergers, different impact parameters). Furthermore, we vary the properties of the intracluster medium (ICM) in rough steps: the speed of the merging system relative to the ICM between 500 and 1000 km s^-1, the ICM density between 10^-29 and 10^-27 g cm^-3, and the ICM direction relative to the mergers' orbital plane. Ram pressure is kept constant within a simulation time period, as is the ICM temperature of 10^7 K. Each simulation in the ICM is compared to simulations of the merger in vacuum and the non-merging galaxies with acting ram pressure. Results: Averaged over the simulation time (1 Gyr) the merging pairs show a negligible 5% enhancement in SFR, when compared to single galaxies under the same environmental conditions. The SFRs peak at the time of the galaxies' first fly-through. There, our simulations show SFRs of up to 20 M⊙ yr^-1 (compared to 3 M⊙ yr^-1 of the non-merging galaxies in vacuum). In the most extreme case, this constitutes a short-term (<50 Myr) SFR increase of 50% over the non-merging galaxies experiencing ram pressure. The wake of merging galaxies in the ICM typically has a third to half the star mass seen in the non-merging galaxies and 5% to 10% less gas mass. The joint effect of RPS and merging, according to our simulations, is not significantly different from pure ram pressure effects.

  4. Development of a Tandem Electrodynamic Trap Apparatus for Merging Charged Droplets and Spectroscopic Characterization of Resultant Dried Particles.

    PubMed

    Kohno, Jun-Ya; Higashiura, Tetsu; Eguchi, Takaaki; Miura, Shumpei; Ogawa, Masato

    2016-08-11

    Materials work in multicomponent forms. A wide range of compositions must be tested to obtain the optimum composition for a specific application. We propose optimization using a series of small levitated single particles. We describe a tandem-trap apparatus for merging liquid droplets and analyzing the merged droplets and/or dried particles that are produced from the merged droplets under levitation conditions. Droplet merging was confirmed by Raman spectroscopic studies of the levitated particles. The tandem-trap apparatus enables the synthesis of a particle and spectroscopic investigation of its properties. This provides a basis for future investigation of the properties of levitated single particles.

  5. Memory characteristics of ring-shaped ceramic superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeoka, A.; Hasunuma, M.; Sakaiya, S.

    1989-03-01

    For the practical application of ceramic superconductors, the authors investigated the residual magnetic field characteristics of ring-shaped ceramic superconductors in a Y-Ba-Cu-O system with high Tc. The residual magnetic field of a ring with asymmetric current paths, supplied by external currents, appeared when one of the branch currents was above the critical current. The residual magnetic field saturated when both branch currents exceeded the critical current of the ring and showed hysteresis-like characteristics. The saturated magnetic field is subject to the critical current of the ring. A superconducting ring with asymmetric current paths suggests a simple and quite new persistent-current type memory device.

  6. Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach

    NASA Astrophysics Data System (ADS)

    Reznichenko, A. V.; Terekhov, I. S.

    2018-04-01

    We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion we find the first nonzero corrections to the conditional probability density function and the channel capacity estimations at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, therefore increasing the earlier calculated capacity for a nondispersive nonlinear optical fiber channel in the intermediate power region. Also for the small dispersion case we find the analytical expressions for simple correlators of the output signals in our noisy channel.

  7. Chord-length and free-path distribution functions for many-body systems

    NASA Astrophysics Data System (ADS)

    Lu, Binglin; Torquato, S.

    1993-04-01

    We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or ``phases.'' The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the ``mean intercept length'' or ``mean chord length.'' The chord-length distribution function is of importance in transport phenomena and problems involving ``discrete free paths'' of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material and we show that it is actually the chord-length distribution function for the system in which the ``pore space'' is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal path function gives the probability of finding a line segment of length z wholly in one of the ``phases'' when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersivity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.

  8. Interactions of a co-rotating vortex pair at multiple offsets

    NASA Astrophysics Data System (ADS)

    Forster, Kyle J.; Barber, Tracie J.; Diasinos, Sammy; Doig, Graham

    2017-05-01

    Two NACA0012 vanes at various lateral offsets were investigated by wind tunnel testing to observe the interactions between the streamwise vortices. The vanes were separated by nine chord lengths in the streamwise direction to allow the upstream vortex to impact on the downstream geometry. These vanes were evaluated at an angle of incidence of 8° and a Reynolds number of 7 × 10^4 using particle image velocimetry. A helical motion of the vortices was observed, with rotational rate increasing as the offset was reduced to the point of vortex merging. Downstream meandering of the weaker vortex was found to increase in magnitude near the point of vortex merging. The merging process occurred more rapidly when the upstream vortex was passed on the pressure side of the vane, with the downstream vortex being produced with less circulation and consequently merging into the upstream vortex. The merging distance was found to be a statistical rather than a deterministic quantity, indicating that the meandering of the vortices affected their separations and energies. This resulted in a fluctuation of the merging location. A loss of circulation associated with the merging process was identified, with the process of achieving vortex circularity causing vorticity diffusion; however, all merged cases maintained higher circulation than the single-vortex condition. The presence of the upstream vortex was found to reduce the strength of the downstream vortex in all offsets evaluated.

  9. Investigation of alternative work zone merging sign configurations.

    DOT National Transportation Integrated Search

    2013-12-01

    This study investigated the effect of an alternative merge sign configuration within a freeway work zone. In this alternative configuration, the graphical lane closed sign from the MUTCD was compared with a MERGE/arrow sign on one side and a RIGH...

  10. Triadic split-merge sampler

    NASA Astrophysics Data System (ADS)

    van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap

    2018-04-01

    In machine vision typical heuristic methods to extract parameterized objects out of raw data points are the Hough transform and RANSAC. Bayesian models carry the promise to optimally extract such parameterized objects given a correct definition of the model and the type of noise at hand. One category of solvers for Bayesian models is Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. Towards this, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster that is composed out of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in this machine vision context, its applications extend to the very general domain of statistical inference.

  11. Helicopter cabin noise: Methods of source and path identification and characterization

    NASA Technical Reports Server (NTRS)

    Murray, B. S.; Wilby, J. F.

    1978-01-01

    Internal noise sources in a helicopter are considered. These include propulsion machinery, comprising engine and transmission, and turbulent boundary layer effects. It is shown that by using relatively simple concepts together with careful experimental work it is possible to generate reliable data on which to base the design of high performance noise control treatments.

  12. A Complete Set for the Maass Laplacians on the Pseudosphere

    NASA Astrophysics Data System (ADS)

    Oshima, K.

    1989-02-01

    We obtain a completeness relation from eigenfunctions of the Maass Laplacians in terms of the pseudospherical polar coordinates. We derive addition theorems of "generalized" associated Legendre functions. With the help of the addition theorems, we get a simple path integral picture for a charged particle on the Poincaré upper half plane with a constant magnetic field.

  13. Eye-Tracking Study on Facial Emotion Recognition Tasks in Individuals with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tsang, Vicky

    2018-01-01

    The eye-tracking experiment was carried out to assess fixation duration and scan paths that individuals with and without high-functioning autism spectrum disorders employed when identifying simple and complex emotions. Participants viewed human photos of facial expressions and decided on the identification of emotion, the negative-positive emotion…

  14. Quantifying and Testing Indirect Effects in Simple Mediation Models when the Constituent Paths Are Nonlinear

    ERIC Educational Resources Information Center

    Hayes, Andrew F.; Preacher, Kristopher J.

    2010-01-01

    Most treatments of indirect effects and mediation in the statistical methods literature and the corresponding methods used by behavioral scientists have assumed linear relationships between variables in the causal system. Here we describe and extend a method first introduced by Stolzenberg (1980) for estimating indirect effects in models of…

  15. Merged analog and photon counting profiles used as input for other RLPROF VAPs

    DOE Data Explorer

    Newsom, Rob

    2014-10-03

    The rlprof_merge VAP "merges" the photon counting and analog signals appropriately for each channel, creating an output data file that is very similar to the original raw data file format that the Raman lidar initially had.

  16. Merged analog and photon counting profiles used as input for other RLPROF VAPs

    DOE Data Explorer

    Newsom, Rob

    1998-03-01

    The rlprof_merge VAP "merges" the photon counting and analog signals appropriately for each channel, creating an output data file that is very similar to the original raw data file format that the Raman lidar initially had.

  17. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    PubMed

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
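
    The single-machine operation being distributed here is a k-way merge of streams that are each already sorted by genomic position. A minimal sketch using a heap over (chromosome, position) keys, with simplified tuples standing in for parsed VCF records; the distributed schemas split exactly this job into ordered, parallel stages:

        import heapq

        def sorted_merge(*variant_streams):
            """Lazily merge per-sample variant iterators, each already sorted by
            (chromosome, position), into one globally ordered stream."""
            return heapq.merge(*variant_streams, key=lambda rec: (rec[0], rec[1]))

        # Simplified records: (chromosome index, position, sample id, genotype)
        sample_a = [(1, 101, "A", "0/1"), (1, 250, "A", "1/1"), (2, 17, "A", "0/0")]
        sample_b = [(1, 101, "B", "0/0"), (1, 300, "B", "0/1"), (2, 5, "B", "1/1")]
        for record in sorted_merge(sample_a, sample_b):
            print(record)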

  18. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files

    PubMed Central

    Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng

    2018-01-01

    Background: Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings: In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrative examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)–based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions: Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754

  19. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
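
    A minimal sketch of the scaling step described above: a stored zero-absorption reflectance curve is multiplied by a weighted Beer-Lambert factor in which each layer's absorption acts over the fraction of the photon time-of-flight spent in that layer. The two-layer numbers and the stand-in curve are illustrative; in the method itself the time fractions come from the closed-form average classical path:

        import numpy as np

        def scale_reflectance(r_zero_absorption, times, mu_a_layers, time_fractions, n=1.4):
            """Scale a zero-absorption time-resolved reflectance curve by a weighted
            Beer-Lambert factor: each layer's absorption acts over its fraction of the
            photon time-of-flight."""
            c = 2.998e10 / n                                       # speed of light in tissue, cm/s
            effective_mu_a = np.sum(np.asarray(mu_a_layers) * np.asarray(time_fractions))
            return np.asarray(r_zero_absorption) * np.exp(-effective_mu_a * c * np.asarray(times))

        # Illustrative two-layer case applied to a stand-in for the stored mu_a = 0 MC curve
        t = np.linspace(0.1e-9, 2e-9, 50)                          # seconds
        r0 = np.exp(-t / 0.5e-9)
        r = scale_reflectance(r0, t, mu_a_layers=[0.05, 0.2], time_fractions=[0.6, 0.4])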

  20. The most likely voltage path and large deviations approximations for integrate-and-fire neurons.

    PubMed

    Paninski, Liam

    2006-08-01

    We develop theory and numerical methods for computing the most likely subthreshold voltage path of a noisy integrate-and-fire (IF) neuron, given observations of the neuron's superthreshold spiking activity. This optimal voltage path satisfies a second-order ordinary differential (Euler-Lagrange) equation which may be solved analytically in a number of special cases, and which may be solved numerically in general via a simple "shooting" algorithm. Our results are applicable for both linear and nonlinear subthreshold dynamics, and in certain cases may be extended to correlated subthreshold noise sources. We also show how this optimal voltage may be used to obtain approximations to (1) the likelihood that an IF cell with a given set of parameters was responsible for the observed spike train; and (2) the instantaneous firing rate and interspike interval distribution of a given noisy IF cell. The latter probability approximations are based on the classical Freidlin-Wentzell theory of large deviations principles for stochastic differential equations. We close by comparing this most likely voltage path to the true observed subthreshold voltage trace in a case when intracellular voltage recordings are available in vitro.
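
    The "shooting" idea referred to above, in a minimal generic form: integrate the second-order ODE forward from the reset potential with a guessed initial slope and bisect on that slope until the path reaches threshold at the observed spike time. The toy equation v'' = v below is illustrative only, not the paper's Euler-Lagrange equation:

        import numpy as np

        def integrate_path(v0, dv0, t_spike, rhs, dt=1e-4):
            """Forward-Euler integration of v'' = rhs(t, v, v') from t = 0 to t = t_spike."""
            n = int(t_spike / dt)
            v, dv = v0, dv0
            path = np.empty(n)
            for i in range(n):
                t = i * dt
                v, dv = v + dt * dv, dv + dt * rhs(t, v, dv)   # explicit Euler step
                path[i] = v
            return path

        def shoot(v0, v_thresh, t_spike, rhs, lo=-50.0, hi=50.0, iters=60):
            """Bisect on the unknown initial slope so the path ends at threshold at t_spike."""
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                if integrate_path(v0, mid, t_spike, rhs)[-1] < v_thresh:
                    lo = mid                     # undershoot: a larger initial slope is needed
                else:
                    hi = mid
            return integrate_path(v0, 0.5 * (lo + hi), t_spike, rhs)

        # Toy second-order dynamics v'' = v (illustrative only)
        path = shoot(v0=0.0, v_thresh=0.5, t_spike=0.02, rhs=lambda t, v, dv: v)
        print(path[-1])   # approximately 0.5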

  1. Numerical Simulation of the Working Process in the Twin Screw Vacuum Pump

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Fu, Yu; Guo, Bei; Fu, Lijuan; Zhang, Qingqing; Chen, Xiaole

    2017-08-01

    Twin screw vacuum pumps inherit the advantages of screw machinery, such as high reliability, stable medium conveying, small vibration, simple and compact structure, and convenient operation, and have been widely used in the petrochemical and air industries. On the basis of previous studies, this study analyzed the geometric features of the variable-pitch twin screw vacuum pump, such as the sealing line, the meshing line and the volume between teeth. A mathematical model for numerical simulation of the twin screw vacuum pump was established. The leakage paths of the working volume, including the sealing line and the addendum arc, were comprehensively considered. The corresponding simplified geometric model of leakage flow was built for the different leak paths and the flow coefficients were calculated. The flow coefficient value range of the different leak paths was given. The results showed that the flow coefficient of the different leak paths can be taken as a constant value for the studied geometry. The analysis of recorded indicator diagrams showed that increasing the rotational speed can dramatically decrease the exhaust pressure and that a lower rotational speed can lead to over-compression. The pressure of the isentropic process, which was affected by leakage, was higher than that of the theoretical process.

  2. Measurements of traffic emissions over a medium-sized city using long-path measurements and comparison against bottom-up city estimates

    NASA Astrophysics Data System (ADS)

    Waxman, E.; Cossel, K.; Truong, G. W.; Giorgetta, F.; Swann, W.; Coddington, I.; Newbury, N.

    2017-12-01

    Understanding emissions from cities is increasingly important as a growing fraction of the world's population moves to cities. Here we use a novel technology, dual frequency comb spectroscopy, to measure city emissions using a long outdoor open path. We simultaneously measured CO2, CH4, and H2O over the city of Boulder, Colorado and over a clean-air reference path for two months in the fall of 2016. Because of the spatial coverage of our measurements, the layout of the city and power plant locations, and the predominant wind direction, our measurements primarily pick up vehicle emissions. We choose two days with consistent CO2 enhancements over the city relative to the reference path and use a simple 0-D box model to calculate city emissions for these days. We scale these up to annual emissions and compare our measurements with the City of Boulder bottom-up vehicle emissions inventory based on total vehicle miles traveled, fuel efficiency, and vehicle type distribution. We find good agreement (within about a factor of two) between our top-down measurements and the city's bottom-up inventory value.
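
    A minimal sketch of a 0-D box (mass-balance) model of the kind mentioned above: the CO2 enhancement over the clean-air reference path, flushed through a box by the mean wind, gives an emission rate. The mixing height, city width, and the sea-level air molar density are illustrative assumptions, not the study's parameters:

        def city_emission_rate(delta_co2_ppm, wind_speed, mixing_height, city_width,
                               air_molar_density=41.6):
            """0-D box estimate of the city CO2 source (mol/s): the enhancement over the
            clean-air reference is flushed out of a box of the given height and cross-wind
            width by the mean wind.  41.6 mol/m^3 is a sea-level value; adjust for altitude."""
            return delta_co2_ppm * 1e-6 * air_molar_density * wind_speed * mixing_height * city_width

        # Illustrative numbers: 3 ppm enhancement, 2 m/s wind, 500 m mixed layer, 8 km city width
        mol_per_s = city_emission_rate(3.0, 2.0, 500.0, 8000.0)
        print(mol_per_s * 44.01e-3, "kg CO2 per second")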

  3. Solution Path for Pin-SVM Classifiers With Positive and Negative $\\tau $ Values.

    PubMed

    Huang, Xiaolin; Shi, Lei; Suykens, Johan A K

    2017-07-01

    Applying the pinball loss in a support vector machine (SVM) classifier results in pin-SVM. The pinball loss is characterized by a parameter τ . Its value is related to the quantile level and different τ values are suitable for different problems. In this paper, we establish an algorithm to find the entire solution path for pin-SVM with different τ values. This algorithm is based on the fact that the optimal solution to pin-SVM is continuous and piecewise linear with respect to τ . We also show that the nonnegativity constraint on τ is not necessary, i.e., τ can be extended to negative values. First, in some applications, a negative τ leads to better accuracy. Second, τ = -1 corresponds to a simple solution that links SVM and the classical kernel rule. The solution for τ = -1 can be obtained directly and then be used as a starting point of the solution path. The proposed method efficiently traverses τ values through the solution path, and then achieves good performance by a suitable τ . In particular, τ = 0 corresponds to C-SVM, meaning that the traversal algorithm can output a result at least as good as C-SVM with respect to validation error.
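
    The pinball loss in question, written on the margin u = 1 - y f(x) and allowing the negative τ values discussed above; a minimal sketch:

        import numpy as np

        def pinball_loss(margin, tau):
            """Pinball loss on u = 1 - y*f(x); u >= 0 is penalized by u, u < 0 by -tau*u."""
            u = 1.0 - np.asarray(margin, dtype=float)
            return np.where(u >= 0, u, -tau * u)

        margins = np.array([1.5, 1.0, 0.2, -0.3])      # y*f(x) for a few samples
        for tau in (0.5, 0.0, -1.0):
            print(tau, pinball_loss(margins, tau))

    At τ = 0 this reduces to the hinge loss of C-SVM; at τ = -1 it becomes linear in u, the case the abstract links to the classical kernel rule.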

  4. Realization of a multipath ultrasonic gas flowmeter based on transit-time technique.

    PubMed

    Chen, Qiang; Li, Weihua; Wu, Jiangtao

    2014-01-01

    A microcomputer-based ultrasonic gas flowmeter using the transit-time method is presented. Modules of the flowmeter are designed systematically, including the acoustic path arrangement, the ultrasound emission and reception module, the transit-time measurement module, and the software. Four 200 kHz transducers forming two acoustic paths are used to send and receive ultrasound simultaneously. The synchronization of the transducers can eliminate the influence caused by the inherent switch time in a simple chord flowmeter. The distribution of the acoustic paths on the mechanical apparatus follows the Tailored integration, which could reduce the inherent error by 2-3% compared with the Gaussian integration commonly used in ultrasonic flowmeters. This work also develops timing modules to determine the flight time of the acoustic signal. The timing mechanism is different from the traditional method. The timing circuit adopts the high-capability chip TDC-GP2, with a typical resolution of 50 ps. LabVIEW software is used to receive data from the circuit and calculate the gas flow value. Finally, the two-path flowmeter has been calibrated and validated on the air flow test facilities at the Shaanxi Institute of Measurement & Testing. Copyright © 2013 Elsevier B.V. All rights reserved.
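
    A minimal sketch of the transit-time relation behind such a meter: along a chord of length L inclined at angle θ to the pipe axis, the difference of the reciprocal downstream and upstream flight times gives the path-averaged axial velocity, and the per-chord velocities are combined with quadrature weights. The transit times and the equal weights below are illustrative; they stand in for the Tailored-integration weights described in the abstract:

        import math

        def path_velocity(t_down, t_up, chord_length, theta_deg):
            """Path-averaged axial velocity from downstream/upstream transit times (s),
            chord length (m), and the chord angle to the pipe axis."""
            theta = math.radians(theta_deg)
            return chord_length / (2.0 * math.cos(theta)) * (1.0 / t_down - 1.0 / t_up)

        def volume_flow(path_velocities, weights, pipe_radius):
            """Combine per-chord velocities with quadrature weights into a bulk flow rate (m^3/s)."""
            mean_velocity = sum(w * v for w, v in zip(weights, path_velocities))
            return math.pi * pipe_radius ** 2 * mean_velocity

        # Two-chord example with illustrative transit times and equal weights
        v1 = path_velocity(t_down=291.0e-6, t_up=292.5e-6, chord_length=0.10, theta_deg=45.0)
        v2 = path_velocity(t_down=290.5e-6, t_up=292.0e-6, chord_length=0.10, theta_deg=45.0)
        print(volume_flow([v1, v2], weights=[0.5, 0.5], pipe_radius=0.05))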

  5. Optically based technique for producing merged spectra of water-leaving radiances from ocean color remote sensing.

    PubMed

    Mélin, Frédéric; Zibordi, Giuseppe

    2007-06-20

    An optically based technique is presented that produces merged spectra of normalized water-leaving radiances L(WN) by combining spectral data provided by independent satellite ocean color missions. The assessment of the merging technique is based on a four-year field data series collected by an autonomous above-water radiometer located on the Acqua Alta Oceanographic Tower in the Adriatic Sea. The uncertainties associated with the merged L(WN) obtained from the Sea-viewing Wide Field-of-view Sensor and the Moderate Resolution Imaging Spectroradiometer are consistent with the validation statistics of the individual sensor products. The merging including the third mission Medium Resolution Imaging Spectrometer is also addressed for a reduced ensemble of matchups.

  6. Triple collocation based merging of satellite soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
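    The weighted-averaging idea can be sketched with the classical triple-collocation estimator followed by inverse-error-variance weights. This is a generic textbook sketch under the usual assumptions (common truth, mutually uncorrelated errors, rescaled datasets), not the scheme of the manuscript itself, and the synthetic data are placeholders.

        import numpy as np

        def triple_collocation_variances(x, y, z):
            """Error variances of three collocated, rescaled series with independent errors."""
            c = np.cov(np.vstack([x, y, z]))
            vx = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
            vy = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
            vz = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
            return vx, vy, vz

        def merge_weighted(x, y, z):
            """Merge the products with weights proportional to 1/error-variance."""
            w = 1.0 / np.array(triple_collocation_variances(x, y, z))
            w /= w.sum()
            return w[0] * x + w[1] * y + w[2] * z, w

        rng = np.random.default_rng(1)
        truth = rng.normal(size=500)
        x, y, z = (truth + rng.normal(scale=s, size=500) for s in (0.1, 0.2, 0.3))
        print(merge_weighted(x, y, z)[1])   # weights should favour the least noisy product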

  7. Hall effect on a Merging Formation Process of a Field-Reversed Configuration

    NASA Astrophysics Data System (ADS)

    Kaminou, Yasuhiro; Guo, Xuehan; Inomoto, Michiaki; Ono, Yasushi; Horiuchi, Ritoku

    2015-11-01

    Counter-helicity spheromak merging is one of the formation methods for a Field-Reversed Configuration (FRC). In counter-helicity spheromak merging, two spheromaks with opposing toroidal fields merge together through magnetic reconnection events and relax into an FRC, which has no or little toroidal field. This process involves magnetic reconnection and relaxation phenomena, and the Hall effect has essential consequences for these processes because the X-point of the magnetic reconnection and the O-point of the FRC have no or little magnetic field. However, the Hall effect, as both a global and a local effect on counter-helicity spheromak merging, has not been elucidated. In this poster, we present 2D/3D Hall-MHD simulations and experiments of counter-helicity spheromak merging. We find that the Hall effect enhances the reconnection rate and reduces the generation of toroidal sheared flow. The suppression of the "slingshot effect" affects the relaxation process. We will discuss details in the poster.

  8. Observations of the Ion Signatures of Double Merging and the Formation of Newly Closed Field Lines

    NASA Technical Reports Server (NTRS)

    Chandler, Michael O.; Avanov, Levon A.; Craven, Paul D.

    2007-01-01

    Observations from the Polar spacecraft, taken during a period of northward interplanetary magnetic field (IMF), show magnetosheath ions within the magnetosphere with velocity distributions resulting from multiple merging sites along the same field line. The observations from the TIDE instrument show two separate ion energy-time dispersions that are attributed to two widely separated (~20 Re) merging sites. Estimates of the initial merging times show that they occurred nearly simultaneously (within 5 minutes). Along with these populations, cold, ionospheric ions were observed counterstreaming along the field lines. The presence of such ions is evidence that these field lines are connected to the ionosphere on both ends. These results are consistent with the hypothesis that double merging can produce closed field lines populated by solar wind plasma. While the merging sites cannot be unambiguously located, the observations and analyses favor one site poleward of the northern cusp and a second site at low latitudes.

  9. Duplicates, redundancies and inconsistencies in the primary nucleotide databases: a descriptive study.

    PubMed

    Chen, Qingyu; Zobel, Justin; Verspoor, Karin

    2017-01-01

    GenBank, the EMBL European Nucleotide Archive and the DNA DataBank of Japan, known collectively as the International Nucleotide Sequence Database Collaboration or INSDC, are the three most significant nucleotide sequence databases. Their records are derived from laboratory work undertaken by different individuals, by different teams, with a range of technologies and assumptions and over a period of decades. As a consequence, they contain a great many duplicates, redundancies and inconsistencies, but neither the prevalence nor the characteristics of various types of duplicates have been rigorously assessed. Existing duplicate detection methods in bioinformatics only address specific duplicate types, with inconsistent assumptions; and the impact of duplicates in bioinformatics databases has not been carefully assessed, making it difficult to judge the value of such methods. Our goal is to assess the scale, kinds and impact of duplicates in bioinformatics databases, through a retrospective analysis of merged groups in INSDC databases. Our outcomes are threefold: (1) We analyse a benchmark dataset consisting of duplicates manually identified in INSDC-a dataset of 67 888 merged groups with 111 823 duplicate pairs across 21 organisms from INSDC databases - in terms of the prevalence, types and impacts of duplicates. (2) We categorize duplicates at both sequence and annotation level, with supporting quantitative statistics, showing that different organisms have different prevalence of distinct kinds of duplicate. (3) We show that the presence of duplicates has practical impact via a simple case study on duplicates, in terms of GC content and melting temperature. We demonstrate that duplicates not only introduce redundancy, but can lead to inconsistent results for certain tasks. Our findings lead to a better understanding of the problem of duplication in biological databases.Database URL: the merged records are available at https://cloudstor.aarnet.edu.au/plus/index.php/s/Xef2fvsebBEAv9w. © The Author(s) 2017. Published by Oxford University Press.

  10. Duplicates, redundancies and inconsistencies in the primary nucleotide databases: a descriptive study

    PubMed Central

    Chen, Qingyu; Zobel, Justin; Verspoor, Karin

    2017-01-01

    GenBank, the EMBL European Nucleotide Archive and the DNA DataBank of Japan, known collectively as the International Nucleotide Sequence Database Collaboration or INSDC, are the three most significant nucleotide sequence databases. Their records are derived from laboratory work undertaken by different individuals, by different teams, with a range of technologies and assumptions and over a period of decades. As a consequence, they contain a great many duplicates, redundancies and inconsistencies, but neither the prevalence nor the characteristics of various types of duplicates have been rigorously assessed. Existing duplicate detection methods in bioinformatics only address specific duplicate types, with inconsistent assumptions; and the impact of duplicates in bioinformatics databases has not been carefully assessed, making it difficult to judge the value of such methods. Our goal is to assess the scale, kinds and impact of duplicates in bioinformatics databases, through a retrospective analysis of merged groups in INSDC databases. Our outcomes are threefold: (1) We analyse a benchmark dataset consisting of duplicates manually identified in INSDC—a dataset of 67 888 merged groups with 111 823 duplicate pairs across 21 organisms from INSDC databases – in terms of the prevalence, types and impacts of duplicates. (2) We categorize duplicates at both sequence and annotation level, with supporting quantitative statistics, showing that different organisms have different prevalence of distinct kinds of duplicate. (3) We show that the presence of duplicates has practical impact via a simple case study on duplicates, in terms of GC content and melting temperature. We demonstrate that duplicates not only introduce redundancy, but can lead to inconsistent results for certain tasks. Our findings lead to a better understanding of the problem of duplication in biological databases. Database URL: the merged records are available at https://cloudstor.aarnet.edu.au/plus/index.php/s/Xef2fvsebBEAv9w PMID:28077566

  11. A continuum of periodic solutions to the planar four-body problem with two pairs of equal masses

    NASA Astrophysics Data System (ADS)

    Ouyang, Tiancheng; Xie, Zhifu

    2018-04-01

    In this paper, we apply the variational method with Structural Prescribed Boundary Conditions (SPBC) to prove the existence of periodic and quasi-periodic solutions for the planar four-body problem with two pairs of equal masses m1 = m3 and m2 = m4. A path q(t) on [0, T] satisfies the SPBC if the boundaries q(0) ∈ A and q(T) ∈ B, where A and B are two structural configuration spaces in (R²)⁴ that depend on a rotation angle θ ∈ (0, 2π) and the mass ratio μ = m2/m1 ∈ R⁺. We show that there is a region Ω ⊆ (0, 2π) × R⁺ such that there exists at least one local minimizer of the Lagrangian action functional on the path space satisfying the SPBC {q(t) ∈ H¹([0, T], (R²)⁴) | q(0) ∈ A, q(T) ∈ B} for any (θ, μ) ∈ Ω. The corresponding minimizing path of the minimizer can be extended to a non-homographic periodic solution if θ is commensurable with π, or a quasi-periodic solution if θ is not commensurable with π. In the variational method with the SPBC, we only impose constraints on the boundary and we do not impose any symmetry constraint on solutions. Instead, we prove that our solutions that are extended from the initial minimizing paths possess certain symmetries. The periodic solutions can be further classified as simple choreographic solutions, double choreographic solutions and non-choreographic solutions. Among the many stable simple choreographic orbits, the most extraordinary one is the stable star pentagon choreographic solution when (θ, μ) = (4π/5, 1). Remarkably, the unequal-mass variants of the stable star pentagon are just as stable as the equal-mass choreographies.

  12. Path to Market for Compact Modular Fusion Power Cores

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Baerny, Jennifer K.; Mattor, Nathan; Stoulil, Don; Miller, Ronald; Marston, Theodore

    2012-08-01

    The benefits of an energy source whose reactants are plentiful and whose products are benign are hard to measure, but at no time in history has this energy source been more needed. Nuclear fusion continues to promise to be this energy source. However, the path to market for fusion systems is still regularly a matter for long-term (20+ year) plans. This white paper is intended to stimulate discussion of faster commercialization paths, distilling guidance from investors, utilities, and the wider energy research community (including from ARPA-E). There is great interest in a small modular fusion system that can be developed quickly and inexpensively. A simple model shows how compact modular fusion can produce a low-cost development path by optimizing traditional systems that burn deuterium and tritium, operating not only at high magnetic field strength but also by omitting some components, which allows the core to become more compact and easier to maintain. The dominant hurdles to the development of low-cost, practical fusion systems are discussed, primarily in terms of the constraints placed on the cost of development stages in the private sector. The main finding presented here is that the bridge from the DOE Office of Science to the energy market can come at the Proof of Principle development stage, provided the concept is sufficiently compact and inexpensive that its development allows for a normal technology commercialization path.

  13. Decreased Movement Path Tortuosity Is Associated With Improved Functional Status in Patients With Traumatic Brain Injury.

    PubMed

    Kearns, William D; Scott, Steven; Fozard, James L; Dillahunt-Aspillaga, Christina; Jasiewicz, Jan M

    2016-01-01

    To determine if movement path tortuosity in everyday ambulation decreases in Veterans being treated in a residential setting for traumatic brain injury. Elevated path tortuosity is observed in assisted living facility residents with cognitive impairment and at risk for falls, and tortuosity may decrease over the course of cognitive rehabilitation received by the Veterans. If observed, decreased tortuosity may be linked to improved clinical outcomes. Longitudinal observational study without random assignment. Veterans Affairs Medical Center inpatient residential polytrauma treatment facility. Twenty-two Veterans enrolled in a postacute predischarge residential polytrauma treatment facility. None, observation-only. Mayo-Portland Adaptability Index-4, and movement path tortuosity measured by Fractal Dimension (Fractal D). Fractal D was obtained continuously from an indoor movement tracking system primarily used to provide machine-generated prompts and reminders to facilitate activities of daily living. Patients were deemed "responders" (N = 10) if a significant linear decline in Fractal D occurred over the course of treatment, or nonresponders (N = 12) if no significant decline was observed. Responders had lower discharge Mayo-Portland Adaptability Inventory scores (mean = 32.6, SD = 9.53) than non-responders (mean = 39.5, SD = 6.02) (F = 2.07, df = 20, P = .05). Responders and nonresponders did not differ on initial injury severity or other demographic measures. Fractal D, a relatively simple measure of movement path tortuosity can be linked to functional recovery from traumatic brain injury.
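    To make the tortuosity idea concrete, the sketch below computes one common path fractal-dimension estimator (Katz 1988): values near 1 indicate straight travel and higher values indicate more tortuous paths. It is offered only as an illustration and is not necessarily the Fractal D computation used in the study.

        import numpy as np

        def katz_fractal_dimension(xy):
            """Katz fractal dimension D = log(n) / (log(n) + log(d / L)) of a 2-D path."""
            xy = np.asarray(xy, dtype=float)
            steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
            L = steps.sum()                                   # total path length
            d = np.linalg.norm(xy - xy[0], axis=1).max()      # maximum excursion from start
            n = len(steps)
            return np.log10(n) / (np.log10(n) + np.log10(d / L))

        t = np.linspace(0.0, 10.0, 200)
        straight = np.column_stack([t, np.zeros_like(t)])
        wiggly = np.column_stack([t, np.sin(3.0 * t)])
        print(katz_fractal_dimension(straight), katz_fractal_dimension(wiggly))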

  14. Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.

    NASA Astrophysics Data System (ADS)

    Hawary, A. F.; Razak, N. A.

    2018-05-01

    Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human operator suffers from a limited line of sight and is prone to errors due to carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities that enable it to navigate along a pre-planned route in real time. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute-force search method to re-optimize the route in the event of collisions detected by a range-finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path-planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision-detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers by manually emulating the motion of an actual UAV model prior to testing at the flying site. The results showed that the range-finder sensor provides real-time data to the algorithm to find a collision-free path and eventually optimize the route successfully.
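    As an illustration of the re-planning step, the sketch below orders the remaining waypoints with a greedy nearest-neighbour rule from the vehicle's current position; the waypoints are placeholders and the code is not the authors' implementation.

        import numpy as np

        def nearest_neighbour_order(current_pos, waypoints):
            """Greedy nearest-neighbour ordering of the remaining waypoints."""
            remaining = list(range(len(waypoints)))
            order, pos = [], np.asarray(current_pos, dtype=float)
            while remaining:
                dists = [np.linalg.norm(waypoints[i] - pos) for i in remaining]
                nxt = remaining.pop(int(np.argmin(dists)))
                order.append(nxt)
                pos = waypoints[nxt]
            return order

        waypoints = np.array([[5.0, 1.0], [1.0, 4.0], [6.0, 6.0], [2.0, 1.0]])
        print(nearest_neighbour_order([0.0, 0.0], waypoints))   # [3, 0, 1, 2]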

  15. Accelerating Sequential Gaussian Simulation with a constant path

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
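    A minimal 1-D sketch of the constant-path idea: because the visiting order and neighbourhoods are fixed, the simple-kriging weights are solved once and then reused for every realization. The covariance model, grid and neighbourhood size are arbitrary choices for illustration, not the published multi-grid, parallel implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def cov(h, sill=1.0, corr_len=10.0):
            # Exponential covariance model C(h) = sill * exp(-|h| / corr_len).
            return sill * np.exp(-np.abs(h) / corr_len)

        def sgs_constant_path(x, n_real=4, n_neigh=8):
            n = len(x)
            path = rng.permutation(n)                   # one random path, kept constant
            plans, visited = [], []
            for node in path:                           # kriging systems solved once
                neigh = visited[-n_neigh:]
                if neigh:
                    xn = x[neigh]
                    C = cov(np.subtract.outer(xn, xn))  # data-to-data covariances
                    c0 = cov(xn - x[node])              # data-to-target covariances
                    w = np.linalg.solve(C, c0)          # simple-kriging weights
                    std = np.sqrt(max(cov(0.0) - w @ c0, 0.0))
                else:
                    w, std = np.array([]), np.sqrt(cov(0.0))
                plans.append((node, list(neigh), w, std))
                visited.append(node)
            fields = np.zeros((n_real, n))
            for r in range(n_real):                     # weights re-used per realization
                for node, neigh, w, std in plans:
                    mean = w @ fields[r, neigh] if neigh else 0.0
                    fields[r, node] = mean + std * rng.standard_normal()
            return fields

        print(sgs_constant_path(np.arange(50.0)).shape)   # (4, 50)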

  16. Weak-noise limit of a piecewise-smooth stochastic differential equation.

    PubMed

    Chen, Yaming; Baule, Adrian; Touchette, Hugo; Just, Wolfram

    2013-11-01

    We investigate the validity and accuracy of weak-noise (saddle-point or instanton) approximations for piecewise-smooth stochastic differential equations (SDEs), taking as an illustrative example a piecewise-constant SDE, which serves as a simple model of Brownian motion with solid friction. For this model, we show that the weak-noise approximation of the path integral correctly reproduces the known propagator of the SDE at lowest order in the noise power, as well as the main features of the exact propagator with higher-order corrections, provided the singularity of the path integral associated with the nonsmooth SDE is treated with some heuristics. We also show that, as in the case of smooth SDEs, the deterministic paths of the noiseless system correctly describe the behavior of the nonsmooth SDE in the low-noise limit. Finally, we consider a smooth regularization of the piecewise-constant SDE and study to what extent this regularization can rectify some of the problems encountered when dealing with discontinuous drifts and singularities in SDEs.
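    Schematically, and only as a reminder of the standard weak-noise form rather than the paper's exact expressions, the propagator of an SDE dx = a(x) dt + √(2ε) dW with a piecewise-constant drift such as a(x) = -μ sgn(x) (a simple model of solid friction) is approximated at small ε by

        P(x_f, t_f \mid x_i, t_i) \;\asymp\; \exp\!\left(-\frac{S[x^{*}]}{\varepsilon}\right),
        \qquad
        S[x] \;=\; \frac{1}{4}\int_{t_i}^{t_f}\bigl(\dot{x} - a(x)\bigr)^{2}\,\mathrm{d}t,

    where x* is the minimizing (instanton) path; the non-smoothness of sgn(x) at x = 0 is what makes the singularity of the path integral require the heuristic treatment mentioned above.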

  17. Adapting End Host Congestion Control for Mobility

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Swami, Yogesh P.

    2005-01-01

    Network layer mobility allows transport protocols to maintain connection state despite changes in a node's physical location and point of network connectivity. However, some congestion-controlled transport protocols are not designed to deal with these rapid and potentially significant path changes. In this paper we demonstrate several distinct problems that mobility-induced path changes can create for TCP performance. Our premise is that mobility events indicate path changes that require re-initialization of congestion control state at both connection end points. We present the application of this idea to TCP in the form of a simple solution (the Lightweight Mobility Detection and Response algorithm, which has been proposed in the IETF), and examine its effectiveness. In general, we find that the deficiencies presented are both relatively easily and painlessly fixed using this solution. We also find that this solution has the counter-intuitive property of being both more friendly to competing traffic and simultaneously more aggressive in utilizing newly available capacity than unmodified TCP.

  18. Effect of wet tropospheric path delays on estimation of geodetic baselines in the Gulf of California using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Tralli, David M.; Dixon, Timothy H.; Stephens, Scott A.

    1988-01-01

    Surface Meteorological (SM) and Water Vapor Radiometer (WVR) measurements are used to provide an independent means of calibrating the GPS signal for the wet tropospheric path delay in a study of geodetic baseline measurements in the Gulf of California using GPS in which high tropospheric water vapor content yielded wet path delays in excess of 20 cm at zenith. Residual wet delays at zenith are estimated as constants and as first-order exponentially correlated stochastic processes. Calibration with WVR data is found to yield the best repeatabilities, with improved results possible if combined carrier phase and pseudorange data are used. Although SM measurements can introduce significant errors in baseline solutions if used with a simple atmospheric model and estimation of residual zenith delays as constants, SM calibration and stochastic estimation for residual zenith wet delays may be adequate for precise estimation of GPS baselines. For dry locations, WVRs may not be required to accurately model tropospheric effects on GPS baselines.

  19. Assessment of Masonry Buildings Subjected to Landslide-Induced Settlements: From Load Path Method to Evolutionary Optimization Method

    NASA Astrophysics Data System (ADS)

    Palmisano, Fabrizio; Elia, Angelo

    2017-10-01

    One of the main difficulties when dealing with landslide structural vulnerability is the diagnosis of the causes of crack patterns. This is also due to the excessive complexity of models based on classical structural mechanics, which makes them inappropriate especially when a rapid vulnerability assessment at the territorial scale is necessary. This is why a new approach, based on a 'simple model' (i.e. the Load Path Method, LPM), has been proposed by Palmisano and Elia for the interpretation of the behaviour of masonry buildings subjected to landslide-induced settlements. However, the LPM is very useful for rapidly finding the 'most plausible solution' instead of the exact solution. To find the solution, optimization algorithms are necessary. In this scenario, this article aims to show how the Bidirectional Evolutionary Structural Optimization method by Huang and Xie can be very useful to optimize the strut-and-tie models obtained by using the Load Path Method.

  20. Overestimation of Mach number due to probe shadow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosselin, J. J.; Thakur, S. C.; Tynan, G. R.

    2016-07-15

    Comparisons of the plasma ion flow speed measurements from Mach probes and laser induced fluorescence were performed in the Controlled Shear Decorrelation Experiment. We show that the presence of the probe causes a low density geometric shadow downstream of the probe that affects the current density collected by the probe in collisional plasmas if the ion-neutral mean free path is shorter than the probe shadow length, L_g = w² V_drift/D_⊥, resulting in erroneous Mach numbers. We then present a simple correction term that provides the corrected Mach number from probe data when the sound speed, ion-neutral mean free path, and perpendicular diffusion coefficient of the plasma are known. The probe shadow effect must be taken into account whenever the ion-neutral mean free path is on the order of the probe shadow length in linear devices and the open-field line region of fusion devices.
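    The probe-shadow length quoted above, L_g = w² V_drift/D_⊥, can be compared directly against the ion-neutral mean free path; the sketch below does this with purely hypothetical numbers.

        def probe_shadow_length(probe_width_m, v_drift_ms, d_perp_m2s):
            """Probe shadow length L_g = w**2 * V_drift / D_perp (formula from the abstract)."""
            return probe_width_m**2 * v_drift_ms / d_perp_m2s

        L_g = probe_shadow_length(probe_width_m=3e-3, v_drift_ms=500.0, d_perp_m2s=1.0)
        mfp_in = 2e-3                      # assumed ion-neutral mean free path (m)
        needs_correction = mfp_in < L_g    # shadow matters when the mean free path is shorter
        print(L_g, needs_correction)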

  1. CFO compensation method using optical feedback path for coherent optical OFDM system

    NASA Astrophysics Data System (ADS)

    Moon, Sang-Rok; Hwang, In-Ki; Kang, Hun-Sik; Chang, Sun Hyok; Lee, Seung-Woo; Lee, Joon Ki

    2017-07-01

    We investigate the feasibility of a carrier frequency offset (CFO) compensation method using an optical feedback path for a coherent optical orthogonal frequency division multiplexing (CO-OFDM) system. Recently proposed CFO compensation algorithms provide a wide CFO estimation range in the electrical domain. However, their practical compensation range is limited by the sampling rate of the analog-to-digital converter (ADC). This limitation has not drawn attention, since the ADC sampling rate was high enough compared to the data bandwidth and CFO in wireless OFDM systems. For CO-OFDM, the limitation is becoming visible because of increased data bandwidth, laser instability (i.e. large CFO) and insufficient ADC sampling rate owing to high cost. To solve the problem and extend the practical CFO compensation range, we propose a CFO compensation method with an optical feedback path. By adding simple wavelength control for the local oscillator, the practical CFO compensation range can be extended to the sampling frequency range. The feasibility of the proposed method is experimentally investigated.

  2. The product form for path integrals on curved manifolds

    NASA Astrophysics Data System (ADS)

    Grosche, C.

    1988-03-01

    A general and simple framework for treating path integrals on curved manifolds is presented. The crucial point will be a product ansatz for the metric tensor and the quantum hamiltonian, i.e. we shall write g_αβ = h_αγ h_βγ and H = (1/2m) h_αγ p_α p_β h_βγ + V + ΔV, respectively, a prescription which we shall call the "product form" definition. The p_α are Hermitian momenta and ΔV is a well-defined quantum correction. We shall show that this ansatz, which looks quite special, is in fact - under reasonable assumptions in quantum mechanics - a very general one. We shall derive the lagrangian path integral in the "product form" definition and shall also prove that the Schrödinger equation can be derived from the corresponding short-time kernel. We shall discuss briefly an application of this prescription to the problem of free quantum motion on the Poincaré upper half-plane.

  3. CMOS detectors: lessons learned during the STC stereo channel preflight calibration

    NASA Astrophysics Data System (ADS)

    Simioni, E.; De Sio, A.; Da Deppo, V.; Naletto, G.; Cremonese, G.

    2017-09-01

    The Stereo Camera (STC), mounted on board the BepiColombo spacecraft, will acquire in push-frame stereo mode the entire surface of Mercury. STC will provide the images for the global three-dimensional reconstruction of the surface of the innermost planet of the Solar System. The launch of BepiColombo is foreseen in 2018. STC has an innovative optical system configuration, which allows good optical performance with a mass and volume reduction of a factor of two with respect to the classical stereo camera approach. In such a telescope, two different optical paths, inclined at +/-20° with respect to the nadir direction, are merged together into a unique off-axis path and focused on a single detector. The focal plane is equipped with a 2k x 2k hybrid Si-PIN detector, based on CMOS technology, combining low read-out noise, high radiation hardness, compactness, lack of parasitic light, capability of snapshot image acquisition, short exposure times (less than 1 ms) and small pixel size (10 μm). During the preflight calibration campaign of STC, some spurious detector effects were noticed. Analyzing the images taken during the calibration phase, two different signals affecting the background level were measured. These signals can reduce the detector dynamic range to as little as a quarter of its nominal value, and they are not due to dark current, stray light or similar effects. In this work we describe all the features of these unwanted effects and the calibration procedures we developed to analyze them.

  4. Evaluation of the Terminal Sequencing and Spacing System for Performance Based Navigation Arrivals

    NASA Technical Reports Server (NTRS)

    Thipphavong, Jane; Jung, Jaewoo; Swenson, Harry N.; Martin, Lynne; Lin, Melody; Nguyen, Jimmy

    2013-01-01

    NASA has developed the Terminal Sequencing and Spacing (TSS) system, a suite of advanced arrival management technologies combining time-based scheduling and controller precision spacing tools. TSS is a ground-based controller automation tool that facilitates sequencing and merging arrivals that have both current standard ATC routes and terminal Performance-Based Navigation (PBN) routes, especially during highly congested demand periods. In collaboration with the FAA and MITRE's Center for Advanced Aviation System Development (CAASD), TSS system performance was evaluated in human-in-the-loop (HITL) simulations with currently active controllers as participants. Traffic scenarios had mixed Area Navigation (RNAV) and Required Navigation Performance (RNP) equipage, where the more advanced RNP-equipped aircraft had preferential treatment with a shorter approach option. Simulation results indicate the TSS system achieved benefits by enabling PBN while maintaining high throughput rates, 10% above baseline demand levels. Flight path predictability improved: path deviation was reduced by 2 NM on average and variance in the downwind leg length was 75% less. Arrivals flew more fuel-efficient descents for longer, spending an average of 39 seconds less in step-down level-altitude segments. Self-reported controller workload was reduced, with statistically significant differences at the p < 0.01 level. The RNP-equipped arrivals were also able to more frequently capitalize on the benefits of being "Best-Equipped, Best-Served" (BEBS), where less vectoring was needed and nearly all RNP approaches were conducted without interruption.

  5. Path and site effects deduced from merged transfrontier internet macroseismic data of two recent M4 earthquakes in northwest Europe using a grid cell approach

    NASA Astrophysics Data System (ADS)

    Van Noten, Koen; Lecocq, Thomas; Sira, Christophe; Hinzen, Klaus-G.; Camelbeeck, Thierry

    2017-04-01

    The online collection of earthquake reports in Europe is strongly fragmented across numerous seismological agencies. This paper demonstrates how collecting and merging online institutional macroseismic data strongly improves the density of observations and the quality of intensity shaking maps. Instead of using ZIP code Community Internet Intensity Maps, we geocode individual response addresses for location improvement, assign intensities to grouped answers within 100 km2 grid cells, and generate intensity attenuation relations from the grid cell intensities. Grid cell intensity maps are less subjective and illustrate a more homogeneous intensity distribution than communal ZIP code intensity maps. Using grid cells for ground motion analysis offers an advanced method for exchanging transfrontier equal-area intensity data without sharing any personal information. The applicability of the method is demonstrated on the felt responses of two clearly felt earthquakes: the 8 September 2011 ML 4.3 (Mw 3.7) Goch (Germany) and the 22 May 2015 ML 4.2 (Mw 3.7) Ramsgate (UK) earthquakes. Both events resulted in a non-circular distribution of intensities which is not explained by geometrical amplitude attenuation alone but illustrates an important low-pass filtering due to the sedimentary cover above the Anglo-Brabant Massif and in the Lower Rhine Graben. Our study illustrates the effect of increasing bedrock depth on intensity attenuation and the importance of the WNW-ESE Caledonian structural axis of the Anglo-Brabant Massif for seismic wave propagation. Seismic waves are less attenuated - high Q - along the strike of a tectonic structure but are more strongly attenuated - low Q - perpendicular to this structure, particularly when they cross rheologically different seismotectonic units separated by crustal-rooted faults.
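    The grid-cell step can be sketched as a simple spatial binning of geocoded felt reports into 10 km x 10 km (100 km²) cells, each cell receiving the mean of the intensities it contains; the coordinates and values below are hypothetical, and the code is not the institutes' processing chain.

        import numpy as np
        from collections import defaultdict

        def grid_cell_intensities(x_km, y_km, intensity, cell_km=10.0):
            """Mean felt intensity per equal-area grid cell (projected coordinates in km)."""
            cells = defaultdict(list)
            for x, y, i in zip(x_km, y_km, intensity):
                cells[(int(np.floor(x / cell_km)), int(np.floor(y / cell_km)))].append(i)
            return {c: float(np.mean(v)) for c, v in cells.items()}

        x = np.array([3.2, 4.8, 15.1, 16.0])
        y = np.array([7.5, 8.1, 22.3, 28.9])
        ints = np.array([4.0, 5.0, 3.0, 2.0])
        print(grid_cell_intensities(x, y, ints))   # {(0, 0): 4.5, (1, 2): 2.5}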

  6. Effect of digital template in the assistant of a giant condylar osteochondroma resection.

    PubMed

    Bai, Guo; He, Dongmei; Yang, Chi; Lu, Chuan; Huang, Dong; Chen, Minjie; Yuan, Jianbing

    2014-05-01

    An exostosis osteochondroma is usually resected together with the whole condyle, even if part of the condyle is not involved. This study reports the effect of using a digital template to assist resection while protecting the uninvolved condyle. We used a computer-aided design technique to assist in making the preoperative plan for a patient with a giant condylar osteochondroma of the exogenous type, including determining the boundary between the tumor and the articular surface of the condyle, and designing the virtual tumor resection plane, the surgical approach, and the removal path of the tumor. The digital osteotomy template was made by the rapid prototyping technique based on the preoperative plan. A postoperative CT scan was performed and merged with the preoperative CT by the Proplan 1.3 system to evaluate the accuracy of surgical resection guided by the digital template. The osteotomy template was attached accurately to the lateral surface of the condyle, and the tumor was removed totally under the guidance of the template without injury to adjacent nerves and vessels. Postoperative CT showed that the osteochondroma was removed completely and the unaffected articular surface of the condyle was preserved well. The merging of the postoperative and preoperative CT by the Proplan 1.3 system showed that the outcome of the operation matched the preoperative plan well, with an error of 0.92 mm. There was no sign of recurrence after 6 months of follow-up. The application of a digital template can improve the accuracy of giant condylar tumor resection and help to preserve the uninvolved condyle. The use of a digital template can also reduce injuries to the nerves and vessels as well as save operating time.

  7. Radiation hydrodynamics of super star cluster formation

    NASA Astrophysics Data System (ADS)

    Tsang, Benny Tsz Ho; Milos Milosavljevic

    2018-01-01

    Throughout the history of the Universe, the nuclei of super star clusters represent the most active sites of star formation. The high densities of massive stars within the clusters produce intense radiation that imparts both energy and momentum to the surrounding star-forming gas. Theoretical arguments based on idealized geometries have claimed a dominant role for radiation pressure in controlling the star formation activity within the clusters. For cluster formation simulations to be reliable, numerical schemes have to model accurately the radiation flows through the gas clumps with high density contrasts at the cluster nuclei. With a hybrid Monte Carlo radiation transport module we developed, we performed 3D radiation hydrodynamical simulations of super star cluster formation in turbulent clouds. Furthermore, our Monte Carlo radiation treatment provides a native capability to produce synthetic observations, which allows us to predict observational indicators and to inform future observations. We found that radiation pressure has definite but minor effects on limiting the gas supply for star formation, and the final mass of the most massive cluster is about one million solar masses. The ineffective forcing is due to the density variations inside the clusters, i.e. radiation takes paths of low density and avoids forcing on dense clumps. Compared to a radiation-free control run, we further found that the presence of radiation amplifies the density variations. The core of the resulting cluster has a high stellar density, about the threshold required for stellar collisions and merging. The very massive star that forms from the stellar merging could continue to gain mass from the surrounding gas reservoir that is gravitationally confined by the deep potential of the cluster, seeding the potential formation of a massive black hole.

  8. Optimal cue integration in ants.

    PubMed

    Wystrach, Antoine; Mangan, Michael; Webb, Barbara

    2015-10-07

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy. © 2015 The Author(s).
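    The optimal-integration prediction tested above can be sketched as an inverse-variance weighted combination of the two directional cues, applied to unit vectors so that the result remains a valid heading; the variances are hypothetical and the code only illustrates the prediction, not the authors' model.

        import numpy as np

        def combine_directional_cues(theta_pi, var_pi, theta_vis, var_vis):
            """Inverse-variance weighted combination of two directional estimates (radians)."""
            w_pi = (1.0 / var_pi) / (1.0 / var_pi + 1.0 / var_vis)
            v = (w_pi * np.array([np.cos(theta_pi), np.sin(theta_pi)])
                 + (1.0 - w_pi) * np.array([np.cos(theta_vis), np.sin(theta_vis)]))
            return np.arctan2(v[1], v[0]), w_pi

        # A longer home vector implies lower PI directional variance, hence a larger PI weight.
        for var_pi in (0.6, 0.2, 0.05):
            heading, w = combine_directional_cues(np.radians(30.0), var_pi, np.radians(-20.0), 0.2)
            print(round(w, 2), round(np.degrees(heading), 1))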

  9. [Registration technology for mandibular angle osteotomy based on augmented reality].

    PubMed

    Zhu, Ming; Chai, Gang; Zhang, Yan; Ma, Xiao-Fei; Yu, Zhe-Yuan; Zhu, Yi-Jia

    2010-12-01

    To establish an effective path to register the operative plan to the real model of the mandible made by rapid prototyping (RP) technology. Computerized tomography (CT) was performed on 20 patients to create 3D images, and computer-aided operation-planning information was merged with the 3D images. A dental cast was then used to fix the signal that could be recognized by the software. The dental cast was converted to 3D data with a laser scanner and a program named Rapidform running on a personal computer, which matched the dental cast and the mandible image to generate the virtual image. The registration was then achieved by a video monitoring system. By using this technology, the virtual image of the mandible and the cutting planes can both be overlaid on the real model of the mandible made by RP. This study found an effective way to perform registration by using a dental cast, and this approach might be a powerful option for registration in augmented reality. Supported by the Program for Innovation Research Team of the Shanghai Municipal Education Commission.

  10. LEDA 074886: A Remarkable Rectangular-looking Galaxy

    NASA Astrophysics Data System (ADS)

    Graham, Alister W.; Spitler, Lee R.; Forbes, Duncan A.; Lisker, Thorsten; Moore, Ben; Janz, Joachim

    2012-05-01

    We report the discovery of an interesting and rare rectangular-shaped galaxy. At a distance of 21 Mpc, the dwarf galaxy LEDA 074886 has an absolute R-band magnitude of -17.3 mag. Adding to this galaxy's intrigue is the presence of an embedded, edge-on stellar disk (of extent 2R_e,disk = 12'' = 1.2 kpc) for which Forbes et al. reported v_rot/σ ≈ 1.4. We speculate that this galaxy may be the remnant of two (nearly edge-on) merged disk galaxies in which the initial gas was driven inward and subsequently formed the inner disk, while the stars at larger radii effectively experienced a dissipationless merger event resulting in this "emerald cut galaxy" having very boxy isophotes with a4/a = -0.05 to -0.08 from 3 to 5 kpc. This galaxy suggests that knowledge from simulations of both "wet" and "dry" galaxy mergers may need to be combined to properly understand the various paths that galaxy evolution can take, with a particular relevance to blue elliptical galaxies.

  11. Path-programmable water droplet manipulations on an adhesion controlled superhydrophobic surface

    PubMed Central

    Seo, Jungmok; Lee, Seoung-Ki; Lee, Jaehong; Seung Lee, Jung; Kwon, Hyukho; Cho, Seung-Woo; Ahn, Jong-Hyun; Lee, Taeyoon

    2015-01-01

    Here, we developed a novel and facile method to control the local water adhesion force of a thin and stretchable superhydrophobic polydimethylsiloxane (PDMS) substrate with micro-pillar arrays that allows the individual manipulation of droplet motions including moving, merging and mixing. When a vacuum pressure was applied below the PDMS substrate, a local dimple structure was formed and the water adhesion force of the structure was significantly changed owing to the dynamically varied pillar density. With the help of the lowered water adhesion force and the slope angle of the formed dimple structure, the motion of individual water droplets could be precisely controlled, which facilitated the creation of a droplet-based microfluidic platform capable of a programmable manipulation of droplets. We showed that the platform could be used in newer and emerging microfluidic operations such as surface-enhanced Raman spectroscopy with extremely high sensing capability (10⁻¹⁵ M) and in vitro small interfering RNA transfection with enhanced transfection efficiency of ~80%. PMID:26202206

  12. Integration and the performance of healthcare networks: do integration strategies enhance efficiency, profitability, and image?

    PubMed Central

    Wan, Thomas T.H.; Ma, Allen; Y.J.Lin, Blossom

    2001-01-01

    Abstract Purpose This study examines the integration effects on efficiency and financial viability of the top 100 integrated healthcare networks (IHNs) in the United States. Theory A contingency-strategic theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. Methods The lists of the top 100 IHNs ranked in two years, 1998 and 1999, by the SMG Marketing Group were merged to create a database for the study. Multiple indicators were used to examine the relationship between IHNs' characteristics and their performance in efficiency and financial viability. A path analytical model was developed and validated with the Mplus statistical program. Factors influencing the top 100 IHNs' images, represented by attaining a ranking among the top 100 in two consecutive years, were analysed. Results and conclusion No positive associations were found between integration and network performance in efficiency or profits. Longitudinal data are needed to investigate the effect of integration on healthcare networks' financial performance. PMID:16896405

  13. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
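    For orientation only, the ideal-gas special case with constant heat-capacity ratio reduces the polytropic efficiency to a closed formula, η_p = ((κ-1)/κ)·ln(p2/p1)/ln(T2/T1). This textbook limit is sketched below with a hypothetical operating point; it is not the rigorous real-gas procedure of the paper.

        import math

        def polytropic_efficiency_ideal_gas(p1, t1, p2, t2, kappa=1.4):
            """Polytropic efficiency from suction/discharge states, ideal-gas special case."""
            return (kappa - 1.0) / kappa * math.log(p2 / p1) / math.log(t2 / t1)

        # Hypothetical air compression: 1 bar / 293 K at suction, 4 bar / 460 K at discharge.
        print(round(polytropic_efficiency_ideal_gas(1e5, 293.0, 4e5, 460.0), 3))   # ~0.88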

  14. Smisc - A collection of miscellaneous functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landon Sego, PNNL

    2015-08-31

    A collection of functions for statistical computing and data manipulation. These include routines for rapidly aggregating heterogeneous matrices, manipulating file names, loading R objects, sourcing multiple R files, formatting datetimes, multi-core parallel computing, stream editing, specialized plotting, etc. Smisc-package A collection of miscellaneous functions allMissing Identifies missing rows or columns in a data frame or matrix as.numericSilent Silent wrapper for coercing a vector to numeric comboList Produces all possible combinations of a set of linear model predictors cumMax Computes the maximum of the vector up to the current index cumsumNA Computes the cumulative sum of a vector without propagating NAs d2binom Probability functions for the sum of two independent binomials dataIn A flexible way to import data into R. dbb The Beta-Binomial Distribution df2list Row-wise conversion of a data frame to a list dfplapply Parallelized single row processing of a data frame dframeEquiv Examines the equivalence of two dataframes or matrices dkbinom Probability functions for the sum of k independent binomials factor2character Converts all factor variables in a dataframe to character variables findDepMat Identify linearly dependent rows or columns in a matrix formatDT Converts date or datetime strings into alternate formats getExtension Filename manipulations: remove the extension or path, extract the extension or path getPath Filename manipulations: remove the extension or path, extract the extension or path grabLast Filename manipulations: remove the extension or path, extract the extension or path ifelse1 Non-vectorized version of ifelse integ Simple numerical integration routine interactionPlot Two-way Interaction Plot with Error Bar linearMap Linear mapping of a numerical vector or scalar list2df Convert a list to a data frame loadObject Loads and returns the object(s) in an ".Rdata" file more Display the contents of a file to the R terminal movAvg2 Calculate the moving average using a 2-sided window openDevice Opens a graphics device based on the filename extension p2binom Probability functions for the sum of two independent binomials padZero Pad a vector of numbers with zeros parseJob Parses a collection of elements into (almost) equal sized groups pbb The Beta-Binomial Distribution pcbinom A continuous version of the binomial cdf pkbinom Probability functions for the sum of k independent binomials plapply Simple parallelization of lapply plotFun Plot one or more functions on a single plot PowerData An example of power data pvar Prints the name and value of one or more objects qbb The Beta-Binomial Distribution rbb And numerous others (space limits reporting).

  15. The Spectral Web of stationary plasma equilibria. II. Internal modes

    NASA Astrophysics Data System (ADS)

    Goedbloed, J. P.

    2018-03-01

    The new method of the Spectral Web to calculate the spectrum of waves and instabilities of plasma equilibria with sizeable flows, developed in the preceding Paper I [Goedbloed, Phys. Plasmas 25, 032109 (2018)], is applied to a collection of classical magnetohydrodynamic instabilities operating in cylindrical plasmas with shear flow or rotation. After a review of the basic concepts of the complementary energy giving the solution path and the conjugate path, which together constitute the Spectral Web, the cylindrical model is presented and the spectral equations are derived. The first example concerns the internal kink instabilities of a cylindrical force-free magnetic field of constant α subjected to a parabolic shear flow profile. The old stability diagram and the associated growth rate calculations for static equilibria are replaced by a new intricate stability diagram and associated complex growth rates for the stationary model. The power of the Spectral Web method is demonstrated by showing that the two associated paths in the complex ω-plane nearly automatically guide one to the new class of global Alfvén instabilities of the force-free configuration that would have been very hard to predict by other methods. The second example concerns the Rayleigh-Taylor instability of a rotating theta-pinch. The old literature is revisited and shown to suffer from inconsistencies that are remedied. The most global n = 1 instability and a cluster sequence of more local but much more unstable n = 2, 3, …, ∞ modes are located on separate solution paths in the hydrodynamic (HD) version of the instability, whereas they merge in the MHD version. The Spectral Web offers visual demonstration of the central position of the HD flow continuum and of the MHD Alfvén and slow magneto-sonic continua in the respective spectra by connecting the discrete modes in the complex plane by physically meaningful curves towards the continua. The third example concerns the magneto-rotational instability (MRI) thought to be operating in accretion disks about black holes. The sequence n = 1, 2, … of unstable MRIs is located on one continuous solution path, but also on infinitely many separate loops ("pancakes") of the conjugate path with just one MRI on each of them. For narrow accretion disks, those sequences are connected with the slow magneto-sonic continuum, which is, however, far away from the marginal stability transition. In this case, the Spectral Web method is the first to effectively incorporate the MRIs into the general MHD spectral theory of equilibria with background flows. Together, the three examples provide compelling evidence of the computational power of the Spectral Web Method.

  16. Responsible Student Affairs Practice: Merging Student Development and Quality Management.

    ERIC Educational Resources Information Center

    Whitner, Phillip A.; And Others

    The merging of Total Quality Management (TQM) and Involvement Theory into a managerial philosophy can assist student affairs professionals with an approach for conducting work that improves student affairs practice. When merged or integrated, accountability can easily be obtained because the base philosophies of qualitative research, TQM, and…

  17. Distortion in Two-Dimensional Shapes of Merging Nanobubbles: Evidence for Anisotropic Gas Flow Mechanism.

    PubMed

    Park, Jong Bo; Shin, Dongha; Kang, Sangmin; Cho, Sung-Pyo; Hong, Byung Hee

    2016-11-01

    Two nanobubbles that merge in a graphene liquid cell take elliptical shapes rather than the ideal circular shapes. This phenomenon was investigated in detail by using in situ transmission electron microscopy (TEM). The results show that the distortion in the two-dimensional shapes of the merging nanobubbles is attributed to the anisotropic gas transport flux between the nanobubbles. We also predicted and confirmed the same phenomenon in a three-nanobubble system, indicating that the relative size difference is important in determining the shape of merging nanobubbles.

  18. Online Optimal Control of Connected Vehicles for Efficient Traffic Flow at Merging Roads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios-Torres, Jackeline; Malikopoulos, Andreas; Pisu, Pierluigi

    2015-01-01

    This paper addresses the problem of coordinating connected vehicles online at merging roads to achieve a smooth traffic flow without stop-and-go driving. We present a framework and a closed-form solution that optimize the acceleration profile of each vehicle in terms of fuel economy while avoiding collision with other vehicles at the merging zone. The proposed solution is validated through simulation, and it is shown that coordination of connected vehicles can significantly reduce fuel consumption and travel time at merging roads.
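    The closed-form structure referred to above can be illustrated for a single vehicle: minimizing the integral of u²/2 for a double-integrator model yields a control that is linear in time, with coefficients fixed by the boundary conditions. The sketch below solves that small system for hypothetical values; it reproduces only this generic structure, not the paper's full coordination framework with merging-zone collision constraints.

        import numpy as np

        def linear_accel_profile(v0, vf, dist, tf):
            """Coefficients (a, b) of u(t) = a*t + b reaching `dist` in `tf` s, speeds v0 -> vf."""
            # v(tf) = v0 + a*tf**2/2 + b*tf = vf
            # x(tf) = v0*tf + a*tf**3/6 + b*tf**2/2 = dist
            A = np.array([[tf**2 / 2.0, tf],
                          [tf**3 / 6.0, tf**2 / 2.0]])
            rhs = np.array([vf - v0, dist - v0 * tf])
            return np.linalg.solve(A, rhs)

        a, b = linear_accel_profile(v0=13.0, vf=15.0, dist=200.0, tf=14.0)
        print(a, b)   # acceleration command in m/s^2: u(t) = a*t + b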

  19. Using DOUBLE STAR and CLUSTER Synoptic Observations to Test Global MHD Simulations of the Large-scale Topology of the Dayside Merging Region

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Marchaudon, A.; Bosqued, J.; Escoubet, C. P.; Dunlop, M.; Owen, C. J.; Reme, H.; Balogh, A.; Carr, C.; Fazakerley, A. N.; Cao, J. B.

    2005-12-01

    Synoptic measurements from the DOUBLE STAR and CLUSTER spacecraft offer a unique opportunity to evaluate global models in simulating the complex topology and dynamics of the dayside merging region. We compare observations from the DOUBLE STAR TC-1 and CLUSTER spacecraft on May 8, 2004 with the predictions from a three-dimensional magnetohydrodynamic (MHD) simulation that uses plasma and magnetic field parameters measured upstream of the bow shock by the WIND spacecraft. Results from the global simulation are consistent with the large-scale features observed by CLUSTER and TC-1. We discuss topological changes and plasma flows at the dayside magnetospheric boundary inferred from the simulation results. The simulation shows that the DOUBLE STAR spacecraft passed through the dawn side merging region as the IMF rotated. In particular, the simulation indicates that at times TC-1 was very close to the merging region. In addition, we found that the bifurcation of the merging region in the simulation results is consistent with predictions by the antiparallel merging model. However, because of the draping of the magnetosheath field lines over the magnetopause, the positions and shape of the merging region differ significantly from those predicted by the model.

  20. Tool for Merging Proposals Into DSN Schedules

    NASA Technical Reports Server (NTRS)

    Khanampornpan, Teerapat; Kwok, John; Call, Jared

    2008-01-01

    A Practical Extraction and Reporting Language (Perl) script called merge7da has been developed to facilitate determination, by a project scheduler in NASA's Deep Space Network, of whether a proposal for use of the DSN could create a conflict with the current DSN schedule. Prior to the development of merge7da, there was no way to quickly identify potential schedule conflicts: it was necessary to submit a proposal and wait a day or two for a response from a DSN scheduling facility. By using merge7da to detect and eliminate potential schedule conflicts before submitting a proposal, a project scheduler saves time and gains assurance that the proposal will probably be accepted. merge7da accepts two input files, one of which contains the current DSN schedule and is in a DSN-standard format called '7da'. The other input file contains the proposal and is in another DSN-standard format called 'C1/C2'. merge7da processes the two input files to produce a merged 7da-format output file that represents the DSN schedule as it would be if the proposal were to be adopted. This 7da output file can be loaded into various DSN scheduling software tools now in use.

  1. Merged ozone profiles from four MIPAS processors

    NASA Astrophysics Data System (ADS)

    Laeng, Alexandra; von Clarmann, Thomas; Stiller, Gabriele; Dinelli, Bianca Maria; Dudhia, Anu; Raspollini, Piera; Glatthor, Norbert; Grabowski, Udo; Sofieva, Viktoria; Froidevaux, Lucien; Walker, Kaley A.; Zehner, Claus

    2017-04-01

    The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) was an infrared (IR) limb emission spectrometer on the Envisat platform. Currently, there are four MIPAS ozone data products, including the operational Level-2 ozone product processed at ESA, with the scientific prototype processor being operated at IFAC Florence, and three independent research products developed by the Istituto di Fisica Applicata Nello Carrara (ISAC-CNR)/University of Bologna, Oxford University, and the Karlsruhe Institute of Technology-Institute of Meteorology and Climate Research/Instituto de Astrofísica de Andalucía (KIT-IMK/IAA). Here we present a dataset of ozone vertical profiles obtained by merging ozone retrievals from four independent Level-2 MIPAS processors. We also discuss the advantages and the shortcomings of this merged product. As the four processors retrieve ozone in different parts of the spectra (microwindows), the source measurements can be considered as nearly independent with respect to measurement noise. Hence, the information content of the merged product is greater and the precision is better than those of any parent (source) dataset. The merging is performed on a profile-by-profile basis. Parent ozone profiles are weighted based on the corresponding error covariance matrices; the error correlations between different profile levels are taken into account. The intercorrelations between the processors' errors are evaluated statistically and are used in the merging. The height range of the merged product is 20-55 km, and error covariance matrices are provided as diagnostics. Validation of the merged dataset is performed by comparison with ozone profiles from ACE-FTS (Atmospheric Chemistry Experiment-Fourier Transform Spectrometer) and MLS (Microwave Limb Sounder). Even though the merging is not supposed to remove the biases of the parent datasets, around the ozone volume mixing ratio peak the merged product is found to have a smaller (up to 0.1 ppmv) bias with respect to ACE-FTS than any of the parent datasets. The bias with respect to MLS is of the order of 0.15 ppmv at 20-30 km height and up to 0.45 ppmv at higher altitudes. The agreement of the merged MIPAS dataset with ACE-FTS is better than that with MLS. This is, however, the case for all parent processors as well.
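    The profile-by-profile combination can be sketched as inverse-error-covariance weighting, x̂ = (Σ S_i⁻¹)⁻¹ Σ S_i⁻¹ x_i. Unlike the published merging, the sketch below ignores the correlations between the parent processors' errors, and the profiles and covariances are hypothetical.

        import numpy as np

        def merge_profiles(profiles, covariances):
            """Inverse-error-covariance weighted merge of profiles on a common grid."""
            precisions = [np.linalg.inv(S) for S in covariances]
            S_merged = np.linalg.inv(sum(precisions))
            x_merged = S_merged @ sum(P @ x for P, x in zip(precisions, profiles))
            return x_merged, S_merged

        # Two hypothetical three-level ozone profiles (ppmv) with diagonal error covariances.
        x1, x2 = np.array([5.0, 7.0, 6.0]), np.array([5.4, 6.6, 6.2])
        S1, S2 = np.diag([0.04, 0.09, 0.04]), np.diag([0.09, 0.04, 0.09])
        print(merge_profiles([x1, x2], [S1, S2])[0])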

  2. Digital PCR on a SlipChip.

    PubMed

    Shen, Feng; Du, Wenbin; Kreutz, Jason E; Fok, Alice; Ismagilov, Rustem F

    2010-10-21

    This paper describes a SlipChip to perform digital PCR in a very simple and inexpensive format. The fluidic path for introducing the sample combined with the PCR mixture was formed using elongated wells in the two plates of the SlipChip designed to overlap during sample loading. This fluidic path was broken up by simple slipping of the two plates that removed the overlap among wells and brought each well in contact with a reservoir preloaded with oil to generate 1280 reaction compartments (2.6 nL each) simultaneously. After thermal cycling, end-point fluorescence intensity was used to detect the presence of nucleic acid. Digital PCR on the SlipChip was tested quantitatively by using Staphylococcus aureus genomic DNA. As the concentration of the template DNA in the reaction mixture was diluted, the fraction of positive wells decreased as expected from the statistical analysis. No cross-contamination was observed during the experiments. At the extremes of the dynamic range of digital PCR the standard confidence interval determined using a normal approximation of the binomial distribution is not satisfactory. Therefore, statistical analysis based on the score method was used to establish these confidence intervals. The SlipChip provides a simple strategy to count nucleic acids by using PCR. It may find applications in areas such as single-cell analysis, prenatal diagnostics, and point-of-care diagnostics. The SlipChip would also become valuable for diagnostics in resource-limited areas after integration with isothermal nucleic acid amplification technologies and visual readout.
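
    A minimal sketch of the statistics mentioned above, with illustrative counts: the Wilson (score) confidence interval for the fraction of positive wells, propagated to a mean copies-per-well estimate through Poisson single-molecule statistics.

```python
import math

def wilson_interval(k, n, z=1.96):
    """Score (Wilson) confidence interval for a binomial proportion k/n."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def copies_per_well(p_positive):
    """Poisson single-molecule statistics: mean copies per well, lambda = -ln(1 - p)."""
    return -math.log(1.0 - p_positive)

n_wells, positives = 1280, 64            # illustrative counts, not taken from the paper
lo, hi = wilson_interval(positives, n_wells)
print("positive fraction:", positives / n_wells)
print("lambda (copies/well) point estimate:", copies_per_well(positives / n_wells))
print("lambda 95% CI:", copies_per_well(lo), "-", copies_per_well(hi))
```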

  3. Therapeutic Alliance: A Concept for the Childbearing Season

    PubMed Central

    Doherty, Mary Ellen

    2009-01-01

    This analysis was conducted to describe the concept of therapeutic alliance and its appropriateness for health-care provider-client interactions during the childbearing season. The concept has been defined in other disciplines. A universal definition suggested a merging of efforts directed toward health. A simple and concise definition evolved, which is applicable to the childbearing season as well as to health-care encounters across the life span. This definition states: Therapeutic alliance is a process within a health-care provider-client interaction that is initiated by an identified need for positive client health-care behaviors, whereby both parties work together toward this goal with consideration of the client's current health status and developmental stage within the life span. PMID:20514120

  4. A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lim, Chieng-Fai

    1991-01-01

    The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools such as the SYLON synthesis system (X90), (CM89), (LM90) have been developed based on this method. A parallel implementation is presented of SYLON-XTRANS (XM89) on an eight processor Encore Multimax shared memory multiprocessor. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break large circuits up and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.

  5. Entropy reduction via simplified image contourization

    NASA Technical Reports Server (NTRS)

    Turner, Martin J.

    1993-01-01

    The process of contourization is presented which converts a raster image into a set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimizes noticeable artifacts in the simplified image.
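
    The sketch below is not the contour coder itself, only a toy illustration of the entropy argument: quantizing gray levels (a crude stand-in for merging contour-tree nodes) lowers the entropy of the plateau histogram and hence the attainable code length.

```python
import numpy as np

def entropy(values):
    """Shannon entropy (bits) of the histogram of the given values."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64))          # stand-in raster image

# "Merge" plateaus by collapsing gray levels into bins of width `step`, mimicking the idea
# of merging contour-tree nodes whose levels differ by less than a visibility threshold.
step = 8
merged = (image // step) * step

print("entropy before merging: %.2f bits/pixel" % entropy(image))
print("entropy after merging : %.2f bits/pixel" % entropy(merged))
```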

  6. Merging Digital Medicine and Economics: Two Moving Averages Unlock Biosignals for Better Health.

    PubMed

    Elgendi, Mohamed

    2018-01-06

    Algorithm development in digital medicine necessitates ongoing knowledge and skills updating to match the current demands and constant progression in the field. In today's chaotic world there is an increasing trend to seek out simple solutions for complex problems that can increase efficiency, reduce resource consumption, and improve scalability. This desire has spilled over into the world of science and research where many disciplines have taken to investigating and applying more simplistic approaches. Interestingly, through a review of current literature and research efforts, it seems that the learning and teaching principles in digital medicine continue to push towards the development of sophisticated algorithms with a limited scope and have not fully embraced or encouraged a shift towards simpler solutions that yield equal or better results. This short note aims to demonstrate that within the world of digital medicine and engineering, simpler algorithms can offer effective and efficient solutions, where traditionally more complex algorithms have been used. Moreover, the note demonstrates that bridging different research disciplines is very beneficial and yields valuable insights and results.
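
    As a concrete example of the kind of simple algorithm the note advocates, the sketch below flags events in a synthetic biosignal wherever a short-window moving average rises above a long-window one; the window lengths and threshold are illustrative, not taken from the paper.

```python
import numpy as np

def moving_average(x, w):
    """Simple moving average with window length w (same-length output)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

# Synthetic "biosignal": a slow baseline with two short bursts of activity.
t = np.linspace(0, 10, 1000)
signal = 0.2 * np.sin(2 * np.pi * 0.2 * t)
signal[200:230] += 1.0
signal[700:740] += 1.2

short_ma = moving_average(np.abs(signal), 15)    # follows short events
long_ma = moving_average(np.abs(signal), 150)    # tracks the slow baseline
events = short_ma > long_ma + 0.1                # small offset suppresses noise triggers

print("samples flagged as events:", int(events.sum()))
```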

  7. Application of simple mathematical expressions to relate the half-lives of xenobiotics in rats to values in humans.

    PubMed

    Ward, Keith W; Erhardt, Paul; Bachmann, Kenneth

    2005-01-01

    Previous publications from GlaxoSmithKline and University of Toledo laboratories convey our independent attempts to predict the half-lives of xenobiotics in humans using data obtained from rats. The present investigation was conducted to compare the performance of our published models against a common dataset obtained by merging the two sets of rat versus human half-life (hHL) data previously used by each laboratory. After combining data, mathematical analyses were undertaken by deploying both of our previous models, namely the use of an empirical algorithm based on a best-fit model and the use of rat-to-human liver blood flow ratios as a half-life correction factor. Both qualitative and quantitative analyses were performed, as well as evaluation of the impact of molecular properties on predictability. The merged dataset was remarkably diverse with respect to physiochemical and pharmacokinetic (PK) properties. Application of both models revealed similar predictability, depending upon the measure of stipulated accuracy. Certain molecular features, particularly rotatable bond count and pK(a), appeared to influence the accuracy of prediction. This collaborative effort has resulted in an improved understanding and appreciation of the value of rats to serve as a surrogate for the prediction of xenobiotic half-lives in humans when clinical pharmacokinetic studies are not possible or practicable.

  8. Use of a 3D Skull Model to Improve Accuracy in Cranioplasty for Autologous Flap Resorption in a 3-Year-Old Child.

    PubMed

    Maduri, Rodolfo; Viaroli, Edoardo; Levivier, Marc; Daniel, Roy T; Messerer, Mahmoud

    2017-01-01

    Cranioplasty is considered a simple reconstructive procedure, usually performed in a single stage. In some clinical conditions, such as in children with multifocal flap osteolysis, it could represent a surgical challenge. In these patients, the partially resorbed autologous flap should be removed and replaced with a custom-made prosthesis which should perfectly match the expected bone defect. We describe the technique used for a navigated cranioplasty in a 3-year-old child with multifocal autologous flap osteolysis. We decided to perform a cranioplasty using a custom-made hydroxyapatite porous ceramic flap. The prosthesis was produced with an epoxy resin 3D skull model of the patient, which included a removable flap corresponding to the planned cranioplasty. Preoperatively, a CT scan of the 3D skull model was performed without the removable flap. The CT scan images of the 3D skull model were merged with the preoperative 3D CT scan of the patient and navigated during the cranioplasty to define with precision the cranioplasty margins. After removal of the autologous resorbed flap, the hydroxyapatite prosthesis matched perfectly with the skull defect. The anatomical result was excellent. Thus, the implementation of cranioplasty with image merge navigation of a 3D skull model may improve cranioplasty accuracy, allowing precise anatomic reconstruction in complex skull defect cases. © 2017 S. Karger AG, Basel.

  9. Dependence of marine stratocumulus reflectivities on liquid water paths

    NASA Technical Reports Server (NTRS)

    Coakley, James A., Jr.; Snider, Jack B.

    1990-01-01

    Simple parameterizations that relate cloud liquid water content to cloud reflectivity are often used in general circulation climate models to calculate the effect of clouds in the earth's energy budget. Such parameterizations have been developed by Stephens (1978) and by Slingo and Schrecker (1982) and others. Here researchers seek to verify the parametric relationship through the use of simultaneous observations of cloud liquid water content and cloud reflectivity. The column amount of cloud liquid was measured using a microwave radiometer on San Nicolas Island following techniques described by Hogg et al., (1983). Cloud reflectivity was obtained through spatial coherence analysis of Advanced Very High Resolution Radiometer (AVHRR) imagery data (Coakley and Beckner, 1988). They present the dependence of the observed reflectivity on the observed liquid water path. They also compare this empirical relationship with that proposed by Stephens (1978). Researchers found that by taking clouds to be isotropic reflectors, the observed reflectivities and observed column amounts of cloud liquid water are related in a manner that is consistent with simple parameterizations often used in general circulation climate models to determine the effect of clouds on the earth's radiation budget. Attempts to use the results of radiative transfer calculations to correct for the anisotropy of the AVHRR derived reflectivities resulted in a greater scatter of the points about the relationship expected between liquid water path and reflectivity. The anisotropy of the observed reflectivities proved to be small, much smaller than indicated by theory. To critically assess parameterizations, more simultaneous observations of cloud liquid water and cloud reflectivities and better calibration of the AVHRR sensors are needed.
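
    For orientation, a minimal sketch of the kind of simple parameterization being tested: liquid water path is converted to optical depth via an assumed effective droplet radius, and reflectivity follows from a conservative-scattering two-stream estimate. The constants and the particular two-stream form are assumptions for illustration, not the relations used in the study.

```python
RHO_W = 1.0e6      # density of liquid water, g m^-3
R_EFF = 10e-6      # assumed cloud droplet effective radius, m
G = 0.85           # assumed scattering asymmetry parameter

def optical_depth(lwp_g_m2, r_eff=R_EFF):
    """tau = 3*LWP / (2*rho_w*r_e), the usual geometric-optics relation."""
    return 3.0 * lwp_g_m2 / (2.0 * RHO_W * r_eff)

def reflectivity(tau, g=G):
    """Conservative-scattering two-stream estimate of cloud reflectance over a dark surface."""
    scaled = 0.5 * (1.0 - g) * tau
    return scaled / (1.0 + scaled)

for lwp in [20, 50, 100, 200]:      # liquid water path in g m^-2
    tau = optical_depth(lwp)
    print(f"LWP = {lwp:4d} g/m^2   tau = {tau:5.1f}   R = {reflectivity(tau):.2f}")
```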

  10. Space climate implications from substorm frequency

    NASA Astrophysics Data System (ADS)

    Newell, P. T.; Gjerloev, J. W.; Mitchell, E. J.

    2013-10-01

    The solar wind impacting the Earth varies over a wide range of time scales, driving a corresponding range of geomagnetic activity. Past work has strongly indicated that the rate of merging on the frontside magnetosphere is the most important predictor for magnetospheric activity, especially over a few hours. However, the magnetosphere exhibits variations on other time scales, including UT, seasonal, and solar cycle variations. Much of this geomagnetic variation cannot be reasonably attributed to changes in the solar wind driving—that is, it is not created by the original Russell-McPherron effect or any generalization thereof. In this paper we examine the solar cycle, seasonal, and diurnal effects based upon the frequency of substorm onsets, using a data set of 53,000 substorm onsets. These were identified through the SuperMAG collaboration and span three decades with continuous coverage. Solar cycle variations include a profound minimum in 2009 (448 substorms) and a peak in 2003 (3727). The magnitude of this variation (a factor of 8.3) is not explained through variations in estimators of the frontside merging rate (such as dΦ_MP/dt), even when the more detailed probability distribution functions are examined. Instead, v, or better, n^1/2 v^2 seems to be implicated in the dramatic difference between active and quiet years, even beyond the role of velocity in modulating merging. Moreover, we find that although most substorms are preceded by flux loading (78.5% are above the mean and 83.8% above median solar wind driving), a high solar wind v is almost as important (68.3% above mean, 74.8% above median). This and other evidence suggest that either v or n^1/2 v^2 (but probably not p) plays a strong secondary role in substorm onset. As for the seasonal and diurnal effects, the elliptical nature of the Earth's orbit, which is closest to the Sun in January, leads to a larger solar wind driving (measured by B_s, vB_s, or dΦ_MP/dt) in November, as is confirmed by 22 years of solar wind observations. However, substorms peak in October and March and have a UT dependence best explained by whether a conducting path established by solar illumination exists in at least one hemisphere in the region where substorm onsets typically occur.
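
    A small sketch of the two solar wind quantities being compared, assuming the dΦ_MP/dt coupling function in the form of Newell et al. (2007) and treating n^1/2 v^2 in arbitrary units; the input values are illustrative.

```python
import numpy as np

def coupling_dphi_dt(v_km_s, by_nT, bz_nT):
    """Newell et al. (2007) coupling function v^(4/3) * B_T^(2/3) * sin^(8/3)(theta_c/2),
    with B_T the transverse IMF magnitude and theta_c the IMF clock angle."""
    b_t = np.hypot(by_nT, bz_nT)
    clock = np.arctan2(by_nT, bz_nT)
    return v_km_s ** (4.0 / 3.0) * b_t ** (2.0 / 3.0) * np.abs(np.sin(clock / 2.0)) ** (8.0 / 3.0)

def secondary_parameter(n_cm3, v_km_s):
    """The n^(1/2) v^2 quantity discussed as a secondary driver (arbitrary units)."""
    return np.sqrt(n_cm3) * v_km_s ** 2

# Illustrative solar wind conditions: quiet vs. fast-and-southward.
for label, (n, v, by, bz) in {"quiet": (5, 380, 1, 2), "active": (8, 650, -4, -8)}.items():
    print(label, "dPhi_MP/dt ~", round(coupling_dphi_dt(v, by, bz), 1),
          "  n^1/2 v^2 ~", round(secondary_parameter(n, v), 1))
```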

  11. Merging Hydrologic, Geochemical, and Geophysical Approaches to Understand the Regolith Architecture of a Deeply Weathered Piedmont Critical Zone

    NASA Astrophysics Data System (ADS)

    Cosans, C.; Moore, J.; Harman, C. J.

    2017-12-01

    Located in the deeply weathered Piedmont in Maryland, Pond Branch has a rich legacy of hydrological and geochemical research dating back to the first geochemical mass balance study published in 1970. More recently, geophysical investigations including seismic and electrical resistivity tomography have characterized the subsurface at Pond Branch and contributed to new hypotheses about critical zone evolution. Heterogeneity in electrical resistivity in the shallow subsurface may suggest disparate flow paths for recharge, with some regions with low hydraulic conductivity generating perched flow, while other hillslope sections recharge to the much deeper regolith boundary. These shallow and deep flow paths are hypothesized to be somewhat hydrologically and chemically connected, with the spatially and temporally discontinuous connections resulting in different hydraulic responses to recharge and different concentrations of weathering solutes. To test this hypothesis, we combined modeling and field approaches. We modeled weathering solutes along the hypothesized flow paths using PFLOTRAN. We measured hydrologic gradients in the hillslopes and riparian zone using piezometer water levels. We collected geochemical data including major ions and silica. Weathering solute concentrations were measured directly in the precipitation, hillslope springs, and the riparian zone for comparison to modeled concentration values. End member mixing methods were used to determine contributions of precipitation, hillslopes, and riparian zone to the stream. Combining geophysical, geochemical, and hydrological methods may offer insights into the source of stream water and controls on chemical weathering. Previous hypotheses that Piedmont critical zone architecture results from a balance of erosion, soil, and weathering front advance rates cannot account for the inverted regolith structure observed through seismic investigations at Pond Branch. Recent alternative hypotheses including weathering along tectonically-induced fractures and weathering front advance have been proposed, but additional data are needed to test them. Developing a thorough, nuanced understanding of the geochemical and hydrological behavior of Pond Branch may help test and refine hypotheses for Piedmont critical zone evolution.
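
    A minimal sketch of the end-member mixing step only (the reactive-transport modeling is not reproduced): fractional contributions of three end members are recovered from two conservative tracers plus the mass-balance constraint by least squares, with illustrative concentrations.

```python
import numpy as np

# Columns: end members (precipitation, hillslope spring, riparian zone).
# Rows: conservative tracer concentrations (illustrative values, mg/L).
end_members = np.array([[0.5, 6.0, 3.0],     # silica
                        [1.0, 4.0, 8.0]])    # chloride
stream = np.array([4.5, 5.0])                # observed stream concentrations

# Add the mass-balance row (fractions sum to one) and solve in a least-squares sense.
A = np.vstack([end_members, np.ones(3)])
b = np.append(stream, 1.0)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)

for name, f in zip(["precipitation", "hillslope", "riparian"], fractions):
    print(f"{name:14s} fraction ~ {f:.2f}")
```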

  12. Nanophononics at low temperature: manipulating heat at the nanoscale

    NASA Astrophysics Data System (ADS)

    Bourgeois, Olivier

    2014-03-01

    Nanophononics is an emerging field of condensed matter that deals with the transport of thermal phonons at small length scales. When the cross-section of a waveguide becomes smaller than the mean free path or the phonon wavelength, heat transfer is strongly affected. Here, I will present the results we obtained by ultra-sensitive measurements of the thermal conductance of suspended nano-objects (nanowires and membranes) using the 3ω method. This experimental set-up allows the measurement of power as small as a fraction of a femtowatt (10^-15 W). These experiments show that the concepts of mean free path and dominant wavelength are crucial to understanding phonon thermal transport below 10 K. The phonon transport at this temperature is well described by the Casimir-Ziman model used here to treat the data. The contribution of the thermal contact between a nanowire and the heat bath has been estimated to be close to one, thanks to the fact that the nanowires are made out of a monolithic single crystal. A strong reduction of thermal conductance has been obtained in serpentine nanowires, where the transport of ballistic phonons is blocked. Moreover, in corrugated silicon nanowires, we showed that the corrugations induce significant phonon backscattering that severely reduces the mean free path, in some cases beating the Casimir limit. These experiments demonstrate the ability to manipulate ballistic phonons by adjusting the geometry of thermal conductors, and hence to manipulate heat transfer. Finally, the use of these new concepts of engineering ballistic phonons at the nanoscale allows considering the development of new nanostructured materials for thermoelectrics at room temperature, opening exciting prospects for future applications in energy recovery. J.-S. Heron, T. Fournier, N. Mingo and O. Bourgeois, Nano Letters 9, 1861 (2009). J-S. Heron, C. Bera, T. Fournier, N. Mingo, and O. Bourgeois, Phys. Rev. B 82, 155458 (2010). C. Blanc, A. Rajabpour, S. Volz, T. Fournier, and O. Bourgeois, Appl. Phys. Lett. 103, 043109 (2013). EU Merging Project grant Agreement No. 309150.

  13. LAV@HAZARD: a Web-GIS Framework for Real-Time Forecasting of Lava Flow Hazards

    NASA Astrophysics Data System (ADS)

    Del Negro, C.; Bilotta, G.; Cappello, A.; Ganci, G.; Herault, A.

    2014-12-01

    Crucial to lava flow hazard assessment is the development of tools for real-time prediction of flow paths, flow advance rates, and final flow lengths. Accurate prediction of flow paths and advance rates requires not only rapid assessment of eruption conditions (especially effusion rate) but also improved models of lava flow emplacement. Here we present the LAV@HAZARD web-GIS framework, which combines spaceborne remote sensing techniques and numerical simulations for real-time forecasting of lava flow hazards. By using satellite-derived discharge rates to drive a lava flow emplacement model, LAV@HAZARD allows timely definition of parameters and maps essential for hazard assessment, including the propagation time of lava flows and the maximum run-out distance. We take advantage of the flexibility of the HOTSAT thermal monitoring system to process satellite images coming from sensors with different spatial, temporal and spectral resolutions. HOTSAT was designed to ingest infrared satellite data acquired by the MODIS and SEVIRI sensors to output hot spot location, lava thermal flux and discharge rate. We use LAV@HAZARD to merge this output with the MAGFLOW physics-based model to simulate lava flow paths and to update, in a timely manner, flow simulations. Thus, any significant changes in lava discharge rate are included in the predictions. A significant benefit in terms of computational speed was obtained thanks to the parallel implementation of MAGFLOW on graphic processing units (GPUs). All this useful information has been gathered into the LAV@HAZARD platform which, due to the high degree of interactivity, allows generation of easily readable maps and a fast way to explore alternative scenarios. We will describe and demonstrate the operation of this framework using a variety of case studies pertaining to Mt Etna, Sicily. Although this study was conducted on Mt Etna, the approach used is designed to be applicable to other volcanic areas around the world.

  14. Increase Student Engagement through Project-Based Learning. Best Practices Newsletter

    ERIC Educational Resources Information Center

    Southern Regional Education Board (SREB), 2015

    2015-01-01

    We learn by doing. This simple philosophy is at the heart of project-based learning in the 21st-century classroom. It is grounded in the belief that the stand and lecture approach to teaching, worksheets and rote memorization are not enough to move students down a path to the deep learning necessary for success in college and careers. Essential…

  15. The Probabilistic Thinking of Primary School Pupils in Cyprus: The Case of Tree Diagrams

    ERIC Educational Resources Information Center

    Lamprianou, Iasonas; Lamprianou, Thekla Afantiti

    2003-01-01

    In this research work we explored the nature of 9-12 year old pupils' responses to probabilistic problems with tree diagrams. It was found that a large percentage of pupils failed to respond correctly even to very simple problems that demanded the identification of "possible routes/paths" in figures with tree diagrams/mazes. The results…

  16. Frequency Division Multiplexing of Interferometric Sensor Arrays

    DTIC Science & Technology

    1989-05-03

    An exception to this is the approach which employs Fabry-Perot sensors, in which higher-order reflections will result in moderately severe crosstalk... The Fabry-Perot technique appears to have limited array applications because of this problem. Although frequency division multiplexing has received... interferometers (~4 cm path difference) and phase-generated carrier demultiplexing and demodulation. This approach leads to a simple all-passive sensor

  17. Effects of Exposure on Attitudes towards STEM Interests

    ERIC Educational Resources Information Center

    Kurz, Mary Elizabeth; Yoder, S. Elizabeth; Zu, Ling

    2015-01-01

    There are many things that can influence a child's career path. Parents and teachers are first to come to mind. But simple exposure to a certain career may also influence a child's choice of a career. There is always a need for science, technology, engineering, and math (STEM) jobs. We look at elementary students who have attended an expo designed…

  18. Research on the Present Status of the Five-Year Medical Training Program in Chinese Medical Colleges

    ERIC Educational Resources Information Center

    Xu, Yan; Dong, Zhe; Miao, Le; Ke, Yang

    2014-01-01

    The five-year program is the main path for undergraduate medical training in China. Studies have shown that during the past eleven years, the scale of medical student enrollment increased annually with a relatively simple entrance exam. The ideas, teaching contents and methods, assessment and evaluation should be updated and improved. In general,…

  19. Formation of cyanoallene (buta-2, 3-dienenitrile) in the interstellar medium: a quantum chemical and spectroscopic study

    NASA Astrophysics Data System (ADS)

    Singh, Amresh; Shivani; Misra, Alka; Tandon, Poonam

    2014-03-01

    The interstellar medium, filling the vast space between stars, is a rich reservoir of molecular material ranging from simple diatomic molecules to more complex, astrobiologically important molecules such as vinyl cyanide, methylcyanodiacetylene, cyanoallene, etc. Interstellar cyanoallene is one of the most stable isomers of methylcyanoacetylene. An attempt has been made to explore the possibility of forming cyanoallene in interstellar space by radical-radical and radical-molecule interaction schemes in the gaseous phase. The formation of cyanoallene starting from some simple, neutral interstellar molecules and radicals has been studied using density functional theory. The reaction energies and structures of the reactants and products show that the formation of cyanoallene is possible in the gaseous phase. Both of the considered reaction paths are totally exothermic and barrierless, thus giving rise to a high probability of occurrence. Rate constants for each step in the formation process of cyanoallene in both the reaction paths are estimated. A full vibrational analysis has been attempted for cyanoallene in the harmonic and anharmonic approximations. Anharmonic spectroscopic parameters such as rotational constants, rotation-vibration coupling constants and centrifugal distortion constants have been calculated.

  20. Enhanced sensitivity for optical loss measurement in planar thin-films (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yuan, Hua-Kang

    2016-09-01

    An organic-inorganic hybrid material benefits from processing advantages of organics and high refractive indices of inorganics. We focus on a titanium oxide hydrate system combined with common bulk polymers. In particular, we target thin-film structures of a few microns in thickness. Traditional Beer-Lambert approaches for measuring optical losses can only provide an upper limit estimate. This sensitivity is highly limited when considering the low losses required for mid-range optical applications, on the order of 0.1 cm^-1. For intensity-based measurements, improving the sensitivity requires an increase in the optical path length. Instead, a new sensitive technique suitable for simple planar thin films is required. A number of systems were modelled to measure optical losses in films 1 micron thick. The presented techniques utilise evanescent waves and total internal reflection to increase optical path length through the material. It was found that a new way of using prism coupling provides the greatest improvement in sensitivity. In keeping the requirements on the material simple, this method for measuring loss is well suited to any future developments of new materials in thin-film structures.

  1. Influence of social presence on eye movements in visual search tasks.

    PubMed

    Liu, Na; Yu, Ruifeng

    2017-12-01

    This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.

  2. Facilitating energy savings with programmable thermostats: evaluation and guidelines for the thermostat user interface.

    PubMed

    Peffer, Therese; Perry, Daniel; Pritoni, Marco; Aragon, Cecilia; Meier, Alan

    2013-01-01

    Thermostats control heating and cooling in homes - representing a major part of domestic energy use - yet, poor ergonomics of these devices has thwarted efforts to reduce energy consumption. Theoretically, programmable thermostats can reduce energy by 5-15%, but in practice little to no savings compared to manual thermostats are found. Several studies have found that programmable thermostats are not installed properly, are generally misunderstood and have poor usability. After conducting a usability study of programmable thermostats, we reviewed several guidelines from ergonomics, general device usability, computer-human interfaces and building control sources. We analysed the characteristics of thermostats that enabled or hindered successfully completing tasks and in a timely manner. Subjects had higher success rates with thermostat displays with positive examples of guidelines, such as visibility of possible actions, consistency and standards, and feedback. We suggested other guidelines that seemed missing, such as navigation cues, clear hierarchy and simple decision paths. Our evaluation of a usability test of five residential programmable thermostats led to the development of a comprehensive set of specific guidelines for thermostat design including visibility of possible actions, consistency, standards, simple decision paths and clear hierarchy. Improving the usability of thermostats may facilitate energy savings.

  3. Program helps quickly calculate deviated well path

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, M.P.

    1993-11-22

    A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled. Very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM-compatible computer with MS-DOS version 5 or higher, QBasic, or any BASIC that does not require line numbers. The QBasic 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
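
    The BASIC listing itself is not reproduced above; as a sketch of the geometry involved, the snippet below assumes the well is a single constant-curvature arc that starts vertical at surface and ends at the target, which gives closed-form inclination, build radius, and measured depth from TVD and displacement. The actual program may use a different build geometry.

```python
import math

def single_arc_trajectory(tvd_ft, displacement_ft):
    """Inclination, build radius and measured depth for a well modeled as one circular arc
    that is vertical at surface and passes through the target (a simplifying assumption)."""
    theta = 2.0 * math.atan2(displacement_ft, tvd_ft)      # final inclination, radians
    radius = tvd_ft / math.sin(theta)                      # build radius
    measured_depth = radius * theta                        # arc length along the well path
    return math.degrees(theta), radius, measured_depth

angle, radius, md = single_arc_trajectory(tvd_ft=8000.0, displacement_ft=2500.0)
print(f"inclination at target: {angle:.1f} deg")
print(f"build radius:          {radius:.0f} ft")
print(f"measured depth:        {md:.0f} ft")
```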

  4. Sample path analysis of contribution and reward in cooperative groups.

    PubMed

    Toyoizumi, Hiroshi

    2009-02-07

    Explaining cooperative behavior is one of the major challenges in both biology and human society. The individual reward in a cooperative group depends on how we share the rewards in the group. Thus, the group size dynamics in a cooperative group and the reward-allocation rule seem essential to evaluate the emergence of cooperative groups. We apply a sample-path-based analysis, an extension of Little's formula, to a general cooperative group. We show that the expected reward is insensitive to the specific reward-allocation rule and probabilistic structure of group dynamics, and that a simple productivity condition guarantees the expected reward to be larger than the average contribution. As an example, we take social queues to see the insensitivity result in detail.

  5. Radiative transport equation for the Mittag-Leffler path length distribution

    NASA Astrophysics Data System (ADS)

    Liemert, André; Kienle, Alwin

    2017-05-01

    In this paper, we consider the radiative transport equation for infinitely extended scattering media that are characterized by the Mittag-Leffler path length distribution p(ℓ) = -∂_ℓ E_α(-σ_t ℓ^α), which is a generalization of the usually assumed Lambert-Beer law p(ℓ) = σ_t exp(-σ_t ℓ). In this context, we derive the infinite-space Green's function of the underlying fractional transport equation for the spherically symmetric medium as well as for the one-dimensional string. Moreover, simple analytical solutions are presented for the prediction of the radiation field in the single-scattering approximation. The resulting equations are compared with Monte Carlo simulations in the steady-state and time domain showing, within the stochastic nature of the simulations, an excellent agreement.

  6. Using the USU ionospheric model to predict radio propagation through a simulated ionosphere

    NASA Astrophysics Data System (ADS)

    Huffines, Gary R.

    1990-12-01

    To evaluate the capabilities of communication, navigation, and defense systems utilizing electromagnetic waves which interact with the ionosphere, a three-dimensional ray tracing program was used. A simple empirical model (Chapman function) and a complex physical model (Schunk and Sojka model) were used to compare the representation of ionospheric conditions. Four positions were chosen to test four different features of the Northern Hemispheric ionosphere. It seems that decreasing electron density has little or no effect on the horizontal components of the ray path while increasing electron density causes deviations in the ray path. It was also noted that rays in the physical model's mid-latitude trough region escaped the ionosphere for all frequencies used in this study.
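
    A minimal sketch of the simple empirical model mentioned: a Chapman-layer electron density profile with illustrative peak density, peak height, and scale height, plus the standard conversion from peak density to critical frequency.

```python
import numpy as np

def chapman_layer(h_km, n_max=1.0e12, h_max=300.0, scale_h=50.0, chi_deg=0.0):
    """Chapman-layer electron density (m^-3):
    N = N_max * exp(0.5 * (1 - z - sec(chi) * exp(-z))), with z = (h - h_max)/H."""
    z = (h_km - h_max) / scale_h
    sec_chi = 1.0 / np.cos(np.radians(chi_deg))
    return n_max * np.exp(0.5 * (1.0 - z - sec_chi * np.exp(-z)))

heights = np.arange(100, 601, 100).astype(float)
for h, n in zip(heights, chapman_layer(heights)):
    print(f"h = {h:5.0f} km   Ne ~ {n:9.3e} m^-3")

# The plasma (critical) frequency follows from Ne: f_p [Hz] ~ 8.98 * sqrt(Ne [m^-3]).
print("peak critical frequency ~ %.1f MHz" % (8.98 * np.sqrt(1.0e12) / 1e6))
```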

  7. Optical encrypted holographic memory using triple random phase-encoded multiplexing in photorefractive LiNbO3:Fe crystal

    NASA Astrophysics Data System (ADS)

    Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching

    2000-10-01

    We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.

  8. DETECTION OF FLUX EMERGENCE, SPLITTING, MERGING, AND CANCELLATION OF NETWORK FIELD. I. SPLITTING AND MERGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iida, Y.; Yokoyama, T.; Hagenaar, H. J.

    2012-06-20

    Frequencies of magnetic patch processes on the supergranule boundary, namely, flux emergence, splitting, merging, and cancellation, are investigated through automatic detection. We use a set of line-of-sight magnetograms taken by the Solar Optical Telescope (SOT) on board the Hinode satellite. We found 1636 positive patches and 1637 negative patches in the data set, whose time duration is 3.5 hr and field of view is 112'' × 112''. The total numbers of magnetic processes are as follows: 493 positive and 482 negative splittings, 536 positive and 535 negative mergings, 86 cancellations, and 3 emergences. The total numbers of emergence and cancellation are significantly smaller than those of splitting and merging. Further, the frequency dependence of the merging and splitting processes on the flux content is investigated. Merging has a weak dependence on the flux content with a power-law index of only 0.28. The timescale for splitting is found to be independent of the parent flux content before splitting, which corresponds to ~33 minutes. It is also found that patches split into any flux contents with the same probability. This splitting has a power-law distribution of the flux content with an index of -2 as a time-independent solution. These results support that the frequency distribution of the flux content in the analyzed flux range is rapidly maintained by merging and splitting, namely, surface processes. We suggest a model for frequency distributions of cancellation and emergence based on this idea.

  9. Resource cost results for one-way entanglement distillation and state merging of compound and arbitrarily varying quantum sources

    NASA Astrophysics Data System (ADS)

    Boche, H.; Janßen, G.

    2014-08-01

    We consider one-way quantum state merging and entanglement distillation under compound and arbitrarily varying source models. Regarding quantum compound sources, where the source is memoryless, but the source state is an unknown member of a certain set of density matrices, we continue investigations begun in the work of Bjelaković et al. ["Universal quantum state merging," J. Math. Phys. 54, 032204 (2013)] and determine the classical as well as entanglement cost of state merging. We further investigate quantum state merging and entanglement distillation protocols for arbitrarily varying quantum sources (AVQS). In the AVQS model, the source state is assumed to vary in an arbitrary manner for each source output due to environmental fluctuations or adversarial manipulation. We determine the one-way entanglement distillation capacity for AVQS, where we invoke the famous robustification and elimination techniques introduced by Ahlswede. Regarding quantum state merging for AVQS we show by example that the robustification and elimination based approach generally leads to suboptimal entanglement as well as classical communication rates.

  10. Numerical studies of the merging of vortices with decaying cores

    NASA Technical Reports Server (NTRS)

    Liu, G. C.; Ting, L.

    1986-01-01

    The merging of vortices to a single one is a canonical incompressible viscous flow problem. The merging process begins when the core sizes of the vortices are comparable to their distances and ends when the contour lines of constant vorticity are circularized around one center. Approximate solutions to this problem are constructed by adapting the asymptotic solutions for distinct vortices. For the early stage of merging, the next-order terms in the asymptotic solutions are added to the leading term. For the later stage of merging, the vorticity distribution is reinitialized by vortices with overlapping core structures guided by the 'rule of merging', and the velocities of the 'vortex centers' are then defined by a minimum principle. To show the accuracy of the approximate solution, it is compared with the finite-difference solution.
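
    As a sketch of the leading-order description used for distinct vortices, each core below is a Lamb-Oseen (Gaussian) vortex whose size grows as sqrt(4*nu*t); simply superposing two such cores is only the leading approximation and ignores the higher-order interaction terms discussed above. Parameter values are illustrative.

```python
import numpy as np

def lamb_oseen_vorticity(x, y, x0, y0, gamma, nu, t):
    """Vorticity of a decaying Lamb-Oseen vortex: Gamma/(4 pi nu t) * exp(-r^2/(4 nu t))."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    core2 = 4.0 * nu * t
    return gamma / (np.pi * core2) * np.exp(-r2 / core2)

# Two co-rotating vortices separated by 2*d; the leading-order field is their superposition.
nu, t, gamma, d = 1.0e-3, 6.0, 1.0, 0.15
x, y = np.meshgrid(np.linspace(-0.6, 0.6, 121), np.linspace(-0.6, 0.6, 121))
omega = (lamb_oseen_vorticity(x, y, -d, 0.0, gamma, nu, t) +
         lamb_oseen_vorticity(x, y, +d, 0.0, gamma, nu, t))

core = np.sqrt(4.0 * nu * t)
print("core size sqrt(4*nu*t) = %.3f, half-separation = %.3f" % (core, d))
print("cores comparable to separation (merging regime)?", core > d)
print("peak vorticity on grid: %.2f" % omega.max())
```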

  11. Computing and visualizing time-varying merge trees for high-dimensional data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Heine, Christian; Weber, Gunther H.

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
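
    A minimal sketch of merge-tree construction by thresholding, not the authors' implementation: for a 1-D scalar field, sweep vertices from the highest value down and use union-find to record where independent branches (born at local maxima) join.

```python
def merge_tree_1d(values):
    """Join-tree sketch for a 1-D scalar field: leaves are local maxima, internal nodes
    record the value at which two existing branches merge."""
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    leaves, merges = [], []
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    for i in order:
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}   # already-swept neighbours
        parent[i] = i
        if not roots:
            leaves.append(i)                 # local maximum: a new branch (leaf) is born
        for r in roots:
            parent[r] = i                    # attach existing branch(es) to this vertex
        if len(roots) >= 2:
            merges.append((values[i], sorted(roots)))   # saddle: branches merge here
    return leaves, merges

field = [1, 5, 2, 8, 3, 9, 0, 4, 1]
leaves, merges = merge_tree_1d(field)
print("leaves (local maxima) at indices:", leaves)
for value, branches in merges:
    print(f"branches with representatives {branches} merge at value {value}")
```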

  12. Characterization of Metering, Merging and Spacing Requirements for Future Trajectory-Based Operations

    NASA Technical Reports Server (NTRS)

    Johnson, Sally

    2017-01-01

    Trajectory-Based Operations (TBO) is one of the essential paradigm shifts in the NextGen transformation of the National Airspace System. Under TBO, aircraft are managed by 4-dimensional trajectories, and airborne and ground-based metering, merging, and spacing operations are key to managing those trajectories. This paper presents the results of a study of potential metering, merging, and spacing operations within a future TBO environment. A number of operational scenarios for tactical and strategic uses of metering, merging, and spacing are described, and interdependencies between concurrent tactical and strategic operations are identified.

  13. Entanglement and Coherence in Quantum State Merging.

    PubMed

    Streltsov, A; Chitambar, E; Rana, S; Bera, M N; Winter, A; Lewenstein, M

    2016-06-17

    Understanding the resource consumption in distributed scenarios is one of the main goals of quantum information theory. A prominent example for such a scenario is the task of quantum state merging, where two parties aim to merge their tripartite quantum state parts. In standard quantum state merging, entanglement is considered to be an expensive resource, while local quantum operations can be performed at no additional cost. However, recent developments show that some local operations could be more expensive than others: it is reasonable to distinguish between local incoherent operations and local operations which can create coherence. This idea leads us to the task of incoherent quantum state merging, where one of the parties has free access to local incoherent operations only. In this case the resources of the process are quantified by pairs of entanglement and coherence. Here, we develop tools for studying this process and apply them to several relevant scenarios. While quantum state merging can lead to a gain of entanglement, our results imply that no merging procedure can gain entanglement and coherence at the same time. We also provide a general lower bound on the entanglement-coherence sum and show that the bound is tight for all pure states. Our results also lead to an incoherent version of Schumacher compression: in this case the compression rate is equal to the von Neumann entropy of the diagonal elements of the corresponding quantum state.
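
    A small numerical illustration of the quantity mentioned at the end: for an arbitrarily chosen pure qubit state, the incoherent compression rate equals the von Neumann entropy of the dephased (diagonal) state, which is nonzero even though the state itself has zero entropy.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

# A pure single-qubit state with coherence: |psi> = sqrt(0.8)|0> + sqrt(0.2)|1>.
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)])
rho = np.outer(psi, psi.conj())

print("S(rho)       =", round(von_neumann_entropy(rho), 4))                       # 0 for a pure state
print("S(diag(rho)) =", round(von_neumann_entropy(np.diag(np.diag(rho))), 4))     # incoherent rate
```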

  14. A hierarchical word-merging algorithm with class separability measure.

    PubMed

    Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan

    2014-03-01

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
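
    A minimal sketch of the greedy agglomerative idea, not the authors' algorithm or indexing structure: starting from a word-by-class count table, repeatedly merge the pair of words whose merger least reduces a separability score, here crudely represented by the mutual information between word index and class label.

```python
import numpy as np
from itertools import combinations

def mutual_information(counts):
    """I(word; class) in bits from a words-by-classes count table."""
    p = counts / counts.sum()
    pw, pc = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pw @ pc)[nz])).sum())

def greedy_merge(counts, target_words):
    """Repeatedly merge the two words whose merger costs the least separability."""
    words = [counts[i] for i in range(counts.shape[0])]
    while len(words) > target_words:
        best = None
        for a, b in combinations(range(len(words)), 2):
            trial = [w for k, w in enumerate(words) if k not in (a, b)] + [words[a] + words[b]]
            score = mutual_information(np.array(trial))
            if best is None or score > best[0]:      # keep the merge preserving the most MI
                best = (score, a, b)
        _, a, b = best
        merged = words[a] + words[b]
        words = [w for k, w in enumerate(words) if k not in (a, b)] + [merged]
    return np.array(words)

rng = np.random.default_rng(1)
table = rng.integers(1, 30, size=(8, 3)).astype(float)    # 8 initial words, 3 classes
codebook = greedy_merge(table, target_words=3)
print("initial MI: %.3f bits" % mutual_information(table))
print("final MI  : %.3f bits" % mutual_information(codebook))
```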

  15. 29 CFR 4211.31 - Allocation of unfunded vested benefits following the merger of plans.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) through (d) of this section, when two or more multiemployer plans merge, the merged plan shall adopt one... allocation methods prescribed in §§ 4211.32 through 4211.35, and the method adopted shall apply to all employer withdrawals occurring after the initial plan year. Alternatively, a merged plan may adopt its own...

  16. 76 FR 54931 - Post Office (PO) Box Fee Groups for Merged Locations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-06

    ... POSTAL SERVICE 39 CFR Part 111 Post Office (PO) Box Fee Groups for Merged Locations AGENCY: Postal... different ZIP Code TM location because of a merger of two or more ZIP Code locations into a single location... merged with a location whose box section is more than one fee group level different, the location would...

  17. Merged Federal Files [Academic Year] 1978-79 [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The Merged Federal File for 1978-79 contains school district level data from the following six source files: (1) the Census of Governments' Survey of Local Government Finances--School Systems (F-33) (with 16,343 records merged); (2) the National Center for Education Statistics Survey of School Systems (School District Universe) (with 16,743…

  18. Optical-Near-infrared Color Gradients and Merging History of Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Kim, Duho; Im, Myungshin

    2013-04-01

    It has been suggested that merging plays an important role in the formation and the evolution of elliptical galaxies. While gas dissipation by star formation is believed to steepen metallicity and color gradients of the merger products, mixing of stars through dissipation-less merging (dry merging) is believed to flatten them. In order to understand the past merging history of elliptical galaxies, we studied the optical-near-infrared (NIR) color gradients of 204 elliptical galaxies. These galaxies are selected from the overlap region of the Sloan Digital Sky Survey (SDSS) Stripe 82 and the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS). The use of optical and NIR data (g, r, and K) provides large wavelength baselines, and breaks the age-metallicity degeneracy, allowing us to derive age and metallicity gradients. The use of the deep SDSS Stripe 82 images makes it possible for us to examine how the color/age/metallicity gradients are related to merging features. We find that the optical-NIR color and the age/metallicity gradients of elliptical galaxies with tidal features are consistent with those of relaxed ellipticals, suggesting that the two populations underwent a similar merging history on average and that mixing of stars was more or less completed before the tidal features disappeared. Elliptical galaxies with dust features have steeper color gradients than the other two types, even after masking out dust features during the analysis, which can be due to a process involving wet merging. More importantly, we find that the scatter in the color/age/metallicity gradients of the relaxed and merging feature types decreases as their luminosities (or masses) increase at M > 10^11.4 M_⊙ but stays large at lower luminosities. Mean metallicity gradients appear nearly constant over the explored mass range, but a possible flattening is observed at the massive end. According to our toy model that predicts how the distribution of metallicity gradients changes as a result of major dry merging, the mean metallicity gradient should flatten by 40% and its scatter becomes smaller by 80% per mass-doubling scale if ellipticals evolve only through major dry merger. Our result, although limited by small number statistics at the massive end, is consistent with the picture that major dry merging is an important mechanism for the evolution of ellipticals at M > 10^11.4 M_⊙, but is less important at the lower mass range.

  20. How Certain are We of the Uncertainties in Recent Ozone Profile Trend Assessments of Merged Limb Occultation Records? Challenges and Possible Ways Forward

    NASA Technical Reports Server (NTRS)

    Hubert, Daan; Lambert, Jean-Christopher; Verhoelst, Tijl; Granville, Jose; Keppens, Arno; Baray, Jean-Luc; Cortesi, Ugo; Degenstein, D. A.; Froidevaux, Lucien; Godin-Beekmann, Sophie; hide

    2015-01-01

    Most recent assessments of long-term changes in the vertical distribution of ozone (by e.g. WMO and SI2N) rely on data sets that integrate observations by multiple instruments. Several merged satellite ozone profile records have been developed over the past few years; each considers a particular set of instruments and adopts a particular merging strategy. Their intercomparison by Tummon et al. revealed that the current merging schemes are not sufficiently refined to correct for all major differences between the limb/occultation records. This shortcoming introduces uncertainties that need to be known to obtain a sound interpretation of the different satellite-based trend studies. In practice however, producing realistic uncertainty estimates is an intricate task which depends on a sufficiently detailed understanding of the characteristics of each contributing data record and on the subsequent interplay and propagation of these through the merging scheme. Our presentation discusses these challenges in the context of limb/occultation ozone profile records, but they are equally relevant for other instruments and atmospheric measurements. We start by showing how the NDACC and GAW-affiliated ground-based networks of ozonesonde and lidar instruments allowed us to characterize fourteen limb/occultation ozone profile records, together providing a global view over the last three decades. Our prime focus will be on techniques to estimate long-term drift since our results suggest this is the main driver of the major trend differences between the merged data sets. The single-instrument drift estimates are then used for a tentative estimate of the systematic uncertainty in the profile trends from merged data records. We conclude by reflecting on possible further steps needed to improve the merging algorithms and to obtain a better characterization of the uncertainties involved.

  1. Merging black hole binaries: the effects of progenitor's metallicity, mass-loss rate and Eddington factor

    NASA Astrophysics Data System (ADS)

    Giacobbo, Nicola; Mapelli, Michela; Spera, Mario

    2018-03-01

    The first four gravitational wave events detected by LIGO were all interpreted as merging black hole binaries (BHBs), opening a new perspective on the study of such systems. Here we use our new population-synthesis code MOBSE, an upgraded version of BSE, to investigate the demography of merging BHBs. MOBSE includes metallicity-dependent prescriptions for mass-loss of massive hot stars. It also accounts for the impact of the electron-scattering Eddington factor on mass-loss. We perform >10^8 simulations of isolated massive binaries, with 12 different metallicities, to study the impact of mass-loss, core-collapse supernovae and common envelope on merging BHBs. Accounting for the dependence of stellar winds on the Eddington factor leads to the formation of black holes (BHs) with mass up to 65 M⊙ at metallicity Z ~ 0.0002. However, most BHs in merging BHBs have masses ≲ 40 M⊙. We find merging BHBs with mass ratios in the 0.1-1.0 range, even if mass ratios >0.6 are more likely. We predict that systems like GW150914, GW170814 and GW170104 can form only from progenitors with metallicity Z ≤ 0.006, Z ≤ 0.008 and Z ≤ 0.012, respectively. Most merging BHBs have gone through a common envelope phase, but up to ~17 per cent of merging BHBs at low metallicity did not undergo any common envelope phase. We find a much higher number of mergers from metal-poor progenitors than from metal-rich ones: the number of BHB mergers per unit mass is ~10^-4 M_⊙^-1 at low metallicity (Z = 0.0002-0.002) and drops to ~10^-7 M_⊙^-1 at high metallicity (Z ~ 0.02).

  2. Geographically weighted regression based methods for merging satellite and gauge precipitation

    NASA Astrophysics Data System (ADS)

    Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo

    2018-03-01

    Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
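
    A minimal sketch of the locally weighted regression step at a single grid cell, with synthetic gauges and an illustrative bi-square bandwidth; the full MGWR-BI algorithm additionally regresses satellite biases on geographic and meteorological covariates, which is omitted here.

```python
import numpy as np

def bisquare(d, bandwidth):
    """Bi-square kernel: (1 - (d/b)^2)^2 inside the bandwidth, 0 outside."""
    w = (1.0 - (d / bandwidth) ** 2) ** 2
    return np.where(d < bandwidth, w, 0.0)

def gwr_correct(cell_xy, cell_sat, gauge_xy, gauge_obs, gauge_sat, bandwidth=60.0):
    """Locally weighted regression P_gauge ~ a + b * P_sat around one cell, then evaluated
    with that cell's satellite estimate to produce the merged value."""
    d = np.hypot(gauge_xy[:, 0] - cell_xy[0], gauge_xy[:, 1] - cell_xy[1])
    w = bisquare(d, bandwidth)
    X = np.column_stack([np.ones_like(gauge_sat), gauge_sat])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ gauge_obs)     # weighted least squares
    return beta[0] + beta[1] * cell_sat

rng = np.random.default_rng(7)
gauge_xy = rng.uniform(0, 100, size=(40, 2))                     # gauge locations, km
gauge_sat = rng.gamma(2.0, 3.0, size=40)                         # satellite estimate at gauges
gauge_obs = 1.3 * gauge_sat + rng.normal(0, 1.0, size=40)        # synthetic gauge rainfall

merged = gwr_correct(cell_xy=(50.0, 50.0), cell_sat=6.0,
                     gauge_xy=gauge_xy, gauge_obs=gauge_obs, gauge_sat=gauge_sat)
print("satellite estimate at cell: 6.0 mm, merged estimate: %.2f mm" % merged)
```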

  3. Path integral approach to the Wigner representation of canonical density operators for discrete systems coupled to harmonic baths.

    PubMed

    Montoya-Castillo, Andrés; Reichman, David R

    2017-01-14

    We derive a semi-analytical form for the Wigner transform for the canonical density operator of a discrete system coupled to a harmonic bath based on the path integral expansion of the Boltzmann factor. The introduction of this simple and controllable approach allows for the exact rendering of the canonical distribution and permits systematic convergence of static properties with respect to the number of path integral steps. In addition, the expressions derived here provide an exact and facile interface with quasi- and semi-classical dynamical methods, which enables the direct calculation of equilibrium time correlation functions within a wide array of approaches. We demonstrate that the present method represents a practical path for the calculation of thermodynamic data for the spin-boson and related systems. We illustrate the power of the present approach by detailing the improvement of the quality of Ehrenfest theory for the correlation function C_{zz}(t) = Re⟨σ_z(0)σ_z(t)⟩ for the spin-boson model with systematic convergence to the exact sampling function. Importantly, the numerically exact nature of the scheme presented here and its compatibility with semiclassical methods allows for the systematic testing of commonly used approximations for the Wigner-transformed canonical density.
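
    For reference, the object being constructed is the Wigner transform of the canonical density operator. The textbook definition is reproduced below (normalization conventions vary); the paper's contribution is a semi-analytical, path-integral-based evaluation of this quantity for a discrete system coupled to a harmonic bath, which is not reproduced here.

```latex
% Standard Wigner transform of the canonical density operator \hat{\rho} = e^{-\beta\hat{H}};
% this is only the definition, not the paper's semi-analytical expression.
\begin{equation}
  \rho_W(q,p) \;=\; \int \mathrm{d}s\; e^{-i p s/\hbar}\,
  \left\langle q + \tfrac{s}{2}\right| e^{-\beta \hat{H}} \left| q - \tfrac{s}{2}\right\rangle
\end{equation}
```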

  4. Mode-specific tunneling using the Qim path: theory and an application to full-dimensional malonaldehyde.

    PubMed

    Wang, Yimin; Bowman, Joel M

    2013-10-21

    We present a theory of mode-specific tunneling that makes use of the general tunneling path along the imaginary-frequency normal mode of the saddle point, Qim, and the associated relaxed potential, V(Qim) [Y. Wang and J. M. Bowman, J. Chem. Phys. 129, 121103 (2008)]. The novel aspect of the theory is the projection of the normal modes of a minimum onto the Qim path and the determination of turning points on V(Qim). From that projection, the change in tunneling upon mode excitation can be calculated. If the projection is zero, no enhancement of tunneling is predicted. In that case vibrationally adiabatic (VA) theory could apply. However, if the projection is large then VA theory is not applicable. The approach is applied to mode-specific tunneling in full-dimensional malonaldehyde, using an accurate full-dimensional potential energy surface. Results are in semi-quantitative agreement with experiment for modes that show large enhancement of the tunneling, relative to the ground state tunneling splitting. For the six out-of-plane modes, which have zero projection on the planar Qim path, VA theory does apply, and results from that theory agree qualitatively and even semi-quantitatively with experiment. We also verify the failure of simple VA theory for modes that show large enhancement of tunneling.

  5. A driven active mass damper by using output of a neural oscillator (effects of position control system changes on vibration mitigation performance)

    NASA Astrophysics Data System (ADS)

    Hongu, J.; Iba, D.; Sasaki, T.; Nakamura, M.; Moriwaki, I.

    2015-03-01

    In this paper, a design method for a PD controller, which is a part of a new active mass damper system using a neural oscillator for high-rise buildings, is proposed. The new system, mimicking the motion of bipedal mammals, is quite simple: its neural oscillator synchronizes with the acceleration response of the structure. The travel distance and direction of the auxiliary mass of the active mass damper are decided by the output of the neural oscillator, and the auxiliary mass is then transferred to the decided location by the PD controller. The performance of the PD controller must therefore be evaluated in terms of the vibration-energy-absorbing efficiency of the system. In order to bring the actual path driven by the PD controller into closer alignment with the ideal path, which is assumed to be a sinusoidal wave under resonance, the path of the auxiliary mass driven by the PD controller is first derived analytically, and the inner product between the ideal and the analytically derived path vectors is evaluated. The PD gains are then chosen to maximize this inner product. Finally, numerical simulations confirm the validity of the proposed design method for the PD controller.
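
    A toy numerical version of the gain-selection idea is sketched below: simulate the auxiliary mass under PD control following commanded positions, score each gain pair by the normalized inner product between the realized path and the ideal resonant sinusoid, and keep the best pair. The plant model, the square-wave command and all numbers are illustrative assumptions, not the controller designed in the paper.

```python
# Toy gain selection: drive a point-mass "auxiliary mass" with a PD controller
# toward a square-wave target (standing in for the oscillator output) and score
# each (Kp, Kd) pair by the normalized inner product between the realized path
# and the ideal sinusoidal path at resonance. All numbers are assumptions.
import numpy as np

def simulate_path(kp, kd, mass=1.0, f=1.0, amp=0.1, t_end=10.0, dt=1e-3):
    t = np.arange(0.0, t_end, dt)
    target = amp * np.sign(np.sin(2 * np.pi * f * t))   # commanded positions
    x, v = 0.0, 0.0
    path = np.empty_like(t)
    for i, tgt in enumerate(target):
        u = kp * (tgt - x) - kd * v          # PD control force
        v += (u / mass) * dt                 # explicit Euler integration
        x += v * dt
        path[i] = x
    ideal = amp * np.sin(2 * np.pi * f * t)  # assumed ideal resonant path
    return path, ideal

def score(kp, kd):
    path, ideal = simulate_path(kp, kd)
    return np.dot(path, ideal) / (np.linalg.norm(path) * np.linalg.norm(ideal))

# Coarse grid search over PD gains; the pair with the largest score is kept.
gains = [(kp, kd) for kp in (5, 20, 80) for kd in (1, 5, 20)]
best = max(gains, key=lambda g: score(*g))
print("best (Kp, Kd):", best)
```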

  6. Guiding High School Students through Applied Internship Projects in College Environments: A Met School Story

    ERIC Educational Resources Information Center

    Hassan, Said

    2012-01-01

    Many high school students are faced with the dilemma of "what next?" as they go through their final years at school. With new-economy jobs becoming more complex and career paths increasingly convoluted, the decision-making process is no simple task. What do these jobs and careers entail? How does what they are studying in school relate…

  7. Broadband Venetian-Blind Polarizer With Dual Vanes

    NASA Technical Reports Server (NTRS)

    Conroy, Bruce L.; Hoppe, Daniel J.

    1995-01-01

    Improved venetian-blind polarizer features optimized tandem, two-layer vane configuration reducing undesired reflections and deformation of radiation pattern below those of prior single-layer vane configuration. Consists of number of thin, parallel metal strips placed in path of propagating radio-frequency beam. Offers simple way to convert polarization from linear to circular or from circular to linear. Particularly useful for beam-wave-guide applications.

  8. Authentic Science Experiences as a Vehicle for Assessing Orientation towards Science and Science Careers Relative to Identity and Agency: A Response to "Learning from the Path Followed by Brad"

    ERIC Educational Resources Information Center

    Chinn, Pauline W. U.

    2009-01-01

    This response draws from the literature on adaptive learning, traditional ecological knowledge, and social-ecological systems to show that Brad's choice is not a simple decision between traditional ecological knowledge and authentic science. This perspective recognizes knowledge systems as dynamic, cultural and historical activities characterized…

  9. Leadership Begins at the End of Your Comfort Zone

    ERIC Educational Resources Information Center

    Ryder, Marilou

    2013-01-01

    The path one chooses in life may not always be an easy one. Through simple acts of failure and disappointment, the author learned a lot about her inspiration to become a leader and what it actually took to make them a reality. As a woman who had little confidence to handle major life changes, the author values every second of the experience.…

  10. New Opportunities--Implications and Recommendations for Business Education's Role in Non-Traditional and Organizational Settings

    ERIC Educational Resources Information Center

    Bronner, Michael; Kaliski, Burton S.

    2007-01-01

    Business educators have long been prepared for service in a wide range of settings; however, these settings have been concentrated in secondary education, teaching business subjects at the 7-12 levels with the emphasis on high school programs. Thus, for so many of those in the field of business education, their career path was quite simple: earn a…

  11. Reconciling mass functions with the star-forming main sequence via mergers

    NASA Astrophysics Data System (ADS)

    Steinhardt, Charles L.; Yurk, Dominic; Capak, Peter

    2017-06-01

    We combine star formation along the 'main sequence', quiescence, and clustering and merging to produce an empirical model for the evolution of individual galaxies. Main-sequence star formation alone would significantly steepen the stellar mass function towards low redshift, in sharp conflict with observation. However, a combination of star formation and merging produces a consistent result for a correct choice of the merger rate function. As a result, we are motivated to propose a model in which hierarchical merging is disconnected from environmentally independent star formation. This model can be tested via correlation functions and would produce new constraints on clustering and merging.

  12. Experimental evidence for collisional shock formation via two obliquely merging supersonic plasma jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merritt, Elizabeth C., E-mail: emerritt@lanl.gov; Adams, Colin S.; University of New Mexico, Albuquerque, New Mexico 87131

    We report spatially resolved measurements of the oblique merging of two supersonic laboratory plasma jets. The jets are formed and launched by pulsed-power-driven railguns using injected argon, and have electron density ∼10^{14} cm^{−3}, electron temperature ≈1.4 eV, ionization fraction near unity, and velocity ≈40 km/s just prior to merging. The jet merging produces a few-cm-thick stagnation layer, as observed in both fast-framing camera images and multi-chord interferometer data, consistent with collisional shock formation [E. C. Merritt et al., Phys. Rev. Lett. 111, 085003 (2013)].

  13. DETECTION OF SHOCK MERGING IN THE CHROMOSPHERE OF A SOLAR PORE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chae, Jongchul; Song, Donguk; Seo, Minju

    2015-06-01

    It was theoretically demonstrated that a shock propagating in the solar atmosphere can overtake another and merge with it. We provide clear observational evidence that shock merging does occur quite often in the chromosphere of sunspots. Using Hα imaging spectral data taken by the Fast Imaging Solar Spectrograph of the 1.6 m New Solar Telescope at the Big Bear Solar Observatory, we construct time–distance maps of line-of-sight velocities along two appropriately chosen cuts in a pore. The maps show a number of alternating redshift and blueshift ridges, and we identify each interface between a preceding redshift ridge and the following blueshift ridge as a shock ridge. An important finding of ours is that two successive shock ridges often merge with each other. This finding can be theoretically explained by the merging of magneto-acoustic shock waves propagating with lower speeds of about 10 km s^{−1} and those propagating at higher speeds of about 16–22 km s^{−1}. The shock merging is an important nonlinear dynamical process of the solar chromosphere that can bridge the gap between higher-frequency chromospheric oscillations and lower-frequency dynamic phenomena such as fibrils.

  14. Effect of adaptive cruise control systems on mixed traffic flow near an on-ramp

    NASA Astrophysics Data System (ADS)

    Davis, L. C.

    2007-06-01

    Mixed traffic flow consisting of vehicles equipped with adaptive cruise control (ACC) and manually driven vehicles is analyzed using car-following simulations. Simulations of merging from an on-ramp onto a freeway reported in the literature have not thus far demonstrated a substantial positive impact of ACC. In this paper cooperative merging for ACC vehicles is proposed to improve throughput and increase distance traveled in a fixed time. In such a system an ACC vehicle senses not only the preceding vehicle in the same lane but also the vehicle immediately in front in the other lane. Prior to reaching the merge region, the ACC vehicle adjusts its velocity to ensure that a safe gap for merging is obtained. If on-ramp demand is moderate, cooperative merging produces significant improvement in throughput (20%) and increases up to 3.6 km in distance traveled in 600 s for 50% ACC mixed flow relative to the flow of all-manual vehicles. For large demand, it is shown that autonomous merging with cooperation in the flow of all ACC vehicles leads to throughput limited only by the downstream capacity, which is determined by speed limit and headway time.
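
    The cooperative-merging rule described above can be pictured with a very small sketch: before the merge region, the ACC vehicle treats the nearer of its own-lane leader and the on-ramp vehicle as the effective leader and picks a speed that restores a safe time gap, subject to a comfortable deceleration limit. The headway policy and constants below are assumptions for illustration, not the control law used in the simulations.

```python
# Minimal sketch of a cooperative-merging speed adjustment: before the merge
# region the ACC vehicle considers the nearer of (a) its own-lane leader and
# (b) the on-ramp vehicle projected onto the mainline, and picks a speed that
# restores a safe time headway. Constants are illustrative assumptions.

SAFE_HEADWAY_S = 1.4    # desired time gap (s)
MAX_DECEL = 3.0         # comfortable deceleration limit (m/s^2)

def cooperative_speed(own_pos, own_speed, leader_pos, ramp_pos, dt=0.1):
    """Return the speed the ACC vehicle should adopt for the next time step."""
    # Effective leader: whichever relevant vehicle is closest ahead of us.
    gap = min(leader_pos, ramp_pos) - own_pos
    desired_speed = gap / SAFE_HEADWAY_S            # speed giving the safe gap
    # Never demand more than a comfortable deceleration within one step.
    return max(desired_speed, own_speed - MAX_DECEL * dt)

# Example: own-lane leader 40 m ahead, on-ramp vehicle effectively 25 m ahead.
v_next = cooperative_speed(own_pos=0.0, own_speed=30.0,
                           leader_pos=40.0, ramp_pos=25.0)
print(f"adjusted speed for next step: {v_next:.1f} m/s")
```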

  15. Velocity Inversion In Cylindrical Couette Gas Flows

    NASA Astrophysics Data System (ADS)

    Dongari, Nishanth; Barber, Robert W.; Emerson, David R.; Zhang, Yonghao; Reese, Jason M.

    2012-05-01

    We investigate a power-law probability distribution function to describe the mean free path of rarefied gas molecules in non-planar geometries. A new curvature-dependent model is derived by taking into account the boundary-limiting effects on the molecular mean free path for surfaces with both convex and concave curvatures. In comparison to a planar wall, we find that the mean free path for a convex surface is higher at the wall and exhibits a sharper gradient within the Knudsen layer. In contrast, a concave wall exhibits a lower mean free path near the surface and the gradients in the Knudsen layer are shallower. The Navier-Stokes constitutive relations and velocity-slip boundary conditions are modified based on a power-law scaling to describe the mean free path, in accordance with the kinetic theory of gases, i.e. transport properties can be described in terms of the mean free path. Velocity profiles for isothermal cylindrical Couette flow are obtained using the power-law model. We demonstrate that our model is more accurate than the classical slip solution, especially in the transition regime, and we are able to capture important non-linear trends associated with the non-equilibrium physics of the Knudsen layer. In addition, we establish a new criterion for the critical accommodation coefficient that leads to the non-intuitive phenomenon of velocity inversion. Our results are compared with conventional hydrodynamic models and direct simulation Monte Carlo data. The power-law model predicts that the critical accommodation coefficient is significantly lower than that calculated using the classical slip solution and is in good agreement with available DSMC data. Our proposed constitutive scaling for non-planar surfaces is based on simple physical arguments and can be readily implemented in conventional fluid dynamics codes for arbitrary geometric configurations.

  16. Iterative retrieval of surface emissivity and temperature for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1997-11-01

    The central problem of temperature-emissivity separation is that we obtain N spectral measurements of radiance and need to find N + 1 unknowns (N emissivities and one temperature). To solve this problem in the presence of the atmosphere we need to find even more unknowns: N spectral transmissions τ_atmo(λ), N up-welling path radiances L_path↑(λ), and N down-welling path radiances L_path↓(λ). Fortunately there are radiative transfer codes such as MODTRAN 3 and FASCODE available to get good estimates of τ_atmo(λ), L_path↑(λ) and L_path↓(λ) to the order of a few percent. With the growing use of hyperspectral imagers, e.g. AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. We believe that this will enable us to get around using the present temperature-emissivity separation (TES) algorithms, using methods which take advantage of the many channels available in hyperspectral imagers. The first idea we had is to take advantage of the simple fact that a typical surface emissivity spectrum is rather smooth compared to spectral features introduced by the atmosphere. Thus iterative solution techniques can be devised which retrieve emissivity spectra ε based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code and spectral libraries. By varying the surface temperature over a small range a series of emissivity spectra are calculated. The one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
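
    The smoothness-driven iteration can be summarized in a few lines of code: for each trial surface temperature, invert the standard thermal-infrared radiative transfer equation for emissivity and keep the trial whose emissivity spectrum is smoothest. The sketch below uses that standard equation with synthetic atmospheric terms as placeholders; it is not the author's algorithm or data.

```python
# Sketch of smoothness-based temperature-emissivity separation (TES): for each
# trial surface temperature, invert L_obs = tau*(eps*B + (1-eps)*L_down) + L_up
# for eps and keep the trial whose emissivity spectrum is smoothest. The
# atmospheric terms below are synthetic placeholders, not MODTRAN output.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23       # SI constants

def planck(wl_um, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 um^-1."""
    wl = wl_um * 1e-6
    B = 2.0 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1.0)
    return B * 1e-6                             # per metre -> per micrometre

def emissivity_for_T(L_obs, wl_um, T, tau, L_up, L_down):
    """Invert the TIR radiative transfer equation for emissivity."""
    B = planck(wl_um, T)
    return (L_obs - L_up - tau * L_down) / (tau * (B - L_down))

def roughness(eps):
    """Spectral smoothness metric: sum of squared second differences."""
    return np.sum(np.diff(eps, n=2) ** 2)

def smooth_tes(L_obs, wl_um, tau, L_up, L_down, T_grid):
    """Pick the trial temperature whose retrieved emissivity is smoothest."""
    T_best = min(T_grid, key=lambda T: roughness(
        emissivity_for_T(L_obs, wl_um, T, tau, L_up, L_down)))
    return T_best, emissivity_for_T(L_obs, wl_um, T_best, tau, L_up, L_down)

# Synthetic check: forward-model a radiance spectrum from a smooth emissivity
# and a structured (line-bearing) downwelling term, then recover the temperature.
wl = np.linspace(8.0, 12.0, 60)                 # thermal-IR band, micrometres
tau = np.full_like(wl, 0.85)
L_up = 0.4 * np.ones_like(wl)
L_dn = 1.2 + 0.5 * np.sin(8.0 * wl) ** 2        # mimic atmospheric lines
eps_true = 0.96 - 0.005 * (wl - 10.0) ** 2      # smooth surface emissivity
T_true = 300.0
L_obs = tau * (eps_true * planck(wl, T_true) + (1 - eps_true) * L_dn) + L_up
T_hat, eps_hat = smooth_tes(L_obs, wl, tau, L_up, L_dn, np.arange(295.0, 305.5, 0.5))
print(T_hat)                                    # expected: 300.0
```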

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiefer, H., E-mail: johann.schiefer@kssg.ch; Peters, S.; Plasswilm, L.

    Purpose: For stereotactic radiosurgery, the AAPM Report No. 54 [AAPM Task Group 42 (AAPM, 1995)] requires the overall stability of the isocenter (couch, gantry, and collimator) to be within a 1 mm radius. In reality, a rotating system has no rigid axis and thus no isocenter point which is fixed in space. As a consequence, the isocenter concept is reviewed here. It is the aim to develop a measurement method following the revised definitions. Methods: The mechanical isocenter is defined here by the point which rotates on the shortest path in the room coordinate system. The path is labeled as "isocenter path." Its center of gravity is assumed to be the mechanical isocenter. Following this definition, an image-based and radiation-free measurement method was developed. Multiple marker pairs in a plane perpendicular to the assumed gantry rotation axis of a linear accelerator are imaged with a smartphone application from several rotation angles. Each marker pair represents an independent measuring system. The room coordinates of the isocenter path and the mechanical isocenter are calculated based on the marker coordinates. The presented measurement method is by this means strictly focused on the mechanical isocenter. Results: The measurement result is available virtually immediately following completion of measurement. When 12 independent measurement systems are evaluated, the standard deviations of the isocenter path points and mechanical isocenter coordinates are 0.02 and 0.002 mm, respectively. Conclusions: The measurement is highly accurate, time efficient, and simple to adapt. It is therefore suitable for regular checks of the mechanical isocenter characteristics of the gantry and collimator rotation axis. When the isocenter path is reproducible and its extent is in the range of the needed geometrical accuracy, it should be taken into account in the planning process. This is especially true for stereotactic treatments and radiosurgery.

  18. Mass and Environment as Drivers of Galaxy Evolution in SDSS and zCOSMOS and the Origin of the Schechter Function

    NASA Astrophysics Data System (ADS)

    Peng, Ying-jie; Lilly, Simon J.; Kovač, Katarina; Bolzonella, Micol; Pozzetti, Lucia; Renzini, Alvio; Zamorani, Gianni; Ilbert, Olivier; Knobel, Christian; Iovino, Angela; Maier, Christian; Cucciati, Olga; Tasca, Lidia; Carollo, C. Marcella; Silverman, John; Kampczyk, Pawel; de Ravel, Loic; Sanders, David; Scoville, Nicholas; Contini, Thierry; Mainieri, Vincenzo; Scodeggio, Marco; Kneib, Jean-Paul; Le Fèvre, Olivier; Bardelli, Sandro; Bongiorno, Angela; Caputi, Karina; Coppa, Graziano; de la Torre, Sylvain; Franzetti, Paolo; Garilli, Bianca; Lamareille, Fabrice; Le Borgne, Jean-Francois; Le Brun, Vincent; Mignoli, Marco; Perez Montero, Enrique; Pello, Roser; Ricciardelli, Elena; Tanaka, Masayuki; Tresse, Laurence; Vergani, Daniela; Welikala, Niraj; Zucca, Elena; Oesch, Pascal; Abbas, Ummi; Barnes, Luke; Bordoloi, Rongmon; Bottini, Dario; Cappi, Alberto; Cassata, Paolo; Cimatti, Andrea; Fumana, Marco; Hasinger, Gunther; Koekemoer, Anton; Leauthaud, Alexei; Maccagni, Dario; Marinoni, Christian; McCracken, Henry; Memeo, Pierdomenico; Meneux, Baptiste; Nair, Preethi; Porciani, Cristiano; Presotto, Valentina; Scaramella, Roberto

    2010-09-01

    We explore the simple inter-relationships between mass, star formation rate, and environment in the SDSS, zCOSMOS, and other deep surveys. We take a purely empirical approach in identifying those features of galaxy evolution that are demanded by the data and then explore the analytic consequences of these. We show that the differential effects of mass and environment are completely separable to z ~ 1, leading to the idea of two distinct processes of "mass quenching" and "environment quenching." The effect of environment quenching, at fixed over-density, evidently does not change with epoch to z ~ 1 in zCOSMOS, suggesting that the environment quenching occurs as large-scale structure develops in the universe, probably through the cessation of star formation in 30%-70% of satellite galaxies. In contrast, mass quenching appears to be a more dynamic process, governed by a quenching rate. We show that the observed constancy of the Schechter M* and αs for star-forming galaxies demands that the quenching of galaxies around and above M* must follow a rate that is statistically proportional to their star formation rates (or closely mimic such a dependence). We then postulate that this simple mass-quenching law in fact holds over a much broader range of stellar mass (2 dex) and cosmic time. We show that the combination of these two quenching processes, plus some additional quenching due to merging naturally produces (1) a quasi-static single Schechter mass function for star-forming galaxies with an exponential cutoff at a value M* that is set uniquely by the constant of proportionality between the star formation and mass quenching rates and (2) a double Schechter function for passive galaxies with two components. The dominant component (at high masses) is produced by mass quenching and has exactly the same M* as the star-forming galaxies but a faint end slope that differs by Δαs ~ 1. The other component is produced by environment effects and has the same M* and αs as the star-forming galaxies but an amplitude that is strongly dependent on environment. Subsequent merging of quenched galaxies will modify these predictions somewhat in the denser environments, mildly increasing M* and making αs slightly more negative. All of these detailed quantitative inter-relationships between the Schechter parameters of the star-forming and passive galaxies, across a broad range of environments, are indeed seen to high accuracy in the SDSS, lending strong support to our simple empirically based model. We find that the amount of post-quenching "dry merging" that could have occurred is quite constrained. Our model gives a prediction for the mass function of the population of transitory objects that are in the process of being quenched. Our simple empirical laws for the cessation of star formation in galaxies also naturally produce the "anti-hierarchical" run of mean age with mass for passive galaxies, as well as the qualitative variation of formation timescale indicated by the relative α-element abundances. Based on observations undertaken at the European Southern Observatory (ESO) Very Large Telescope (VLT) under Large Program 175.A-0839. 
Also based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, operated by AURA Inc., under NASA contract NAS 5-26555, with the Subaru Telescope, operated by the National Astronomical Observatory of Japan, with the telescopes of the National Optical Astronomy Observatory, operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation, and with the Canada-France-Hawaii Telescope, operated by the National Research Council of Canada, the Centre National de la Recherche Scientifique de France and the University of Hawaii.
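
    For readers unfamiliar with the terminology, the Schechter parameterization referred to throughout is the standard form below (φ* sets the normalization, M* the exponential cutoff mass, and α the faint-end slope); the double Schechter function for passive galaxies described above is the sum of two such components.

```latex
% Textbook Schechter mass function; phi* is the normalization, M* the
% exponential cutoff mass and alpha the faint-end slope.
\begin{equation}
  \phi(M)\,\mathrm{d}M \;=\; \phi^{*}\left(\frac{M}{M^{*}}\right)^{\alpha}
  e^{-M/M^{*}}\,\frac{\mathrm{d}M}{M^{*}}
\end{equation}
```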

  19. Transport properties of strongly correlated electrons in quantum dots studied with a simple circuit model.

    PubMed

    Martins, G B; Büsser, C A; Al-Hassanieh, K A; Anda, E V; Moreo, A; Dagotto, E

    2006-02-17

    Numerical calculations are shown to reproduce the main results of recent experiments involving nonlocal spin control in quantum dots [Craig et al., Science 304, 565 (2004)]. In particular, the experimentally reported zero-bias-peak splitting is clearly observed in our studies. To understand these results, a simple "circuit model" is introduced and shown to qualitatively describe the experiments. The main idea is that the splitting originates in a Fano antiresonance, which is caused by having one quantum dot side-connected relative to the current's path. This scenario provides an explanation of the results of Craig et al. that is an alternative to the RKKY proposal, also addressed here.

  20. Simple-to-Complex Transformation in Liquid Rubidium.

    PubMed

    Gorelli, Federico A; De Panfilis, Simone; Bryk, Taras; Ulivi, Lorenzo; Garbarino, Gaston; Parisiades, Paraskevas; Santoro, Mario

    2018-05-18

    We investigated the atomic structure of liquid Rb along an isothermal path at 573 K, up to 23 GPa, by X-ray diffraction measurements. By raising the pressure, we observed a liquid-liquid transformation from a simple metallic liquid to a complex one. The transition occurs at 7.5 ± 1 GPa which is slightly above the first maximum of the T-P melting line. This transformation is traced back to the density-induced hybridization of highest electronic orbitals leading to the accumulation of valence electrons between Rb atoms and to the formation of interstitial atomic shells, a behavior that Rb shares with Cs and is likely to be common to all alkali metals.

  1. Beam shuttering interferometer and method

    DOEpatents

    Deason, V.A.; Lassahn, G.D.

    1993-07-27

    A method and apparatus resulting in the simplification of phase shifting interferometry by eliminating the requirement to know the phase shift between interferograms or to keep the phase shift between interferograms constant. The present invention provides a simple, inexpensive means to shutter each independent beam of the interferometer in order to facilitate the data acquisition requirements for optical interferometry and phase shifting interferometry. By eliminating the requirement to know the phase shift between interferograms or to keep the phase shift constant, a simple, economical means and apparatus for performing the technique of phase shifting interferometry is provided which, by thermally expanding a fiber optical cable, changes the optical path distance of one incident beam relative to another.

  2. Beam shuttering interferometer and method

    DOEpatents

    Deason, Vance A.; Lassahn, Gordon D.

    1993-01-01

    A method and apparatus resulting in the simplification of phase shifting interferometry by eliminating the requirement to know the phase shift between interferograms or to keep the phase shift between interferograms constant. The present invention provides a simple, inexpensive means to shutter each independent beam of the interferometer in order to facilitate the data acquisition requirements for optical interferometry and phase shifting interferometry. By eliminating the requirement to know the phase shift between interferograms or to keep the phase shift constant, a simple, economical means and apparatus for performing the technique of phase shifting interferometry is provided which, by thermally expanding a fiber optical cable, changes the optical path distance of one incident beam relative to another.

  3. Simple turbulence measurements with azopolymer thin films.

    PubMed

    Barillé, Regis; Pérez, Darío G; Morille, Yohann; Zielińska, Sonia; Ortyl, Ewelina

    2013-04-01

    A simple method to measure the influence of a turbid medium on laser beam propagation is proposed. This measurement is based on the inscription of a surface relief grating (SRG) on an azopolymer thin film. The grating, obtained with a single laser beam after propagation through a turbulent medium, is perturbed and directly analyzed by a CCD camera through its diffraction pattern. The inscribed SRG is later scanned with an atomic force microscope and analyzed with the Radon transform. This method has the advantage of using a single beam to remotely inscribe a grating that records perturbations along the beam path. A method to evaluate the refractive index structure constant is also developed.

  4. Diazonium cation-exchanged clay: an efficient, unfrequented route for making clay/polymer nanocomposites.

    PubMed

    Salmi, Zakaria; Benzarti, Karim; Chehimi, Mohamed M

    2013-11-05

    We describe a simple, off-the-beaten-path strategy for making clay/polymer nanocomposites through tandem diazonium salt interface chemistry and radical photopolymerization. Prior to photopolymerization, sodium montmorillonite (MMT) was ion exchanged with N,N'-dimethylbenzenediazonium cation (DMA) from the tetrafluoroborate salt precursor. DMA acts as a hydrogen donor for benzophenone in solution; this pair of co-initiators permits us to photopolymerize glycidyl methacrylate (GMA) between the lamellae of the diazonium-modified clay, therefore providing intercalated MMT-PGMA nanocomposites with an onset of exfoliation. This work conclusively provides a new approach for bridging reactive and functional polymers to layered nanomaterials via aryl diazonium salts in a simple, fast, efficient cation-exchange approach.

  5. A Bayesian framework for infrasound location

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.

    2010-04-01

    We develop a framework for location of infrasound events using backazimuth and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL) developed here estimates event location and associated credibility regions. BISL accounts for unknown source-to-array path or phase by formulating infrasonic group velocity as random. Differences between observed and predicted source-to-array traveltimes are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back azimuths and arrival times for estimating well-constrained event locations. BISL is an extension to methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
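
    A toy grid-search locator in the same spirit is sketched below: back-azimuth and travel-time residuals from several arrays are combined under independent Gaussian errors and the best-fitting grid node is kept. The array geometry, error budgets and the fixed group velocity are assumptions for illustration; BISL itself treats the group velocity as random and returns credibility regions rather than a single point estimate.

```python
# Toy grid-search locator in the spirit of a Bayesian infrasound locator:
# combine back-azimuth and arrival-time residuals from several arrays under
# independent Gaussian errors and keep the best-fitting grid node.
import numpy as np

ARRAYS = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 90.0]])   # array positions (km)
SIGMA_AZ = np.radians(3.0)        # back-azimuth error (radians)
SIGMA_T = 5.0                     # arrival-time error (seconds)
V_GROUP = 0.30                    # assumed fixed group velocity (km/s)

def misfit(src, t0, obs_az, obs_t):
    """Sum of squared, normalized back-azimuth and arrival-time residuals."""
    d = src - ARRAYS
    pred_az = np.arctan2(d[:, 0], d[:, 1])                   # azimuth east of north
    pred_t = t0 + np.hypot(d[:, 0], d[:, 1]) / V_GROUP
    daz = np.angle(np.exp(1j * (obs_az - pred_az)))           # wrap to [-pi, pi]
    return np.sum((daz / SIGMA_AZ) ** 2) + np.sum(((obs_t - pred_t) / SIGMA_T) ** 2)

def locate(obs_az, obs_t, half_width=100.0, step=4.0):
    """Coarse grid search over source position and origin time."""
    best, best_val = None, np.inf
    for x in np.arange(-half_width, half_width + step, step):
        for y in np.arange(-half_width, half_width + step, step):
            for t0 in np.arange(-20.0, 21.0, 5.0):
                val = misfit(np.array([x, y]), t0, obs_az, obs_t)
                if val < best_val:
                    best, best_val = (x, y, t0), val
    return best

# Synthetic event at (40, 30) km with origin time 0 s, observed without noise.
true_src = np.array([40.0, 30.0])
d = true_src - ARRAYS
obs_az = np.arctan2(d[:, 0], d[:, 1])
obs_t = np.hypot(d[:, 0], d[:, 1]) / V_GROUP
print(locate(obs_az, obs_t))      # expect a node close to (40, 30, 0)
```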

  6. Effect of the track potential on the motion and energy flow of secondary electrons created from heavy-ion irradiation

    NASA Astrophysics Data System (ADS)

    Moribayashi, Kengo

    2018-05-01

    Using simulations, we have evaluated the effect of the track potential on the motion and energy flow of secondary electrons, with the goal of determining the spatial distribution of energy deposition due to irradiation with heavy ions. We have simulated this effect as a function of the mean path τ between the incident ion-impact-ionization events at ion energies Eion. Here, the track potential is the potential formed by the electric field near the incident ion path. The simulations indicate that this effect is mainly determined by τ and hardly depends on Eion. To understand heavy ion beam science more deeply and to reduce the time required by simulations, we have proposed simple approximation methods that approximately reproduce the simulation results presented here.

  7. Inhibition of electron thermal conduction by electromagnetic instabilities. [in stellar coronas

    NASA Technical Reports Server (NTRS)

    Levinson, Amir; Eichler, David

    1992-01-01

    Heat flux inhibition by electromagnetic instabilities in a hot magnetized plasma is investigated. Low-frequency electromagnetic waves become unstable due to anisotropy of the electron distribution function. The chaotic magnetic field thus generated scatters the electrons with a specific effective mean free path. Saturation of the instability due to wave-wave interaction, nonlinear scattering, wave propagation, and collisional damping is considered. The effective mean free path is found self-consistently, using a simple model to estimate saturation level and scattering, and is shown to decrease with the temperature gradient length. The results, limited to the assumptions of the model, are applied to astrophysical systems. For some interstellar clouds the instability is found to be important. Collisional damping stabilizes the plasma, and the heat conduction can be dominated by superthermal electrons.

  8. Point and path performance of light aircraft: A review and analysis

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Summey, D. C.; Johnson, W. D.

    1973-01-01

    The literature on methods for predicting the performance of light aircraft is reviewed. The methods discussed in the review extend from the classical instantaneous maximum or minimum technique to techniques for generating mathematically optimum flight paths. Classical point performance techniques are shown to be adequate in many cases but their accuracies are compromised by the need to use simple lift, drag, and thrust relations in order to get closed form solutions. Also the investigation of the effect of changes in weight, altitude, configuration, etc. involves many essentially repetitive calculations. Accordingly, computer programs are provided which can fit arbitrary drag polars and power curves with very high precision and which can then use the resulting fits to compute the performance under the assumption that the aircraft is not accelerating.
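
    The closed-form relations mentioned above are typically parabolic drag polars of the form CD = CD0 + k·CL². The snippet below shows a least-squares fit of that form to a handful of made-up (CL, CD) samples; it illustrates the kind of simple fit classical point-performance methods rely on, not the programs provided with the report.

```python
# Least-squares fit of the classical parabolic drag polar CD = CD0 + k*CL^2.
# The (CL, CD) samples are made-up illustrative numbers, not data from the report.
import numpy as np

cl = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
cd = np.array([0.026, 0.032, 0.043, 0.058, 0.078, 0.101])

A = np.column_stack([np.ones_like(cl), cl ** 2])   # design matrix [1, CL^2]
(cd0, k), *_ = np.linalg.lstsq(A, cd, rcond=None)
print(f"CD0 = {cd0:.4f}, k = {k:.4f}")

# For this polar, maximum lift-to-drag ratio (best range for a propeller
# aircraft) occurs where CD0 = k*CL^2, i.e. at CL = sqrt(CD0 / k).
print("CL for max L/D:", np.sqrt(cd0 / k))
```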

  9. Development of a visible light transmission (VLT) measurement system using an open-path optical method

    NASA Astrophysics Data System (ADS)

    Nurulain, S.; Manap, H.

    2017-09-01

    This paper describes a visible light transmission (VLT) measurement system using an optical method. The VLT rate plays an important role in determining the visibility of a medium. Current instruments for measuring visibility require a bulky set-up, are costly, and often fail in low-light environments. This research focuses on the development of a VLT measurement system with a simple, low-cost experimental set-up. An open-path optical technique is used to measure a series of thin films of known VLT that act as samples of different visibilities. The measurement system is able to measure the light intensity transmitted by these thin films within the visible light region (535-540 nm), with a response time of less than 1 s.

  10. Characterizing the Global Impact of P2P Overlays on the AS-Level Underlay

    NASA Astrophysics Data System (ADS)

    Rasti, Amir Hassan; Rejaie, Reza; Willinger, Walter

    This paper examines the problem of characterizing and assessing the global impact of the load imposed by a Peer-to-Peer (P2P) overlay on the AS-level underlay. In particular, we capture Gnutella snapshots for four consecutive years, obtain the corresponding AS-level topology snapshots of the Internet and infer the AS-paths associated with each overlay connection. Assuming a simple model of overlay traffic, we analyze the observed load imposed by these Gnutella snapshots on the AS-level underlay using metrics that characterize the load seen on individual AS-paths and by the transit ASes, illustrate the churn among the top transit ASes during this 4-year period, and describe the propagation of traffic within the AS-level hierarchy.

  11. Path planning on cellular nonlinear network using active wave computing technique

    NASA Astrophysics Data System (ADS)

    Yeniçeri, Ramazan; Yalçın, Müstak E.

    2009-05-01

    This paper introduces a simple algorithm to solve the robot path-finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consisting of relaxation oscillators, is used to generate active waves and to process the visual information. The network, which has been implemented on a Field Programmable Gate Array (FPGA) chip, can be programmed, controlled and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Active waves are employed to cover the whole medium with their own dynamics, starting from an initial point. The proposed algorithm works by observing the motion of the wave front of the active waves. The host program first loads the arena model onto the active wave generator network and commands it to start generation. It then periodically pulls the network image from the generator hardware to analyze the evolution of the active waves. When the algorithm completes, a vectorial data image is generated; the path from any pixel of this image to the wave-generating pixel is traced by following the vectors on the image. The robot arena may be a complicated labyrinth or may have a simple geometry, but the arena surface must always be flat. Our Autowave Generator CNN implementation, hosted on the Xilinx University Program Virtex-II Pro Development System, is operated by a MATLAB program running on the host computer. As the active wave generator hardware has 16,384 neurons, an arena with 128 × 128 pixels can be modeled and solved by the algorithm.
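
    A purely software analogue of the wave-based idea is easy to write down: propagate a breadth-first "wave" from the goal cell over the free cells of a grid arena, let every cell remember the neighbour the wave arrived from (the vector field), and read off a path from any start cell by following those pointers. The sketch below mimics the role of the active wave front; it is not the CNN/FPGA implementation described in the paper.

```python
# Software analogue of wave-based path planning: a breadth-first "wave" spreads
# from the goal over free cells; each cell remembers the neighbour the wave came
# from (the vector field), and a path from any start cell is read by following
# those pointers back to the wave source.
from collections import deque

def plan(grid, goal):
    """grid: list of strings, '#' = obstacle. Returns a predecessor map toward goal."""
    rows, cols = len(grid), len(grid[0])
    came_from = {goal: None}
    wave = deque([goal])
    while wave:
        r, c = wave.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#' and nxt not in came_from):
                came_from[nxt] = (r, c)       # the wave reached nxt from (r, c)
                wave.append(nxt)
    return came_from

def path_from(start, came_from):
    """Follow the stored vectors from start back to the wave source (the goal)."""
    if start not in came_from:
        return None                            # start unreachable
    path, cell = [start], came_from[start]
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return path

arena = ["........",
         ".######.",
         "......#.",
         ".####.#.",
         "........"]
vectors = plan(arena, goal=(4, 7))
print(path_from((0, 0), vectors))
```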

  12. Microfluidic Chips Controlled with Elastomeric Microvalve Arrays

    PubMed Central

    Li, Nianzhen; Sip, Chris; Folch, Albert

    2007-01-01

    Miniaturized microfluidic systems provide simple and effective solutions for low-cost point-of-care diagnostics and high-throughput biomedical assays. Robust flow control and precise fluidic volumes are two critical requirements for these applications. We have developed microfluidic chips featuring elastomeric polydimethylsiloxane (PDMS) microvalve arrays that: 1) need no extra energy source to close the fluidic path, hence the loaded device is highly portable; and 2) allow for microfabricating deep (up to 1 mm) channels with vertical sidewalls, resulting in very precise features. The PDMS microvalve-based devices consist of three layers: a fluidic layer containing fluidic paths and microchambers of various sizes, a control layer containing the microchannels necessary to actuate the fluidic path with microvalves, and a middle thin PDMS membrane that is bound to the control layer. The fluidic and control layers are made by replica molding of PDMS from SU-8 photoresist masters, and the thin PDMS membrane is made by spinning PDMS at specified heights. The control layer is bonded to the thin PDMS membrane after oxygen activation of both, and then assembled with the fluidic layer. The microvalves are closed at rest and can be opened by applying negative pressure (e.g., house vacuum). Microvalve closure and opening are automated via solenoid valves controlled by computer software. Here, we demonstrate two microvalve-based microfluidic chips for two different applications. The first chip allows for storing and mixing precise sub-nanoliter volumes of aqueous solutions at various mixing ratios. The second chip allows for computer-controlled perfusion of microfluidic cell cultures. The devices are easy to fabricate and simple to control. Due to the biocompatibility of PDMS, these microchips could have broad applications in miniaturized diagnostic assays as well as basic cell biology studies. PMID:18989408

  13. A simple Lagrangian forecast system with aviation forecast potential

    NASA Technical Reports Server (NTRS)

    Petersen, R. A.; Homan, J. H.

    1983-01-01

    A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.

  14. Monte Carlo investigation of transient acoustic fields in partially or completely bounded medium. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thanedar, B. D.

    1972-01-01

    A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method is in response to the need for a numerical method to supplement analytical methods of solution which are only valid when the boundaries have simple shapes, rather than for a medium that is bounded. For the analysis, a suitable model was created from which was developed an algorithm for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.

  15. A simple formula for estimating Stark widths of neutral lines. [of stellar atmospheres

    NASA Technical Reports Server (NTRS)

    Freudenstein, S. A.; Cooper, J.

    1978-01-01

    A simple formula for the prediction of Stark widths of neutral lines similar to the semiempirical method of Griem (1968) for ion lines is presented. This formula is a simplification of the quantum-mechanical classical path impact theory and can be used for complicated atoms for which detailed calculations are not readily available, provided that the effective position of the closest interacting level is known. The expression does not require the use of a computer. The formula has been applied to a limited number of neutral lines of interest, and the width obtained is compared with the much more complete calculations of Bennett and Griem (1971). The agreement generally is well within 50% of the published value for the lines investigated. Comparisons with other formulas are also made. In addition, a simple estimate for the ion-broadening parameter is given.

  16. Robust high-performance control for robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1991-01-01

    Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies have been merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.

  17. Robust high-performance control for robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies were merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.
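
    The two-part control law described in these records can be illustrated on a single joint: a feedforward torque computed from the nominal (known) part of the dynamics along the desired trajectory, plus a PID feedback correction acting on the tracking error. The plant parameters, gains and the plain integral action standing in for adaptation are assumptions, not the patented controller.

```python
# Minimal single-joint illustration of a feedforward-plus-feedback control law:
# nominal inverse dynamics along the desired trajectory plus a PID correction.
# The plant, gains and the trivial "adaptation" (plain integral action) are
# illustrative assumptions, not the patented controller.
import numpy as np

I_NOMINAL, B_NOMINAL = 0.9, 0.4      # modelled inertia / damping (feedforward)
I_TRUE, B_TRUE = 1.0, 0.5            # actual plant (slightly different)
KP, KI, KD = 40.0, 10.0, 8.0

def track(t_end=5.0, dt=1e-3):
    t = np.arange(0.0, t_end, dt)
    q_des = 0.5 * (1 - np.cos(np.pi * t / t_end))        # smooth point-to-point move
    qd_des = np.gradient(q_des, dt)
    qdd_des = np.gradient(qd_des, dt)
    q = qd = integ = 0.0
    err_log = np.empty_like(t)
    for i in range(len(t)):
        e = q_des[i] - q
        integ += e * dt
        # Feedforward: nominal inverse dynamics along the desired trajectory.
        tau_ff = I_NOMINAL * qdd_des[i] + B_NOMINAL * qd_des[i]
        # Feedback: simple PID on the tracking error.
        tau_fb = KP * e + KI * integ + KD * (qd_des[i] - qd)
        qdd = (tau_ff + tau_fb - B_TRUE * qd) / I_TRUE    # true plant response
        qd += qdd * dt
        q += qd * dt
        err_log[i] = e
    return np.max(np.abs(err_log))

print(f"max tracking error: {track():.4f} rad")
```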

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Joshua; Tremaine, Scott; Chambers, John, E-mail: joshuajw@princeton.edu

    Collisional fragmentation is shown to not be a barrier to rocky planet formation at small distances from the host star. Simple analytic arguments demonstrate that rocky planet formation via collisions of homogeneous gravity-dominated bodies is possible down to distances of order the Roche radius (r_Roche). Extensive N-body simulations with initial bodies ≳1700 km that include plausible models for fragmentation and merging of gravity-dominated bodies confirm this conclusion and demonstrate that rocky planet formation is possible down to ∼1.1 r_Roche. At smaller distances, tidal effects cause collisions to be too fragmenting to allow mass buildup to a final, dynamically stable planetary system. We argue that even differentiated bodies can accumulate to form planets at distances that are not much larger than r_Roche.
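
    For context, a commonly quoted textbook estimate of the Roche radius for a fluid body of density ρ_p orbiting a primary of radius R_* and mean density ρ_* is given below; the paper may adopt a different convention, so this is only an order-of-magnitude reference.

```latex
% Textbook fluid-body Roche limit; rigid bodies give a smaller coefficient (about 1.26).
\begin{equation}
  r_{\mathrm{Roche}} \;\approx\; 2.44\, R_{*}
  \left(\frac{\rho_{*}}{\rho_{p}}\right)^{1/3}
\end{equation}
```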

  19. Motion of Molecular Probes and Viscosity Scaling in Polyelectrolyte Solutions at Physiological Ionic Strength

    PubMed Central

    Sozanski, Krzysztof; Wisniewska, Agnieszka; Kalwarczyk, Tomasz; Sznajder, Anna; Holyst, Robert

    2016-01-01

    We investigate transport properties of model polyelectrolyte systems at physiological ionic strength (0.154 M). Covering a broad range of flow length scales—from diffusion of molecular probes to macroscopic viscous flow—we establish a single, continuous function describing the scale dependent viscosity of high-salt polyelectrolyte solutions. The data are consistent with the model developed previously for electrically neutral polymers in a good solvent. The presented approach merges the power-law scaling concepts of de Gennes with the idea of exponential length scale dependence of effective viscosity in complex liquids. The result is a simple and applicable description of transport properties of high-salt polyelectrolyte solutions at all length scales, valid for motion of single molecules as well as macroscopic flow of the complex liquid. PMID:27536866

  20. Perl One-Liners: Bridging the Gap Between Large Data Sets and Analysis Tools.

    PubMed

    Hokamp, Karsten

    2015-01-01

    Computational analyses of biological data are becoming increasingly powerful, and researchers intending on carrying out their own analyses can often choose from a wide array of tools and resources. However, their application might be obstructed by the wide variety of different data formats that are in use, from standard, commonly used formats to output files from high-throughput analysis platforms. The latter are often too large to be opened, viewed, or edited by standard programs, potentially leading to a bottleneck in the analysis. Perl one-liners provide a simple solution to quickly reformat, filter, and merge data sets in preparation for downstream analyses. This chapter presents example code that can be easily adjusted to meet individual requirements. An online version is available at http://bioinf.gen.tcd.ie/pol.
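
    The chapter's examples are Perl one-liners; to keep the code language consistent with the other sketches in this listing, an equivalent quick filter-and-merge task is shown below in Python. File names, column indices and the score threshold are placeholders, not examples taken from the chapter.

```python
# Quick-and-dirty reformatting of the kind the chapter targets: filter rows of
# one tab-separated file by a score threshold and merge them with a second file
# on a shared key column. File names and column indices are placeholders.
import csv
import sys

def merge_on_key(hits_path, annot_path, key_col=0, score_col=2, min_score=30.0):
    # Load the annotation table into a dict keyed on its first column.
    with open(annot_path, newline="") as fh:
        annot = {row[0]: row[1:] for row in csv.reader(fh, delimiter="\t")}
    # Stream the hits file, keep rows above the threshold, append the annotation.
    with open(hits_path, newline="") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if float(row[score_col]) >= min_score and row[key_col] in annot:
                print("\t".join(row + annot[row[key_col]]))

if __name__ == "__main__":
    merge_on_key(sys.argv[1], sys.argv[2])   # e.g. python merge.py hits.tsv annot.tsv
```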

Top