Two antenna, two pass interferometric synthetic aperture radar
Martinez, Ana; Doerry, Armin W.; Bickel, Douglas L.
2005-06-28
A multi-antenna, multi-pass IFSAR mode utilizing data driven alignment of multiple independent passes can combine the scaling accuracy of a two-antenna, one-pass IFSAR mode with the height-noise performance of a one-antenna, two-pass IFSAR mode. A two-antenna, two-pass IFSAR mode can accurately estimate the larger antenna baseline from the data itself and reduce height-noise, allowing for more accurate information about target ground position locations and heights. The two-antenna, two-pass IFSAR mode can use coarser IFSAR data to estimate the larger antenna baseline. Multi-pass IFSAR can be extended to more than two (2) passes, thereby allowing true three-dimensional radar imaging from stand-off aircraft and satellite platforms.
Nagaoka, Tomoaki; Watanabe, Soichi
2012-01-01
Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapted three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture (CUDA) and the Message Passing Interface (MPI). Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in this multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than on a single multi-GPU workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use over 100 GB of GPU memory.
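As a concrete illustration of the CUDA+MPI decomposition described above, the sketch below shows the halo-exchange pattern such codes rest on, reduced to a 1D FDTD update in NumPy with mpi4py. This is our simplification, not the authors' code: grid size, step count, and the soft source are illustrative assumptions, and the per-node CUDA kernels are omitted.

```python
# Minimal sketch: 1D FDTD grid partitioned across MPI ranks, with one ghost
# cell exchanged per side per time step (run with mpiexec -n <ranks>).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_n = 1000                      # cells owned by this rank (illustrative)
c = 0.5                             # Courant number, normalized units
ez = np.zeros(local_n + 2)          # +2 ghost cells for neighbor halos
hy = np.zeros(local_n + 2)
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(2000):
    hy[1:-1] += c * (ez[2:] - ez[1:-1])            # H update needs ez to the right
    # pass last owned hy to the right neighbor's left ghost cell
    comm.Sendrecv(hy[-2:-1], dest=right, recvbuf=hy[0:1], source=left)
    ez[1:-1] += c * (hy[1:-1] - hy[:-2])           # E update needs hy to the left
    if rank == 0:                                  # soft Gaussian source (assumed)
        ez[1 + local_n // 2] += np.exp(-((step - 30) / 10.0) ** 2)
    # pass first owned ez to the left neighbor's right ghost cell
    comm.Sendrecv(ez[1:2], dest=left, recvbuf=ez[-1:], source=right)
```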
High-resolution Observations of Hα Spectra with a Subtractive Double Pass
NASA Astrophysics Data System (ADS)
Beck, C.; Rezaei, R.; Choudhary, D. P.; Gosain, S.; Tritschler, A.; Louis, R. E.
2018-02-01
High-resolution imaging spectroscopy in solar physics has relied on Fabry-Pérot interferometers (FPIs) in recent years. FPI systems, however, become technically challenging and expensive for telescopes larger than the 1 m class. A conventional slit spectrograph with diffraction-limited performance over a large field of view (FOV) can be built at much lower cost and effort. It can be converted into an imaging spectro(polari)meter using the concept of a subtractive double pass (SDP). We demonstrate that an SDP system can reach a performance similar to FPI-based systems, with high spatial and moderate spectral resolution across a FOV of 100'' × 100'' and a spectral coverage of 1 nm. We use Hα spectra taken with an SDP system at the Dunn Solar Telescope and complementary full-disc data to infer the properties of small-scale superpenumbral filaments. We find that the majority of all filaments end in patches of opposite-polarity fields. The internal fine-structure in the line-core intensity of Hα at spatial scales of about 0.5'' exceeds that in other parameters such as the line width, indicating small-scale opacity effects in a larger-scale structure with common properties. We conclude that SDP systems in combination with (multi-conjugate) adaptive optics are a valid alternative to FPI systems when high spatial resolution and a large FOV are required. They can also reach a cadence that is comparable to that of FPI systems, while providing a much larger spectral range and a simultaneous multi-line capability.
Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe
2016-07-01
We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used and showed an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography with PASS ≥2 had a median NIHSS score of 17 (interquartile range=6) as opposed to PASS <2 with a median NIHSS score of 6 (interquartile range=5). The PASS scale showed equal performance, although simpler, when compared with other scales predicting ELVO. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
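For illustration, the scale as described reduces to counting abnormal items. The sketch below assumes each of the three items is dichotomized as normal/abnormal, which is what the "≥2 abnormal scores" cut point implies; the function names are ours.

```python
# Hypothetical scorer for the PASS scale as described above.
def pass_score(loc_abnormal: bool, gaze_palsy_or_deviation: bool,
               arm_weakness: bool) -> int:
    """Prehospital Acute Stroke Severity (PASS): one point per abnormal item."""
    return int(loc_abnormal) + int(gaze_palsy_or_deviation) + int(arm_weakness)

def suspect_elvo(score: int) -> bool:
    # Cut point from the abstract: >= 2 abnormal items gave sensitivity 0.66
    # and specificity 0.83 in the derivation cohort.
    return score >= 2
```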
Multi-scale structures of turbulent magnetic reconnection
NASA Astrophysics Data System (ADS)
Nakamura, T. K. M.; Nakamura, R.; Narita, Y.; Baumjohann, W.; Daughton, W.
2016-05-01
We have analyzed data from a series of 3D fully kinetic simulations of turbulent magnetic reconnection with a guide field. A new concept of the guide field reconnection process has recently been proposed, in which the secondary tearing instability and the resulting formation of oblique, small scale flux ropes largely disturb the structure of the primary reconnection layer and lead to 3D turbulent features [W. Daughton et al., Nat. Phys. 7, 539 (2011)]. In this paper, we further investigate the multi-scale physics in this turbulent, guide field reconnection process by introducing a wave number band-pass filter (k-BPF) technique in which modes for the small scale (less than ion scale) fluctuations and the background large scale (more than ion scale) variations are separately reconstructed from the wave number domain to the spatial domain in the inverse Fourier transform process. Combining with the Fourier based analyses in the wave number domain, we successfully identify spatial and temporal development of the multi-scale structures in the turbulent reconnection process. When considering a strong guide field, the small scale tearing mode and the resulting flux ropes develop over a specific range of oblique angles mainly along the edge of the primary ion scale flux ropes and reconnection separatrix. The rapid merging of these small scale modes leads to a smooth energy spectrum connecting ion and electron scales. When the guide field is sufficiently weak, the background current sheet is strongly kinked and oblique angles for the small scale modes are widely scattered at the kinked regions. Similar approaches handling both the wave number and spatial domains will be applicable to the data from multipoint, high-resolution spacecraft observations such as the NASA magnetospheric multiscale (MMS) mission.
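A minimal sketch of the k-BPF idea in 2D follows, under the assumption of periodic data on a uniform grid (the paper works with 3D simulation output); `k_ion` marks the ion-scale cutoff separating the two bands, and the function name is ours.

```python
# Wavenumber band-pass filter: FFT, mask modes above/below an ion-scale
# cutoff, and inverse-FFT each band back to the spatial domain.
import numpy as np

def k_bpf(field, dx, k_ion):
    """Split `field` into large-scale (k < k_ion) and small-scale parts."""
    fk = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0], d=dx) * 2 * np.pi
    ky = np.fft.fftfreq(field.shape[1], d=dx) * 2 * np.pi
    kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    large = np.real(np.fft.ifft2(np.where(kmag < k_ion, fk, 0)))
    small = np.real(np.fft.ifft2(np.where(kmag >= k_ion, fk, 0)))
    return large, small
```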
Pros and Cons of the Acceleration Scheme (NF-IDS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogacz, Alex; Bogacz, Slawomir
The overall goal of the acceleration systems, large-acceptance acceleration to 25 GeV and beam shaping, can be accomplished by various fixed-field accelerators at different stages. They involve three superconducting linacs: a single-pass linear Pre-accelerator followed by a pair of multi-pass Recirculating Linear Accelerators (RLA) and finally a non-scaling FFAG ring. The present baseline acceleration scenario has been optimized to take maximum advantage of the appropriate acceleration scheme at a given stage. Pros and cons of various stages are discussed here in detail. The solenoid-based Pre-accelerator offers very large acceptance and facilitates correction of energy gain across the bunch and significant longitudinal compression through induced synchrotron motion. However, far off-crest acceleration reduces the effective acceleration gradient and adds complexity through the requirement of individual RF phase control for each cavity. Close proximity of strong solenoids and superc…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshikawa, M.; Morimoto, M.; Shima, Y.
2012-10-15
In the GAMMA 10 tandem mirror, the typical electron density is comparable to that of the peripheral plasma of torus-type fusion devices. Therefore, an effective method to increase Thomson scattering (TS) signals is required in order to improve signal quality. In GAMMA 10, the yttrium-aluminum-garnet (YAG)-TS system comprises a laser, incident optics, light collection optics, signal detection electronics, and a data recording system. We have been developing a multi-pass TS method for a polarization-based system based on the GAMMA 10 YAG TS. To evaluate the effectiveness of the polarization-based configuration, the multi-pass system was installed in the GAMMA 10 YAG-TS system, which is capable of double-pass scattering. We carried out a Rayleigh scattering experiment and applied this double-pass scattering system to the GAMMA 10 plasma. The integrated scattering signal was made about twice as large by the double-pass system.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
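A toy illustration of why pdf-based downsampling gives resolution-consistent classification: if each coarse voxel stores a normalized histogram over the binned intensity range, applying a transfer function is an expectation over that pdf, i.e., a dot product per voxel. The shapes and random data below are placeholders, not the paper's (sparse) data structure.

```python
# Applying an RGBA transfer function to per-voxel intensity pdfs:
# E_pdf[TF(intensity)] as a single tensor contraction.
import numpy as np

bins = 256
tf = np.random.rand(bins, 4)                  # RGBA transfer function per bin
pdf_volume = np.random.rand(64, 64, 64, bins)
pdf_volume /= pdf_volume.sum(axis=-1, keepdims=True)  # normalize each voxel's pdf

rgba = np.tensordot(pdf_volume, tf, axes=([3], [0]))  # shape (64, 64, 64, 4)
```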
Multi-scale Modeling of Radiation Damage: Large Scale Data Analysis
NASA Astrophysics Data System (ADS)
Warrier, M.; Bhardwaj, U.; Bukkuru, S.
2016-10-01
Modification of materials in nuclear reactors due to neutron irradiation is a multiscale problem. These neutrons pass through materials creating several energetic primary knock-on atoms (PKA) which cause localized collision cascades creating damage tracks, defects (interstitials and vacancies) and defect clusters depending on the energy of the PKA. These defects diffuse and recombine throughout the whole duration of operation of the reactor, thereby changing the micro-structure of the material and its properties. It is therefore desirable to develop predictive computational tools to simulate the micro-structural changes of irradiated materials. In this paper we describe how statistical averages of the collision cascades from thousands of MD simulations are used to provide inputs to Kinetic Monte Carlo (KMC) simulations which can handle larger sizes, more defects and longer time durations. Use of unsupervised learning and graph optimization in handling and analyzing large scale MD data will be highlighted.
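To make the MD-to-KMC hand-off concrete, below is a minimal residence-time (BKL/Gillespie-style) KMC loop of the kind such MD-derived defect statistics would parameterize. The event list and rates are illustrative stand-ins, not values from the paper.

```python
# Residence-time kinetic Monte Carlo: pick an event with probability
# proportional to its rate, then advance time by an exponential waiting time.
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([1e6, 5e5, 1e3])   # e.g., interstitial hop, vacancy hop, recombination (1/s); assumed
t, t_end = 0.0, 1e-3
counts = np.zeros_like(rates)

while t < t_end:
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)  # event chosen ~ its rate
    counts[event] += 1
    t += -np.log(rng.random()) / total               # Exp(total) waiting time
    # a real simulation would update defect positions and recompute `rates` here

print(dict(zip(["i_hop", "v_hop", "recomb"], counts)))
```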
2D deblending using the multi-scale shaping scheme
NASA Astrophysics Data System (ADS)
Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan
2018-01-01
Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, signal is coherent, whereas interference is incoherent in some domains (e.g., the common receiver domain and common offset domain). Due to the different sparsity, coefficients of signal and interference locate in different curvelet scale domains and have different amplitudes. Taking into account these two differences, we propose a 2D multi-scale shaping scheme that constrains the sparsity to separate the blended record. In the domains where signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while, in the domains where interference focuses, it suppresses the coefficients representing interference. Because the interference is suppressed evidently at each iteration, the constraints of the multi-scale shaping operator in all scale domains are weak, which guarantees the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.
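For context, single-scale deblending by shaping regularization is usually written as the fixed-point iteration below (standard notation from the deblending literature, not copied from this paper; the proposed multi-scale scheme replaces the single shaping operator with scale-dependent ones):

```latex
\mathbf{m}_{k+1} = \mathbf{S}\!\left[\mathbf{m}_{k} + \lambda\,\mathbf{\Gamma}^{*}\!\left(\mathbf{d} - \mathbf{\Gamma}\,\mathbf{m}_{k}\right)\right]
```

where d is the blended record, Γ the blending operator, Γ* its adjoint, λ a step size, and S a sparsity-promoting shaping operator, here thresholding of curvelet coefficients with scale-dependent strength.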
Guaranteeing Spoof-Resilient Multi-Robot Networks
2015-05-12
particularly challenging attack on this assumption is the so-called "Sybil attack." In a Sybil attack a malicious agent can generate (or spoof) a large... cybersecurity in general multi-node networks (e.g. a wired LAN), the same is not true for multi-robot networks [14, 28], leaving them largely vulnerable... key passing or cryptographic authentication is difficult to maintain due to the highly dynamic and distributed nature of multi-robot teams where
Development of mpi_EPIC model for global agroecosystem modeling
Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; ...
2014-12-31
Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated Message Passing Interface (MPI) parallelism into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
Alvioli, M.; Baum, R.L.
2016-01-01
We describe a parallel implementation of TRIGRS, the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model for the timing and distribution of rainfall-induced shallow landslides. We have parallelized the four time-demanding execution modes of TRIGRS, namely both the saturated and unsaturated model with finite and infinite soil depth options, within the Message Passing Interface framework. In addition to new features of the code, we outline details of the parallel implementation and show the performance gain with respect to the serial code. Results are obtained both on commercial hardware and on a high-performance multi-node machine, showing the different limits of applicability of the new code. We also discuss the implications for the application of the model on large-scale areas and as a tool for real-time landslide hazard monitoring.
NASA Astrophysics Data System (ADS)
Mackler, D. A.; Avanov, L. A.; Boardsen, S. A.; Giles, B. L.; Pollock, C.; Smith, S. E.; Uritsky, V. M.
2016-12-01
Magnetic reconnection, a process in which the magnetic topology undergoes multi-scale changes, is a significant mechanism for particle energization as well as energy dissipation. Reconnection is observed to occur in thin current sheets generated between two regions of magnetized plasma merging with a non-zero shear angle. Within a thinning current sheet, the dominant scale size approaches first the ion and then the electron kinetic scale. The plasma becomes demagnetized, field lines transform, then once again the plasma becomes frozen-in. The reconnection process accelerates particles, leading to heated jets of plasma. Turbulence is another fundamental process in collisionless plasmas. Despite decades of turbulence studies, an essential science question remains as to how turbulent energy dissipates at small scales by heating and accelerating particles. Turbulence in both plasmas and fluids has a fundamental property in that it follows an energy cascade to smaller scales. Energy introduced into a fluid or plasma can cause large scale motion, introducing vortices, which merge and interact to make increasingly smaller eddies. It has been hypothesized that turbulent energy in magnetized plasmas may be dissipated by magnetic reconnection, just as viscosity dissipates energy in neutral fluid turbulence. The focus of this study is to use the new high temporal resolution suite of instruments on board the Magnetospheric MultiScale (MMS) mission to explore this hypothesis. An observable feature of the energy cascade in a turbulent magnetized plasma is its similarity to classical hydrodynamics in that the Power Spectral Density (PSD) of turbulent fluctuations follows a Kolmogorov-like power law (f^(-5/3)). We use highly accurate (0.1 nT) Flux Gate Magnetometer (FGM) data to derive the PSD as a function of frequency in the magnetic fluctuations. Given that we are able to confirm the turbulent nature of the flow field, we apply the method of Partial Variance of Increments (PVI) to search for localized gradient steepening where turbulent dissipation may be occurring. Additionally, we take advantage of multi-spacecraft observations to compute the current density in the turbulent region. This analysis is done over multiple burst periods during MMS' first sub-solar apogee pass from November 2015 to January 2016.
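A sketch of the PVI statistic mentioned above, following its usual definition PVI(t, τ) = |ΔB(t, τ)| / √⟨|ΔB|²⟩ (our implementation; the lag τ and the averaging window are analysis choices):

```python
# Partial Variance of Increments for a magnetic field time series B
# (shape: N samples x 3 components), with lag `tau` in samples. Peaks in
# PVI flag sharp gradients (e.g., current sheets) where dissipation may occur.
import numpy as np

def pvi(B, tau):
    dB = B[tau:] - B[:-tau]               # vector increments at lag tau
    mag = np.linalg.norm(dB, axis=1)      # |dB(t, tau)|
    return mag / np.sqrt(np.mean(mag ** 2))
```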
MLP: A Parallel Programming Alternative to MPI for New Shared Memory Parallel Systems
NASA Technical Reports Server (NTRS)
Taft, James R.
1999-01-01
Recent developments at the NASA AMES Research Center's NAS Division have demonstrated that the new generation of NUMA based Symmetric Multi-Processing systems (SMPs), such as the Silicon Graphics Origin 2000, can successfully execute legacy vector oriented CFD production codes at sustained rates far exceeding processing rates possible on dedicated 16 CPU Cray C90 systems. This high level of performance is achieved via shared memory based Multi-Level Parallelism (MLP). This programming approach, developed at NAS and outlined below, is distinct from the message passing paradigm of MPI. It offers parallelism at both the fine and coarse grained level, with communication latencies that are approximately 50-100 times lower than typical MPI implementations on the same platform. Such latency reductions offer the promise of performance scaling to very large CPU counts. The method draws on, but is also distinct from, the newly defined OpenMP specification, which uses compiler directives to support a limited subset of multi-level parallel operations. The NAS MLP method is general, and applicable to a large class of NASA CFD codes.
Multi-Point Interferometric Rayleigh Scattering using Dual-Pass Light Recirculation
NASA Technical Reports Server (NTRS)
Bivolaru, Daniel; Danehy, Paul M.; Cutler, Andrew D.
2008-01-01
This paper describes for the first time an interferometric Rayleigh scattering system using dual-pass light recirculation (IRS-LR) capable of simultaneously measuring, at multiple points, two orthogonal components of flow velocity in combustion flows using single-shot laser probing. An additional optical path containing the interferometer input mirror, a quarter-wave plate, a polarization-dependent beam combiner, and a high-reflectivity mirror partially recirculates the light that is rejected by the interferometer. Temporally and spatially resolved acquisitions of Rayleigh spectra in a large-scale combustion-heated supersonic axisymmetric jet were performed to demonstrate the technique. Recirculation of Rayleigh-scattered light increases the number of photons analyzed by the system by up to a factor of 1.8 compared with previous configurations. This is equivalent to performing measurements with less laser energy or performing measurements with the previous system in gas flows at higher temperatures.
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong
2018-04-01
The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validations. The emphasis is on testing models' ability to simulate subtle differences observed at different radar sites when the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also revealed common deficiencies in CRM simulations where they underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validations with multiple radars and models also enable quantitative comparisons in CRM sensitivity studies using different large-scale forcing, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests have better correlations than radar/model comparisons, indicating robustness in model performance on this aspect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.
Kim, Hyunjin; Sampath, Umesh; Song, Minho
2015-01-01
Fiber Bragg grating sensors are placed in a fiber-optic Sagnac loop to combine the grating temperature sensors and the fiber-optic mandrel acoustic emission sensors in a single optical circuit. A wavelength-scanning fiber-optic laser is used as a common light source for both sensors. A fiber-optic attenuator is placed at a specific position in the Sagnac loop in order to separate buried Bragg wavelengths from the Sagnac interferometer output. The Bragg wavelength shifts are measured with scanning band-pass filter demodulation, and the mandrel output is analyzed by applying a fast Fourier transform to the interference signal. This hybrid scheme could greatly reduce the size and complexity of the optical circuitry and signal processing unit, making it suitable for low-cost multi-stress monitoring of large-scale power systems. PMID:26230700
Modeling of the static recrystallization for 7055 aluminum alloy by cellular automaton
NASA Astrophysics Data System (ADS)
Zhang, Tao; Lu, Shi-hong; Zhang, Jia-bin; Li, Zheng-fang; Chen, Peng; Gong, Hai; Wu, Yun-xin
2017-09-01
In order to simulate the flow behavior and microstructure evolution during the pass interval period of the multi-pass deformation process, models of static recovery (SR) and static recrystallization (SRX) by the cellular automaton (CA) method for the 7055 aluminum alloy were established. Double-pass hot compression tests were conducted to acquire flow stress and microstructure variation during the pass interval period. On the basis of the material constants obtained from the compression tests, models of the SR, incubation period, nucleation rate and grain growth were fitted by the least-squares method. A model of the grain topology and a statistical computation of the CA results were also introduced. The effects of the pass interval time, temperature, strain, strain rate and initial grain size on the microstructure variation for the SRX of the 7055 aluminum alloy were studied. The results show that a long pass interval time, large strain, high temperature and large strain rate are beneficial for finer grains during the pass interval period. The stable size of the statically recrystallized grains does not depend on the initial grain size, but mainly on the strain rate and temperature. The SRX plays a vital role in grain refinement, while the SR has no effect on the variation of microstructure morphology. Comparisons of flow stress and microstructure between the simulated and experimental results show that the established CA models can accurately predict the flow stress and microstructure evolution during the pass interval period, and provide guidance for the selection of optimized parameters for the multi-pass deformation process.
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Luo, Q.; Wu, J.
2012-12-01
This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. Also, the NPTSGA coupled with the commonly used groundwater flow and transport codes, MODFLOW and MT3DMS, is developed for multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to implement objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions to the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
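A minimal master-slave sketch of the parallelization pattern described above, with a placeholder objective standing in for the MODFLOW/MT3DMS simulations (not the authors' code; assumes at least two MPI ranks and more candidate designs than workers):

```python
# Rank 0 farms out candidate remediation designs; workers run the expensive
# objective evaluation and return results until they receive None.
from mpi4py import MPI

def evaluate(design):
    return sum(x * x for x in design)   # placeholder for a flow/transport run

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    designs = [[i, i + 1.0] for i in range(100)]   # candidate solutions (assumed)
    results, next_job = [], 0
    for w in range(1, comm.Get_size()):            # seed each worker with one job
        comm.send(designs[next_job], dest=w); next_job += 1
    while len(results) < len(designs):
        status = MPI.Status()
        results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
        if next_job < len(designs):
            comm.send(designs[next_job], dest=status.Get_source()); next_job += 1
        else:
            comm.send(None, dest=status.Get_source())   # no more work
else:
    while (job := comm.recv(source=0)) is not None:
        comm.send(evaluate(job), dest=0)
```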
Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1999-01-01
The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.
NASA Astrophysics Data System (ADS)
Kang, Yongjoon; Park, Gitae; Jeong, Seonghoon; Lee, Changhee
2018-01-01
A large fraction of reheated weld metal is formed during multi-pass welding, which significantly affects the mechanical properties (especially toughness) of welded structures. In this study, the low-temperature toughness of the simulated reheated zone in multi-pass weld metal was evaluated and compared to that of the as-deposited zone using microstructural analyses. Two kinds of high-strength steel welds with different hardenabilities were produced by single-pass, bead-in-groove welding, and both welds were thermally cycled to peak temperatures above Ac3 using a Gleeble simulator. When the weld metals were reheated, their toughness deteriorated in response to the increase in the fraction of detrimental microstructural components, i.e., grain boundary ferrite and coalesced bainite in the weld metals with low and high hardenabilities, respectively. In addition, toughness deterioration occurred in conjunction with an increase in the effective grain size, which was attributed to the decrease in nucleation probability of acicular ferrite; the main cause for this decrease changed depending on the hardenability of the weld metal.
Yoshikawa, M; Yasuhara, R; Nagasu, K; Shimamura, Y; Shima, Y; Kohagura, J; Sakamoto, M; Nakashima, Y; Imai, T; Ichimura, M; Yamada, I; Funaba, H; Kawahata, K; Minami, T
2014-11-01
A multi-pass Thomson scattering (TS) has the advantage of enhancing scattered signals. We constructed a multi-pass TS system for a polarisation-based system and an image relaying system modelled on the GAMMA 10 TS system. We undertook Raman scattering experiments both for the multi-pass setting and for checking the optical components. Moreover, we applied the system to the electron temperature measurements in the GAMMA 10 plasma for the first time. The integrated scattering signal was magnified by approximately three times by using the multi-pass TS system with four passes. The electron temperature measurement accuracy is improved by using this multi-pass system.
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
High energy, high average power solid state green or UV laser
Hackel, Lloyd A.; Norton, Mary; Dane, C. Brent
2004-03-02
A system for producing a green or UV output beam for illuminating a large area with relatively high beam fluence. A Nd:glass laser produces a near-infrared output by means of an oscillator that generates a high-quality but low-power beam, followed by multi-pass amplification in a zig-zag slab amplifier, with wavefront correction in a phase conjugator at the midway point of the multi-pass amplification. The green or UV output is generated by means of conversion crystals that follow final propagation through the zig-zag slab amplifier.
Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.
2012-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-located arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in lat/lon bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
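A toy, serial illustration of the map/reduce pattern described above (plain Python, not SciReduce itself; the granule layout, bin resolution, and QC flags are assumptions):

```python
# map: emit (bin, profile) pairs for QC-passing profiles in one granule;
# reduce: accumulate sums/counts per lat/lon bin, then average.
import numpy as np

NLAT, NLON = 180, 360

def map_granule(granule):
    """granule: iterable of (lat, lon, profile, qc_ok) tuples (assumed layout)."""
    out = []
    for lat, lon, profile, qc_ok in granule:
        if qc_ok:
            i, j = int(lat + 90), int(lon + 180) % NLON
            out.append(((i, j), np.asarray(profile)))
    return out

def reduce_bins(mapped):
    sums, counts = {}, {}
    for key, profile in mapped:
        sums[key] = sums.get(key, 0) + profile
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

# usage: climatology = reduce_bins(p for g in granules for p in map_granule(g))
```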
Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, B.; Manipon, G.; Hua, H.
2012-04-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
Wang, Yaping; Nie, Jingxin; Yap, Pew-Thian; Li, Gang; Shi, Feng; Geng, Xiujuan; Guo, Lei; Shen, Dinggang
2014-01-01
Accurate and robust brain extraction is a critical step in most neuroimaging analysis pipelines. In particular, for large-scale multi-site neuroimaging studies involving a significant number of subjects with diverse age and diagnostic groups, accurate and robust automatic extraction of the brain is highly desirable. In this paper, we introduce population-specific probability maps to guide the brain extraction of diverse subject groups, including both healthy and diseased adult human populations, both developing and aging human populations, as well as non-human primates. Specifically, the proposed method combines an atlas-based approach, for coarse skull-stripping, with a deformable-surface-based approach that is guided by local intensity information and population-specific prior information learned from a set of real brain images for more localized refinement. Comprehensive quantitative evaluations were performed on the diverse large-scale populations of the ADNI dataset with over 800 subjects (55-90 years of age, multi-site, various diagnostic groups), the OASIS dataset with over 400 subjects (18-96 years of age, wide age range, various diagnostic groups), and the NIH pediatrics dataset with 150 subjects (5-18 years of age, multi-site, wide age range as a complementary age group to the adult datasets). The results demonstrate that our method consistently yields the best overall results across almost the entire human life span, with only a single set of parameters. To demonstrate its capability to work on non-human primates, the proposed method is further evaluated using a rhesus macaque dataset with 20 subjects. Quantitative comparisons with popularly used state-of-the-art methods, including BET, Two-pass BET, BET-B, BSE, HWA, ROBEX and AFNI, demonstrate that the proposed method performs favorably on all testing datasets, indicating its robustness and effectiveness. PMID:24489639
Detection of large-scale concentric gravity waves from a Chinese airglow imager network
NASA Astrophysics Data System (ADS)
Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao
2018-06-01
Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground due to the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
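A hedged sketch of the assemble-then-filter step: overlap-average pre-projected frames from several imagers onto one geographic grid, then suppress wavelengths below ~100 km with a Gaussian low-pass. The cutoff-to-sigma mapping below is a rough choice of ours; the authors' exact filter is not specified here.

```python
# Mosaic several imager frames (already projected to a common grid), then
# low-pass filter to keep only large-scale (>~100 km) wave structure.
import numpy as np
from scipy.ndimage import gaussian_filter

def assemble(images, weights):
    """images/weights: lists of arrays on the mosaic grid; weights are 1
    inside an imager's footprint and 0 outside (assumed inputs)."""
    num = np.sum([w * im for im, w in zip(images, weights)], axis=0)
    den = np.sum(weights, axis=0)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

def lowpass(mosaic, pixel_km, cutoff_km=100.0):
    sigma = cutoff_km / (2 * np.pi * pixel_km)   # rough Gaussian cutoff mapping
    return gaussian_filter(mosaic, sigma)
```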
High-power single-pass pumped diamond Raman oscillator
NASA Astrophysics Data System (ADS)
Heinzig, Matthias; Walbaum, Till; Williams, Robert J.; Kitzler, Ondrej; Mildren, Richard P.; Schreiber, Thomas; Eberhardt, Ramona; Tünnermann, Andreas
2018-02-01
We present our recent advances in power scaling of a high-power single-pass pumped CVD-diamond Raman oscillator at 1.2 μm. The single-pass scheme reduces feedback to the high-gain fiber amplifier that pumps the oscillator. The Yb-doped multi-stage fiber amplifier itself delivers up to 1 kW output power at a narrow linewidth of 0.16 nm. We operate this laser in quasi-cw mode at 10% duty cycle and an on-time (pulse) duration of 10 ms. With a maximum conversion efficiency of 39%, a maximum steady-state output power of 380 W with diffraction-limited beam quality was achieved.
Performance Analysis of a Hybrid Overset Multi-Block Application on Multiple Architectures
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper presents a detailed performance analysis of a multi-block overset grid computational fluid dynamics application on multiple state-of-the-art computer architectures. The application is implemented using a hybrid MPI+OpenMP programming paradigm that exploits both coarse- and fine-grain parallelism; the former via MPI message passing and the latter via OpenMP directives. The hybrid model also extends the applicability of multi-block programs to large clusters of SMP nodes by overcoming the restriction that the number of processors be less than the number of grid blocks. A key kernel of the application, namely the LU-SGS linear solver, had to be modified to enhance the performance of the hybrid approach on the target machines. Investigations were conducted on cacheless Cray SX6 vector processors, cache-based IBM Power3 and Power4 architectures, and single-system-image SGI Origin3000 platforms. Overall results for complex vortex dynamics simulations demonstrate that the SX6 achieves the highest performance and outperforms the RISC-based architectures; however, the best scaling performance was achieved on the Power3.
NOA: A Scalable Multi-Parent Clustering Hierarchy for WSNs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cree, Johnathan V.; Delgado-Frias, Jose; Hughes, Michael A.
2012-08-10
NOA is a multi-parent, N-tiered, hierarchical clustering algorithm that provides a scalable, robust and reliable solution to autonomous configuration of large-scale wireless sensor networks. The novel clustering hierarchy's inherent benefits can be utilized by in-network data processing techniques to provide equally robust, reliable and scalable in-network data processing solutions capable of reducing the amount of data sent to sinks. Utilizing a multi-parent framework, NOA reduces the cost of network setup when compared to hierarchical beaconing solutions by removing the expense of r-hop broadcasting (r is the radius of the cluster) needed to build the network and instead passes network topology information among shared children. NOA2, a two-parent clustering hierarchy solution, and NOA3, the three-parent variant, saw up to an 83% and 72% reduction in overhead, respectively, when compared to performing one round of one-parent hierarchical beaconing, as well as 92% and 88% less overhead when compared to one round of two- and three-parent hierarchical beaconing.
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed, built on a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. As directional filtering is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: the detailed implementation of the proposed medical image fusion algorithm.
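A rough single-scale sketch of the layer split behind the MJDF idea follows. Note the paper's gradient minimization smoothing filter (GMSF) is replaced here by a Gaussian stand-in, so this only illustrates the low-pass/edge/detail bookkeeping, not the actual filters:

```python
# Single-scale three-layer split: low + edge + detail reconstructs the input.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma_edge=2.0, sigma_low=8.0):
    structure = gaussian_filter(img, sigma_edge)   # stand-in for the GMSF output
    low = gaussian_filter(img, sigma_low)          # GLF low-pass layer
    edge = structure - low                         # edge layer
    detail = img - structure                       # detail layer
    return low, edge, detail                       # img == low + edge + detail
```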
Long-term monitoring of Sgr A* at 7 mm with VERA and KaVA
NASA Astrophysics Data System (ADS)
Akiyama, K.; Kino, M.; Sohn, B.; Lee, S.; Trippe, S.; Honma, M.
2014-05-01
We present the results of radio monitoring observations of Sgr A* at 7 mm (i.e. 43 GHz) with the VLBI Exploration of Radio Astrometry (VERA), which is a VLBI array in Japan. VERA provides angular resolution on milliarcsecond scales, resolving structures within 100 Schwarzschild radii of Sgr A*, similar to the Very Long Baseline Array (VLBA). We performed multi-epoch observations of Sgr A* in 2005-2008, and started monitoring it again with VERA from January 2013 to trace the current G2 encounter event. Our preliminary results in 2013 show that Sgr A* on mas scales has been in an ordinary state as of August 2013, although some fraction of the G2 cloud already passed the pericenter of Sgr A* in April 2013. We will continue monitoring Sgr A* with VERA and the newly developed KaVA (KVN and VERA Array).
Regional-scale calculation of the LS factor using parallel processing
NASA Astrophysics Data System (ADS)
Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong
2015-05-01
With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the Message Passing Interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithms' characteristics, including a decomposition method for maintaining the integrity of the results, an optimized workflow that reduces the time taken to export unnecessary intermediate data, and a buffer-communication-computation strategy for improving communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
NASA Astrophysics Data System (ADS)
Hamlin, Robert J.
Martensitic precipitation-strengthened stainless steels 17-4 and 13-8+Mo are candidate alloys for high-strength military applications. These applications will require joining by fusion welding processes; thus, it is necessary to develop an understanding of the microstructural and mechanical property changes that occur during welding. Previous investigations on these materials have demonstrated that significant softening occurs in the heat affected zone (HAZ) during welding due to dissolution of the strengthening precipitates. It was also observed that post-weld heat treatments (PWHTs) were required to restore the properties. However, PWHTs are expensive and cannot be applied when welding on a large scale or making a repair in the field. Thus, the purpose of the current work is to gain a fundamental understanding of the precipitation kinetics in these systems so that optimized welding procedures can be developed that do not require a PWHT. Multi-pass welding provides an opportunity to restore the strengthening precipitates that dissolve during primary weld passes using the heat from secondary weld passes. Thus, a preliminary investigation was performed to determine whether the times and temperatures associated with welding thermal cycles were sufficient to restore the strength in these systems. A Gleeble thermo-mechanical simulator was used to perform multi-pass welding simulations on samples of each material using 1000 J/mm and 2000 J/mm heat inputs. Additionally, base metal and weld metal samples were used as starting conditions to evaluate the difference in precipitation response between each. Hardness measurements were used to estimate the extent of precipitate dissolution and growth. Microstructures were characterized using light optical microscopy (LOM), scanning electron microscopy (SEM), and energy dispersive spectrometry (EDS). It was determined that precipitate dissolution occurred during primary welding thermal cycles and that significant hardening could be achieved using secondary welding thermal cycles for both heat inputs. Additionally, it was observed that the weld metal and base metal had similar precipitation responses. The preliminary multi-pass welding simulations demonstrated that the times and temperatures associated with welding thermal cycles were sufficient to promote precipitation in each system. Furthermore, these findings indicate that controlled weld metal deposition may be a viable method for optimizing welding procedures and eliminating the need for a PWHT. Next, an in-depth Gleeble study was performed to develop a fundamental understanding of the reactions that occur in 17-4 and 13-8+Mo during exposure to times and temperatures representative of multi-pass welding. Samples of each material were subjected to a series of short isothermal holds at high temperatures, and hardness measurements were recorded to investigate the dissolution behavior of each alloy. Additional secondary isothermal experiments were performed on samples that had been subjected to a high-temperature primary thermal cycle, and hardness measurements were recorded. Matrix microstructures were characterized by LOM, and reverted austenite fractions were measured using X-ray diffraction techniques. The hardness data from the secondary heating tests were used in combination with Avrami kinetics equations to develop a relationship between hardness and the fraction transformed of the strengthening precipitates.
It was determined that the Avrami relationships provide a useful approximation of the precipitation behavior at times and temperatures representative of welding thermal cycles. Finally, an autogenous gas tungsten arc (GTA) welding study was performed to demonstrate the utility of multi-pass welding for strength restoration in these alloys. Dual-pass welds were made on samples of each material using a range of heat inputs and secondary weld pass overlap percentages. Hardness mapping was then performed to estimate the extent of precipitate growth and dissolution. It was determined that significant softening occurs after primary weld passes and that secondary weld passes, using a high heat input, restored much of the strength. Furthermore, optimal weld overlap percentages were approximated. It was concluded that controlled weld metal deposition can significantly improve the properties of 17-4 and 13-8+Mo and potentially eliminate the need for costly PWHTs.
Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing
NASA Astrophysics Data System (ADS)
Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric
2014-05-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in a hybrid Cloud (private Eucalyptus & public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept and prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed "near" the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way.
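The core programming model described here, map and reduce over bundles of named numeric arrays sharded by time, can be pictured in a few lines. The toy below is a hedged illustration only: `mapper`, `reducer` and the monthly sharding are hypothetical names, not the SciReduce API.

```python
import numpy as np

def mapper(bundle):
    """bundle: dict of named numeric arrays for one time shard."""
    t = bundle["air_temp"]
    return {"sum": float(t.sum()), "count": t.size}

def reducer(parts):
    total = sum(p["sum"] for p in parts)
    count = sum(p["count"] for p in parts)
    return {"mean_air_temp": total / count}

# One shard per month; in SciReduce the shards would live on different
# nodes and be pulled on demand (e.g., via OPeNDAP subsetting).
shards = [{"air_temp": np.random.rand(180, 360)} for _ in range(12)]
print(reducer([mapper(s) for s in shards]))
```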
Persistent aerial video registration and fast multi-view mosaicing.
Molina, Edgardo; Zhu, Zhigang
2014-05-01
Capturing aerial imagery at high resolutions often leads to very low frame rate video streams, well under full motion video standards, due to bandwidth, storage, and cost constraints. Low frame rates make registration difficult when an aircraft is moving at high speeds or when the global positioning system (GPS) contains large errors or fails. We present a method that takes advantage of persistent cyclic video data collections to perform an online registration with drift correction. We split the persistent aerial imagery collection into individual cycles of the scene, identify and correct the registration errors on the first cycle in a batch operation, and then use the corrected base cycle as a reference pass to register and correct subsequent passes online. A set of multi-view panoramic mosaics is then constructed for each aerial pass for representation, presentation and exploitation of the 3D dynamic scene. These sets of mosaics are all in alignment with the reference cycle, allowing their direct use in change detection, tracking, and 3D reconstruction/visualization algorithms. Stereo viewing with adaptive baselines and varying view angles is realized by choosing a pair of mosaics from a set of multi-view mosaics. Further, the mosaics for the second pass and later can be generated and visualized online, as there is no further batch error correction.
Parallel and fault-tolerant algorithms for hypercube multiprocessors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykanat, C.
1988-01-01
Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multi-processor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube connected message-passing multi-processor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that hypercube topology is scalable for an FE class of problem. The SCG algorithm is also shown to be suitable for vectorization, and near supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.
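In a distributed conjugate-gradient iteration of this kind, the mat-vec is local (plus boundary exchange) while each dot product is a global reduction, which maps onto log2(P) message steps on a hypercube. A minimal mpi4py sketch of that communication pattern, with an illustrative tridiagonal local block standing in for the FE system (this is plain CG, not the Scaled CG variant):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def global_dot(u, v):
    # Local partial sums combined by a reduction; on a hypercube with P
    # nodes this costs log2(P) message steps.
    return comm.allreduce(float(u @ v), op=MPI.SUM)

n = 1000                                   # local unknowns per rank
A = (np.diag(np.full(n, 4.0))              # illustrative SPD local block
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)

x, r = np.zeros(n), b.copy()
p, rho = r.copy(), global_dot(b, b)
for _ in range(50):
    q = A @ p                              # local mat-vec (halo exchange omitted)
    alpha = rho / global_dot(p, q)
    x += alpha * p
    r -= alpha * q
    rho_new = global_dot(r, r)
    if rho_new < 1e-12:                    # converged on all ranks together
        break
    p = r + (rho_new / rho) * p
    rho = rho_new
```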
Illustrative visualization of 3D city models
NASA Astrophysics Data System (ADS)
Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian
2005-03-01
This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
NASA Astrophysics Data System (ADS)
Saenz, Juan; Grinstein, Fernando; Dolence, Joshua; Rauenzahn, Rick; Masser, Thomas; Francois, Marianne; LANL Team
2017-11-01
We report progress in evaluating an unsplit hydrodynamic solver being implemented in the radiation adaptive grid Eulerian (xRAGE) code, and compare it to a split scheme. xRAGE is an Eulerian hydrodynamics code used for implicit large eddy simulations (ILES) of multi-material, multi-physics flows where low and high Mach number (Ma) processes and instabilities interact and co-exist. The hydrodynamic solver in xRAGE uses a directionally split, second-order Godunov, finite volume (FV) scheme. However, a standard, unsplit, Godunov-type FV scheme with 2nd- and 3rd-order reconstruction options, a low-Ma correction and a variety of Riemann solvers has recently become available. To evaluate the hydrodynamic solvers for turbulent low-Ma flows, we use simulations of the Taylor-Green Vortex (TGV), where there is a transition to turbulence via vortex stretching and production of small-scale eddies. We also simulate a high-low Ma shock-tube flow, where a shock passing over a perturbed surface generates a baroclinic Richtmyer-Meshkov instability (RMI); after the shock has passed, the turbulence in the accelerated interface region resembles Rayleigh-Taylor (RT) instability. We compare turbulence spectra and decay in simulated TGV flows, and we present progress in simulating the high-low Ma RMI-RT flow. LANL is operated by LANS LLC for the U.S. DOE NNSA under Contract No. DE-AC52-06NA25396.
NASA Astrophysics Data System (ADS)
Jin, Young-Gwan; Son, Il-Heon; Im, Yong-Taek
2010-06-01
Experiments with a square specimen made of commercially pure aluminum alloy (AA1050) were conducted to investigate deformation behaviour during a multi-pass Equal Channel Angular Pressing (ECAP) for routes A, Bc, and C up to four passes. Three-dimensional finite element numerical simulations of the multi-pass ECAP were carried out in order to evaluate the influence of processing routes and number of passes on local flow behaviour by applying a simplified saturation model of flow stress under an isothermal condition. Simulation results were investigated by comparing them with the experimentally measured data in terms of load variations and microhardness distributions. Also, transmission electron microscopy analysis was employed to investigate the microstructural changes. The present work clearly shows that the three-dimensional flow characteristics of the deformed specimen were dependent on the strain path changes due to the processing routes and number of passes that occurred during the multi-pass ECAP.
Reed, Darcy A; Shanafelt, Tait D; Satele, Daniel W; Power, David V; Eacker, Anne; Harper, William; Moutier, Christine; Durning, Steven; Massie, F Stanford; Thomas, Matthew R; Sloan, Jeff A; Dyrbye, Liselotte N
2011-11-01
Psychological distress is common among medical students. Curriculum structure and grading scales are modifiable learning environment factors that may influence student well-being. The authors sought to examine relationships among curriculum structures, grading scales, and student well-being. The authors surveyed 2,056 first- and second-year medical students at seven U.S. medical schools in 2007. They used the Perceived Stress Scale, Maslach Burnout Inventory, and Medical Outcomes Study Short Form (SF-8) to measure stress, burnout, and quality of life, respectively. They measured curriculum structure using hours spent in didactic, clinical, and testing experiences. Grading scales were categorized as two categories (pass/fail) versus three or more categories (e.g., honors/pass/fail). Of the 2,056 students, 1,192 (58%) responded. In multivariate analyses, students in schools using grading scales with three or more categories had higher levels of stress (beta 2.65; 95% CI 1.54-3.76, P<.0001), emotional exhaustion (beta 5.35; 95% CI 3.34-7.37, P<.0001), and depersonalization (beta 1.36; 95% CI 0.53-2.19, P=.001) and were more likely to have burnout (OR 2.17; 95% CI 1.41-3.35, P=.0005) and to have seriously considered dropping out of school (OR 2.24; 95% CI 1.54-3.27, P<.0001) compared with students in schools using pass/fail grading. There were no relationships between time spent in didactic and clinical experiences and well-being. How students are evaluated has a greater impact than other aspects of curriculum structure on their well-being. Curricular reform intended to enhance student well-being should incorporate pass/fail grading.
On-road energy harvesting from running vehicles : final report.
DOT National Transportation Integrated Search
2014-11-01
A new type of large-scale on-road energy harvester to harness the energy on the road when traffic passes by is developed. When vehicles pass over the energy harvesting device, the electrical energy can be produced by the mechanical motion even af...
SU-E-T-472: Improvement of IMRT QA Passing Rate by Correcting Angular Dependence of MatriXX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Watkins, W; Kim, T
2015-06-15
Purpose: Multi-channel planar detector arrays utilized for IMRT QA, such as the MatriXX, exhibit an incident-beam angular-dependent response which can result in false-positive gamma-based QA results, especially for helical tomotherapy plans, which encompass the full range of beam angles. Although the MatriXX can be used with a gantry angle sensor that applies the angular correction automatically, this sensor does not work with tomotherapy. The purpose of the study is to reduce IMRT-QA false-positives by correcting for the MatriXX angular dependence. Methods: The MatriXX angular dependence was characterized by comparing multiple fixed-angle irradiation measurements with corresponding TPS-computed doses. For 81 Tomo-helical IMRT-QA measurements, two different correction schemes were tested: (1) A Monte-Carlo dose engine was used to compute the MatriXX signal based on the angular-response curve; the computed signal was then compared with measurement. (2) The uncorrected computed signal was compared with measurements uniformly scaled to account for the average angular dependence; three scaling factors (+2%, +2.5%, +3%) were tested. Results: The MatriXX response is 8% less than predicted for a PA beam even when the couch is fully accounted for. Without angular correction, only 67% of the cases pass the criterion of >90% of points with γ<1 (3%, 3 mm). After full angular correction, 96% of the cases pass the criterion. Of the three scaling factors, +2% gave the highest passing rate (89%), which is still less than that of the full angular correction method. With a stricter γ (2%, 3 mm) criterion, the full angular correction method was still able to achieve a 90% passing rate, while the scaling method gave only a 53% passing rate. Conclusion: Correcting for the MatriXX angular dependence reduced the false-positive rate of our IMRT-QA process. It is necessary to correct for the angular dependence to achieve the IMRT passing criteria specified in TG129.
Multi-year predictability in a coupled general circulation model
NASA Astrophysics Data System (ADS)
Power, Scott; Colman, Rob
2006-02-01
Multi-year to decadal variability in a 100-year integration of a BMRC coupled atmosphere-ocean general circulation model (CGCM) is examined. The fractional contribution made by the decadal component generally increases with depth and latitude away from surface waters in the equatorial Indo-Pacific Ocean. The relative importance of decadal variability is enhanced in off-equatorial “wings” in the subtropical eastern Pacific. The model and observations exhibit “ENSO-like” decadal patterns. Analytic results are derived, which show that the patterns can, in theory, occur in the absence of any predictability beyond ENSO time-scales. In practice, however, modification to this stochastic view is needed to account for robust differences between ENSO-like decadal patterns and their interannual counterparts. An analysis of variability in the CGCM, a wind-forced shallow water model, and a simple mixed layer model together with existing and new theoretical results are used to improve upon this stochastic paradigm and to provide a new theory for the origin of decadal ENSO-like patterns like the Interdecadal Pacific Oscillation and Pacific Decadal Oscillation. In this theory, ENSO-driven wind-stress variability forces internal equatorially-trapped Kelvin waves that propagate towards the eastern boundary. Kelvin waves can excite reflected internal westward propagating equatorially-trapped Rossby waves (RWs) and coastally-trapped waves (CTWs). CTWs have no impact on the off-equatorial sub-surface ocean outside the coastal wave guide, whereas the RWs do. If the frequency of the incident wave is too high, then only CTWs are excited. At lower frequencies, both CTWs and RWs can be excited. The lower the frequency, the greater the fraction of energy transmitted to RWs. This lowers the characteristic frequency (reddens the spectrum) of variability off the equator relative to its equatorial counterpart. At low frequencies, dissipation acts as an additional low pass filter that becomes more effective as latitude increases. At the same time, ENSO-driven off-equatorial surface heating anomalies drive mixed layer temperature responses in both hemispheres. Both the eastern boundary interactions and the accumulation of surface heat fluxes by the surface mixed layer act to low pass filter the ENSO-forcing. The resulting off-equatorial variability is therefore more coherent with low pass filtered (decadal) ENSO indices [e.g. NINO3 sea-surface temperature (SST)] than with unfiltered ENSO indices. Consequently large correlations between variability and NINO3 extend further poleward on decadal time-scales than they do on interannual time-scales. This explains why decadal ENSO-like patterns have a broader meridional structure than their interannual counterparts. This difference in appearance can occur even if ENSO indices do not have any predictability beyond interannual time-scales. The wings around 15-20°S and sub-surface variability at many other locations are predictable on interannual and multi-year time-scales. This includes westward propagating internal RWs within about 25° of the equator. The slowest of these take up to 4 years to reach the western boundary. This sub-surface predictability has significant oceanographic interest. However, it is linked to only low levels of SST variability. Consequently, extrapolation of delayed action oscillator theory to decadal time-scales might not be justified.
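The low-pass-filtering mechanism at the heart of this argument is easy to demonstrate numerically: a response built from decadally filtered forcing correlates far better with the filtered ENSO index than with the raw one. A hedged sketch with synthetic stand-ins (the 8-year Butterworth cutoff and the toy index are assumptions, not the study's data):

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
months = 1200                                     # a 100-year monthly record
raw = rng.standard_normal(months)
nino3 = np.convolve(raw, np.ones(6) / 6, "same")  # stand-in interannual index

b, a = butter(3, 1 / 8, btype="low", fs=12)       # ~8-year low-pass (fs in 1/yr)
nino3_dec = filtfilt(b, a, nino3)

# Off-equatorial response modelled as low-pass-filtered forcing plus noise.
response = nino3_dec + 0.5 * nino3_dec.std() * rng.standard_normal(months)
print(np.corrcoef(response, nino3)[0, 1])         # modest correlation
print(np.corrcoef(response, nino3_dec)[0, 1])     # much stronger correlation
```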
Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J.; Lang, Michael; Pakin, Scott
2011-09-30
Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contains wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J; Lang, Michael; Pakin, Scott
2009-01-01
Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor-cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model, and an implementation on the Petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
NASA Astrophysics Data System (ADS)
Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.
2018-04-01
Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make scientific inferences, statistical analysis on large simulation datasets (e.g., finding halos or obtaining multi-point correlation functions) is crucial. However, traditional in-memory methods for these tasks do not scale to the datasets that are prohibitively large in modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10^9 particles on a small server or desktop. However, this approach fails when directly scaling to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques on GPU boosting, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10^3 largest halos (Liu et al., 2015) to ∼10^4-10^5, and reveals the trade-offs between memory, running time and number of halos. Our experiments show that our tool can scale to datasets with up to ∼10^12 particles while using less than an hour of running time on a single Nvidia GTX 1080 GPU.
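The paper's exact sketch algorithms are not reproduced in the abstract, but the flavour of one-pass, bounded-memory heavy-hitter estimation can be conveyed with a count-min sketch over grid-cell occupancy; every parameter below is an illustrative assumption.

```python
import numpy as np

rows, width = 4, 2 ** 12                    # sketch size: memory stays fixed
table = np.zeros((rows, width), dtype=np.int64)
salts = [0x9E3779B9 * (r + 1) & 0xFFFFFFFF for r in range(rows)]

def slot(cell_id, r):
    return (cell_id * 2654435761 + salts[r]) % width

def update(cell_id):                        # one particle lands in a cell
    for r in range(rows):
        table[r, slot(cell_id, r)] += 1

def estimate(cell_id):                      # may overestimate, never under
    return min(table[r, slot(cell_id, r)] for r in range(rows))

rng = np.random.default_rng(1)
stream = rng.zipf(1.5, 100_000) % 10_000    # skewed cell-occupancy stream
for c in stream:
    update(int(c))
densest = max(range(10_000), key=estimate)  # halo-candidate cell
print(densest, estimate(densest))
```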
An Application-Based Performance Characterization of the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Djomehri, Jahed M.; Hood, Robert; Jin, Hoaqiang; Kiris, Cetin; Saini, Subhash
2005-01-01
Columbia is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each, and currently ranked as the second-fastest computer in the world. In this paper, we present the performance characteristics of Columbia obtained on up to four computing nodes interconnected via the InfiniBand and/or NUMAlink4 communication fabrics. We evaluate floating-point performance, memory bandwidth, message passing communication speeds, and compilers using a subset of the HPC Challenge benchmarks, and some of the NAS Parallel Benchmarks including the multi-zone versions. We present detailed performance results for three scientific applications of interest to NASA, one from molecular dynamics, and two from computational fluid dynamics. Our results show that both the NUMAlink4 and the InfiniBand hold promise for application scaling to a large number of processors.
Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.
2013-05-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept/prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed "near" the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill.
Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.
2013-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved 'clock time' speedups in fusing datasets on our own compute nodes and in the public Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept/prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed 'near' the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill.
NASA Astrophysics Data System (ADS)
Derbentsev, I.; Karyakin, A. A.; Volodin, A.
2017-11-01
The article deals with the behaviour of a contact-monolithic joint of large-panel buildings under compression. It gives a detailed analysis and description of the stages of failure of such joints, based on the results of tests and computational modelling. The article is of interest to specialists who deal with computational modelling or the research of large-panel multi-storey buildings. The text gives valuable information on the bearing capacity and flexibility of such joints, the eccentricity of load transfer from the upper panel to the lower, and the thrust passed to a ceiling panel. Recommendations are given to estimate all the above-listed parameters.
Hybrid/Tandem Laser-Arc Welding of Thick Low Carbon Martensitic Stainless Steel Plates
NASA Astrophysics Data System (ADS)
Mirakhorli, Fatemeh
High efficiency and long-term life of hydraulic turbines and their assemblies are of utmost importance for the hydropower industry. Usually, hydroelectric turbine components are made of thick-walled low carbon martensitic stainless steels. The assembly of large hydroelectric turbine components has been a great challenge. The use of conventional welding processes involves a typical large groove design and multi-pass welding to fill the groove, which exposes the weld to a high heat input, creating a relatively large fusion zone and heat affected zone. The newly-developed hybrid/tandem laser-arc welding technique is believed to offer a highly competitive solution to improve the overall hydro-turbine performance by combining the high energy density and fast welding speed of the laser welding technology with the good gap bridging and feeding ability of the gas metal arc welding process to increase the productivity and reduce the consumable material. The main objective of this research work is to understand different challenges appearing during hybrid laser-arc welding (HLAW) of thick gauge assemblies of low carbon 13%Cr-4%Ni martensitic stainless steel and find a practical solution by adapting and optimizing this relatively new welding process in order to reduce the number of welding passes necessary to fill the groove gap. The joint integrity was evaluated in terms of microstructure, defects and mechanical properties in both as-welded and post-welded conditions. A special focus was given to the hybrid and tandem laser-arc welding technique for the root pass. Based on the thickness of the low carbon martensitic stainless steel plates, this work is mainly focused on the following two tasks: • Single pass hybrid laser-arc welding of 10-mm thick low carbon martensitic stainless steel. • Multi-pass hybrid/tandem laser-arc welding of 25-mm thick martensitic stainless steel.
Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...
2015-07-14
Sparse matrix-vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures, and we use a message passing interface (MPI) + open multiprocessing (OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topologies. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
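The symmetry trick at the core of this implementation can be shown in miniature: store only the upper triangle and scatter each entry's contribution to both its row and its column, roughly halving the stored entries. The numpy toy below is a serial sketch of that idea, not the distributed hybrid MPI+OpenMP code.

```python
import numpy as np

# Upper-triangle entries of a symmetric 4x4 matrix in COO form.
i = np.array([0, 0, 1, 2, 3])
j = np.array([0, 2, 1, 3, 3])
a = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
x = np.array([1.0, 2.0, 3.0, 4.0])

y = np.zeros_like(x)
np.add.at(y, i, a * x[j])                  # y_i += a_ij * x_j
off = i != j                               # do not double-count the diagonal
np.add.at(y, j[off], a[off] * x[i[off]])   # mirrored part: y_j += a_ij * x_i
print(y)                                   # equals the full symmetric product
```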
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
Visible light and infrared image fusion has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Furthermore, an improved novel sum-modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are input to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), which comes from local-area singular value decomposition of each source image, is regarded as the adaptive linking strength that enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluation show that our algorithm exhibits superior fusion performance and is more effective than the existing typical fusion techniques.
NASA Astrophysics Data System (ADS)
Min, D.
2008-12-01
Understanding the nature of water exchange and material transport processes at tidal inlets is critical in improving our knowledge of land-sea connection and exchange processes. High-frequency, multi-parameter water property measurements were conducted over a one-month period from mid-June to mid-July 2008 throughout the 12-m water column at the UT Marine Science Institute pier at Port Aransas, Texas. The pier is at the Aransas Pass tidal inlet, which is a major water and property exchange pathway in South Texas between several local bays and the Gulf of Mexico. Unlike summer 2007, when a large-scale freshwater discharge event occurred, summer 2008 was relatively dry during the observation period. Offshore influence was more pronounced this year than in 2007, with multiple days of higher salinity water (higher than 36 psu) dominating over tidal cycles. The offshore influence was also marked by lower oxygen and chlorophyll concentrations. The lower-oxygen, higher-salinity water appears to be connected to low-oxygen bottom water in the nearshore shelf area. Additional instrument mooring data during Hurricane Dolly will also be presented along with the current meter and tide gauge information. Comparison of the data with that observed from nearby Mission-Aransas National Estuarine Research Reserve SWMP stations will be presented as well. Continuous water column measurements at a local inlet show a potential to quantify water property flux and to detect episodic events in the coastal environment.
Luce, J.S.; Martin, J.A.
1960-02-23
Well focused, intense ion beams are obtained by providing a multi-apertured source grid in front of an ion source chamber and an accelerating multi-apertured grid closely spaced from and in alignment with the source grid. The longest dimensions of the elongated apertures in the grids are normal to the direction of the magnetic field used with the device. Large ion currents may be withdrawn from the source, since they do not pass through any small focal region between the grids.
Weatherill, John; Krause, Stefan; Voyce, Kevin; Drijfhout, Falko; Levy, Amir; Cassidy, Nigel
2014-03-01
Integrated approaches for the identification of pollutant linkages between aquifers and streams are of crucial importance for evaluating the environmental risks posed by industrial contaminants like trichloroethene (TCE). This study presents a systematic, multi-scale approach to characterising groundwater TCE discharge to a 'gaining' UK lowland stream receiving baseflow from a major Permo-Triassic sandstone aquifer. Beginning with a limited number of initial monitoring points, we aim to provide a 'first pass' mechanistic understanding of the plume's fate at the aquifer/stream interface using a novel combination of streambed diffusion samplers, riparian monitoring wells and drive-point mini-piezometers in a spatially nested sampling configuration. Our results indicate the potential discharge zone of the plume to extend along a stream reach of 120 m in length, delineated by a network of 60 in-situ diffusion samplers. Within this section, a 40 m long sub-reach of higher concentration (>10 μg L(-1)) was identified, centred on a meander bend in the floodplain. Twenty-five multi-level mini-piezometers installed to target this down-scaled reach revealed even higher TCE concentrations (20-40 μg L(-1)), significantly above alluvial groundwater samples (<6 μg L(-1)) from 15 riparian monitoring wells. Significant lateral and vertical spatial heterogeneity in TCE concentrations within the top 1 m of the streambed was observed with the decimetre-scale vertical resolution provided by multi-level mini-piezometers. It appears that the distribution of fine-grained material in the Holocene deposits of the riparian floodplain and below the channel is exerting significant local-scale geological controls on the location and magnitude of the TCE discharge. Large-scale in-situ biodegradation of the plume was not evident during the monitoring campaigns. However, detections of cis-1,2-dichloroethene and vinyl chloride in discrete sections of the sediment profile indicate that shallow (e.g., <20 cm) TCE transformation may be significant at a local scale in the streambed deposits. Our findings highlight the need for efficient multi-scale monitoring strategies in geologically heterogeneous lowland stream/aquifer systems in order to more adequately quantify the risk to surface water ecological receptors posed by point-source groundwater contaminants like TCE.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
DOT National Transportation Integrated Search
2006-12-01
Over the last several years, researchers at the University of Arizona's ATLAS Center have developed an adaptive ramp metering system referred to as MILOS (Multi-Objective, Integrated, Large-Scale, Optimized System). The goal of this project is ...
Simultaneous Processing of Visible and Long-Wave Infrared Satellite Imagery
2015-10-19
Telescope, formerly the Gamma-ray Large Area Telescope (GLAST), and the Hubble Space Telescope (HST). The visible data was processed with a multi-frame...prevalent in the Hubble pass where the panels were already close to the background values. These results are promising; the dimmer features of the object
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.
2014-12-01
Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) has been developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with the new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments with and without nudging large-scale forcings to those of the ERA-Interim reanalysis were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water content) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and microphysics.
Addressing the challenges of standalone multi-core simulations in molecular dynamics
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-07-01
Computational modelling in material science involves mathematical abstractions of force fields between particles with the aim of postulating, developing and understanding materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data is generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer using Message Passing Interface (MPI) parallel code running on hardware platforms with wide specifications, such as single/multi-processor, multi-core machines, with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little is written with respect to the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed. The growing trend towards graphical processor units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy formation energy calculations using our own parallelized standalone code called Verlet-Stormer velocity (VSV) operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
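The three MPI calls named above are enough to express a rank-parallel pair-energy sum. Below is a minimal mpi4py analogue; the Lennard-Jones-style pair term, the cutoff and the cyclic work split are didactic assumptions, not the VSV code or its Sutton-Chen potential.

```python
# Run with, e.g.: mpiexec -n 4 python pair_energy.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()                     # MPI_Comm_size
rank = comm.Get_rank()                     # MPI_Comm_rank

# Identical coordinates on every rank (same seed); toy 50x50x50 box.
atoms = np.random.default_rng(42).random((3000, 3)) * 50.0

# Cyclic work split: rank r handles atoms r, r+size, r+2*size, ...
local_e = 0.0
for p in range(rank, len(atoms), size):
    d = np.linalg.norm(atoms[p + 1:] - atoms[p], axis=1)
    d = d[d < 5.0]                         # assumed cutoff radius
    local_e += np.sum(d ** -12 - d ** -6)  # toy Lennard-Jones-style pair term

total_e = comm.reduce(local_e, op=MPI.SUM, root=0)   # MPI_Reduce
if rank == 0:
    print("total pair energy:", total_e)
```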
Pass-transistor very large scale integration
NASA Technical Reports Server (NTRS)
Maki, Gary K. (Inventor); Bhatia, Prakash R. (Inventor)
2004-01-01
Logic elements are provided that permit reductions in layout size and avoidance of hazards. Such logic elements may be included in libraries of logic cells. A logical function to be implemented by the logic element is decomposed about logical variables to identify factors corresponding to combinations of the logical variables and their complements. A pass transistor network is provided for implementing the pass network function in accordance with this decomposition. The pass transistor network includes ordered arrangements of pass transistors that correspond to the combinations of variables and complements resulting from the logical decomposition. The logic elements may act as selection circuits and be integrated with memory and buffer elements.
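In logic-design terms, the decomposition described is a Shannon expansion, f = x'·f|x=0 + x·f|x=1, with each cofactor realized by one branch of the pass-transistor network. A small truth-table check of that identity (illustrative Python, not the patented circuit):

```python
from itertools import product

def cofactors(f, var):
    """Return f with input bit `var` fixed to 0 and to 1."""
    def fix(val):
        return lambda bits: f(bits[:var] + (val,) + bits[var + 1:])
    return fix(0), fix(1)

f = lambda b: (b[0] & b[1]) | b[2]     # example function f(a, b, c)
f0, f1 = cofactors(f, 0)               # decompose about variable a

# The identity a pass network realizes: f == a'*f0 | a*f1 for every input.
assert all(
    f(bits) == (((1 - bits[0]) & f0(bits)) | (bits[0] & f1(bits)))
    for bits in product((0, 1), repeat=3)
)
print("Shannon expansion about a verified")
```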
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object oriented, easy-to-use, high performance, C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations.
Zapata, Luis E.
2004-12-21
The average power output of a laser is scaled, to first order, by increasing the transverse dimension of the gain medium while increasing the thickness of an index matched light guide proportionately. Strategic facets cut at the edges of the laminated gain medium provide a method by which the pump light introduced through edges of the composite structure is trapped and passes through the gain medium repeatedly. Spontaneous emission escapes the laser volume via these facets. A multi-faceted disk geometry with grooves cut into the thickness of the gain medium is optimized to passively reject spontaneous emission generated within the laser material, which would otherwise be trapped and amplified within the high index composite disk. Such geometry allows the useful size of the laser aperture to be increased, enabling the average laser output power to be scaled.
NASA Astrophysics Data System (ADS)
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu
2017-09-01
An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply a multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. These results are mainly due to significant differences in topographic factors over the large-scale area, human activities and climate variability. The multi-site method involves dividing the large watershed into smaller watersheds and applying the calibrated parameters of the multi-site calibration to the entire watershed. It is anticipated that this case study could provide experience of multi-site calibration in a large-scale basin, and provide a good foundation for the simulation of other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.
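The abstract does not spell out the calibration objective, but a multi-site objective is commonly built by weighting a goodness-of-fit score, such as the Nash-Sutcliffe efficiency (NSE), across gauges. A hedged sketch with illustrative gauge names, equal weights and stand-in flows:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_site_objective(simulated, observed, weights=None):
    """simulated/observed: dicts mapping gauge name -> flow series."""
    sites = list(observed)
    w = weights or {s: 1.0 / len(sites) for s in sites}
    return sum(w[s] * nse(simulated[s], observed[s]) for s in sites)

obs = {"gauge_A": np.array([1.0, 2.0, 4.0, 3.0]),
       "gauge_B": np.array([0.5, 1.5, 2.5, 2.0]),
       "gauge_C": np.array([2.0, 2.2, 3.1, 2.9])}
sim = {s: v * 1.1 for s, v in obs.items()}   # stand-in model output
print(multi_site_objective(sim, obs))        # maximized during calibration
```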
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance.
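The coding step can be sketched with a projected ISTA solver for non-negative sparse codes against a fixed dictionary; the multi-scale part would simply repeat this per scale layer. A minimal numpy sketch under stated assumptions (random dictionary, illustrative regularization), not the paper's exact model:

```python
import numpy as np

def nn_sparse_code(D, x, lam=0.1, iters=200):
    """min_a 0.5*||x - D a||^2 + lam*sum(a) subject to a >= 0 (projected ISTA)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        a = np.maximum(a - (grad + lam) / L, 0.0)   # gradient step + projection
    return a

rng = np.random.default_rng(0)
D = rng.random((64, 128))                  # dictionary for one scale layer
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = D @ np.maximum(rng.standard_normal(128), 0.0) * 0.1  # synthetic feature
a = nn_sparse_code(D, x)
print("nonzero coefficients:", np.count_nonzero(a))
```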
Scalability and Portability of Two Parallel Implementations of ADI
NASA Technical Reports Server (NTRS)
Phung, Thanh; VanderWijngaart, Rob F.
1994-01-01
Two domain decompositions for the implementation of the NAS Scalar Penta-diagonal Parallel Benchmark on MIMD systems are investigated, namely transposition and multi-partitioning. Hardware platforms considered are the Intel iPSC/860 and Paragon XP/S-15, and clusters of SGI workstations on ethernet, communicating through PVM. It is found that the multi-partitioning strategy offers the kind of coarse granularity that allows scaling up to hundreds of processors on a massively parallel machine. Moreover, efficiency is retained when the code is ported verbatim (save message passing syntax) to a PVM environment on a modest size cluster of workstations.
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also
ERIC Educational Resources Information Center
Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria
2014-01-01
The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…
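One way to picture the multiple-measure framework is to compare the cross-validated classification accuracy of a single-measure model against a two-measure logistic model. Everything below is a synthetic stand-in (score distributions, the pass/fail rule, the logistic model), not the study's data or analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 178                                        # matches the reported sample size
orf = rng.normal(140, 35, n)                   # words read correctly per minute
maze = rng.normal(20, 6, n)                    # maze correct responses
latent = 0.02 * orf + 0.10 * maze + rng.normal(0, 1, n)
passed = (latent > np.median(latent)).astype(int)

for name, X in [("ORF only", orf.reshape(-1, 1)),
                ("ORF + maze", np.column_stack([orf, maze]))]:
    acc = cross_val_score(LogisticRegression(), X, passed, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated classification accuracy")
```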
NASA Astrophysics Data System (ADS)
Mokhtabad Amrei, Mohsen
13Cr4Ni martensitic stainless steels are known for their outstanding performance in the hydroelectric industry, where they are mainly used in the construction of turbine components. Considering the size and geometry of turbine runners and blades, multi-pass welding procedures are commonly used in the fabrication and repair of such turbines. The final microstructure and mechanical properties of the weld are sensitive to the welding process parameters and thermal history. In the case of 13Cr4Ni steel, the thermal cycles imposed by the multi-pass welding operation have significant effects on the complex weld microstructure. Additionally, post-weld heat treatments are commonly used to reduce weld heterogeneity and improve the material's mechanical properties by tempering the microstructure and by forming a "room-temperature-stable austenite." In the first phase of this research, the microstructures and crystallographic textures of as-welded single-pass and double-pass welds were studied as a basis for studying the more complex multi-pass weld microstructure. This study found that the maximum hardness is obtained in the high-temperature heat-affected zone inside the base metal. In particular, the results showed that the heat cycle imposed by the second pass increases the hardness of the previous pass because it produces a finer martensite microstructure. In areas of the heat-affected zone, a tempering effect is reported from 3 to 6 millimeters from the fusion line. The presence of the austenite phase in these areas is a matter of interest, and it can be indicative of the microstructural complexity of multi-pass welds. In the second phase of research, the microstructure of multi-pass welds was found to be more heterogeneous than that of single- and double-pass welds. Any individual pass in a multi-pass weld consists of several regions formed by the heat cycles of adjacent weld passes. Results showed that modification of former austenite grains occurred in areas close to the subsequent weld passes. Furthermore, low-angle interface laths were observed inside martensite sub-blocks over different regions. The hardness profile of a multi-pass weld was explained by the overlaying heat effects of surrounding passes. In some regions, a tempered matrix was observed, while in other regions a double-quenched microstructure was found. The final aspect of this study focused on the effects of post-weld heat treatments on reformed austenite and carbide formation, and on the evolution of hardness. The effects of tempering duration and temperature on microstructure were investigated. The study found that nanometer-sized carbides form at martensite lath interfaces and sub-block boundaries. Additionally, it was determined that for any holding duration, the maximum austenite percentage is achievable by tempering at 610 °C. Similarly, the maximum softening was reported for tempering at 610 °C, for any given holding period.
NASA Technical Reports Server (NTRS)
Dittmar, J. H.
1985-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from its peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method of reducing airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement, as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test-bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.
Yoshikawa, Masayuki; Yasuhara, Ryo; Ohta, Koichi; Chikatsu, Masayuki; Shima, Yoriko; Kohagura, Junko; Sakamoto, Mizuki; Nakashima, Yousuke; Imai, Tsuyoshi; Ichimura, Makoto; Yamada, Ichihiro; Funaba, Hisamichi; Minami, Takashi
2016-11-01
Highly time-resolved electron temperature measurements are useful for fluctuation studies. A multi-pass Thomson scattering (MPTS) system is proposed to improve both the TS signal intensity and the time resolution. The MPTS system in GAMMA 10/PDX has been constructed to enhance the Thomson-scattered signals and improve the measurement accuracy. The MPTS system has a polarization-based configuration with an image-relaying system. We optimized the image-relaying optics to improve the multi-pass laser confinement and to obtain stable MPTS signals over more than ten passes. The integrated MPTS signals were about five times larger than those of the single-pass system. Finally, time-dependent electron temperatures were obtained at MHz sampling rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogacz, Alex
We summarize the current state of a concept for muon acceleration aimed at a future Neutrino Factory and extendable to a Higgs Factory. The main thrust of these studies was to reduce the overall cost while maintaining performance by exploring the interplay between the complexity of the cooling systems and the acceptance of the accelerator complex. To ensure adequate survival of the short-lived muons, acceleration must occur at high average gradient. The need for large transverse and longitudinal acceptances drives the design of the acceleration system to an initially low RF frequency, e.g., 325 MHz, which is then increased to 650 MHz as the transverse size shrinks with increasing energy. High-gradient normal-conducting RF cavities at these frequencies require extremely high peak-power RF sources. Hence superconducting RF (SRF) cavities are chosen. We consider an SRF-efficient design based on a multi-pass (4.5-pass) "dogbone" RLA, extendable to multi-pass FFAG-like arcs.
Multi-view L2-SVM and its multi-view core vector machine.
Huang, Chengquan; Chung, Fu-lai; Wang, Shitong
2016-03-01
In this paper, a novel L2-SVM based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias in its objective function and hence has flexibility like ν-SVC, in the sense that the number of yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the difference of different views by imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into its GCVM version (MvCVM), which enables fast training on large-scale multi-view datasets, with asymptotic time complexity linear in the sample size and space complexity independent of the sample size. Our experimental results demonstrated the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and of the proposed MvCVM classifier for large-scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jitsuhiro, Takatoshi; Toriyama, Tomoji; Kogure, Kiyoshi
We propose a noise suppression method based on multi-model compositions and multi-pass search. In real environments, input speech for speech recognition includes many kinds of noise signals. To obtain good recognition candidates, it is important to suppress many kinds of noise signals at once and to find the target speech. Before noise suppression, to find speech and noise label sequences, we introduce a multi-pass search with acoustic models that include many kinds of noise models and their compositions, their n-gram models, and their lexicon. Noise suppression is then performed frame-synchronously using the multiple models selected by the recognized label sequences with time alignments. We evaluated this method on the E-Nightingale task, which contains voice memoranda spoken by nurses during actual work at hospitals. The proposed method obtained higher performance than the conventional method.
The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems
ERIC Educational Resources Information Center
Diamanti, Eirini Ilana
2012-01-01
Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Togashi, H., E-mail: togashi@fusion.k.u-tokyo.ac.jp; Ejiri, A.; Nakamura, K.
2014-11-15
The multi-pass Thomson scattering (TS) scheme enables obtaining many photons by accumulating multiple TS signals. The signal-to-noise ratio (SNR) depends on the accumulation number. In this study, we performed multi-pass TS measurements on ohmically heated plasmas, and the relationship between the SNR and the accumulation number was investigated. As a result, the improvement of the SNR in this experiment showed a tendency similar to that calculated for the background-noise-dominant situation.
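The background-noise-dominant scaling referred to above is easy to verify numerically. Below is a minimal Python sketch (not from the paper; the per-pass signal and noise levels are arbitrary assumptions) showing that accumulating N passes grows the SNR roughly as sqrt(N) when the background noise dominates:

```python
import numpy as np

rng = np.random.default_rng(0)
signal, sigma = 1.0, 5.0   # assumed per-pass TS signal and dominant background noise

base_snr = signal / sigma
for n in (1, 2, 5, 10, 20):
    # accumulate n Thomson-scattered pulses: the signal adds coherently,
    # while the background noise adds in quadrature
    trials = n * signal + rng.normal(0.0, sigma * np.sqrt(n), 200_000)
    snr = trials.mean() / trials.std()
    print(f"{n:2d} passes: SNR gain = {snr / base_snr:4.2f}  (sqrt(n) = {np.sqrt(n):4.2f})")
```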
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper-ocean pycnocline, continental slopes, and large-scale eddies. Capturing the wide range of length and time scales involved in the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation over idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time-stepping process.
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system-level requirements: robustness, flexibility, reusability, and scalability. Corresponding to these four system-level requirements, four major challenges arise. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method, as an implementation of distributed intelligent control, has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with a focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and modular control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it groups the components at the same level into modules and designs common interfaces for all of the components in the same module; third, replications are made for critical agents and organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle the uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure satisfying the running intersection property and the d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, the engine balances communication cost against the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system.
However, for a real system, sub-Bayesian networks serving as nodes could be lost, and the communication network could be shut down due to partial damage to the system. Therefore, online and automatic MSDBN structure formation is necessary for making robust state estimations and increasing the survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms with a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system designs of a simplified ship chilled-water system and a notional ship chilled-water system are demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
NASA Astrophysics Data System (ADS)
Alvera-Azcarate, A.; Barth, A.; Virmani, J. I.; Weisberg, R. H.
2007-05-01
The Intra-Americas Sea (IAS) surface circulation is characterized by large-scale currents. The Caribbean Current, which originates in the Lesser Antilles, travels westward through the Caribbean Sea, along eastern Mexico, and passes through the Gulf of Mexico to finally form the Gulf Stream. This complex system of currents is also characterized by high mesoscale variability, such as eddies and meanders. The objectives of this work are twofold. First, the multi-scale surface circulation of the IAS is described using satellite altimetry; the topographic influence of the different basins forming the IAS, the characteristic time and spatial scales, and the time variability of the surface circulation are addressed. The second objective is to analyze the influence of this large-scale circulation on a small-scale coastal domain with a ROMS-based model of the Cariaco Basin (Venezuela). Cariaco is a deep (1400 m), semi-enclosed basin connected to the open ocean by two shallow channels (the Tortuga and Centinela Channels). Its connection with the open sea, and therefore the ventilation of the basin, occurs in the surface layers. The Cariaco ROMS model will be used to study the exchanges of mass, heat and salt through the channels. A 1/60-degree ROMS model nested in the global 1/12-degree HYCOM model from the Naval Research Laboratory will be used for this study. In addition, a series of observations (satellite altimetry and in situ temperature, salinity and velocity data) will be used to assess the influence of the Caribbean circulation on the basin.
Large-scale delamination of multi-layers of transition metal carbides and carbonitrides “MXenes”
Naguib, Michael; Unocic, Raymond R.; Armstrong, Beth L.; ...
2015-04-17
Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that, in turn, weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide and titanium carbonitride MXenes.
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale, heterogeneous, multi-site, longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) utilizing multi-modal and repeated scans, (2) incorporating highly deformable registration, (3) using an extended set of tissue definitions, and (4) using multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, with a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and tissue classification approach that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
Large 3D direct laser written scaffolds for tissue engineering applications
NASA Astrophysics Data System (ADS)
Trautmann, Anika; Rüth, Marieke; Lemke, Horst-Dieter; Walther, Thomas; Hellmann, Ralf
2018-01-01
We report on the fabrication of three-dimensional direct laser written scaffolds for tissue engineering and the seeding of primary fibroblasts on these structures. Scaffolds are realized by two-photon-absorption-induced polymerization in the inorganic-organic hybrid polymer OrmoComp using a 515 nm femtosecond laser. A nonstop single-line single-pass writing process is implemented in order to produce periodic, reproducible, large-scale structures with dimensions in the range of several millimeters and to reduce the process time to less than one hour. This method allows us to determine optimized process parameters for writing stable structures while achieving pore sizes ranging from 5 μm to 90 μm and scanning speeds of up to 5 mm/s. After a multi-stage post-treatment, normal human dermal fibroblasts are applied to the scaffolds to test whether these macroscopic structures, with their large surface and numerous small gaps between the pores, provide nontoxic conditions. Furthermore, we study the cell behavior in this environment and observe both cell growth on, and ingrowth into, the three-dimensional structures. In particular, fibroblasts adhere and grow also on the vertical walls of the scaffolds.
Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures
NASA Astrophysics Data System (ADS)
Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi
2017-04-01
Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming complex porous structures that govern their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can contribute significantly to a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions were employed for better reproduction of complex well-connected porous structures: the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and to select independent interchanging pairs for the parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency for porous models at various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for the reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
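As an illustration of one of the statistical descriptors named above, the following Python sketch (a toy example, not the authors' code) estimates the two-point probability function S2 of a binary pore/solid image with an FFT autocorrelation, the standard fast evaluation used inside simulated-annealing reconstruction loops:

```python
import numpy as np

def two_point_probability(phase: np.ndarray) -> np.ndarray:
    """S2(r): probability that two points separated by lag r both lie in
    `phase` (a binary array), estimated via FFT autocorrelation under
    periodic boundary conditions."""
    f = np.fft.fftn(phase.astype(float))
    s2 = np.fft.ifftn(f * np.conj(f)).real / phase.size
    return np.fft.fftshift(s2)   # put zero lag at the array centre

# toy 2-D porous medium with ~30% pore phase
rng = np.random.default_rng(1)
pores = (rng.random((128, 128)) < 0.3).astype(int)
s2 = two_point_probability(pores)
print("S2(0) ~ porosity:", s2[64, 64])   # zero-lag value equals the pore fraction
```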
Quarter Scale RLV Multi-Lobe LH2 Tank Test Program
NASA Technical Reports Server (NTRS)
Blum, Celia; Puissegur, Dennis; Tidwell, Zeb; Webber, Carol
1998-01-01
Thirty cryogenic pressure cycles have been completed on the Lockheed Martin Michoud Space Systems quarter-scale RLV composite multi-lobe liquid hydrogen propellant tank assembly, completing the initial phases of testing and demonstrating technologies key to the success of large-scale composite cryogenic tankage for the X-33, RLV, and other future launch vehicles.
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.
2017-05-01
GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed-memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision, Version 2 of Basics, makes mostly minor additions to functionality and includes some simplifying name changes.
SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.
Xue, Yuan; Xu, Tao; Zhang, Han; Long, L Rodney; Huang, Xiaolei
2018-05-03
Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: the critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only the gradients passed along by the critic, with the aim of minimizing the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and that it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013, SegAN gives performance comparable to the state of the art for whole-tumor and tumor-core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor-core segmentation; on BRATS 2015, SegAN achieves better performance than the state of the art in both Dice score and precision.
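The multi-scale L1 idea can be sketched independently of any deep-learning framework. The toy Python below is an illustrative stand-in only: an average-pooling image pyramid plays the role of the critic's learned multi-layer features, which is an assumption of this sketch, not the paper's architecture. It accumulates mean absolute differences over several scales, so both pixel-level and region-level disagreements contribute to the loss:

```python
import numpy as np

def avg_pool2(x: np.ndarray) -> np.ndarray:
    """2x2 average pooling, halving each spatial dimension."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_l1(a: np.ndarray, b: np.ndarray, levels: int = 3) -> float:
    """Mean absolute difference accumulated over a pyramid of scales."""
    total = 0.0
    for _ in range(levels):
        total += np.abs(a - b).mean()
        a, b = avg_pool2(a), avg_pool2(b)
    return total / levels

rng = np.random.default_rng(0)
gt_masked = rng.random((64, 64))                               # image masked by ground truth
pred_masked = gt_masked + 0.1 * rng.standard_normal((64, 64))  # image masked by prediction
print(multiscale_l1(gt_masked, pred_masked))
```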
Gary M. Tabor; Anne Carlson; Travis Belote
2014-01-01
The Yellowstone to Yukon Conservation Initiative (Y2Y) was established over 20 years ago as an experiment in large landscape conservation. Initially, Y2Y emerged as a response to large scale habitat fragmentation by advancing ecological connectivity. It also laid the foundation for large scale multi-stakeholder conservation collaboration with almost 200 non-...
The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation
NASA Astrophysics Data System (ADS)
Noh, Yookyung
The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large, deep surveys in multiple wavelength bands become possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer us complementary information and cross-checks of cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.
NASA Astrophysics Data System (ADS)
Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.
2016-03-01
Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate thin sheet of laser light. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structure and the hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating a resolution enhancement technique with c-LSFM to increase the resolving power under a large field of view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research, with translational implications for congenital heart diseases.
Multi-level structure in the large scale distribution of optically luminous galaxies
NASA Astrophysics Data System (ADS)
Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen
1992-04-01
Fractal dimensions in the large-scale distribution of galaxies have been calculated with the method given by Wen et al. [1]. Samples are taken from the CfA redshift survey in the northern and southern galactic hemispheres [2], and the results from the two regions are compared with each other. There are significant differences between the distributions in the two regions; however, our analyses do show some common features. All subsamples distinctly show multi-level fractal character. Combining this with results from analyses of samples of IRAS galaxies and of redshift surveys in pencil-beam fields [3,4], we suggest that multi-level fractal structure is most likely a general and important character of the large-scale distribution of galaxies. The possible implications of this character are discussed.
McManus, IC; Thompson, M; Mollon, J
2006-01-01
Background: A potential problem of clinical examinations is known as the hawk-dove problem: some examiners are more stringent and require a higher performance than other examiners, who are more lenient. Although the problem has been known qualitatively for at least a century, we know of no previous statistical estimation of the size of the effect in a large-scale, high-stakes examination. Here we use FACETS to carry out multi-facet Rasch modelling of the paired judgements made by examiners in the clinical examination (PACES) of MRCP(UK), where identical candidates were assessed in identical situations, allowing calculation of examiner stringency. Methods: Data were analysed from the first nine diets of PACES, taken between June 2001 and March 2004 by 10,145 candidates. Each candidate was assessed by two examiners on each of seven separate tasks, with the candidates assessed by a total of 1,259 examiners, resulting in a total of 142,030 marks. Examiner demographics were described in terms of age, sex, ethnicity, and total number of candidates examined. Results: FACETS suggested that about 87% of the main-effect variance was due to candidate differences, 1% to station differences, and 12% to differences between examiners in leniency-stringency. Multiple regression suggested that greater examiner stringency was associated with greater examiner experience and with being from an ethnic minority. Male and female examiners showed no overall difference in stringency. Examination scores were adjusted for examiner stringency, and it was shown that for the present pass mark, the outcome for 95.9% of candidates would be unchanged using adjusted marks, whereas 2.6% of candidates would have passed even though they had failed on the basis of raw marks, and 1.5% of candidates would have failed despite passing on the basis of raw marks. Conclusion: Examiners do differ in their leniency or stringency, and the effect can be estimated using Rasch modelling. The reasons for the differences are not clear, but there are some demographic correlates, and the effects appear to be reliable across time. Account can be taken of the differences, either by adjusting marks or, perhaps more effectively and more justifiably, by pairing high- and low-stringency examiners, so that raw marks can be used in the determination of pass and fail. PMID:16919156
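To make the stringency adjustment concrete, here is a small self-contained Python simulation. It is a crude mean-residual estimate on synthetic marks, not the FACETS Rasch model used in the paper, and all quantities (abilities, stringencies, noise levels) are assumed for illustration. Each synthetic candidate is marked by two examiners, each examiner's stringency is estimated from that examiner's average deviation from the candidate means, and marks are adjusted accordingly:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n_cand, n_exam = 500, 25
ability = rng.normal(50.0, 8.0, n_cand)
stringency = rng.normal(0.0, 3.0, n_exam)        # hawks > 0, doves < 0

# each candidate is marked by two randomly paired examiners
rows = [(c, e, ability[c] - stringency[e] + rng.normal(0.0, 2.0))
        for c in range(n_cand)
        for e in rng.choice(n_exam, size=2, replace=False)]
marks = pd.DataFrame(rows, columns=["cand", "exam", "mark"])

# crude stringency estimate: an examiner's mean deviation from candidate means
marks["resid"] = marks["mark"] - marks.groupby("cand")["mark"].transform("mean")
est = -marks.groupby("exam")["resid"].mean()      # sign flipped: hawks mark low
marks["adjusted"] = marks["mark"] + marks["exam"].map(est)

print(np.corrcoef(est, stringency)[0, 1])         # close to 1
```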
Study of multi-functional precision optical measuring system for large scale equipment
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi
2017-10-01
The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters, such as size, attitude and position, therefore requires a measurement system with high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, total station and photogrammetry system, mostly offer a single function, require station moving, and have other shortcomings. The laser tracker needs to work with a cooperative target, and it can hardly meet the requirements of measurement in extreme environments. The total station is mainly used for outdoor surveying and mapping, and it is hard for it to achieve the accuracy demanded in industrial measurement. The photogrammetry system can achieve wide-range multi-point measurement, but the measuring range is limited and the station needs to be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning the measurement path and by tracking and measuring a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of complex mechanical systems, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures high-accuracy measurement, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing capability of large-scale and high-end equipment.
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
NASA Astrophysics Data System (ADS)
Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2018-04-01
We describe the algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version of the code is developed based on our previous single-GPU version. In multi-GPU runs, one GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which can enlarge the maximum system size under the same device conditions. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles on a workstation. Good scaling across many nodes of a cluster is presented for the two-patch particles.
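The ghost-cell (halo) exchange between neighbouring domains that such an MPI domain decomposition requires can be sketched in a few lines. The Python/mpi4py fragment below is an illustrative 1-D decomposition with periodic boundaries; GALAMOST's actual GPU-optimised communication layer is not shown. Each rank exchanges one boundary cell with each neighbour per step:

```python
# run with: mpiexec -n 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

ncell = 10                                    # cells owned by this domain
local = np.full(ncell + 2, float(rank))       # plus one ghost cell on each side
left, right = (rank - 1) % size, (rank + 1) % size

# send my first real cell to the left neighbour while filling my right
# ghost cell from the right neighbour's first real cell
comm.Sendrecv(sendbuf=local[1:2], dest=left,
              recvbuf=local[ncell + 1:], source=right)
# and symmetrically for the other direction
comm.Sendrecv(sendbuf=local[ncell:ncell + 1], dest=right,
              recvbuf=local[0:1], source=left)

print(f"rank {rank}: ghosts = {local[0]}, {local[-1]}")
```

Using Sendrecv rather than separate blocking sends avoids deadlock when all domains exchange simultaneously.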
NASA Astrophysics Data System (ADS)
Zhu, Chen-Xi; Wang, Chi-Chuan
2018-01-01
This study proposes a numerical model for plate heat exchangers that is capable of handling supercritical CO2. The plate heat exchangers under investigation include Z-type (1-pass), U-type (1-pass), and 1-2-pass configurations. The plate spacing is 2.9 mm with a plate thickness of 0.8 mm, and each plate is 600 mm wide and 218 mm high with a 60-degree chevron angle. The proposed model takes into account the drastic changes in CO2 properties near the critical point. The simulation is first compared with existing data for water-to-water plate heat exchangers, with good agreement. The flow distribution, pressure drop, and heat transfer performance subject to supercritical CO2 in plate heat exchangers are then investigated. It is found that the flow velocity increases consecutively from the entrance plate toward the last plate for the Z-type arrangement, and this applies to both the water side and the CO2 side. However, the flow distribution of the U-type arrangement on the water side shows the opposite trend. Conversely, the flow distribution for the U-type arrangement of CO2 depends on the specific flow ratio (C*). A lower C* such as 0.1 may reverse the distribution, i.e., the flow velocity increases moderately along the plate channels as in the Z-type, while a large C* of 1 resembles the typical distribution in the water channels. The flow distribution on the CO2 side at the first and last plates shows a pronounced drop/surge phenomenon, while the channels on the water side do not reveal this kind of behavior. The performance of the 2-pass plate heat exchanger, in terms of heat transfer rate, is better than that of the 1-pass design only when C* is comparatively small (C* < 0.5). The multi-pass design is more effective when the dominant thermal resistance falls on the CO2 side.
Numerical and experimental study on multi-pass laser bending of AH36 steel strips
NASA Astrophysics Data System (ADS)
Fetene, Besufekad N.; Kumar, Vikash; Dixit, Uday S.; Echempati, Raghu
2018-02-01
Laser bending is a process for bending plates, small sheets, strips and tubes, in which a moving or stationary laser beam heats the workpiece to achieve the desired curvature through thermal stresses. Researchers have studied the effects of different process parameters related to the laser source, material and workpiece geometry on laser bending of metal sheets, with most studies focusing on large sheets. Workpiece geometry parameters such as sheet thickness, length and width also affect the bend angle considerably. In this work, the effects of width and thickness on multi-pass laser bending of AH36 steel strips were studied experimentally and numerically. A finite element model using ABAQUS® was developed to investigate the size effect on the prediction of the bend angle. Microhardness and flexure tests showed an increase in the flexural strength as well as the microhardness in the scanned zone. The microstructures of the bent strips also supported the physical observations.
An improved method to characterise the modulation of small-scale turbulence by large-scale structures
NASA Astrophysics Data System (ADS)
Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta
2015-11-01
A key aspect of turbulent boundary layer dynamics is "modulation," which refers to the degree to which coherent large-scale structures (LS) amplify or attenuate the intensity of the small-scale structures (SS) through large-scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define this envelope by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by a low-pass filtering step leads to an important loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
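For readers unfamiliar with the envelope definition being re-examined, the following Python sketch builds the classical Mathis et al. (2009) envelope, i.e., the low-pass-filtered modulus of the analytic signal, on synthetic signals (the frequencies and modulation amplitude are assumptions of this toy example, not the DNS data of the study) and shows that it recovers an imposed large-scale modulation of a small-scale carrier:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000.0                                # sampling frequency, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
ls = np.sin(2 * np.pi * 2.0 * t)           # large-scale (LS) motion
ss = (1 + 0.5 * ls) * np.sin(2 * np.pi * 80.0 * t)   # SS carrier modulated by LS

# modulus of the analytic signal, then a low-pass filter, gives the SS envelope
envelope = np.abs(hilbert(ss))
b, a = butter(4, 5.0 / (fs / 2))           # 4th-order Butterworth, 5 Hz cut-off
envelope_lp = filtfilt(b, a, envelope)

# modulation appears as correlation between the LS signal and the SS envelope
print(f"LS / SS-envelope correlation: {np.corrcoef(ls, envelope_lp)[0, 1]:.2f}")
```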
Multi-petascale highly efficient parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100-petaflop scale includes node architectures based upon System-on-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel message-passing processing.
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
NASA Astrophysics Data System (ADS)
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for a better understanding of the multi-scale structure of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here, self-consistent multi-scale models are derived systematically following multi-scale asymptotic methods and are used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing an assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow, through eddy flux divergences of momentum and temperature, in a transparent fashion. Specifically, this thesis includes three research projects on the multi-scale interaction of organized tropical convection, involving tropical flows in different scaling regimes and utilizing correspondingly different multi-scale models. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation, such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful for understanding the scale interaction of organized tropical convection and for improving the parameterization of unresolved processes in global climate models.
Real-time digital signal recovery for a multi-pole low-pass transfer function system.
Lee, Jhinhwan
2017-08-01
In order to solve the problems of waveform distortion and signal delay caused by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method for real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis of the convolution kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied to the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. The method is generalized to multi-pole low-pass systems, and its noise characteristics are the inverse of the low-pass filter characteristics. It can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
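The single-pole case is simple enough to state concretely. In the discrete-time Python sketch below (a minimal illustration under an assumed first-order filter model; the paper's treatment covers general multi-pole systems), the low-pass output y[n] = a*y[n-1] + (1-a)*x[n] is inverted exactly in real time from just the current and previous output samples. The price is amplification of high-frequency noise, in line with the abstract's remark that the noise characteristics are the inverse of the filter's:

```python
import numpy as np

def lowpass(x: np.ndarray, a: float) -> np.ndarray:
    """Discrete single-pole low-pass: y[n] = a*y[n-1] + (1-a)*x[n]."""
    y = np.empty_like(x)
    acc = 0.0
    for n, v in enumerate(x):
        acc = a * acc + (1.0 - a) * v
        y[n] = acc
    return y

def recover(y: np.ndarray, a: float) -> np.ndarray:
    """Exact real-time inverse of the filter above; each output sample
    needs only the current and previous distorted samples."""
    x = np.empty_like(y)
    x[0] = y[0] / (1.0 - a)
    x[1:] = (y[1:] - a * y[:-1]) / (1.0 - a)
    return x

a = 0.95                                           # pole location (heavy smoothing)
src = np.sign(np.sin(np.linspace(0, 20, 500)))     # square-ish source waveform
print(np.allclose(recover(lowpass(src, a), a), src))   # True
```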
Liu, Yu; Sun, Changfeng; Li, Qiang; Cai, Qiufang
2016-01-01
The historical May-October mean temperature since 1831 was reconstructed based on tree-ring widths of Qinghai spruce (Picea crassifolia Kom.) collected on Mt. Dongda, north of the Hexi Corridor in Northwest China. The regression model explained 46.6% of the variance of the instrumentally observed temperature. The cold periods in the reconstruction were 1831-1889, 1894-1901, 1908-1934 and 1950-1952, and the warm periods were 1890-1893, 1902-1907, 1935-1949 and 1953-2011. During the instrumental period (1951-2011), an obvious warming trend appeared in the last twenty years. The reconstruction displayed patterns similar to a temperature reconstruction from the east-central Tibetan Plateau at the inter-decadal timescale, indicating that the temperature reconstruction in this study is a reliable proxy for Northwest China. The reconstructed series was also found to be in good agreement with the Northern Hemisphere temperature at a decadal timescale. Multi-taper-method spectral analysis detected several low- and high-frequency cycles (2.3-2.4-year, 2.8-year, 3.4-3.6-year, 5.0-year, 9.9-year and 27.0-year). Taken together with these cycles, the relationship of the low-frequency variability with the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO) and the Southern Oscillation (SO) suggests that the reconstructed temperature variations may be related to large-scale atmospheric-oceanic variability. Major volcanic eruptions were partly reflected in the reconstructed temperatures after high-pass filtering; these events promoted anomalous cooling in this region. The results of this study not only provide new information for assessing long-term temperature changes in the Hexi Corridor of Northwest China, but also further demonstrate the effects of large-scale atmospheric-oceanic circulation on climate change in Northwest China.
Spectrometer capillary vessel and method of making same
Linehan, John C.; Yonker, Clement R.; Zemanian, Thomas S.; Franz, James A.
1995-01-01
The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube.
Zeng, Jinle; Chang, Baohua; Du, Dong; Wang, Li; Chang, Shuhe; Peng, Guodong; Wang, Wenzhu
2018-01-05
Multi-layer/multi-pass welding (MLMPW) technology is widely used in the energy industry to join thick components. During automatic welding using robots or other actuators, it is very important to recognize the actual weld pass position by visual methods; this can be used not only to perform reasonable path planning for the actuators, but also to correct any deviations between the welding torch and the weld pass position in real time. However, due to the small geometrical differences between adjacent weld passes, existing weld position recognition technologies such as structured-light methods are not suitable for weld position detection in MLMPW. This paper proposes a novel method for weld position detection that fuses various kinds of information in MLMPW. First, a synchronous acquisition method is developed to obtain the different kinds of visual information with the directional light and structured light sources on, respectively. Then, interferences are eliminated by fusing adjacent images. Finally, the information from the directional-light and structured-light images is fused to obtain the 3D positions of the weld passes. Experimental results show that each processing step can be done in 30 ms and that the deviation is less than 0.6 mm. The proposed method can be used for automatic path planning and seam tracking in the robotic MLMPW process as well as in the electron beam freeform fabrication process.
NASA Astrophysics Data System (ADS)
Vikram, B. S.; Prakash, Roopa; K. P., Nagarjun; Selvaraja, Shankar Kumar; Supradeepa, V. R.
2018-02-01
Demand for bandwidth in optical communications necessitates the development of scalable transceivers that cater to these needs. For this, in DWDM systems with or without superchannels, the optical source needs to provide a large number of optical carriers. The conventional method of utilizing separate lasers makes the system bulky and inefficient; a multi-wavelength source which spans the entire C-band with sufficient power is needed to replace individual lasers. In addition, multi-wavelength sources at high repetition rates are necessary in various applications such as spectroscopy, astronomical spectrograph calibration, microwave photonics and arbitrary waveform generation. Here, we demonstrate a novel technique for equalized multi-wavelength source generation which produces over 160 lines at a 25 GHz repetition rate, spanning the entire C-band with a total power >700 mW. A 25 GHz comb with 16 lines is generated around 1550 nm, starting with two individual lasers, using a system of directly driven, cascaded intensity and phase modulators. This is then amplified to >1 W using an optimized Erbium-Ytterbium co-doped fiber amplifier. Subsequently, the lines are passed through a highly nonlinear fiber at its zero-dispersion wavelength. Through cascaded four-wave mixing, a ten-fold increase in the number of lines is demonstrated. A bandwidth of 4.32 THz (174 lines, SNR > 15 dB), covering the entire C-band, is generated. Enhanced spectral broadening is enabled by two key aspects: the dual-laser input provides the optimal temporal profile for spectral broadening, while comb generation prior to amplification enables greater power scaling by suppressing Brillouin scattering. The multi-wavelength source is extremely agile, with tunable center frequency and repetition rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jatana, Gurneesh; Geckler, Sam; Koeberlein, David
We designed and developed a 4-probe multiplexed multi-species absorption spectroscopy sensor system for gas property measurements on the intake side of commercial multi-cylinder internal-combustion (I.C.) engines; the resulting cycle- and cylinder-resolved concentration, temperature and pressure measurements are applicable for assessing spatial and temporal variations in the recirculated exhaust gas (EGR) distribution at various locations along the intake gas path, which in turn is relevant to assessing cylinder charge uniformity, control strategies, and CFD models. Furthermore, the diagnostic is based on absorption spectroscopy and includes an H2O absorption system (utilizing a 1.39 μm distributed feedback (DFB) diode laser) for measuring gas temperature, pressure, and H2O concentration, and a CO2 absorption system (utilizing a 2.7 μm DFB laser) for measuring CO2 concentration. The various lasers, optical components and detectors were housed in an instrument box, and the 1.39-μm and 2.7-μm lasers were guided to and from the engine-mounted probes via optical fibers and hollow waveguides, respectively. The 5 kHz measurement bandwidth allows for near-crank-angle-resolved measurements, with a resolution of 1.2 crank angle degrees at 1000 RPM. Our use of compact stainless steel measurement probes enables simultaneous multi-point measurements at various locations on the engine with minimal changes to the base engine hardware; in addition to resolving large-scale spatial variations via simultaneous multi-probe measurements, local spatial gradients can be resolved by translating individual probes. Along with details of various sensor design features and performance, we also demonstrate validation of the spectral parameters of the associated CO2 absorption transitions using both a multi-pass heated cell and the sensor probes.
Scale Interactions in the Tropics from a Simple Multi-Cloud Model
NASA Astrophysics Data System (ADS)
Niu, X.; Biello, J. A.
2017-12-01
Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment to the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics within a simplified framework for scale interactions, while using a simplified framework to describe the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (IMMD [1]), which separates three regimes: the planetary-scale climatology, the synoptic-scale waves, and the planetary-scale anomalies. The scales and strength of the observed MJO would place it in the regime of planetary-scale anomalies, which are themselves forced by nonlinear upscale fluxes from the synoptic-scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for the diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda [2] to describe the three basic cloud types (congestus, deep and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary-scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis. Journal of the Atmospheric Sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.
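The heating structure that such a multi-cloud closure provides can be illustrated with idealized vertical profiles. In the Python sketch below (idealized first- and second-baroclinic heating shapes in the spirit of Khouider and Majda [2]; the amplitudes and the 16 km tropopause height are assumptions made for illustration), deep convection projects onto the first baroclinic mode, while congestus and stratiform clouds project with opposite signs onto the second mode:

```python
import numpy as np

H = 16.0                                 # assumed tropopause height, km
z = np.linspace(0.0, H, 81)

deep = np.sin(np.pi * z / H)             # heats the whole troposphere
congestus = np.sin(2 * np.pi * z / H)    # heats below, cools aloft
stratiform = -np.sin(2 * np.pi * z / H)  # cools below, heats aloft

def total_heating(f_d: float, f_c: float, f_s: float) -> np.ndarray:
    """Total diabatic heating for given deep/congestus/stratiform amplitudes."""
    return f_d * deep + f_c * congestus + f_s * stratiform

# e.g. a deep-convection-dominated state with a stratiform tail
Q = total_heating(f_d=1.0, f_c=0.2, f_s=0.4)
print(f"heating maximum at z = {z[np.argmax(Q)]:.1f} km")  # shifted above mid-level
```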
NASA Astrophysics Data System (ADS)
Schweser, Ferdinand; Dwyer, Michael G.; Deistung, Andreas; Reichenbach, Jürgen R.; Zivadinov, Robert
2013-10-01
The assessment of abnormal accumulation of tissue iron in the basal ganglia nuclei and in white matter plaques using the gradient-echo magnetic resonance signal phase has become a research focus in many neurodegenerative diseases such as multiple sclerosis or Parkinson's disease. A common and natural approach is to calculate the mean high-pass-filtered phase of previously delineated brain structures. Unfortunately, the interpretation of such an analysis requires caution: in this paper we demonstrate that regional gray matter atrophy, which is concomitant with many neurodegenerative diseases, may itself directly result in a phase shift seemingly indicative of an increased iron concentration, even without any real change in the tissue iron concentration. Although this effect is relatively small, results of large-scale group comparisons may be driven by anatomical changes rather than by changes in the iron concentration.
Multi-scale signed envelope inversion
NASA Astrophysics Data System (ADS)
Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang
2018-06-01
Envelope inversion based on the modulation signal model was proposed to reconstruct the large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from the envelope data. However, using only the amplitude demodulation method causes the loss of the wavefield polarity information, thus increasing the possibility that the inversion yields multiple solutions. In this paper we propose a new demodulation method which retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In the numerical tests, we applied the new inversion method to a salt-layer model and the SEG/EAGE 2-D salt model using a low-cut source (frequency components below 4 Hz were truncated). The results of the numerical tests demonstrate the effectiveness of the method.
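To see why polarity matters, the small Python demonstration below (synthetic data; the sign-restoring step is a deliberately crude stand-in for the paper's demodulation operator, which is not reproduced here) computes the conventional envelope of two reflection events of opposite polarity and shows that the plain modulus makes them indistinguishable, while a signed variant keeps them apart:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 1000)
# two reflection-like events with opposite polarity
amp = np.exp(-((t - 0.3) / 0.02) ** 2) - np.exp(-((t - 0.7) / 0.02) ** 2)
trace = amp * np.cos(2 * np.pi * 60.0 * t)     # band-limited "seismic" trace

env = np.abs(hilbert(trace))        # amplitude demodulation: polarity is lost
signed_env = np.sign(trace) * env   # crude sign restoration, for illustration only

print(env[300] > 0 and env[700] > 0)           # True: both events look alike
print(signed_env[300] > 0 > signed_env[700])   # True: polarity now distinguishes them
```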
NASA Astrophysics Data System (ADS)
Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.
2011-11-01
We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations, starting with high-frequency fluctuations. We make use of the Db4 and Db6 basis sets to isolate local linear and quadratic trends, respectively, at different scales, in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^-3 power-law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
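The scale-by-scale separation described above can be reproduced with standard wavelet tooling. A hedged Python sketch using PyWavelets on synthetic fat-tailed returns follows; note the naming caveat that the filter-length labels Db4 and Db6 used above correspond to pywt's 'db2' and 'db3' (two and three vanishing moments, hence removing local linear and quadratic trends, respectively):

```python
import numpy as np
import pywt

rng = np.random.default_rng(7)
returns = rng.standard_t(df=3, size=4096)   # toy fat-tailed "returns"
prices = np.cumsum(returns)                 # toy price series

# 'db2' (Db4 in filter-length naming) removes local linear trends;
# 'db3' (Db6) removes local quadratic trends
for name in ("db2", "db3"):
    coeffs = pywt.wavedec(prices, name)     # [approx, detail_coarse, ..., detail_fine]
    widths = [np.std(d) for d in coeffs[1:]]
    print(name, ["%5.2f" % w for w in widths])
```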
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
Micro-optical-mechanical system photoacoustic spectrometer
Kotovsky, Jack; Benett, William J.; Tooker, Angela C.; Alameda, Jennifer B.
2013-01-01
All-optical photoacoustic spectrometer sensing systems (PASS systems) and methods include all the hardware needed to analyze the presence of a large variety of materials (solid, liquid, and gas). Some of the all-optical PASS systems require only two optical fibers to communicate with the opto-electronic power and readout systems that exist outside of the material environment. Methods for improving the signal-to-noise ratio are provided and enable micro-scale systems, along with methods for operating such systems.
Nie, Kaibo; Guo, Yachao; Deng, Kunkun; Wang, Xiaojun; Wu, Kun
2018-01-01
In this study, SiC nanoparticles were added into the matrix alloy through a combination of semisolid stirring and ultrasonic vibration, while dynamic precipitation of second phases was obtained through multi-pass forging with varying temperatures. During single-pass forging of the present composite, as the deformation temperature increased, the extent of recrystallization increased and grains were refined due to the inhibition effect of the increasing amount of dispersed SiC nanoparticles. A small amount of twins within the SiC nanoparticle-dense zone could be found, while the precipitated phases of Mg17Al12 in long strips and deformation bands with high-density dislocations were formed in the particle-sparse zone after single-pass forging at 350 °C. This indicated that the particle-sparse zone was mainly deformed by dislocation slip while the nanoparticle-dense zone may have been deformed by twinning. The yield strength and ultimate tensile strength of the composites were gradually enhanced by increasing the single-pass forging temperature from 300 °C to 400 °C, which demonstrated that an initially high forging temperature contributed to the improvement of the mechanical properties. During multi-pass forging with varying temperatures, the grain size of the composite gradually decreased, and the grain size distribution tended to become uniform as the deformation temperature was reduced and the forging passes were extended. In addition, the amount of precipitated second phases was significantly increased compared with that after multi-pass forging at a constant temperature. The improvement in the yield strength of the developed composite was related to grain refinement strengthening and Orowan strengthening, resulting from the synergistic effect of the externally applied SiC nanoparticles and the internally precipitated second phases. PMID:29342883
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks for hierarchical visual recognition more effectively. A visual tree is then learned by assigning visually similar atomic object classes with similar learning complexities into the same group, which provides a good environment for determining the inter-related learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing visually similar atomic object classes effectively. Our HD-MTL algorithm integrates two discriminative regularization terms to control inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive accuracy rates for large-scale visual recognition.
Spectrometer capillary vessel and method of making same
Linehan, J.C.; Yonker, C.R.; Zemanian, T.S.; Franz, J.A.
1995-11-21
The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube. 13 figs.
Ocean Variability Effects on Underwater Acoustic Communications
2007-09-30
sea surface was rougher. To recover the transmitted symbols which have been passed through the time-varying multi-path acoustic channels, a new ...B is about 6 dB higher than that during environmental case A. Due to the large aperture and deployment range of the MPL array, the channel impulse... environmental fluctuations and the performance of coherent underwater acoustic communications presents new insights into the operational effectiveness of
Detecting Multi-scale Structures in Chandra Images of Centaurus A
NASA Astrophysics Data System (ADS)
Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.
1999-12-01
Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge-enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data; Dobereiner et al. 1996) and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the northwest.
Muanprasart, Pongchanok; Traivaree, Chanchai; Arunyanart, Wirongrong; Teeranate, Chakriya
2014-02-01
Although attention deficit hyperactivity disorder (ADHD) is a common problem in childhood, Thai teachers' knowledge of the disease has never been assessed. To identify the knowledge of Thai teachers regarding ADHD and its influencing factors, a cross-sectional study was conducted in three primary schools in Ayutthaya, Thailand. Standardized questionnaires comprising demographic data, ADHD experiences, and the Knowledge of Attention Deficit Disorder Scale (KADDS) were distributed to participating teachers. Results were reported using frequency, percent, mean, and standard deviation. Associations between demographics, ADHD experience, and the KADDS score were identified by logistic regression analysis. A lack of knowledge of ADHD among teachers was apparent: only 19.4% of them passed the total scale of the KADDS. Teachers under 31 years old were more likely to pass the general information and the signs, symptoms & diagnosis subscales, as well as the total scale. In addition, familiarity with ADHD patients was associated with passing scores on the general information subscale and the total scale. Despite public awareness of ADHD, Thai teachers lacked knowledge concerning the disease. Young teachers were more acquainted with ADHD. Direct experience with ADHD patients might help teachers develop their knowledge of ADHD.
The Impact of Large, Multi-Function/Multi-Site Competitions
2003-08-01
this approach generates larger savings and improved service quality, and is less expensive to implement. Moreover, it is a way to meet the President's... of the study is to assess the degree to which the large-scale competitions completed have resulted in increased savings and service quality and decreased
Two Formal Gas Models For Multi-Agent Sweeping and Obstacle Avoidance
NASA Technical Reports Server (NTRS)
Kerr, Wesley; Spears, Diana; Spears, William; Thayer, David
2004-01-01
The task addressed here is a dynamic search through a bounded region, while avoiding multiple large obstacles, such as buildings. In the case of limited sensors and communication, maintaining spatial coverage - especially after passing the obstacles - is a challenging problem. Here, we investigate two physics-based approaches to solving this task with multiple simulated mobile robots, one based on artificial forces and the other based on the kinetic theory of gases. The desired behavior is achieved with both methods, and a comparison is made between them. Because both approaches are physics-based, formal assurances about the multi-robot behavior are straightforward, and are included in the paper.
Stereo Imaging Miniature Endoscope with Single Imaging Chip and Conjugated Multi-Bandpass Filters
NASA Technical Reports Server (NTRS)
Shahinian, Hrayr Karnig (Inventor); Bae, Youngsam (Inventor); White, Victor E. (Inventor); Shcheglov, Kirill V. (Inventor); Manohara, Harish M. (Inventor); Kowalczyk, Robert S. (Inventor)
2018-01-01
A dual-objective endoscope for insertion into a cavity of a body for providing a stereoscopic image of a region of interest (ROI) inside of the body, including an imaging device at the distal end for obtaining optical images of the ROI and processing the optical images to form video signals for wired and/or wireless transmission and display of 3D images on a rendering device. The imaging device includes a focal plane detector array (FPA) for obtaining the optical images of the ROI, and processing circuits behind the FPA. The processing circuits convert the optical images into the video signals. The imaging device includes right and left pupils for receiving right and left images through right and left conjugated multi-bandpass filters. Illuminators illuminate the ROI through a multi-bandpass filter having three right and three left pass bands that are matched to the right and left conjugated multi-bandpass filters. A full-color image is collected after three or six sequential illuminations with the red, green, and blue lights.
Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul
2008-01-01
With the increase in demand for high-quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between the multiple units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompasses PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records), and ERP (Enterprise Resource Planning). It involves two tertiary hospitals and 50 community hospitals. The monthly data production rate of the integrated hospital information system is about 1.8 TByte, and the total quantity of data produced so far is about 60 TByte. Large-scale information exchange and sharing will be particularly useful for telemedicine applications.
NASA Astrophysics Data System (ADS)
Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.
2016-12-01
Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional global climate models (GCMs) while remaining computationally viable for climate-length time scales. The multi-scale modeling framework (MMF) represents a shift away from the large horizontal grid spacing of traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model in which precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but they are significant in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale 'Super-Parameterized' CESM (SP-CESM), in which the large-scale parameterizations have been replaced with a 2D cloud-permitting model; and a multi-instance land version of SP-CESM, in which each column of the 2D CRM interacts with an individual land unit. These simulations were carried out using prescribed sea surface temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. The statistical properties of precipitation were compared between model architectures and against rain gauge observations, with specific focus on the detection and evaluation of extreme precipitation events.
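For concreteness, a minimal sketch of the kind of wet-day intensity diagnostics such comparisons rely on (hypothetical inputs; not CESM post-processing code):

```python
import numpy as np

def intensity_stats(pr_mm_day, wet_thresh=1.0):
    """Wet-day intensity diagnostics for one daily precipitation series (mm/day).

    Assumes the series contains at least one wet day above `wet_thresh`.
    """
    pr = np.asarray(pr_mm_day, dtype=float)
    wet = pr[pr >= wet_thresh]
    return {
        "wet_day_fraction": wet.size / pr.size,
        "mean_intensity": wet.mean(),      # average rain on wet days
        "p99": np.percentile(wet, 99),     # heavy-event tail
        "max": wet.max(),
    }

# A drizzle-heavy GCM series and an MMF series with rarer, heavier events
# separate clearly in the p99/max entries while matching in the mean.
```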
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.
2015-01-01
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
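A rough sketch of the fusion step once MEMD has aligned same-indexed IMFs across the input images. The max-absolute-value fusion rule and the array layout here are illustrative assumptions; MEMD itself and handling of the final residue are omitted.

```python
import numpy as np

def fuse_aligned_imfs(imfs):
    """Fuse same-scale IMFs from multiple images pixel by pixel.

    imfs: list over scales; each entry is an array (n_images, H, W) holding
    the scale-aligned IMFs of every input image (as MEMD would provide).
    """
    fused = []
    for scale in imfs:
        winner = np.abs(scale).argmax(axis=0)   # winning image per pixel
        fused.append(np.take_along_axis(scale, winner[None], axis=0)[0])
    return np.sum(fused, axis=0)                # recombine scales (residue omitted)
```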
NASA Astrophysics Data System (ADS)
Frieler, K.; Levermann, A.; Elliott, J.; Heinke, J.; Arneth, A.; Bierkens, M. F. P.; Ciais, P.; Clark, D. B.; Deryng, D.; Döll, P.; Falloon, P.; Fekete, B.; Folberth, C.; Friend, A. D.; Gellhorn, C.; Gosling, S. N.; Haddeland, I.; Khabarov, N.; Lomas, M.; Masaki, Y.; Nishina, K.; Neumann, K.; Oki, T.; Pavlick, R.; Ruane, A. C.; Schmid, E.; Schmitz, C.; Stacke, T.; Stehfest, E.; Tang, Q.; Wisser, D.; Huber, V.; Piontek, F.; Warszawski, L.; Schewe, J.; Lotze-Campen, H.; Schellnhuber, H. J.
2015-07-01
Climate change and its impacts already pose considerable challenges for societies that will further increase with global warming (IPCC, 2014a, b). Uncertainties of the climatic response to greenhouse gas emissions include the potential passing of large-scale tipping points (e.g. Lenton et al., 2008; Levermann et al., 2012; Schellnhuber, 2010) and changes in extreme meteorological events (Field et al., 2012) with complex impacts on societies (Hallegatte et al., 2013). Thus climate change mitigation is considered a necessary societal response for avoiding uncontrollable impacts (Conference of the Parties, 2010). On the other hand, large-scale climate change mitigation itself implies fundamental changes in, for example, the global energy system. The associated challenges come on top of others that derive from equally important ethical imperatives like the fulfilment of increasing food demand that may draw on the same resources. For example, ensuring food security for a growing population may require an expansion of cropland, thereby reducing natural carbon sinks or the area available for bio-energy production. So far, available studies addressing this problem have relied on individual impact models, ignoring uncertainty in crop model and biome model projections. Here, we propose a probabilistic decision framework that allows for an evaluation of agricultural management and mitigation options in a multi-impact-model setting. Based on simulations generated within the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), we outline how cross-sectorally consistent multi-model impact simulations could be used to generate the information required for robust decision making. Using an illustrative future land use pattern, we discuss the trade-off between potential gains in crop production and associated losses in natural carbon sinks in the new multiple crop- and biome-model setting. In addition, crop and water model simulations are combined to explore irrigation increases as one possible measure of agricultural intensification that could limit the expansion of cropland required in response to climate change and growing food demand. This example shows that current impact model uncertainties pose an important challenge to long-term mitigation planning and must not be ignored in long-term strategic decision making.
Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.
Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling
2015-11-01
In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. A given parent node on the visual tree contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly to enhance their discrimination power. The inter-level relationship constraint, e.g., that a plant image must first be assigned correctly to a parent node (high-level non-leaf node) before it can be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results demonstrate the effectiveness of our hierarchical multi-task structural learning algorithm in training more discriminative tree classifiers for large-scale plant species identification.
NASA Astrophysics Data System (ADS)
Aksenov, A. G.; Chechetkin, V. M.
2018-04-01
Most of the energy released in the gravitational collapse of the cores of massive stars is carried away by neutrinos, which play a pivotal role in explaining core-collapse supernovae. Currently, mathematical models of the gravitational collapse are based on multi-dimensional gas dynamics and thermonuclear reactions, while neutrino transport is treated in a simplified way. Multi-dimensional gas dynamics is used with neutrino transport in the flux-limited diffusion approximation to study the role of multi-dimensional effects. The possibility of large-scale convection is discussed, which is interesting both for explaining SN II and for setting up observations to register possible high-energy (≳10 MeV) neutrinos from the supernova. A new multi-dimensional, multi-temperature gas dynamics method with neutrino transport is presented.
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
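To give a flavor of the pattern described (not the FARSIGHT code itself), a minimal server-side batch script: the directory layout, tile naming, and per-tile steps are placeholders.

```python
import logging
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path

logging.basicConfig(filename="pipeline.log", level=logging.INFO)

def process_tile(path: Path) -> str:
    # stand-in for the mosaicking, artifact-correction, and segmentation steps
    logging.info("processing %s", path)
    return f"{path.name}: done"

tiles = sorted(Path("tiles").glob("*.tif"))        # hypothetical tile layout
with ThreadPoolExecutor(max_workers=10) as pool:   # e.g., one worker per core
    futures = [pool.submit(process_tile, t) for t in tiles]
    for fut in as_completed(futures):
        logging.info(fut.result())                 # log every completed step
```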
Spanish validation of the Premorbid Adjustment Scale (PAS-S).
Barajas, Ana; Ochoa, Susana; Baños, Iris; Dolz, Montse; Villalta-Gil, Victoria; Vilaplana, Miriam; Autonell, Jaume; Sánchez, Bernardo; Cervilla, Jorge A; Foix, Alexandrina; Obiols, Jordi E; Haro, Josep Maria; Usall, Judith
2013-02-01
The Premorbid Adjustment Scale (PAS) has been the most widely used scale to quantify premorbid status in schizophrenia, and has come to be regarded as the gold standard of retrospective assessment instruments. Our aim was to examine the psychometric properties of the Spanish version of the PAS (PAS-S). We conducted a retrospective study of 140 individuals, both adults and adolescents, experiencing a first episode of psychosis (n=77) or diagnosed with schizophrenia (n=63). Data were collected through a socio-demographic questionnaire and a battery of instruments comprising the PAS-S, PANSS, LSP, GAF, and DAS-sv. Cronbach's alpha was computed to assess the internal consistency of the PAS-S, and Pearson's correlations were used to assess convergent and discriminant validity. The Cronbach's alpha of the PAS-S scale was 0.85. The correlation between social PAS-S and total PAS-S was 0.85 (p<0.001), while for academic PAS-S and total PAS-S it was 0.53 (p<0.001). Significant correlations were observed between the scores of each age period evaluated across the PAS-S scale (p<0.001). There was a relationship between negative symptoms and social PAS-S (0.20, p<0.05) and total PAS-S (0.22, p<0.05), but not academic PAS-S. However, there was a correlation between academic PAS-S and the general subscale of the PANSS (0.19, p<0.05). Social PAS-S was related to disability measures (DAS-sv), and academic PAS-S showed discriminant validity with most of the social functioning variables. PAS-S was not associated with the total LSP scale (discriminant validity). The Spanish version of the Premorbid Adjustment Scale showed appropriate psychometric properties in patients experiencing a first episode of psychosis and those with a chronic course of illness. Moreover, each domain of the PAS-S (social and academic premorbid functioning) showed a differential relationship to other characteristics such as psychotic symptoms, disability, and social functioning after onset of illness.
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
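A minimal sketch of the aggregation step (e), assuming a region-label raster aligned with the output grid; the names and layout are illustrative, not the pSIMS API.

```python
import numpy as np

def aggregate_by_region(field, region_ids):
    """Mean of a gridded output per region id, ignoring missing cells (NaN)."""
    field = np.asarray(field, dtype=float)
    region_ids = np.asarray(region_ids)
    out = {}
    for rid in np.unique(region_ids):
        mask = (region_ids == rid) & ~np.isnan(field)
        out[rid] = field[mask].mean() if mask.any() else np.nan
    return out

# e.g., aggregate a yield grid to administrative districts encoded as integers
# in `region_ids`, yielding one scalar per district for analysis/visualization.
```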
Implementation of a multi-threaded framework for large-scale scientific applications
Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...
2015-05-22
The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we discuss the design, implementation, and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint than before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes that were necessary for our algorithms to achieve good performance in a full-scale multi-threaded application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling, and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, a model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
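A minimal sketch of the regularized extreme-learning step under stated assumptions: each training row holds a node's hop counts to the anchors, the targets are its true anchor distances, and tanh random features with a closed-form ridge solve stand in for whatever variant the authors use.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(hops, dists, n_hidden=100, lam=1e-2):
    """Ridge-regularized ELM: random tanh features, closed-form output weights."""
    W = rng.normal(size=(hops.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(hops @ W + b)                        # random feature map
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ dists)
    return W, b, beta

def elm_predict(hops, W, b, beta):
    return np.tanh(hops @ W + b) @ beta              # hop counts -> distances
```

The regularization term lam keeps the solve well-conditioned, which is what lets the mapping tolerate anisotropic hop-distance relationships instead of assuming one global hop-size constant.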
ASSESSING ECOLOGICAL RISKS AT LARGE SPATIAL SCALES
The history of environmental management and regulation in the United States has moved from an initial focus on localized, end-of-the-pipe problems to increasing attention to multi-scalar, multi-stressor, and multi-resource issues. Concomitant with this reorientation is the need fo...
NASA Astrophysics Data System (ADS)
Agrawal, B. P.; Ghosh, P. K.
2017-03-01
Butt weld joints are produced using the pulse current gas metal arc welding process, employing centrally laid, multi-pass, single-seam-per-layer weld deposition in an extra-narrow groove of thick HSLA steel plates. The weld joints are prepared using different combinations of pulse parameters. The pulse current gas metal arc welding parameters are selected by considering the summarized influence of the simultaneously interacting pulse parameters, defined by a dimensionless hypothetical factor ϕ. The effect of the various pulse parameters on the weld characteristics has been studied. A weld joint is also prepared using the common multi-pass, multi-seam-per-layer weld deposition in a conventional groove. The extra-narrow-gap weld joints are found to be much superior to the weld joint prepared by multi-pass, multi-seam-per-layer deposition in a conventional groove with respect to metallurgical characteristics and mechanical properties.
Santala, M. K.; Raoux, S.; Campbell, G. H.
2015-12-24
The kinetics of laser-induced, liquid-mediated crystallization of amorphous Ge thin films were studied using multi-frame dynamic transmission electron microscopy (DTEM), a nanosecond-scale photo-emission transmission electron microscopy technique. In these experiments, high temperature gradients are established in thin amorphous Ge films with a 12-ns laser pulse with a Gaussian spatial profile. The hottest region at the center of the laser spot crystallizes in ~100 ns and becomes nano-crystalline. Over the next several hundred nanoseconds, crystallization continues radially outward from the nano-crystalline region, forming elongated grains, some many microns long. The growth rate during the formation of these radial grains is measured with time-resolved imaging experiments. Crystal growth rates exceed 10 m/s, which is consistent with crystallization mediated by a very thin, undercooled transient liquid layer rather than a purely solid-state transformation mechanism. The kinetics of this growth mode have been studied in detail under steady-state conditions, but here we provide a detailed study of liquid-mediated growth in high temperature gradients. Unexpectedly, the propagation rate of the crystallization front was observed to remain constant during this growth mode even when passing through large local temperature gradients, in stark contrast to other similar studies that suggested the growth rate changed dramatically. As a result, the high throughput of multi-frame DTEM gives a more complete picture of the role of temperature and temperature gradient in laser crystallization than previous DTEM experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, Carlos N; Caro, J A; Lebensohn, R A
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel; fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed at each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
Winter sky brightness and cloud cover at Dome A, Antarctica
NASA Astrophysics Data System (ADS)
Moore, Anna M.; Yang, Yi; Fu, Jianning; Ashley, Michael C. B.; Cui, Xiangqun; Feng, Long Long; Gong, Xuefei; Hu, Zhongwen; Lawrence, Jon S.; Luong-Van, Daniel M.; Riddle, Reed; Shang, Zhaohui; Sims, Geoff; Storey, John W. V.; Tothill, Nicholas F. H.; Travouillon, Tony; Wang, Lifan; Yang, Huigen; Yang, Ji; Zhou, Xu; Zhu, Zhenxi
2013-01-01
At the summit of the Antarctic plateau, Dome A offers an intriguing location for future large-scale optical astronomical observatories. The Gattini Dome A project was created to measure the optical sky brightness and large-area cloud cover of the winter-time sky above this high-altitude Antarctic site. The wide-field camera and multi-filter system was installed on the PLATO instrument module as part of the Chinese-led traverse to Dome A in January 2008. This automated wide-field camera consists of an Apogee U4000 interline CCD coupled to a Nikon fisheye lens, enclosed in a heated container with a glass window. The system contains a filter mechanism providing a suite of standard astronomical photometric filters (Bessell B, V, R) and a long-pass red filter for the detection and monitoring of airglow emission. The system operated continuously throughout the 2009 and 2011 winter seasons and part-way through the 2010 season, recording long-exposure images sequentially for each filter. We have in hand one complete winter-time dataset (2009), returned via a manned traverse. We present here the first measurements of sky brightness in the photometric V band, the cloud cover statistics measured so far, and an estimate of the extinction.
NASA Astrophysics Data System (ADS)
Asgari, Somayyeh; Granpayeh, Nosrat
2017-06-01
Two parallel graphene sheet waveguides with a graphene cylindrical resonator between them are proposed, analyzed, and simulated numerically using the finite-difference time-domain method. One end of each graphene waveguide serves as the input or the output port. Resonance and a prominent mid-infrared band-pass filtering effect are achieved. The transmittance spectrum is tuned by varying the radius of the graphene cylindrical resonator, the dielectric inside it, and the chemical potential of the graphene via the gate voltage. Simulation results are in good agreement with theoretical calculations. As an application, a multi/demultiplexer is proposed and analyzed. Our studies demonstrate that graphene-based ultra-compact, nano-scale devices can be designed for optical processing and photonic integrated devices.
NASA Astrophysics Data System (ADS)
Niu, Jun; Chen, Ji; Wang, Keyi; Sivakumar, Bellie
2017-08-01
This paper examines the multi-scale streamflow variability responses to precipitation over 16 headwater catchments in the Pearl River basin, South China. Long-term daily streamflow data (1952-2000), obtained using a macro-scale hydrological model, the Variable Infiltration Capacity (VIC) model, and a routing scheme, are studied. Temporal features of streamflow variability at 10 different timescales, ranging from 6 days to 8.4 years, are revealed with the Haar wavelet transform. Principal component analysis (PCA) is performed to categorize the headwater catchments by the coherent modes of their multi-scale wavelet spectra. The results indicate that three distinct modes, with different variability distributions at small and seasonal timescales, can explain 95% of the streamflow variability. A large majority of the catchments (12 of 16) exhibit consistent mode features of multi-scale variability throughout three sub-periods (1952-1968, 1969-1984, and 1985-2000). The multi-scale streamflow variability responses to precipitation are identified to be associated with the regional flood and drought tendency over the headwater catchments in southern China.
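A minimal sketch of the Haar-based multi-scale variance analysis described above, under illustrative assumptions (unnormalized Haar details over non-overlapping blocks; PCA then runs on the stacked per-catchment spectra):

```python
import numpy as np

def haar_variance_spectrum(q, n_scales=10):
    """Wavelet variance of a daily flow series at dyadic scales 2, 4, ..., 2**n."""
    q = np.asarray(q, dtype=float)
    spectrum = []
    for j in range(1, n_scales + 1):
        s = 2 ** j
        blocks = q[: (len(q) // s) * s].reshape(-1, s)       # non-overlapping blocks
        detail = blocks[:, : s // 2].mean(1) - blocks[:, s // 2:].mean(1)
        spectrum.append(detail.var())     # variability carried by this timescale
    return np.array(spectrum)

# Stacking one spectrum per catchment gives a (16, n_scales) matrix; the leading
# principal components of the centered matrix (e.g., via np.linalg.svd) are the
# coherent multi-scale modes used to group the catchments.
```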
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
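For context, the Kalman filter backbone of such trackers, as a constant-velocity sketch with illustrative noise settings (the multi-scale detection and two-step multi-frame association described above are the paper's contribution and are not reproduced here):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])   # detections give position only
Q = np.eye(4) * 1e-2                           # process noise (illustrative)
R = np.eye(2) * 1e-1                           # detection noise (illustrative)

def kf_step(x, P, z):
    """One predict/update cycle of a constant-velocity Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q              # predict the particle state
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # correct with the associated detection z
    P = (np.eye(4) - K @ H) @ P
    return x, P
```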
ERIC Educational Resources Information Center
Cassimon, Danny; Essers, Dennis; Renard, Robrecht
2011-01-01
A decade has passed since participants in the World Education Forum committed themselves to achieve, by 2015, the six Education for All (EFA) goals under the Dakar Framework for Action. Despite significant progress, some of the goals are likely to be missed by a large margin. Besides the absence of a well co-ordinated multi-donor approach in…
Compact Multimedia Systems in Multi-chip Module Technology
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Alkalaj, Leon
1995-01-01
This tutorial paper shows advanced multimedia system designs based on multi-chip module (MCM) technologies that provide essential computing, compression, communication, and storage capabilities for various large-scale information highway applications.
Meador, M.R.; McIntyre, J.P.; Pollock, K.H.
2003-01-01
Two-pass backpack electrofishing data collected as part of the U.S. Geological Survey's National Water-Quality Assessment Program were analyzed to assess the efficacy of single-pass backpack electrofishing. A two-capture removal model was used to estimate, within 10 river basins across the United States, proportional fish species richness from one-pass electrofishing and probabilities of detection for individual fish species. Mean estimated species richness from first-pass sampling (ps1) ranged from 80.7% to 100% of estimated total species richness for each river basin, based on at least seven samples per basin. However, ps1 values for individual sites ranged from 40% to 100% of estimated total species richness. Additional species unique to the second pass were collected in 50.3% of the samples. Of these, cyprinids and centrarchids were collected most frequently. Proportional fish species richness estimated for the first pass increased significantly with decreasing stream width for 1 of the 10 river basins. When used to calculate probabilities of detection of individual fish species, the removal model failed 48% of the time because the number of individuals of a species was greater in the second pass than in the first pass. Single-pass backpack electrofishing data alone may make it difficult to determine whether characterized fish community structure data are real or spurious. The two-pass removal model can be used to assess the effectiveness of sampling species richness with a single electrofishing pass. However, the two-pass removal model may have limited utility to determine probabilities of detection of individual species and, thus, limit the ability to assess the effectiveness of single-pass sampling to characterize species relative abundances. Multiple-pass (at least three passes) backpack electrofishing at a large number of sites may not be cost-effective as part of a standardized sampling protocol for large-geographic-scale studies. However, multiple-pass electrofishing at some sites may be necessary to better evaluate the adequacy of single-pass electrofishing and to help make meaningful interpretations of fish community structure.
L-band InSAR Penetration Depth Experiment, North Slope Alaska
NASA Astrophysics Data System (ADS)
Muskett, Reginald
2017-04-01
Since the first spacecraft-based synthetic aperture radar (SAR) mission, NASA's SEASAT in 1978, radars have been flown in Low Earth Orbit (LEO) by other national space agencies, including the Canadian Space Agency, the European Space Agency, the Indian Space Research Organisation, and the Japan Aerospace Exploration Agency. Improvements in electronics, miniaturization, and production have allowed the deployment of SAR systems on aircraft for use in agriculture, hazards assessment, land-use management and planning, meteorology, oceanography, and surveillance. LEO SAR systems still provide a range of needed and timely information on large- and small-scale weather conditions like those found across the Arctic, where ground-based weather radars currently provide limited coverage. For investigators of solid-earth deformation, attention must be given to atmospheric effects on interferometric SAR (InSAR) in aircraft and spacecraft multi-pass operations. Because radar can penetrate earth materials at frequencies from P- to X-band, attention must also be given to the frequency-dependent penetration depth and volume scattering. This is the focus of our new research project: to test the penetration depth of L-band SAR/InSAR by aircraft and spacecraft systems at a test site in Arctic Alaska, using multi-frequency analysis and progressive burial of radar mesh reflectors at measured depths below the tundra while monitoring environmental conditions. Knowledge of the L-band penetration depth on lowland Arctic tundra is necessary to constrain analyses of carbon mass balance and of hazardous conditions arising from permafrost degradation and thaw, surface heave and subsidence, and thermokarst formation at local and regional scales.
A Multi-Scale Settlement Matching Algorithm Based on ARG
NASA Astrophysics Data System (ADS)
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating, and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented; the results indicate that the proposed algorithm is capable of handling sophisticated cases.
MPIRUN: A Portable Loader for Multidisciplinary and Multi-Zonal Applications
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.; Woodrow, Thomas S. (Technical Monitor)
1994-01-01
Multidisciplinary and multi-zonal applications are an important class of applications in the area of Computational Aerosciences. In these codes, two or more distinct parallel programs or copies of a single program are utilized to model a single problem. To support such applications, it is common to use a programming model where a program is divided into several single program multiple data stream (SPMD) applications, each of which solves the equations for a single physical discipline or grid zone. These SPMD applications are then bound together to form a single multidisciplinary or multi-zonal program in which the constituent parts communicate via point-to-point message passing routines. One method for implementing the message passing portion of these codes is with the new Message Passing Interface (MPI) standard. Unfortunately, this standard only specifies the message passing portion of an application, but does not specify any portable mechanisms for loading an application. MPIRUN was developed to provide a portable means for loading MPI programs, and was specifically targeted at multidisciplinary and multi-zonal applications. Programs using MPIRUN for loading and MPI for message passing are then portable between all machines supported by MPIRUN. MPIRUN is currently implemented for the Intel iPSC/860, TMC CM5, IBM SP-1 and SP-2, Intel Paragon, and workstation clusters. Further, MPIRUN is designed to be simple enough to port easily to any system supporting MPI.
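To make the coupled-SPMD pattern concrete, here is a minimal sketch in Python with mpi4py (an assumption made for brevity; MPIRUN itself targeted compiled MPI codes): two zone programs exchange boundary data with a point-to-point Sendrecv, and the loader is responsible for starting both ranks.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()              # launch with: mpiexec -n 2 python zone.py

boundary = np.full(8, float(rank))  # this zone's boundary values (illustrative)
peer = 1 - rank                     # the other zone; assumes exactly 2 ranks
recv = np.empty_like(boundary)
comm.Sendrecv(boundary, dest=peer, recvbuf=recv, source=peer)
# `recv` now holds the neighboring zone's boundary data for the next iteration
```

Using the combined Sendrecv rather than separate blocking send and receive calls avoids the deadlock that occurs when both zones send first.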
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high-quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large-scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops are demonstrated, opening the way for large-scale modelling of LWFA scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogacz, Slawomir Alex
Here, we summarize the current state of the concept for muon acceleration aimed at a future Neutrino Factory. The main thrust of these studies was to reduce the overall cost while maintaining performance by exploring the interplay between the complexity of the cooling systems and the acceptance of the accelerator complex. To ensure adequate survival of the short-lived muons, acceleration must occur at high average gradient. The need for large transverse and longitudinal acceptances drives the design of the acceleration system to an initially low RF frequency, e.g. 325 MHz, which is then increased to 650 MHz as the transverse size shrinks with increasing energy. High-gradient normal-conducting RF cavities at these frequencies require extremely high peak-power RF sources; hence superconducting RF (SRF) cavities are chosen. We considered two cost-effective schemes for accelerating muon beams for a stageable Neutrino Factory: exploration of the so-called 'dual-use' linac concept, where the same linac structure is used for acceleration of both H- and muons, and, alternatively, an SRF-efficient design based on a multi-pass (4.5) 'dogbone' RLA, extendable to multi-pass FFAG-like arcs.
Jatana, Gurneesh; Geckler, Sam; Koeberlein, David; ...
2016-09-01
We designed and developed a 4-probe multiplexed multi-species absorption spectroscopy sensor system for gas property measurements on the intake side of commercial multi-cylinder internal-combustion (I.C.) engines; the resulting cycle- and cylinder-resolved concentration, temperature and pressure measurements are applicable for assessing spatial and temporal variations in the recirculated exhaust gas (EGR) distribution at various locations along the intake gas path, which in turn is relevant to assessing cylinder charge uniformity, control strategies, and CFD models. Furthermore, the diagnostic is based on absorption spectroscopy and includes an H2O absorption system (utilizing a 1.39 μm distributed feedback (DFB) diode laser) for measuring gas temperature, pressure, and H2O concentration, and a CO2 absorption system (utilizing a 2.7 μm DFB laser) for measuring CO2 concentration. The various lasers, optical components and detectors were housed in an instrument box, and the 1.39-μm and 2.7-μm lasers were guided to and from the engine-mounted probes via optical fibers and hollow waveguides, respectively. The 5 kHz measurement bandwidth allows for near-crank-angle-resolved measurements, with a resolution of 1.2 crank angle degrees at 1000 RPM. Our use of compact stainless steel measurement probes enables simultaneous multi-point measurements at various locations on the engine with minimal changes to the base engine hardware; in addition to resolving large-scale spatial variations via simultaneous multi-probe measurements, local spatial gradients can be resolved by translating individual probes. Along with details of various sensor design features and performance, we also demonstrate validation of the spectral parameters of the associated CO2 absorption transitions using both a multi-pass heated cell and the sensor probes.
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
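To make the user-level modeling idea concrete, here is a toy multi-agent sketch in which user agents adapt their submission rate to recent queue load; all rules, constants, and the FIFO scheduler are illustrative assumptions, not the study's actual model.

```python
# Sketch: adaptive user agents submitting jobs to a toy cluster queue.
import random

class User:
    def __init__(self):
        self.patience = random.uniform(5, 20)   # tolerated backlog (steps)
        self.rate = 0.5                         # submission probability

    def step(self, queue):
        if random.random() < self.rate:
            queue.append(random.randint(1, 4))  # job length in steps
        backlog = sum(queue)
        # adaptive behaviour: long queues discourage further submissions
        self.rate = max(0.05, min(0.9,
                        self.rate * (self.patience / max(backlog, 1)) ** 0.1))

def simulate(n_users=50, n_nodes=8, steps=200):
    users, queue, running = [User() for _ in range(n_users)], [], []
    for _ in range(steps):
        for u in users:
            u.step(queue)
        running = [t - 1 for t in running if t > 1]   # jobs finish
        while queue and len(running) < n_nodes:       # FIFO scheduler
            running.append(queue.pop(0))
    return sum(u.rate for u in users) / n_users

print("mean adapted submission rate:", simulate())
```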
NASA Astrophysics Data System (ADS)
Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.
2014-12-01
Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi 3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, which is joint between the Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, water management and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel runtime system. Charm++ is based on location-transparent message passing between migratable C++ objects. Each object represents an entity in the model such as a mesh element. These objects can be migrated between processors or serialized to disk, allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact with each other by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.
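The location-transparent, message-driven pattern described above can be sketched in a few lines: objects are addressed by id through a router, so a sender never needs to know where the receiver currently lives. Charm++ itself is C++ and does this across nodes with migration; the Python below is only a structural illustration with invented names.

```python
# Sketch: id-addressed, location-transparent message delivery.
class Element:
    """A mesh element that reacts to incoming messages."""
    def __init__(self, eid):
        self.eid, self.inbox = eid, []

    def receive(self, msg):
        self.inbox.append(msg)

class Router:
    """Maps ids to the object's current location, wherever that is."""
    def __init__(self):
        self.directory = {}

    def register(self, element):
        self.directory[element.eid] = element

    def send(self, eid, msg):
        # Senders use ids only; migration would just update the directory.
        self.directory[eid].receive(msg)

router = Router()
for i in range(4):
    router.register(Element(i))
router.send(2, {"from": 0, "flux": 1.5})   # neighbour flux message
```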
Harada, Sei; Hirayama, Akiyoshi; Chan, Queenie; Kurihara, Ayako; Fukai, Kota; Iida, Miho; Kato, Suzuka; Sugiyama, Daisuke; Kuwabara, Kazuyo; Takeuchi, Ayano; Akiyama, Miki; Okamura, Tomonori; Ebbels, Timothy M D; Elliott, Paul; Tomita, Masaru; Sato, Asako; Suzuki, Chizuru; Sugimoto, Masahiro; Soga, Tomoyoshi; Takebayashi, Toru
2018-01-01
Cohort studies with metabolomics data are becoming more widespread; however, large-scale studies involving 10,000s of participants are still limited, especially in Asian populations. Therefore, we started the Tsuruoka Metabolomics Cohort Study enrolling 11,002 community-dwelling adults in Japan, and using capillary electrophoresis-mass spectrometry (CE-MS) and liquid chromatography-mass spectrometry. The CE-MS method is highly amenable to absolute quantification of polar metabolites; however, its reliability for large-scale measurement is unclear. The aim of this study is to examine the reproducibility and validity of large-scale CE-MS measurements. In addition, the study presents absolute concentrations of polar metabolites in human plasma, which can be used in the future as reference ranges in a Japanese population. Metabolomic profiling of 8,413 fasting plasma samples was completed using CE-MS, and 94 polar metabolites were structurally identified and quantified. Quality control (QC) samples were injected every ten samples and assessed throughout the analysis. Inter- and intra-batch coefficients of variation of QC and participant samples, and technical intraclass correlation coefficients were estimated. Passing-Bablok regression of plasma concentrations by CE-MS on serum concentrations by standard clinical chemistry assays was conducted for creatinine and uric acid. In QC samples, the coefficient of variation was less than 20% for 64 metabolites, and less than 30% for 80 metabolites out of the 94 metabolites. Inter-batch coefficient of variation was less than 20% for 81 metabolites. The estimated technical intraclass correlation coefficient was above 0.75 for 67 metabolites. The slope of Passing-Bablok regression was estimated as 0.97 (95% confidence interval: 0.95, 0.98) for creatinine and 0.95 (0.92, 0.96) for uric acid. Compared to published data from other large cohort measurement platforms, reproducibility of metabolites common to the platforms was similar to or better than in the other studies. These results show that our CE-MS platform is suitable for conducting large-scale epidemiological studies.
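Two of the QC computations named above are easy to state in code. The sketch below computes the coefficient of variation of repeated QC injections and a simplified Passing-Bablok slope (the median of pairwise slopes, omitting the method's tie and sign corrections); all data values are made up for illustration.

```python
# Sketch: CV of QC injections and a simplified Passing-Bablok slope.
import numpy as np

def cv_percent(x):
    x = np.asarray(x, float)
    return 100.0 * x.std(ddof=1) / x.mean()

def passing_bablok_slope(x, y):
    """Median of all pairwise slopes (simplified; O(n^2), fine for demos)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if x[j] != x[i]:
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    return float(np.median(slopes))

qc = [98.1, 102.4, 99.7, 101.2, 97.8]       # repeated QC injections
print(f"CV = {cv_percent(qc):.1f}%")
ce_ms = [60, 75, 88, 95, 110, 130]          # plasma values, CE-MS
clinical = [62, 76, 90, 99, 114, 133]       # serum values, clinical assay
print(f"slope ~ {passing_bablok_slope(ce_ms, clinical):.3f}")
```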
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
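The central diagnostic here, the multi-order structure function S_q(l) = <|b(x+l) - b(x)|^q> and the flatness S_4/S_2^2 (which exceeds 3 for intermittent, non-Gaussian fields), can be sketched for a 1-D signal as follows; the synthetic field is illustrative only.

```python
# Sketch: multi-order structure functions and flatness for a 1-D signal.
import numpy as np

def structure_functions(b, lags, orders=(1, 2, 3, 4)):
    out = {}
    for l in lags:
        db = np.abs(b[l:] - b[:-l])                  # increments at lag l
        out[l] = {q: np.mean(db ** q) for q in orders}
    return out

rng = np.random.default_rng(1)
b = np.cumsum(rng.normal(size=4096))                 # Brownian-like toy field
sf = structure_functions(b, lags=[1, 2, 4, 8, 16])
for l, s in sf.items():
    print(f"lag {l}: flatness = {s[4] / s[2] ** 2:.2f}")  # ~3 if Gaussian
```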
News: Tripping over tipping points/elements
The term “tipping point” has been used to identify a critical threshold susceptible to a tiny perturbation that can qualitatively alter the state or development of a system. “Tipping element” has been introduced to describe large-scale components of the Earth system that may pass...
USDA-ARS?s Scientific Manuscript database
In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...
MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdes, G; Scheuermann, R; Solberg, T
Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining measurements (19) had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of the area receiving less than 50% of the total CU, fraction of the area receiving dose from the penumbra, weighted average irregularity factor, and duty cycle, among others. Conclusion: We have demonstrated that Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications on the current IMRT process.
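The prediction step has the shape of a regularized regression from plan-complexity metrics to passing rates. The hedged sketch below uses scikit-learn's Lasso as a simple stand-in for the paper's weighted Poisson regression with Lasso regularization; the feature matrix and responses are synthetic.

```python
# Sketch: Lasso from 90 complexity metrics to gamma passing rates.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(416, 90))                 # 416 plans x 90 metrics
true_w = np.zeros(90)
true_w[:8] = rng.normal(size=8)                # only a few metrics matter
y = 98.0 + X @ true_w + rng.normal(scale=0.5, size=416)   # passing rate, %

model = make_pipeline(StandardScaler(), Lasso(alpha=0.05))
model.fit(X[:300], y[:300])                    # train on 300 plans
err = np.abs(model.predict(X[300:]) - y[300:]) # hold out the rest
print(f"mean absolute error: {err.mean():.2f} percentage points")
```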
Thrust vector control of upper stage with a gimbaled thruster during orbit transfer
NASA Astrophysics Data System (ADS)
Wang, Zhaohui; Jia, Yinghong; Jin, Lei; Duan, Jiajia
2016-10-01
In launching multiple satellites with one vehicle, the main thruster of the upper stage is mounted on a two-axis gimbal. During orbit transfer, the thrust vector of this gimbaled thruster (GT) should theoretically pass through the mass center of the upper stage and align with the command direction to provide orbit transfer impetus. However, this is hard to implement exactly in an engineering mission. Deviations of the thrust vector from the command direction result in large velocity errors, and deviations of the thrust vector from the upper stage mass center produce large disturbance torques. This paper discusses the thrust vector control (TVC) of the upper stage during its orbit transfer. Firstly, the accurate nonlinear coupled kinematic and dynamic equations of the upper stage body, the two-axis gimbal and the GT are derived by treating the upper stage as a multi-body system. Then, a thrust vector control system is proposed that combines a special attitude control of the upper stage with rotation of the thruster gimbal: the gimbal control makes the thrust vector pass through the upper stage mass center, while the attitude control tracks the desired attitude that aligns the thrust vector with the command direction. Finally, the validity of the proposed method is verified through numerical simulations.
Recirculating linacs for a neutrino factory - Arc optics design and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alex Bogacz; Valeri Lebedev
2001-10-21
A conceptual lattice design for a muon accelerator based on recirculating linacs (Nucl. Instr. and Meth. A 472 (2001) 499, these proceedings) is presented here. The challenge of accelerating and transporting a large phase space of short-lived muons is answered here by presenting a proof-of-principle lattice design for a recirculating linac accelerator. It is the centerpiece of a chain of accelerators consisting of a 3 GeV linac and two consecutive recirculating linear accelerators, which facilitates acceleration starting after ionization cooling at 190 MeV/c and proceeding to 50 GeV. Beam transport issues for large-momentum-spread beams are accommodated by appropriate lattice design choices. The resulting arc optics is further optimized with a sextupole correction to suppress chromatic effects contributing to the emittance dilution. The presented proof-of-principle design of the arc optics with horizontal separation of multi-pass beams can be extended to all passes in both recirculating linacs.
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding the searching process by WMO-ASMO, embedded with weighted non-dominated sorting genetic algorithm II (WNSGAII). The intercomparison of non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO are conducted in the large-scale reservoir system of Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and the median of ecological index, optimized by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation (530.032 billion kW h) and ecological index (1.675)) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is proved to be more efficient and could provide better Pareto frontier.
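The weighted crowding distance credited for WNSGAII's gains can be sketched directly: it is the standard NSGA-II crowding measure with each objective's contribution scaled by a user weight. The weights and points below are illustrative, and the paper's exact weighting scheme may differ in detail.

```python
# Sketch: NSGA-II crowding distance with per-objective weights.
import numpy as np

def weighted_crowding_distance(F, w):
    """F: (n_points, n_objectives) array of objective values; w: weights."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            span = 1.0
        d[order[0]] = d[order[-1]] = np.inf        # always keep extremes
        for k in range(1, n - 1):
            gap = (F[order[k + 1], j] - F[order[k - 1], j]) / span
            d[order[k]] += w[j] * gap              # weighted contribution
    return d

# Toy front: (power generation, ecological index), weights favour power.
F = np.array([[523.3, 1.88], [526.0, 1.84], [528.7, 1.81], [530.0, 1.68]])
print(weighted_crowding_distance(F, w=[0.6, 0.4]))
```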
NASA Astrophysics Data System (ADS)
Hsieh, Chih-Chun; Chang, Tao-Chih; Lin, Dong-Yih; Chen, Ming-Che; Wu, Weite
2007-10-01
The purpose of this study is to investigate the precipitation characteristics of the σ phase in the fusion zone of stainless steel welds at various welding passes during a gas tungsten arc welding (GTAW) process. The morphology, quantity, and chemical composition of the δ-ferrite and σ phase were analyzed using optical microscopy (OM), a ferritscope (FS), an X-ray diffractometer (XRD), scanning electron microscopy (SEM), an electron probe micro-analyzer (EPMA), and a wavelength dispersive spectrometer (WDS), respectively. Massive δ-ferrite was observed in the fusion zone of the first-pass welds during welding of dissimilar stainless steels. The σ phase precipitated at the inner δ-ferrite particles and decreased the δ-ferrite content during the third-pass welding. The σ and δ phases can be stabilized by the Si element, which promoted the phase transformation δ→σ+γ2 in the fusion zone of the third-pass welds. It was found that the σ phase was an Fe-Cr-Si intermetallic compound found in the fusion zone of the third-pass welds during multi-pass welding.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
NASA Astrophysics Data System (ADS)
Breuillard, H.; Aunai, N.; Le Contel, O.; Catapano, F.; Alexandrova, A.; Retino, A.; Cozzani, G.; Gershman, D. J.; Giles, B. L.; Khotyaintsev, Y. V.; Lindqvist, P. A.; Ergun, R.; Strangeway, R. J.; Russell, C. T.; Magnes, W.; Plaschke, F.; Nakamura, R.; Fuselier, S. A.; Turner, D. L.; Schwartz, S. J.; Torbert, R. B.; Burch, J.
2017-12-01
Transient and localized jets of hot plasma, also known as Bursty Bulk Flows (BBFs), play a crucial role in Earth's magnetotail dynamics because the energy input from the solar wind is partly dissipated in their vicinity, notably in their embedded dipolarization front (DF). This dissipation is in the form of strong low-frequency waves that can heat and accelerate energetic particles up to the high-latitude plasma sheet. The ion-scale dynamics of BBFs have been revealed by the Cluster and THEMIS multi-spacecraft missions. However, the dynamics of BBF propagation in the magnetotail are still under debate due to instrumental limitations and spacecraft separation distances, as well as simulation limitations. The NASA/MMS fleet, which features unprecedented high time resolution instruments and four spacecraft separated by kinetic-scale distances, has also shown recently that the DF normal dynamics and its associated emissions are below the ion gyroradius scale in this region. Large variations in the dawn-dusk direction were also observed. However, most large-scale simulations use the MHD approach and assume a 2D geometry in the XZ plane. Thus, in this study we take advantage of both multi-spacecraft observations by MMS and large-scale 3D hybrid simulations to investigate the 3D dynamics of BBFs and their associated emissions at ion scales in Earth's magnetotail, and their impact on particle heating and acceleration.
NASA Technical Reports Server (NTRS)
Stanley, H. R.; Martin, C. F.; Roy, N. A.; Vetter, J. R.
1971-01-01
Error analyses were performed to examine the height error in a relative sea-surface profile as determined by a combination of land-based multistation C-band radars and optical lasers and one ship-based radar tracking the GEOS 2 satellite. It was shown that two relative profiles can be obtained: one using available south-to-north passes of the satellite and one using available north-to-south type passes. An analysis of multi-station tracking capability determined that only Antigua and Grand Turk radars are required to provide satisfactory orbits for south-to-north type satellite passes, while a combination of Merritt Island, Bermuda, and Wallops radars provide secondary orbits for north-to-south passes. Analysis of ship tracking capabilities shows that high elevation single pass range-only solutions are necessary to give only moderate sensitivity to systematic error effects.
High peak-power kilohertz laser system employing single-stage multi-pass amplification
Shan, Bing; Wang, Chun; Chang, Zenghu
2006-05-23
The present invention describes a technique for achieving high peak power output in a laser employing single-stage, multi-pass amplification. High gain is achieved by employing a very small "seed" beam diameter in gain medium, and maintaining the small beam diameter for multiple high-gain pre-amplification passes through a pumped gain medium, then leading the beam out of the amplifier cavity, changing the beam diameter and sending it back to the amplifier cavity for additional, high-power amplification passes through the gain medium. In these power amplification passes, the beam diameter in gain medium is increased and carefully matched to the pump laser's beam diameter for high efficiency extraction of energy from the pumped gain medium. A method of "grooming" the beam by means of a far-field spatial filter in the process of changing the beam size within the single-stage amplifier is also described.
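The pre-amplification versus power-amplification split described above can be illustrated with a worked gain calculation. The sketch below applies a Frantz-Nodvik-style saturable-gain relation per pass; the saturation energy scales with beam area, which is why early small-beam passes see high gain and later large-beam passes extract energy efficiently. All numbers are illustrative, not from the patent.

```python
# Sketch: multi-pass amplification with gain saturation (Frantz-Nodvik).
import math

def amplify(e_in, passes, g0, e_sat):
    """Energy after N passes; gain saturates as extraction grows.

    e_sat is the saturation energy (saturation fluence x beam area),
    so a larger beam has a larger e_sat and extracts more energy.
    """
    e = e_in
    for _ in range(passes):
        e = e_sat * math.log(1.0 + g0 * (math.exp(e / e_sat) - 1.0))
    return e

seed = 1e-9                                              # 1 nJ seed
after_pre = amplify(seed, passes=8, g0=10.0, e_sat=1e-4)       # small beam
after_power = amplify(after_pre, passes=4, g0=10.0, e_sat=1e-2)  # big beam
print(f"pre-amp: {after_pre:.2e} J, power amp: {after_power:.2e} J")
```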
Multi-scale heterogeneity of the 2011 Great Tohoku-oki Earthquake from dynamic simulations
NASA Astrophysics Data System (ADS)
Aochi, H.; Ide, S.
2011-12-01
In order to explain the scaling issues of earthquakes of different sizes, a multi-scale heterogeneity concept is necessary to characterize earthquake faulting properties (Ide and Aochi, JGR, 2005; Aochi and Ide, JGR, 2009). The 2011 Great Tohoku-oki earthquake (M9) is characterized by a slow initial phase of about M7, an M8-class deep rupture, and an M9 main rupture with quite large slip near the trench (e.g. Ide et al., Science, 2011), as well as by the presence of foreshocks. We dynamically model these features based on the multi-scale concept. We suppose a significantly large fracture energy (corresponding to a slip-weakening distance of 3.2 m) over most of the fault dimension to represent the M9 rupture. However, we introduce local heterogeneity with relatively small circular patches of smaller fracture energy, assuming a linear scaling relation between patch radius and fracture energy. The calculation is carried out using a 3D Boundary Integral Equation Method. We first begin only with the mainshock (Aochi and Ide, EPS, 2011), but later we find it important to take into account the series of foreshocks since the 9th of March (M7.4). The smaller patches including the foreshock area are necessary to launch the M9 rupture area of large fracture energy. We then simulate the ground motion at low frequencies using a Finite Difference Method. Qualitatively, the observed tendency is consistent with our simulations, in the sense of the transition from the central part to the southern part at low frequencies (10-20 s). At higher frequencies (1-10 s), further small asperities are inferred in the observed signals, and this feature matches well with our multi-scale concept.
Multi-partitioning for ADI-schemes on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1994-01-01
A kind of discrete-operator splitting called Alternating Direction Implicit (ADI) has been found to be useful in simulating fluid flow problems. In particular, it is being used to study the effects of hot exhaust jets from high performance aircraft on landing surfaces. Decomposition techniques that minimize load imbalance and message-passing frequency are described. Three strategies that are investigated for implementing the NAS Scalar Penta-diagonal Parallel Benchmark (SP) are transposition, pipelined Gaussian elimination, and multipartitioning. The multipartitioning strategy, which was used on Ethernet, was found to be the most efficient, although it was considered only a moderate success because of Ethernet's limited communication properties. The efficiency derived largely from the coarse granularity of the strategy, which reduced latencies and allowed overlap of communication and computation.
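The multipartitioning idea admits a compact statement: each of P processors owns one block in every block-row and every block-column (a diagonal assignment), so all processors stay busy during both the x-sweeps and the y-sweeps of the ADI solver without transposing data. A minimal sketch of the standard diagonal mapping:

```python
# Sketch: multipartition block ownership for a p x p block decomposition.
def owner(i, j, p):
    """Processor owning block (i, j): the standard diagonal assignment."""
    return (j - i) % p

P = 4
for i in range(P):
    print([owner(i, j, P) for j in range(P)])
# Every row and every column of the printed grid contains each
# processor id exactly once, which is what keeps all processors
# active in both sweep directions of the ADI scheme.
```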
NASA Technical Reports Server (NTRS)
Le, Guan; Wang, Yongli; Slavin, James A.; Strangeway, Robert J.
2007-01-01
Space Technology 5 (ST5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST5. The data demonstrate that meso-scale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic with highly variable current density and/or polarity in time scales of - 10 min. They exhibit large temporal variations during both quiet and disturbed times in such time scales. On the other hand, the data also shown that the time scales for the currents to be relatively stable are approx. 1 min for meso-scale currents and approx. 10 min for large scale current sheets. These temporal features are obviously associated with dynamic variations of their particle carriers (mainly electrons) as they respond to the variations of the parallel electric field in auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of auroral parallel electric field.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depends on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss advantages of each approach.
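The common structure behind these methods is easy to show for two levels: many cheap low-fidelity samples estimate the bulk of E[Q], and a small set of coupled (same random input) low/high-fidelity pairs estimates the discrepancy. The toy models below stand in for flow/transport solvers at two resolutions; none of this is the paper's rMLMC rescaling, just the baseline MLMC idea.

```python
# Sketch: two-level multilevel Monte Carlo estimate of E[Q].
import numpy as np

rng = np.random.default_rng(3)
high = lambda z: np.sin(z) + 0.05 * z ** 2     # "fine" (expensive) model
low = lambda z: np.sin(z)                      # "coarse" (cheap) model

# Level 0: large low-fidelity sample carries most of the estimate.
z0 = rng.normal(size=100_000)
level0 = low(z0).mean()

# Level 1: small *coupled* sample corrects for the model discrepancy.
z1 = rng.normal(size=500)
level1 = (high(z1) - low(z1)).mean()

print("MLMC estimate of E[Q]:", level0 + level1)
print("plain MC with 500 high-fidelity runs:", high(z1).mean())
```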
NASA Astrophysics Data System (ADS)
Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.
2017-12-01
In recent decades, with rapid economic growth, industrial development and urbanization, expanding pollution of polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, sufficient monitoring of PAHs across multiple environmental compartments and of the corresponding multi-interface migration processes is still limited, especially over a large geographic area. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to consider the fugacity and the transient contamination processes. This coupled dynamic contaminant model can evaluate the detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across different spatial (a county to country) and temporal (days to years) scales. The model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution, and it considers the response characteristics of typical environmental media to complex underlying surfaces. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sink of PAHs and have the longest retention time. Importantly, the highest PAHs loadings are found in urbanized and densely populated regions of China, such as the Yangtze River Delta and Pearl River Delta. This model can provide a good scientific basis towards a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development. In a next step, the dynamic contaminant model will be integrated with the continental-scale hydrological and water resources model (i.e., Community Water Model, CWatM) to quantify a more accurate representation of, and the feedbacks between, the hydrological cycle and water quality at even larger geographical domains. Keywords: PAHs; Community multi-scale air quality model; Multimedia fate model; Land use
Rupert Seidl; Thomas A. Spies; Werner Rammer; E. Ashley Steel; Robert J. Pabst; Keith. Olsen
2012-01-01
Forest ecosystems are the most important terrestrial carbon (C) storage globally, and presently mitigate anthropogenic climate change by acting as a large and persistent sink for atmospheric CO2. Yet, forest C density varies greatly in space, both globally and at stand and landscape levels. Understanding the multi-scale drivers of this variation...
Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang
2013-01-01
Two challenges confronting forest landscape models (FLMs) are how to simulate fine, standscale processes while making large-scale (i.e., .107 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...
Compiled MPI: Cost-Effective Exascale Applications Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Quinlan, D; Lumsdaine, A
2012-04-10
The complexity of petascale and exascale machines makes it increasingly difficult to develop applications that can take advantage of them. Future systems are expected to feature billion-way parallelism, complex heterogeneous compute nodes and poor availability of memory (Peter Kogge, 2008). This new challenge for application development is motivating a significant amount of research and development on new programming models and runtime systems designed to simplify large-scale application development. Unfortunately, DoE has significant multi-decadal investment in a large family of mission-critical scientific applications. Scaling these applications to exascale machines will require a significant investment that will dwarf the costs of hardware procurement. A key reason for the difficulty in transitioning today's applications to exascale hardware is their reliance on explicit programming techniques, such as the Message Passing Interface (MPI) programming model to enable parallelism. MPI provides a portable and high performance message-passing system that enables scalable performance on a wide variety of platforms. However, it also forces developers to lock the details of parallelization together with application logic, making it very difficult to adapt the application to significant changes in the underlying system. Further, MPI's explicit interface makes it difficult to separate the application's synchronization and communication structure, reducing the amount of support that can be provided by compiler and run-time tools. This is in contrast to the recent research on more implicit parallel programming models such as Chapel, OpenMP and OpenCL, which promise to provide significantly more flexibility at the cost of reimplementing significant portions of the application. We are developing CoMPI, a novel compiler-driven approach to enable existing MPI applications to scale to exascale systems with minimal modifications that can be made incrementally over the application's lifetime. It includes: (1) a new set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) a compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) a novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves segmentation over the small-scale model, and (5) indicate that the MLF framework has comparable performance to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
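The two-step shape of MLF (a low-dimensional representation for selecting locally appropriate examples, then boosted learners mapping a weak segmentation to the multi-atlas labels) can be sketched with off-the-shelf components. Everything below is synthetic and schematic; the paper's actual features, selection rule, and learner configuration differ in detail.

```python
# Sketch: PCA-based example selection + AdaBoost label refinement.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(5)
images = rng.normal(size=(200, 64))        # flattened training images
weak_feats = rng.normal(size=(200, 10))    # weak-segmentation features
labels = (weak_feats[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

# (1) low-dimensional projection to pick locally appropriate examples
pca = PCA(n_components=5).fit(images)
target = rng.normal(size=(1, 64))          # new image to segment
dist = np.linalg.norm(pca.transform(images) - pca.transform(target), axis=1)
nearest = np.argsort(dist)[:50]            # 50 most similar examples

# (2) learner fusion: boost a map from weak features to final labels
clf = AdaBoostClassifier(n_estimators=50)
clf.fit(weak_feats[nearest], labels[nearest])
print("refined labels for 5 voxels/regions:", clf.predict(weak_feats[:5]))
```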
Space Technology 5 Multi-Point Observations of Temporal Variability of Field-Aligned Currents
NASA Technical Reports Server (NTRS)
Le, Guan; Wang, Yongli; Slavin, James A.; Strangeway, Robert J.
2008-01-01
Space Technology 5 (ST5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST5. The data demonstrate that meso-scale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic, with highly variable current density and/or polarity in time scales of approximately 10 min. They exhibit large temporal variations during both quiet and disturbed times in such time scales. On the other hand, the data also show that the time scales for the currents to be relatively stable are approximately 1 min for meso-scale currents and approximately 10 min for large-scale current sheets. These temporal features are obviously associated with dynamic variations of their particle carriers (mainly electrons) as they respond to the variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.
Michael Keller; Maria Assunção Silva-Dias; Daniel C. Nepstad; Meinrat O. Andreae
2004-01-01
The Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is a multi-disciplinary, multinational scientific project led by Brazil. LBA researchers seek to understand Amazonia in its global context especially with regard to regional and global climate. Current development activities in Amazonia including deforestation, logging, cattle ranching, and agriculture...
Scaling and criticality in a stochastic multi-agent model of a financial market
NASA Astrophysics Data System (ADS)
Lux, Thomas; Marchesi, Michele
1999-02-01
Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.
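The mechanism claimed here (Gaussian news in, fat-tailed returns out, via agent interaction) can be caricatured in a few dozen lines: agents switch between fundamentalist and trend-following behaviour, and the induced herding fattens the return distribution. The rules and constants below are invented for illustration and are far simpler than the Lux-Marchesi model itself.

```python
# Sketch: toy fundamentalist/chartist market with Gaussian news input.
import numpy as np

rng = np.random.default_rng(2)
n, steps = 500, 5000
chartist = np.zeros(n, bool)               # start all fundamentalist
price, fundamental = 0.0, 0.0
returns = []

for _ in range(steps):
    fundamental += rng.normal(0, 0.01)     # Gaussian news arrival process
    trend = returns[-1] if returns else 0.0
    # chartists chase the trend; fundamentalists trade toward value
    demand = np.where(chartist, np.sign(trend), fundamental - price).sum()
    r = 0.001 * demand + rng.normal(0, 0.001)
    price += r
    returns.append(r)
    # herding: a few agents switch toward whichever strategy looks better
    switch = rng.random(n) < 0.01
    chartist[switch] = abs(trend) > abs(fundamental - price)

returns = np.array(returns)
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
print("excess kurtosis of returns:", kurt - 3.0)   # > 0 means fat tails
```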
Land-Atmosphere Coupling in the Multi-Scale Modelling Framework
NASA Astrophysics Data System (ADS)
Kraus, P. M.; Denning, S.
2015-12-01
The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. Coupling these cloud-resolving models directly to land-surface model instances, rather than passing averaged atmospheric variables to a single instance of a land-surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity, which permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered as an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced conceptual gap between model resolution and parameterized processes.
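As one concrete way to apportion surface radiation into direct and diffuse components, the sketch below uses the empirical Erbs piecewise fit of diffuse fraction against clearness index. The choice of the Erbs correlation is an assumption for illustration; the abstract does not specify which parameterization the MMF modification uses.

```python
# Sketch: direct/diffuse split of global shortwave via clearness index.
def diffuse_fraction(kt):
    """Diffuse fraction of global shortwave (Erbs et al. correlation)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

swdown = 480.0                  # W m^-2 global shortwave at the surface
kt = 0.55                       # clearness index (surface / top-of-atmosphere)
fd = diffuse_fraction(kt)
print(f"diffuse: {fd * swdown:.0f} W m^-2, "
      f"direct: {(1 - fd) * swdown:.0f} W m^-2")
```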
The XMM Large Scale Structure Survey
NASA Astrophysics Data System (ADS)
Pierre, Marguerite
2005-10-01
We propose to complete, by an additional 5 deg², the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg². The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.
2012-01-01
Laboratoire de Physique des Océans UMR6523 (CNRS, UBO, IFREMER, IRD), Brest, France; C. N. Barron, E. Joseph Metzger, Naval Research Laboratory, Stennis Space... AF447 flight from Rio to Paris. The airplane disappeared on June 1st 2009 near 3° N and 31° W, and a large international effort was organized to... to Runge-Kutta trajectory integration. The low-pass filter was accomplished by convolving the original velocity fields at each time step and
Computational simulation of weld microstructure and distortion by considering process mechanics
NASA Astrophysics Data System (ADS)
Mochizuki, M.; Mikami, Y.; Okano, S.; Itoh, S.
2009-05-01
Highly precise fabrication of welded materials is in great demand, and so microstructure and distortion controls are essential. Furthermore, consideration of process mechanics is important for intelligent fabrication. In this study, the microstructure and hardness distribution in multi-pass weld metal are evaluated by computational simulations under the conditions of multiple heat cycles and phase transformation. Because conventional CCT diagrams of weld metal are not available even for single-pass weld metal, new diagrams for multi-pass weld metals are created. The weld microstructure and hardness distribution are precisely predicted when using the created CCT diagram for multi-pass weld metal and calculating the weld thermal cycle. Weld distortion is also investigated by using numerical simulation with a thermal elastic-plastic analysis. In conventional evaluations of weld distortion, the average heat input has been used as the dominant parameter; however, it is difficult to consider the effect of molten pool configurations on weld distortion based only on the heat input. Thus, the effect of welding process conditions on weld distortion is studied by considering molten pool configurations, determined by temperature distribution and history.
Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito
2016-11-15
A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements over the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been made: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhakal, Tilak Raj
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
A cloud-based framework for large-scale traditional Chinese medical record retrieval.
Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin
2018-01-01
Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important; it relies on the ability to retrieve complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a big challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. First, we propose a parallel index building method and build a distributed search cluster; the former is used to improve the performance of index building, and the latter is used to provide high-concurrency online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster can improve the performance of index building and provide high-concurrency online TCMRs retrieval. The multi-indexing model can ensure the latest relevant TCMRs are indexed and retrieved in real time. The semantics expansion method and the multi-factor ranking model can enhance retrieval quality. The template-based visualization method can enhance availability and universality, where the medical reports are displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
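The parallel index-building step has a generic shape worth showing: shard the record set, build partial inverted indexes in worker processes, then merge the shards. The sketch below is a generic illustration with made-up records, not the paper's implementation.

```python
# Sketch: parallel inverted-index building with sharding and a merge.
from collections import defaultdict
from multiprocessing import Pool

def build_shard(records):
    """Build a partial inverted index for one shard of (id, text) records."""
    index = defaultdict(list)
    for doc_id, text in records:
        for term in set(text.split()):
            index[term].append(doc_id)
    return dict(index)

def merge(shards):
    merged = defaultdict(list)
    for shard in shards:
        for term, postings in shard.items():
            merged[term].extend(postings)
    return merged

if __name__ == "__main__":
    records = [(i, f"symptom pulse herb formula {i % 3}") for i in range(1000)]
    chunks = [records[i::4] for i in range(4)]      # 4 shards
    with Pool(4) as pool:
        shards = pool.map(build_shard, chunks)      # build in parallel
    index = merge(shards)
    print("postings for 'herb':", len(index["herb"]))
```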
Probing Inflation Using Galaxy Clustering On Ultra-Large Scales
NASA Astrophysics Data System (ADS)
Dalal, Roohi; de Putter, Roland; Dore, Olivier
2018-01-01
A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-Gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ -5/12 (n_s - 1), in multi-field theories f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) power spectrum estimator, and an Optimal Quadratic Estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.
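The core of any FKP-style measurement is gridding the overdensity, Fourier transforming, and binning |δ_k|² in shells of k. The sketch below shows only that skeleton: FKP weighting, window deconvolution, and shot-noise subtraction are all omitted, and the density field is synthetic noise rather than a mock catalog.

```python
# Sketch: shell-averaged power spectrum of a gridded overdensity field.
import numpy as np

rng = np.random.default_rng(4)
n, L = 64, 1000.0                           # grid size, box side (Mpc/h)
delta = rng.normal(size=(n, n, n))          # toy overdensity field

dk = np.fft.rfftn(delta) * (L / n) ** 3     # FFT with volume normalization
power = np.abs(dk) ** 2 / L ** 3            # raw P(k) per mode

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kz = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)
kmag = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2
               + kz[None, None, :]**2)

bins = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, 20)
which = np.digitize(kmag.ravel(), bins)
pk = [power.ravel()[which == b].mean() for b in range(1, len(bins))]
print("P(k) in the first few shells:", np.round(pk[:4], 1))
```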
Implementing Multidisciplinary and Multi-Zonal Applications Using MPI
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.
1995-01-01
Multidisciplinary and multi-zonal applications are an important class of applications in the area of Computational Aerosciences. In these codes, two or more distinct parallel programs or copies of a single program are utilized to model a single problem. To support such applications, it is common to use a programming model where a program is divided into several single program multiple data stream (SPMD) applications, each of which solves the equations for a single physical discipline or grid zone. These SPMD applications are then bound together to form a single multidisciplinary or multi-zonal program in which the constituent parts communicate via point-to-point message passing routines. Unfortunately, simple message passing models, like Intel's NX library, only allow point-to-point and global communication within a single system-defined partition. This makes implementation of these applications quite difficult, if not impossible. In this report it is shown that the new Message Passing Interface (MPI) standard is a viable portable library for implementing the message passing portion of multidisciplinary applications. Further, with the extension of a portable loader, fully portable multidisciplinary application programs can be developed. Finally, the performance of MPI is compared to that of some native message passing libraries. This comparison shows that MPI can be implemented to deliver performance commensurate with native message libraries.
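A complementary way to bind two SPMD programs, beyond splitting an intracommunicator, is an MPI intercommunicator between the two groups. The hedged mpi4py sketch below (run with at least two ranks; names and tags are invented) shows the structure: each half of COMM_WORLD is one "discipline", and the group leaders exchange interface data across the intercomm.

```python
# Sketch: two SPMD halves coupled through an MPI intercommunicator.
from mpi4py import MPI

world = MPI.COMM_WORLD
assert world.Get_size() >= 2, "run with at least two ranks"
half = world.Get_size() // 2
color = 0 if world.Get_rank() < half else 1       # two disciplines
local = world.Split(color, key=world.Get_rank())

# Group leaders sit at world rank 0 (group 0) and world rank `half`.
remote_leader = half if color == 0 else 0
inter = local.Create_intercomm(0, world, remote_leader, tag=99)

# Leaders exchange an interface value between the disciplines.
if local.Get_rank() == 0:
    data = inter.sendrecv({"discipline": color}, dest=0, source=0)
    print(f"group {color} received {data}")
```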
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
Draft Midwest Wind Energy Multi-Species Habitat Conservation Plan Within Eight-State Planning... [...-FF03E00000] ... include new and existing small-scale wind energy facilities, such as single-turbine demonstration projects, as well as large, multi-turbine commercial wind facilities. Covered Species: The planning partners are...
Liu, Yu; Sun, Changfeng; Li, Qiang; Cai, Qiufang
2016-01-01
The historical May–October mean temperature since 1831 was reconstructed based on tree-ring widths of Qinghai spruce (Picea crassifolia Kom.) collected on Mt. Dongda, north of the Hexi Corridor in Northwest China. The regression model explained 46.6% of the variance of the instrumentally observed temperature. The cold periods in the reconstruction were 1831–1889, 1894–1901, 1908–1934 and 1950–1952, and the warm periods were 1890–1893, 1902–1907, 1935–1949 and 1953–2011. During the instrumental period (1951–2011), an obvious warming trend appeared in the last twenty years. The reconstruction displayed patterns similar to a temperature reconstruction from the east-central Tibetan Plateau at the inter-decadal timescale, indicating that the temperature reconstruction in this study is a reliable proxy for Northwest China. The reconstructed series also showed good consistency with Northern Hemisphere temperature at the decadal timescale. Multi-taper method spectral analysis detected several low- and high-frequency cycles (2.3–2.4-year, 2.8-year, 3.4–3.6-year, 5.0-year, 9.9-year and 27.0-year). Combining these cycles, the relationship of the low-frequency change with the Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO) and Southern Oscillation (SO) suggests that the reconstructed temperature variations may be related to large-scale atmospheric-oceanic variations. Major volcanic eruptions were partly reflected in the reconstructed temperatures after high-pass filtering; these events promoted anomalous cooling in this region. The results of this study not only provide new information for assessing long-term temperature changes in the Hexi Corridor of Northwest China, but also further demonstrate the effects of large-scale atmospheric-oceanic circulation on climate change in Northwest China. PMID:27509206
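As a rough illustration of the multi-taper spectral step, here is a minimal sketch in Python (my construction, not the authors' code), estimating a multitaper power spectrum of an annual series with DPSS tapers from SciPy:

```python
# Minimal multitaper PSD sketch for an annual (1 sample/yr) series.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=3.0, k=5):
    """Average of K eigenspectra using DPSS tapers (time-bandwidth NW)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    tapers = dpss(n, nw, Kmax=k)           # shape (k, n)
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0)      # cycles per year
    return freqs, eigenspectra.mean(axis=0)

# Usage: peaks near 1/2.8 or 1/27 cycles/yr would match the cited periods.
rng = np.random.default_rng(0)
series = rng.standard_normal(181)          # stand-in for 1831-2011 values
f, psd = multitaper_psd(series)
```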
L-band InSAR Penetration Depth Experiment, North Slope Alaska
NASA Astrophysics Data System (ADS)
Muskett, R. R.
2017-12-01
Since the first spacecraft-based synthetic aperture radar (SAR) mission, NASA's SEASAT in 1978, radars have been flown in Low Earth Orbit (LEO) by other national space agencies, including the Canadian Space Agency, the European Space Agency, the Indian Space Research Organisation and the Japan Aerospace Exploration Agency. Improvements in electronics, miniaturization and production have allowed for the deployment of SAR systems on aircraft for use in agriculture, hazards assessment, land-use management and planning, meteorology, oceanography and surveillance. LEO SAR systems still provide a range of needed and timely information on large- and small-scale weather conditions like those found across the Arctic, where ground-based weather radars currently provide limited coverage. For investigators of solid-earth deformation, attention must be given to atmospheric effects on Interferometric SAR (InSAR) in aircraft and spacecraft multi-pass operations. Because radar has the capability to penetrate earth materials at frequencies from the P- to X-band, attention must also be given to the frequency-dependent penetration depth and volume scattering. This is the focus of our new research project: to test the penetration depth of L-band SAR/InSAR by aircraft and spacecraft systems at a test site in Arctic Alaska, using multi-frequency analysis and progressive burial of radar mesh-reflectors at measured depths below tundra while monitoring environmental conditions. Knowledge of the L-band penetration depth on lowland Arctic tundra is necessary to constrain analysis of carbon mass balance and of hazardous conditions arising from permafrost degradation and thaw, surface heave and subsidence, and thermokarst formation at local and regional scales. Ref.: Geoscience and Environment Protection, vol. 5, no. 3, p. 14-30, 2017. DOI: 10.4236/gep.2017.53002.
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelets and Contourlets, are commonly used for image fusion. This work presents a new image fusion framework that utilizes area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
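A minimal sketch of the two fusion rules described above (my construction in Python; the dual-tree Contourlet decomposition itself is assumed to be done elsewhere): low-pass bands are averaged with weights from the local (area) standard deviation, and high-pass bands follow the "max-absolute" rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(band, size=7):
    """Area-based standard deviation in a size x size window."""
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band * band, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse_lowpass(a, b, size=7):
    """Weighted average; weight leans toward the band with more activity."""
    sa, sb = local_std(a, size), local_std(b, size)
    w = sa / (sa + sb + 1e-12)
    return w * a + (1.0 - w) * b

def fuse_highpass(a, b):
    """Keep the coefficient with the larger magnitude ("max-absolute")."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```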
Wu, Haiming; Fan, Jinlin; Zhang, Jian; Ngo, Huu Hao; Guo, Wenshan
2018-02-01
Multi-stage constructed wetlands (CWs) have proven to be a cost-effective alternative to conventional single-stage CWs for improving treatment performance in the treatment of various wastewaters. However, few long-term, full-scale multi-stage CWs for polishing effluents from domestic wastewater treatment plants (WWTPs) have been operated and evaluated. This study investigated the seasonal and spatial dynamics of carbon, and the effects of the key factors (input loading and temperature), in the large-scale, seven-stage Wu River CW polishing domestic WWTP effluents in northern China. The results indicated a significant improvement in water quality. Significant seasonal and spatial variations of organics removal were observed in the Wu River CW, with a higher COD removal efficiency of 64-66% in summer and fall. Obvious seasonal and spatial variations of CH₄ and CO₂ emissions were also found, with average CH₄ and CO₂ emission rates of 3.78-35.54 mg m⁻² d⁻¹ and 610.78-8992.71 mg m⁻² d⁻¹, respectively, while the higher CH₄ and CO₂ emission fluxes were obtained in spring and summer. Seasonal air temperatures and inflow COD loading rates significantly affected organics removal and CH₄ emission, but they appeared to have a weak influence on CO₂ emission. Overall, this study suggested that the large-scale Wu River CW might be a potential source of GHG; however, considering the sustainability of the multi-stage CW, an inflow COD loading rate of 1.8-2.0 g m⁻² d⁻¹ and a temperature of 15-20 °C may be the suitable conditions for achieving higher organics removal efficiency and lower greenhouse gas (GHG) emissions when polishing domestic WWTP effluent. The knowledge of carbon dynamics obtained in the large-scale Wu River CW is helpful for understanding carbon cycles, and can also provide useful field experience for the design, operation and management of multi-stage CW treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gunderson, Alex R; Armstrong, Eric J; Stillman, Jonathon H
2016-01-01
Abiotic conditions (e.g., temperature and pH) fluctuate through time in most marine environments, sometimes passing intensity thresholds that induce physiological stress. Depending on habitat and season, the peak intensity of different abiotic stressors can occur in or out of phase with one another. Thus, some organisms are exposed to multiple stressors simultaneously, whereas others experience them sequentially. Understanding these physicochemical dynamics is critical because how organisms respond to multiple stressors depends on the magnitude and relative timing of each stressor. Here, we first discuss broad patterns of covariation between stressors in marine systems at various temporal scales. We then describe how these dynamics will influence physiological responses to multi-stressor exposures. Finally, we summarize how multi-stressor effects are currently assessed. We find that multi-stressor experiments have rarely incorporated naturalistic physicochemical variation into their designs, and emphasize the importance of doing so to make ecologically relevant inferences about physiological responses to global change.
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
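To make the contrast concrete, here is a minimal sketch in Python (my construction; variable names and coefficients are purely illustrative, not taken from BT-FLEMO): a uni-variable stage-damage function versus a multi-variable loss model that also uses building area and precaution level.

```python
# Illustrative-only loss models; coefficients are made up for the sketch.
import numpy as np

def stage_damage(depth_m):
    """Uni-variable stage-damage function: loss ratio from water depth."""
    return np.clip(0.15 * np.sqrt(np.maximum(depth_m, 0.0)), 0.0, 1.0)

def multi_variable_loss(depth_m, floor_area_m2, precaution):
    """Multi-variable model: depth plus exposure and mitigation factors."""
    base = stage_damage(depth_m)
    exposure = 1.0 + 0.1 * np.log1p(floor_area_m2 / 100.0)
    mitigation = 1.0 - 0.3 * precaution       # precaution in [0, 1]
    return np.clip(base * exposure * mitigation, 0.0, 1.0)

# Up-scaling idea: evaluate per land-use unit with area-averaged inputs.
print(multi_variable_loss(1.2, 250.0, 0.5))
```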
PRATHAM: Parallel Thermal Hydraulics Simulations using Advanced Mesoscopic Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Abhijit S; Jain, Prashant K; Mudrich, Jaime A
2012-01-01
At the Oak Ridge National Laboratory, efforts are under way to develop a 3D, parallel LBM code called PRATHAM (PaRAllel Thermal Hydraulic simulations using Advanced Mesoscopic Methods) to demonstrate the accuracy and scalability of LBM for turbulent flow simulations in nuclear applications. The code has been developed in FORTRAN-90 and parallelized using the Message Passing Interface (MPI) library. The Silo library is used to write compact data files, and the VisIt visualization software is used to post-process the simulation data in parallel. Both the single relaxation time (SRT) and multi relaxation time (MRT) LBM schemes have been implemented in PRATHAM. To capture turbulence without prohibitively increasing the grid resolution requirements, an LES approach [5] is adopted, allowing large-scale eddies to be numerically resolved while modeling the smaller (subgrid) eddies. In this work, a Smagorinsky model has been used, which modifies the fluid viscosity by an additional eddy viscosity depending on the magnitude of the rate-of-strain tensor. In LBM, this is achieved by locally varying the relaxation time of the fluid.
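A minimal sketch of the locally varying relaxation time described above (my construction in Python rather than the code's FORTRAN-90; 2-D lattice units with dx = dt = 1 are assumed): the Smagorinsky eddy viscosity is added to the molecular viscosity and converted back to a local relaxation time via tau = nu/c_s^2 + 1/2.

```python
# Sketch: Smagorinsky-adjusted local relaxation time in LBM lattice units.
import numpy as np

def local_relaxation_time(ux, uy, nu0, c_s2=1.0 / 3.0, smagorinsky=0.17):
    """tau(x) = (nu0 + nu_t(x)) / c_s^2 + 0.5, with nu_t = (Cs*dx)^2 |S|."""
    # Rate-of-strain components from central differences (dx = 1).
    dux_dx, dux_dy = np.gradient(ux)
    duy_dx, duy_dy = np.gradient(uy)
    s_xy = 0.5 * (dux_dy + duy_dx)
    # |S| = sqrt(2 S_ij S_ij) for the 2-D symmetric strain-rate tensor.
    strain_mag = np.sqrt(2.0 * (dux_dx**2 + duy_dy**2 + 2.0 * s_xy**2))
    nu_t = (smagorinsky * 1.0) ** 2 * strain_mag   # filter width = dx = 1
    return (nu0 + nu_t) / c_s2 + 0.5
```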
Huang, Yi-Jing; Lin, Gong-Hong; Lee, Shih-Chieh; Chen, Yi-Miau; Huang, Sheau-Ling; Hsieh, Ching-Lin
2018-03-01
To examine both group- and individual-level responsiveness of the 3-point Berg Balance Scale (BBS-3P) and 3-point Postural Assessment Scale for Stroke Patients (PASS-3P) in patients with stroke, and to compare the responsiveness of both 3-point measures with their original measures (Berg Balance Scale [BBS] and Postural Assessment Scale for Stroke Patients [PASS]) and their short forms (short-form Berg Balance Scale [SFBBS] and short-form Postural Assessment Scale for Stroke Patients [SFPASS]), as well as between the BBS-3P and PASS-3P. Data were retrieved from a previous study wherein 212 patients were assessed at 14 and 30 days after stroke with the BBS and PASS. Medical center. Patients (N=212) with first onset of stroke within 14 days before hospitalization. Not applicable. Group-level responsiveness was examined by the standardized response mean (SRM), and individual-level responsiveness was examined by the proportion of patients whose change scores exceeded the minimal detectable change of each measure. The responsiveness was compared using a bootstrap approach. The BBS-3P and PASS-3P had good group-level (SRM = 0.60 and 0.56, respectively) and individual-level (48.1% and 44.8% of patients with significant improvement, respectively) responsiveness. Bootstrap analyses showed that the BBS-3P generally had superior responsiveness to the BBS and SFBBS, and that the PASS-3P had similar responsiveness to the PASS and SFPASS. The BBS-3P and PASS-3P were equally responsive to both group and individual change. The responsiveness of the BBS-3P and PASS-3P was comparable or superior to that of the original and short-form measures. We recommend the BBS-3P and PASS-3P as responsive outcome measures of balance for individuals with stroke. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
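For reference, the two responsiveness statistics used above can be computed as follows (a minimal sketch with the standard formulas; the reliability coefficient fed to the MDC is an input assumption, not a value from the study):

```python
# Standardized response mean (SRM) and minimal detectable change (MDC95).
import numpy as np

def srm(change_scores):
    """Group-level responsiveness: mean change / SD of change."""
    c = np.asarray(change_scores, dtype=float)
    return c.mean() / c.std(ddof=1)

def mdc95(scores, reliability):
    """MDC95 = 1.96 * SEM * sqrt(2), with SEM = SD * sqrt(1 - r)."""
    sd = np.std(np.asarray(scores, dtype=float), ddof=1)
    return 1.96 * sd * np.sqrt(1.0 - reliability) * np.sqrt(2.0)

# Individual-level responsiveness: the share of patients whose change
# score exceeds mdc95(...), as in the proportions reported above.
```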
Muon Acceleration Concepts for NuMAX: "Dual-use" Linac and "Dogbone" RLA
Bogacz, S. A.
2018-02-01
In this paper, we summarize the current state of a concept for muon acceleration aimed at a future Neutrino Factory. The main thrust of these studies was to reduce the overall cost while maintaining performance by exploring the interplay between the complexity of the cooling systems and the acceptance of the accelerator complex. To ensure adequate survival for the short-lived muons, acceleration must occur at high average gradient. The need for large transverse and longitudinal acceptances drives the design of the acceleration system to an initially low RF frequency, e.g., 325 MHz, which is then increased to 650 MHz as the transverse size shrinks with increasing energy. High-gradient normal-conducting RF cavities at these frequencies require extremely high peak-power RF sources; hence superconducting RF (SRF) cavities are chosen. Finally, we consider two cost-effective schemes for accelerating muon beams for a stageable Neutrino Factory: exploration of the so-called "dual-use" linac concept, where the same linac structure is used for acceleration of both H⁻ ions and muons, and, alternatively, an SRF-efficient design based on a multi-pass (4.5) "dogbone" RLA, extendable to multi-pass FFAG-like arcs.
Supermassive Black Holes and Galaxy Evolution
NASA Technical Reports Server (NTRS)
Merritt, D.
2004-01-01
Supermassive black holes appear to be generic components of galactic nuclei. The formation and growth of black holes is intimately connected with the evolution of galaxies on a wide range of scales. For instance, mergers between galaxies containing nuclear black holes would produce supermassive binaries which eventually coalesce via the emission of gravitational radiation. The formation and decay of these binaries is expected to produce a number of observable signatures in the stellar distribution. Black holes can also affect the large-scale structure of galaxies by perturbing the orbits of stars that pass through the nucleus. Large-scale N-body simulations are beginning to generate testable predictions about these processes which will allow us to draw inferences about the formation history of supermassive black holes.
Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction
NASA Technical Reports Server (NTRS)
Li, Zhijin; Chao, Yi; Li, P. Peggy
2012-01-01
A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated, and the associated software system has been developed, for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as for constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme allows the large scales and model bias to be effectively constrained by assimilating sparse vertical profiles, and the small scales by assimilating high-resolution surface measurements. MS-3DVAR enhances the capability of traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and particularly maximizes the extraction of information from limited numbers of vertical profile observations.
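A minimal sketch of the scale-decomposed cost function implied by this description (my notation; the abstract gives no explicit formulas). The 3DVAR cost function is split into a large-scale and a small-scale minimization solved sequentially, with scale-dependent background covariances B_L and B_S:

```latex
J_L(\delta x_L) = \tfrac{1}{2}\,\delta x_L^{T} B_L^{-1}\,\delta x_L
  + \tfrac{1}{2}\,(H\,\delta x_L - d)^{T} R^{-1}\,(H\,\delta x_L - d)
\\[4pt]
J_S(\delta x_S) = \tfrac{1}{2}\,\delta x_S^{T} B_S^{-1}\,\delta x_S
  + \tfrac{1}{2}\,(H\,\delta x_S - d_L)^{T} R^{-1}\,(H\,\delta x_S - d_L)
```

Here d is the innovation vector and d_L = d - H δx_L is the residual passed to the small-scale step, so sparse profiles mainly constrain δx_L while dense surface data constrain δx_S.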
Geomorphic analysis of large alluvial rivers
NASA Astrophysics Data System (ADS)
Thorne, Colin R.
2002-05-01
Geomorphic analysis of a large river presents particular challenges and requires a systematic and organised approach because of the spatial scale and system complexity involved. This paper presents a framework and blueprint for geomorphic studies of large rivers developed in the course of basic, strategic and project-related investigations of a number of large rivers. The framework demonstrates the need to begin geomorphic studies early in the pre-feasibility stage of a river project and carry them through to implementation and post-project appraisal. The blueprint breaks down the multi-layered and multi-scaled complexity of a comprehensive geomorphic study into a number of well-defined and semi-independent topics, each of which can be performed separately to produce a clearly defined, deliverable product. Geomorphology increasingly plays a central role in multi-disciplinary river research and the importance of effective quality assurance makes it essential that audit trails and quality checks are hard-wired into study design. The structured approach presented here provides output products and production trails that can be rigorously audited, ensuring that the results of a geomorphic study can stand up to the closest scrutiny.
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we propose a novel low-rank-approximation-based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster, with better clustering performance, than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analyses suggest that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/.
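The generic "reduce, then cluster" strategy that LRAcluster accelerates can be sketched as follows (my construction in Python; this uses a plain truncated SVD plus k-means, not LRAcluster's actual low-rank probabilistic model):

```python
# Stack standardized data types, find a shared low-rank subspace, cluster.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def subspace_cluster(omics_blocks, rank=10, k=5, seed=0):
    """omics_blocks: list of (samples x features) arrays, same sample order."""
    stacked = np.hstack([StandardScaler().fit_transform(b)
                         for b in omics_blocks])
    coords = TruncatedSVD(n_components=rank,
                          random_state=seed).fit_transform(stacked)
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=seed).fit_predict(coords)
    return labels, coords
```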
Rebelling for a Reason: Protein Structural “Outliers”
Arumugam, Gandhimathi; Nair, Anu G.; Hariharaputran, Sridhar; Ramanathan, Sowdhamini
2013-01-01
Analysis of structural variation in domain superfamilies can reveal constraints in protein evolution, which aids protein structure prediction and classification. Structure-based sequence alignment of distantly related proteins, organized in the PASS2 database, provides clues about structurally conserved regions among different functional families. Some superfamily members show large structural differences that are functionally relevant. This paper analyses the impact of structural divergence on function for multi-member superfamilies selected from the PASS2 superfamily alignment database. Functional annotations within superfamilies with structural outliers, or 'rebels', are discussed in the context of structural variations. Overall, these data reinforce the idea that functional similarities cannot be extrapolated from mere structural conservation. The implication for fold-function prediction is that functional annotations can only be inherited with very careful consideration, especially at low sequence identities. PMID:24073209
The design of multi-core DSP parallel model based on message passing and multi-level pipeline
NASA Astrophysics Data System (ADS)
Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong
2017-10-01
Currently, the design of embedded signal processing systems is often based on a specific application, but this approach is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed, mainly suited to complex algorithms that are composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and incorporates the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), so that it achieves better performance. This paper uses a three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing it with the Master-Slave and the Data Flow models.
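The combination of message passing with a multi-level pipeline can be sketched as follows (my construction in Python with multiprocessing queues; on the paper's multi-core DSPs the queues would be inter-core message channels and the stages would be DSP kernels):

```python
# Three-stage pipeline bound by message passing: each stage is a process,
# queues act as the inter-core message channels of the DSP model.
from multiprocessing import Process, Queue

def stage(inbox, outbox, fn):
    """Pull a message, process it, pass it downstream; None shuts down."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)        # forward the shutdown signal
            break
        outbox.put(fn(item))

def preprocess(x):  return x + 1    # stand-ins for real DSP kernels
def transform(x):   return x * 2

if __name__ == "__main__":
    q0, q1, q2 = Queue(), Queue(), Queue()
    workers = [Process(target=stage, args=(q0, q1, preprocess)),
               Process(target=stage, args=(q1, q2, transform))]
    for w in workers:
        w.start()
    for frame in range(5):          # frames stream through the pipeline
        q0.put(frame)
    q0.put(None)
    while (out := q2.get()) is not None:
        print(out)
    for w in workers:
        w.join()
```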
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article, a hybrid algorithm based on the vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS, in which the VPS algorithm acts as the main engine. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms, mimicking the mechanisms of damped free vibration of a single-degree-of-freedom system. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for structural engineering problems.
ERIC Educational Resources Information Center
Wladis, Claire; Offenholley, Kathleen; George, Michael
2014-01-01
This study hypothesizes that course passing rates in remedial mathematics classes can be improved through early identification of at-risk students using a department-wide midterm, followed by a mandated set of online intervention assignments incorporating immediate and elaborate feedback for all students identified as "at-risk" by their…
Bellamy, Chloe; Altringham, John
2015-01-01
Conservation increasingly operates at the landscape scale. For this to be effective, we need landscape scale information on species distributions and the environmental factors that underpin them. Species records are becoming increasingly available via data centres and online portals, but they are often patchy and biased. We demonstrate how such data can yield useful habitat suitability models, using bat roost records as an example. We analysed the effects of environmental variables at eight spatial scales (500 m - 6 km) on roost selection by eight bat species (Pipistrellus pipistrellus, P. pygmaeus, Nyctalus noctula, Myotis mystacinus, M. brandtii, M. nattereri, M. daubentonii, and Plecotus auritus) using the presence-only modelling software MaxEnt. Modelling was carried out on a selection of 418 data centre roost records from the Lake District National Park, UK. Target group pseudoabsences were selected to reduce the impact of sampling bias. Multi-scale models, combining variables measured at their best performing spatial scales, were used to predict roosting habitat suitability, yielding models with useful predictive abilities. Small areas of deciduous woodland consistently increased roosting habitat suitability, but other habitat associations varied between species and scales. Pipistrellus were positively related to built environments at small scales, and depended on large-scale woodland availability. The other, more specialist, species were highly sensitive to human-altered landscapes, avoiding even small rural towns. The strength of many relationships at large scales suggests that bats are sensitive to habitat modifications far from the roost itself. The fine resolution, large extent maps will aid targeted decision-making by conservationists and planners. We have made available an ArcGIS toolbox that automates the production of multi-scale variables, to facilitate the application of our methods to other taxa and locations. Habitat suitability modelling has the potential to become a standard tool for supporting landscape-scale decision-making as relevant data and open source, user-friendly, and peer-reviewed software become widely available.
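A minimal sketch of the multi-scale predictor idea described above (my construction in Python; MaxEnt itself is a dedicated presence-only method, approximated here by a logistic model on presences versus pseudo-absences):

```python
# Build the same land-cover predictor at several spatial scales, then
# fit a simple presence/pseudo-absence model on the multi-scale inputs.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

def multi_scale_layers(habitat_raster, radii_px=(5, 10, 30, 60)):
    """Fraction of habitat (0/1 raster) within windows of several radii."""
    return {r: uniform_filter(habitat_raster.astype(float), size=2 * r + 1)
            for r in radii_px}

def fit_presence_model(layers, presence_rc, pseudo_rc):
    """presence_rc / pseudo_rc: (n, 2) arrays of raster row/col indices."""
    def rows(rc):
        return np.column_stack(
            [layer[rc[:, 0], rc[:, 1]] for layer in layers.values()])
    X = np.vstack([rows(presence_rc), rows(pseudo_rc)])
    y = np.r_[np.ones(len(presence_rc)), np.zeros(len(pseudo_rc))]
    return LogisticRegression(max_iter=1000).fit(X, y)
```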
Multi-filter spectrophotometry simulations
NASA Technical Reports Server (NTRS)
Callaghan, Kim A. S.; Gibson, Brad K.; Hickson, Paul
1993-01-01
To complement both the multi-filter observations of quasar environments described in these proceedings and the proposed UBC 2.7 m Liquid Mirror Telescope (LMT) redshift survey, we have initiated a program of simulated multi-filter spectrophotometry. The goal of this work, still very much in progress, is a better quantitative assessment of the multiband technique as a viable mechanism for obtaining useful redshift and morphological class information from large-scale multi-filter surveys.
Wei, Cai-Jie; Wu, Wei-Zhong
2018-09-01
Two kinds of hybrid, two-step multi-soil-layering (MSL) systems loaded with different filter media (zeolite-ceramsite MSL-1 and ceramsite-red clay MSL-2) were set up for treating polluted river water with a low C/N ratio. The long-term pollutant removal performance of the two MSL systems was evaluated for 214 days. A by-pass was employed in the MSL systems to evaluate its effect on enhancing nitrogen removal. The zeolite-ceramsite single-pass MSL-1 system exhibited outstanding ammonia removal capability (24 g NH₄⁺-N m⁻² d⁻¹), 3 times higher than MSL-2 without zeolite under a low aeration rate (0.8 × 10⁴ L m⁻² h⁻¹). An aeration rate up to 1.6 × 10⁴ L m⁻² h⁻¹ fully satisfied the requirement of complete nitrification in the first unit of both MSLs. However, weak denitrification in the second unit was commonly observed. By-passing 50% of the influent into the second unit improved the TN removal rate by about 20% for both MSL-1 and MSL-2. Complete nitrification and denitrification were achieved in the by-pass MSL systems after addition of a carbon source raising the C/N ratio to 2.5. The characteristics of the biofilms distributed in different sections of the MSL-1 system illustrate the nitrogen removal mechanism inside MSL systems. Both kinds of MSL are promising as nitrifying biofilm reactors. Recirculation can be further considered for the by-pass MSL-2 system to ensure complete ammonia removal. Copyright © 2018 Elsevier Ltd. All rights reserved.
Practical system for the generation of pulsed quantum frequency combs.
Roztocki, Piotr; Kues, Michael; Reimer, Christian; Wetzel, Benjamin; Sciara, Stefania; Zhang, Yanbing; Cino, Alfonso; Little, Brent E; Chu, Sai T; Moss, David J; Morandotti, Roberto
2017-08-07
The on-chip generation of large and complex optical quantum states will enable low-cost and accessible advances for quantum technologies, such as secure communications and quantum computation. Integrated frequency combs are on-chip light sources with a broad spectrum of evenly-spaced frequency modes, commonly generated by four-wave mixing in optically-excited nonlinear micro-cavities, whose recent use for quantum state generation has provided a solution for scalable and multi-mode quantum light sources. Pulsed quantum frequency combs are of particular interest, since they allow the generation of single-frequency-mode photons, required for scaling state complexity towards, e.g., multi-photon states, and for quantum information applications. However, generation schemes for such pulsed combs have, to date, relied on micro-cavity excitation via lasers external to the sources, being neither versatile nor power-efficient, and impractical for scalable realizations of quantum technologies. Here, we introduce an actively-modulated, nested-cavity configuration that exploits the resonance pass-band characteristic of the micro-cavity to enable a mode-locked and energy-efficient excitation. We demonstrate that the scheme allows the generation of high-purity photons at large coincidence-to-accidental ratios (CAR). Furthermore, by increasing the repetition rate of the excitation field via harmonic mode-locking (i.e. driving the cavity modulation at harmonics of the fundamental repetition rate), we managed to increase the pair production rates (i.e. source efficiency), while maintaining a high CAR and photon purity. Our approach represents a significant step towards the realization of fully on-chip, stable, and versatile sources of pulsed quantum frequency combs, crucial for the development of accessible quantum technologies.
Wong, Chung-Ki; Luo, Qingfei; Zotev, Vadim; Phillips, Raquel; Chan, Kam Wai Clifford; Bodurka, Jerzy
2018-03-31
In simultaneous EEG-fMRI, identification of the period of the cardioballistic artifact (BCG) in the EEG is required for artifact removal. Recording the electrocardiogram (ECG) waveform during fMRI is difficult, often causing inaccurate period detection. Since the waveform of the BCG extracted by independent component analysis (ICA) is relatively invariable compared to the ECG waveform, we propose a multiple-scale peak-detection algorithm to determine the BCG cycle directly from the EEG data. The algorithm first extracts the high-contrast BCG component from the EEG data by ICA. The BCG cycle is then estimated by band-pass filtering the component around the fundamental frequency identified from its energy spectral density, and the peak of BCG artifact occurrence is selected from each estimated cycle. The algorithm is shown to achieve a high accuracy on a large EEG-fMRI dataset. It is also adaptive to various heart rates without the need to adjust the threshold parameters. The cycle detection remains accurate with the scan duration reduced to half a minute. Additionally, the algorithm gives a figure of merit to evaluate the reliability of the detection accuracy. The algorithm is shown to give a higher detection accuracy than the commonly used cycle detection algorithm fmrib_qrsdetect implemented in EEGLAB. The high cycle detection accuracy achieved by our algorithm without using ECG waveforms makes it possible to create and automate pipelines for processing large EEG-fMRI datasets, and virtually eliminates the need for ECG recordings for BCG artifact removal. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
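A minimal sketch of the cycle-estimation step described above (my reconstruction in Python, not the authors' implementation; the 0.5-3 Hz search band is my assumption): locate the fundamental frequency of the ICA-extracted BCG component from its spectrum, band-pass around it, and pick one peak per cycle.

```python
# Estimate BCG cycle peaks from an ICA component (fs = sampling rate, Hz).
import numpy as np
from scipy.signal import welch, butter, filtfilt, find_peaks

def bcg_peaks(component, fs):
    # 1) Fundamental frequency from the energy spectral density,
    #    searched over 0.5-3 Hz (30-180 bpm; an assumed band).
    f, pxx = welch(component, fs=fs, nperseg=int(8 * fs))
    band = (f >= 0.5) & (f <= 3.0)
    f0 = f[band][np.argmax(pxx[band])]
    # 2) Band-pass around the fundamental to get a clean cyclic signal.
    b, a = butter(3, [0.5 * f0 / (fs / 2), 1.5 * f0 / (fs / 2)], btype="band")
    cyclic = filtfilt(b, a, component)
    # 3) One peak per estimated cycle: enforce a minimum peak spacing.
    peaks, _ = find_peaks(cyclic, distance=int(0.7 * fs / f0))
    return f0, peaks
```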
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKIs, especially for those deployed over large-scale healthcare networks. In this paper, we propose a PKI for ensuring security in a large-scale, Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI facilitates the handling of the trust issues that arise in a large-scale healthcare network encompassing multi-domain PKIs.
Hicheur, Halim; Chauvin, Alan; Chassot, Steve; Chenevière, Xavier; Taube, Wolfgang
2017-01-01
The cognitive-motor performance (CMP), defined here as the capacity to rapidly use sensory information and transfer it into efficient motor output, represents a major contributor to performance in almost all sports, including soccer. Here, we used a high-technology system (COGNIFOOT) which combines a visual environment simulator fully synchronized with a motion capture system. This system allowed us to measure objective real-time CMP parameters (passing accuracy/speed and response times) in a large artificial-turf playfield. Forty-six (46) young elite soccer players (including 2 female players) aged between 11 and 16 years, who belonged to the same youth soccer academy, were tested. Each player had to pass the ball as fast and as accurately as possible towards visual targets projected onto a large screen located 5.32 meters in front of him (a short-pass situation). We observed a linear age-related increase in the CMP: the passing accuracy, speed and reactiveness of players improved by 4 centimeters, 2.3 km/h and 30 milliseconds per year of age, respectively. These data were converted into 5-point scales and compared to the judgment of expert coaches, who also used a 5-point scale to evaluate the same CMP parameters based on their experience with the players during games and training. The objectively measured age-related CMP changes were also observed in the expert coaches' judgments, although these were more variable across coaches and age categories. This demonstrates that high-technology systems like COGNIFOOT can be used to complement traditional approaches to talent identification and to objectively monitor the progress of soccer players throughout a cognitive-motor training cycle. PMID:28953958
News Release: May 25, 2016 — Building on data from The Cancer Genome Atlas (TCGA) project, a multi-institutional team of scientists has completed the first large-scale “proteogenomic” study of breast cancer, linking DNA mutations to protein signaling and helping pinpoint the genes that drive cancer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojcik, Roza; Webb, Ian K.; Deng, Liulin
Understanding the biological mechanisms related to lipids and glycolipids is challenging due to the vast number of possible isomers. Mass spectrometry (MS) measurements are currently the dominant approach for studying and providing detailed information on lipid and glycolipid structures. However, difficulties in distinguishing many structural isomers (e.g., distinct acyl chain positions, double bond locations, as well as glycan isomers) inhibit the understanding of their biological roles. Here we utilized ultra-high-resolution ion mobility spectrometry (IMS) separations, based upon the use of traveling waves in a serpentine, long-path-length, multi-pass Structures for Lossless Ion Manipulations (SLIM) platform, to enhance isomer resolution. The multi-pass arrangement allowed separations ranging from ~16 m (1 pass) to ~470 m (32 passes) to be investigated for the distinction of lipids and glycolipids with extremely small structural differences. These ultra-high-resolution SLIM IMS-MS analyses provide a foundation for exploring and better understanding isomer-specific biological and disease processes.
Optical Magnetometry using Multipass Cells with overlapping beams
NASA Astrophysics Data System (ADS)
McDonough, Nathaniel David; Lucivero, Vito Giovanni; Dural, Nezih; Romalis, Michael
2017-04-01
In recent years, multipass cells with cylindrical mirrors have proven to be a successful way of making highly sensitive atomic magnetometers. In such cells a small laser beam makes 40 to 100 passes within the cell without significant overlap with itself. Here we describe a new multi-pass geometry which uses spherical mirrors to reflect the probe beam multiple times over the same cell region. Such geometry reduces the effects of atomic diffusion while preserving the advantages of multi-pass cells over standing-wave cavities, namely a deterministic number of passes and absence of interference. We have fabricated several cells with this geometry and obtained good agreement between the measured and calculated levels of quantum spin noise. We will report on our effort to characterize the diffusion spin-correlation function in these cells and operation of the cell as a magnetometer. This work is supported by DARPA.
NASA Astrophysics Data System (ADS)
Abedian, A.; Poursina, M.; Golestanian, H.
2007-05-01
Radial forging is an open-die forging process used for reducing the diameter of shafts, tubes, stepped shafts and axles, and for creating internal profiles in tubes, such as the rifling of gun barrels. In this work, a comprehensive study of multi-pass hot radial forging of short hollow and solid products is presented using 2-D axisymmetric finite element simulation. The workpiece is modeled as an elastic-viscoplastic material. A mixture of the Coulomb law and a constant limit shear is used to model the die-workpiece and mandrel-workpiece contacts. Thermal effects are also taken into account. Three-pass radial forging of solid cylinders and tube products is considered. Temperature, stress, strain and metal flow distributions are obtained in each pass through thermo-mechanical simulation. The numerical results are compared with available experimental data and are in good agreement with them.
Fingeret, Abbey L; Arnell, Tracey; McNelis, John; Statter, Mindy; Dresner, Lisa; Widmann, Warren
We sought to determine whether sequential participation in a multi-institutional mock oral examination affected the likelihood of passing the American Board of Surgery Certifying Examination (ABSCE) on the first attempt. Residents from 3 academic medical centers were able to participate in a regional mock oral examination in the fall and spring of their fourth and fifth postgraduate years from 2011 to 2014. A candidate's highest composite score across all mock oral attempts was classified as at risk for failure, intermediate, or likely to pass. Factors including United States Medical Licensing Examination steps 1, 2, and 3, number of cases logged, American Board of Surgery In-Training Examination performance, American Board of Surgery Qualifying Examination (ABSQE) performance, number of attempts, and performance on the mock orals were assessed to determine factors predictive of passing the ABSCE. A total of 128 mock oral examinations were administered to 88 (71%) of 124 eligible residents. The overall first-time pass rate for the ABSCE was 82%. There was no difference in pass rates between participants and nonparticipants. Overall, 16 (18%) residents were classified as at risk, 47 (53%) as intermediate, and 25 (29%) as likely to pass. The ABSCE pass rate for each group was as follows: 36% for at risk, 84% for intermediate, and 96% for likely to pass. The following 4 factors were associated with first-time passing of the ABSCE on bivariate analysis: mock orals participation in postgraduate year 4 (p = 0.05), sequential participation in mock orals (p = 0.03), ABSQE performance (p = 0.01), and best performance on mock orals (p = 0.001). In multivariable logistic regression, the following 3 factors remained associated with ABSCE passing: ABSQE performance, odds ratio (OR) = 2.9 (95% CI: 1.3-6.1); mock orals best performance, OR = 1.7 (1.2-2.4); and participation in multiple mock oral examinations, OR = 1.4 (1.1-2.7). Performance on a multi-institutional mock oral examination can identify residents at risk for failure of the ABSCE. Sequential participation in mock oral examinations is associated with an improved ABSCE first-time pass rate. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to those of cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and shortwave radiative transfer, land processes, and explicit cloud-radiation and cloud-surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.
Plastic mechanism of multi-pass double-roller clamping spinning for arc-shaped surface flange
NASA Astrophysics Data System (ADS)
Fan, Shuqin; Zhao, Shengdun; Zhang, Qi; Li, Yongyi
2013-11-01
Compared with the conventional single-roller spinning process, the double-roller clamping spinning (DRCS) process can effectively prevent sheet metal surface wrinkling and improve the production efficiency and shape precision of the final spun part. Based on the ABAQUS/Explicit nonlinear finite element software, a finite element model of multi-pass DRCS for sheet metal is established, and the material model, contact definition, mesh generation, loading trajectory and other key technical problems are addressed. Simulations of multi-pass DRCS of an ordinary Q235A steel cylindrical part with an arc-shaped surface flange are carried out. The effects of the number of spinning passes on the production efficiency, the spinning moment, the shape error of the workpiece, and the wall thickness distribution of the final part are obtained. The results clearly indicate that as the number of spinning passes increases, the geometrical precision of the spun part increases while the production efficiency decreases. Moreover, the variations of the spinning forces and the distributions of stresses, strains and wall thickness during the multi-pass DRCS process are revealed. The radial force is the largest during the DRCS process, and the whole deformation area shows tangential tensile strain and radial compressive strain, while the thickness strain changes along the generatrix direction from compressive strain on the outer edge of the flange to tensile strain on the inner edge of the flange. Based on a G-CNC6135 NC lathe, a three-axis-linkage, computer-controlled experimental device for DRCS, driven by AC servo motors, was developed. Using this device, Q235A cylindrical parts with arc-shaped surface flanges were formed by DRCS. The simulation results for the spun parts show good consistency with the experimental results, verifying the feasibility of the DRCS process and the reliability of the finite element model.
Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM
NASA Astrophysics Data System (ADS)
de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl
2002-03-01
We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent numerical simulation, which includes dust particles, to study the potential role of large-scale gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10^5-10^6) in reasonable CPU times. The performance of our implementation of the parallel code on an Origin 2000 supercomputer is presented and discussed. It exhibits very good speedup behavior and low load unbalancing. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.
Action detection by double hierarchical multi-structure space-time statistical matching model
NASA Astrophysics Data System (ADS)
Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang
2018-03-01
To address the complex information in videos and low detection efficiency, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices at both large and small scales, combining double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, a multi-scale composite template extends the model to multi-view application. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.
NASA Astrophysics Data System (ADS)
Yamamoto, Keisuke; Nakayama, Katsuyuki
2017-11-01
The development or decay of a vortex in terms of the local flow topology has been shown to be highly correlated with its topological feature, i.e., vortical flow symmetry (skewness), in isotropic homogeneous turbulence. Since a turbulent flow may include vortices at multiple scales, the present study investigates the characteristics of this relationship between the development or decay of a vortex and the vortical flow symmetry at several scales in isotropic homogeneous turbulence at low Reynolds number. Swirlity is a physical quantity expressing the intensity of swirling in terms of the geometrical average of the azimuthal flow, and in this study it represents the behavior of the development or decay of a vortex. The flow is decomposed into three scales specified by the Fourier coefficients of the velocity, using band-pass filters. The analysis shows that vortices at the different scales share a universal feature: the time derivative of the swirlity and that of the symmetry are highly correlated. The correlation is especially strong at vortex birth and extinction.
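A minimal sketch of the scale-decomposition step (my construction in Python; the shell limits are arbitrary choices, not the study's): band-pass filtering a periodic velocity component into wavenumber shells with the FFT.

```python
# Decompose a periodic 3-D velocity component into wavenumber shells.
import numpy as np

def band_pass(u, k_lo, k_hi):
    """Keep Fourier modes with k_lo <= |k| < k_hi; zero the rest."""
    n = u.shape[0]                      # assume cubic box, unit spacing
    k = np.fft.fftfreq(n) * n           # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    mask = (kmag >= k_lo) & (kmag < k_hi)
    return np.real(np.fft.ifftn(np.fft.fftn(u) * mask))

# Three scales, e.g.: large = band_pass(u, 0, 4);
# medium = band_pass(u, 4, 16); small = band_pass(u, 16, 1e9)
```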
NASA Astrophysics Data System (ADS)
DeTemple, B.; Wilcock, P.
2011-12-01
In an alluvial, gravel-bed stream governed by a plane-bed bedload transport regime, the physicochemical properties, size distribution, and granular architecture of the sediment grains that constitute the streambed surface influence many hydrodynamic, geomorphic, chemical, and ecological processes. Consequently, the abilities to accurately characterize the morphology and model the morphodynamics of the streambed surface, and its interaction with the bedload above and the subsurface below, are necessary for a more complete understanding of how sediment, flow, organisms, and biogeochemistry interact. We report on our progress in the bottom-up development of low-pass filtered continuum streambed and bedload sediment mass balance laws for an alluvial, gravel-bed stream. These balance laws are assembled in a four-stage process. First, the stream sediment-water system is conceptually abstracted as a nested, multi-phase, multi-species, structured continuum. Second, the granular surface of an aggregate of sediment grains is mathematically defined. Third, an integral approach to mass balance, founded in the continuum theory of multiphase flow, is used to formulate primordial, differential, instantaneous, local, continuum mass balance laws applicable at any material point within a gravel-bed stream. Fourth, area averaging and time-after-area averaging, employing planform low-pass filtering expressed as correlation or convolution integrals and based on the spatial and temporal filtering techniques found in the fields of multiphase flow, porous media flow, and large eddy simulation of turbulent fluid flow, are applied to smooth the primordial equations while maximizing stratigraphic resolution and preserving the definitions of relevant morphodynamic surfaces. Our approach unifies, corrects, contextualizes, and generalizes prior efforts at developing stream sediment continuity equations, including the top-down derivations of the surface layer (or "active layer") approach of Hirano [1971a,b] and the probabilistic approach of Parker et al. [2000], as well as the bottom-up, low-pass filtered continuum approach of Coleman & Nikora [2009], which employed volume and volume-after-time averaging. It accommodates partial transport (e.g., Wilcock & McArdell [1997], Wilcock [1997a,b]). Additionally, it provides: (1) precise definitions of the geometry and kinematics of sediment in a gravel-bed stream, required to collect and analyze the high-resolution spatial and temporal datasets that are becoming ever more common in both laboratory and field investigations, (2) a mathematical framework for the use of tracer grains in gravel-bed streams, including the fate of streambed-emplaced tracers as well as the dispersion of tracers in the bedload, (3) spatial and temporal averaging uncompromised by the Reynolds rules, necessary to assess the nature of scale separation, and (4) a kinematic foundation for hybrid Lagrangian-Eulerian models of sediment morphodynamics.
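To make the filtering operation concrete, here is a minimal sketch of a planform low-pass filter in my notation (the abstract names the operation but gives no formula): an area-averaged field is the convolution of the instantaneous field φ with a normalized smoothing kernel G over the streambed plane,

```latex
\bar{\phi}(x, y, t) = \int_{\mathbb{R}^{2}} G(x - x',\, y - y')\,
    \phi(x', y', t)\,\mathrm{d}x'\,\mathrm{d}y',
\qquad
\int_{\mathbb{R}^{2}} G\,\mathrm{d}x'\,\mathrm{d}y' = 1,
```

with time-after-area averaging applying an analogous one-dimensional kernel in t to the already area-averaged field.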
Assessment of soil compaction properties based on surface wave techniques
NASA Astrophysics Data System (ADS)
Jihan Syamimi Jafri, Nur; Rahim, Mohd Asri Ab; Zahid, Mohd Zulham Affandi Mohd; Faizah Bawadi, Nor; Munsif Ahmad, Muhammad; Faizal Mansor, Ahmad; Omar, Wan Mohd Sabki Wan
2018-03-01
Soil compaction plays an important role in all construction activities to reduce the risk of damage. Traditional methods of assessing compaction, such as field tests and invasive penetration tests of compacted areas, have significant limitations and are time-consuming when evaluating large areas. Thus, this study explores the possibility of using a non-invasive surface wave method, Multi-channel Analysis of Surface Waves (MASW), as a useful tool for assessing soil compaction. The aim of this study was to determine the shear wave velocity profiles and field density of compacted soils under varying compaction efforts using the MASW method. Pre- and post-compaction MASW surveys were conducted at Pauh Campus, UniMAP, after applying roller compaction with varying numbers of passes (2, 6 and 10). Each seismic record was acquired with a GEODE seismograph. A sand replacement test was conducted for each survey line to obtain the field density data. All seismic data were processed using the SeisImager/SW software. The results show that the shear wave velocity profiles increase with the number of passes from 0 to 6 passes, but decrease after 10 passes. This method could attract the interest of the geotechnical community, as it can be an alternative tool to standard tests for assessing soil compaction in field operations.
The effects and outcomes of electrolyte disturbances and asphyxia on newborns' hearing
Liang, Chun; Hong, Qi; Jiang, Tao-Tao; Gao, Yan; Yao, Xiao-Fang; Luo, Xiao-Xing; Zhuo, Xiu-Hui; Shinn, Jennifer B.; Jones, Raleigh O.; Zhao, Hong-Bo; Lu, Guang-Jin
2013-01-01
Objective To determine the effect of electrolyte disturbances (ED) and asphyxia on infant hearing and hearing outcomes. Study Design We conducted newborn hearing screening with transient evoked otoacoustic emission (TEOAE) test on a large scale (>5,000 infants). The effects of ED and asphyxia on infant hearing and hearing outcomes were evaluated. Result The pass rate of TEOAE test was significantly reduced in preterm infants with ED (83.1%, multiple logistic regression analysis: P<0.01) but not in full-term infants with ED (93.6%, P=0.41). However, there was no significant reduction in the pass rate in infants with asphyxia (P=0.85). We further found that hypocalcaemia significantly reduced the pass rate of TEOAE test (86.8%, P<0.01). In the follow-up recheck at 3 months of age, the pass rate remained low (44.4%, P<0.01). Conclusion ED is a high-risk factor for preterm infant hearing. Hypocalcaemia can produce more significant impairment with a low recovery rate. PMID:23648318
Multi-Scale Multi-Domain Model | Transportation Research | NREL
NREL's Multi-Scale Multi-Domain (MSMD) model quantifies the impacts of the electrical/thermal pathway. Macroscopic design factors and highly dynamic environmental conditions significantly influence the design of affordable, long-lasting, high-performing, and safe large battery systems.
NASA Technical Reports Server (NTRS)
Engin, Doruk; Mathason, Brian; Stephen, Mark; Yu, Anthony; Cao, He; Fouron, Jean-Luc; Storm, Mark
2016-01-01
Accurate global measurements of tropospheric CO2 mixing ratios are needed to study CO2 emissions and CO2 exchange with the land and oceans. NASA Goddard Space Flight Center (GSFC) is developing a pulsed lidar approach for an integrated path differential absorption (IPDA) lidar to allow global measurements of atmospheric CO2 column densities from space. Our group has developed, and successfully flown, an airborne pulsed lidar instrument that uses two tunable pulsed laser transmitters allowing simultaneous measurement of a single CO2 absorption line in the 1570 nm band, absorption of an O2 line pair in the oxygen A-band (765 nm), range, and atmospheric backscatter profiles in the same path. Both lasers are pulsed at 10 kHz, and the two absorption line regions are sampled at typically a 300 Hz rate. A space-based version of this lidar must have a much larger lidar power-area product due to the 40× longer range and faster along-track velocity compared to the airborne instrument. Initial link budget analysis indicated that for a 400 km orbit, a 1.5 m diameter telescope and a 10-second integration time, a laser pulse energy of 2 mJ is required to attain the precision needed for each measurement. To meet this energy requirement, we have pursued parallel power-scaling efforts to enable space-based lidar measurement of CO2 concentrations. These included a multiple-aperture approach consisting of multi-element large-mode-area fiber amplifiers and a single-aperture approach consisting of a multi-pass Er:Yb:phosphate-glass-based planar waveguide amplifier (PWA). In this paper we will present our laser amplifier design approaches and preliminary results.
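The quoted requirements follow from the hard-target lidar link budget, where received energy falls off as the inverse square of range. A minimal Python sketch of that scaling is below; only the 400 km orbit, 1.5 m telescope and 2 mJ pulse energy come from the abstract, while the airborne parameters, surface reflectance, atmospheric transmission and efficiency values are illustrative assumptions:

```python
import math

def received_energy(E_tx, R, D_tele, rho=0.2, T_atm=0.8, eta=0.5):
    """Hard-target lidar link budget (Lambertian surface, single scattering).

    E_tx   : transmitted pulse energy [J]
    R      : range to the scattering surface [m]
    D_tele : receiver telescope diameter [m]
    rho    : surface reflectance (Lambertian, so rho/pi per steradian)
    T_atm  : one-way atmospheric transmission
    eta    : overall optical plus detector efficiency
    """
    A_rx = math.pi * (D_tele / 2.0) ** 2       # receiver aperture area
    return E_tx * (rho / math.pi) * (A_rx / R**2) * T_atm**2 * eta

# The R^2 term alone costs (400 km / 10 km)^2 = 1600 going from aircraft
# to orbit; the larger telescope and longer integration recover part of it.
E_air = received_energy(25e-6, 10e3, 0.20)     # assumed airborne values
E_space = received_energy(2e-3, 400e3, 1.50)   # abstract's orbit values
print(f"airborne: {E_air:.3e} J, spaceborne: {E_space:.3e} J")
```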
Use of the Progressive Aphasia Severity Scale (PASS) in monitoring speech and language status in PPA
Sapolsky, Daisy; Domoto-Reilly, Kimiko; Dickerson, Bradford C.
2014-01-01
Background Primary progressive aphasia (PPA) is a devastating neurodegenerative syndrome involving the gradual development of aphasia, slowly impairing the patient’s ability to communicate. Pharmaceutical treatments do not currently exist and intervention often focuses on speech-language behavioral therapies, although further investigation is warranted to determine how best to harness functional benefits. Efforts to develop pharmaceutical and behavioral treatments have been hindered by a lack of standardized methods to monitor disease progression and treatment efficacy. Aims Here we describe our current approach to monitoring progression of PPA, including the development and applications of a novel clinical instrument for this purpose, the Progressive Aphasia Severity Scale (PASS). We also outline some of the issues related to initial evaluation and longitudinal monitoring of PPA. Methods & Procedures In our clinical and research practice we perform initial and follow-up assessments of PPA patients using a multi-faceted approach. In addition to standardized assessment measures, we use the PASS to rate presence and severity of symptoms across distinct domains of speech, language, and functional and pragmatic aspects of communication. Ratings are made using the clinician’s best judgment, integrating information from patient test performance in the office as well as a companion’s description of routine daily functioning. Outcomes & Results Monitoring symptom characteristics and severity with the PASS can assist in developing behavioral therapies, planning treatment goals, and counseling patients and families on clinical status and prognosis. The PASS also has potential to advance the implementation of PPA clinical trials. Conclusions PPA patients display heterogeneous language profiles that change over time given the progressive nature of the disease. The monitoring of symptom progression is therefore crucial to ensure that proposed treatments are appropriate at any given stage, including speech-language therapy and potentially pharmaceutical treatments once these become available. Because of the discrepancy that can exist between a patient’s daily functioning and standardized test performance, we believe a comprehensive assessment and monitoring battery must include performance-based instruments, interviews with the patient and partner, questionnaires about functioning in daily life, and measures of clinician judgment. We hope that our clinician judgment-based rating scale described here will be a valuable addition to the PPA assessment and monitoring battery. PMID:25419031
Modeling needs for very large systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.
2010-10-01
Most system performance models assume a point measurement for irradiance and that, except for the impact of shading from nearby obstacles, incident irradiance is uniform across the array. Module temperature is also assumed to be uniform across the array. For small arrays and hourly-averaged simulations, this may be a reasonable assumption. Stein is conducting research to characterize variability in large systems and to develop models that can better accommodate large system factors. In large, multi-MW arrays, passing clouds may block sunlight from a portion of the array but never affect another portion. Figure 22 shows that two irradiance measurements at opposite ends of a multi-MW PV plant appear to have similar irradiance (left), but in fact the irradiance is not always the same (right). Module temperature may also vary across the array, with modules on the edges being cooler because they have greater wind exposure. Large arrays will also have long wire runs and will be subject to associated losses. Soiling patterns may also vary, with modules closer to the source of soiling, such as an agricultural field, receiving more dust load. One of the primary concerns associated with this effort is how to work with integrators to gain access to better and more comprehensive data for model development and validation.
Horst, Folkert; Green, William M J; Assefa, Solomon; Shank, Steven M; Vlasov, Yurii A; Offrein, Bert Jan
2013-05-20
We present 1-to-8 wavelength (de-)multiplexer devices based on a binary tree of cascaded Mach-Zehnder-like lattice filters, manufactured using a 90 nm CMOS-integrated silicon photonics technology. We demonstrate that these devices combine a flat pass-band over more than 50% of the channel spacing with a low insertion loss of less than 1.6 dB, and have a small device size of approximately 500 × 400 µm. This makes this type of filter well suited for application as a WDM (de-)multiplexer in silicon photonics transceivers for optical data communication in large-scale computer systems.
Huo, Mengmeng; Li, Wenyan; Chaudhuri, Arka Sen; Fan, Yuchao; Han, Xiu; Yang, Chen; Wu, Zhenghong; Qi, Xiaole
2017-09-01
In this study, we developed bio-stimuli-responsive multi-scale hyaluronic acid (HA) nanoparticles encapsulating polyamidoamine (PAMAM) dendrimers as subunits. These HA/PAMAM nanoparticles of large scale (197.10 ± 3.00 nm) were stable during systemic circulation and then enriched at tumor sites; however, they were prone to degradation by highly expressed hyaluronidase (HAase), releasing the inner PAMAM dendrimers and regaining a small scale (5.77 ± 0.25 nm) with positive charge. In a tumor spheroid penetration assay on A549 3D tumor spheroids for 8 h, the fluorescein isothiocyanate (FITC)-labeled multi-scale HA/PAMAM-FITC nanoparticles penetrated deeply into the spheroids upon degradation by HAase. Moreover, small-animal imaging in male nude mice bearing H22 tumors showed that HA/PAMAM-FITC nanoparticles had more prolonged systemic circulation than both PAMAM-FITC nanoparticles and free FITC. In addition, after intravenous administration in mice bearing H22 tumors, methotrexate (MTX)-loaded multi-scale HA/PAMAM-MTX nanoparticles exhibited 2.68-fold greater antitumor activity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scale-invariance underlying the logistic equation and its social applications
NASA Astrophysics Data System (ADS)
Hernando, A.; Plastino, A.
2013-01-01
On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way.
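For reference, the single-species logistic equation is

\[ \frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right), \]

with growth rate \(r\) and carrying capacity \(K\). A schematic multi-component generalization of the kind discussed above couples the components through a shared constraint, e.g.

\[ \frac{dx_i}{dt} = r_i\,x_i\left(1 - \frac{1}{K}\sum_j x_j\right); \]

the exact generalized form used by the authors may differ.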
Subsurface Monitoring of CO2 Sequestration - A Review and Look Forward
NASA Astrophysics Data System (ADS)
Daley, T. M.
2012-12-01
The injection of CO2 into subsurface formations is at least 50 years old, with large-scale utilization of CO2 for enhanced oil recovery (CO2-EOR) beginning in the 1970s. Early monitoring efforts had limited measurements in available boreholes. With growing interest in CO2 sequestration beginning in the 1990s, along with growth in geophysical reservoir monitoring, small to mid-size sequestration monitoring projects began to appear. The overall goals of a subsurface monitoring plan are to provide measurement of CO2-induced changes in subsurface properties at a range of spatial and temporal scales. The range of spatial scales allows tracking of the location and saturation of the plume with varying detail, while finer temporal sampling (up to continuous) allows better understanding of dynamic processes (e.g. multi-phase flow) and constraining of reservoir models. Early monitoring of small-scale pilots associated with CO2-EOR (e.g., the McElroy and Lost Hills fields) developed many of the methodologies, including tomographic imaging and multi-physics measurements. Large (reservoir) scale sequestration monitoring began with the Sleipner and Weyburn projects. Typically, large-scale monitoring, such as 4D surface seismic, has limited temporal sampling due to costs. Smaller-scale pilots can allow more frequent measurements, either as individual time-lapse 'snapshots' or as continuous monitoring. Pilot monitoring examples include the Frio, Nagaoka and Otway pilots using repeated well logging, crosswell imaging, vertical seismic profiles and CASSM (continuous active-source seismic monitoring). For saline reservoir sequestration projects, there is typically integration of characterization and monitoring, since the sites are not pre-characterized resource developments (oil or gas), which reinforces the need for multi-scale measurements. As we move beyond pilot sites, we need to quantify CO2 plume and reservoir properties (e.g. pressure) over large scales, while still obtaining high resolution. Typically the high-resolution (spatial and temporal) tools are deployed in permanent or semi-permanent borehole installations, where special well design may be necessary, such as non-conductive casing for electrical surveys. Effective utilization of monitoring wells requires an approach of modular borehole monitoring (MBM), where multiple measurements can be made. An example is recent work at the Citronelle pilot injection site, where an MBM package with seismic, fluid sampling and distributed fiber sensing was deployed. For future large-scale sequestration monitoring, an adaptive borehole-monitoring program is proposed.
Spaceborne radar interferometry for coastal DEM construction
Hong, S.-H.; Lee, C.-W.; Won, J.-S.; Kwoun, Oh-Ig; Lu, Z.
2005-01-01
Topographic features in coastal regions, including tidal flats, change more significantly than landmass and are characterized by extremely low slopes. High-precision DEMs are required to monitor dynamic changes in coastal topography. It is difficult to obtain coherent interferometric SAR pairs, especially over tidal flats, mainly because of variation in tidal conditions. Here we focus on i) the coherence of multi-pass ERS SAR interferometric pairs and ii) DEM construction from ERS-ENVISAT pairs. The coherence of multi-pass ERS interferograms was good enough to construct DEMs under favorable tidal conditions. Coherence in sand-dominant areas was generally higher than that over muddy surfaces; coarse-grained coastal areas are favorable for multi-pass interferometry. Utilization of ERS-ENVISAT interferometric pairs is attracting growing interest. We carried out an investigation using a cross-interferometric pair with a normal baseline of about 1.3 km, a 30-minute temporal separation and a height sensitivity of about 6 meters. Preliminary results of ERS-ENVISAT interferometry were not successful due to baseline and unfavorable scattering conditions. © 2005 IEEE.
NASA Astrophysics Data System (ADS)
Cao, H.; Kalashnikov, M.; Osvay, K.; Khodakovskiy, N.; Nagymihaly, R. S.; Chvykov, V.
2018-04-01
A combination of a polarization-encoded (PE) and a conventional multi-pass amplifier was studied to overcome gain narrowing in the Ti:sapphire active medium. The seed spectrum was pre-shaped and blue-shifted during PE amplification and was then further broadened in a conventional, saturated multi-pass amplifier, resulting in an overall increase of the amplified bandwidth. Using this technique, seed pulses with a bandwidth of 44 nm were amplified and simultaneously spectrally broadened to 57 nm without the use of passive spectral corrections. The amplified pulse after the PE amplifier was recompressed to 19 fs. Supporting simulations confirm all aspects of the experimental operation.
Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells
Osiecki, Michael J.; Michl, Thomas D.; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B.; Griesser, Hans J.; Doran, Michael R.
2015-01-01
Large numbers of mesenchymal stem/stromal cells (MSCs) are required for clinically relevant doses to treat a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable, closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air-plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas-permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single-pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled up to a 2800 cm2 format. We demonstrated that placenta-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed-system, large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs. PMID:26660475
Cluster galaxy dynamics and the effects of large-scale environment
NASA Astrophysics Data System (ADS)
White, Martin; Cohn, J. D.; Smit, Renske
2010-11-01
Advances in observational capabilities have ushered in a new era of multi-wavelength, multi-physics probes of galaxy clusters and ambitious surveys are compiling large samples of cluster candidates selected in different ways. We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes and discuss some implications this has for multi-physics probes of clusters (e.g. richness, lensing, Compton distortion and velocity dispersion). We pay particular attention to velocity dispersions, matching galaxies to subhaloes which are explicitly tracked in the simulation. We find that not only do haloes persist as subhaloes when they fall into a larger host, but groups of subhaloes retain their identity for long periods within larger host haloes. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and give illustrative examples. Such a large variance suggests that velocity dispersion estimators will work better in an ensemble sense than for any individual cluster, which may inform strategies for obtaining redshifts of cluster members. We similarly find that the ability of substructure indicators to find kinematic substructures is highly viewing angle dependent. While groups of subhaloes which merge with a larger host halo can retain their identity for many Gyr, they are only sporadically picked up by substructure indicators. We discuss the effects of correlated scatter on scaling relations estimated through stacking, both analytically and in the simulations, showing that the strong correlation of measures with mass and the large scatter in mass at fixed observable mitigate line-of-sight projections.
NASA Astrophysics Data System (ADS)
Lee, Juhwa; Hwang, Jeongho; Bae, Dongho
2018-03-01
In this paper, welding residual stress analysis and fatigue strength assessment were performed at elevated temperature for a multi-pass dissimilar-material weld between Alloy 617 and P92 steel, which are used in thermal power plants. Multi-pass welding between Alloy 617 and P92 steel was performed under optimized welding conditions determined from repeated pre-test welding. In particular, to improve dissimilar-material weldability, a buttering welding technique was applied on the P92 steel side before multi-pass welding. The welding residual stress distribution at the dissimilar-material weld joint was numerically analyzed using the finite element method and compared with experimental results obtained by the hole-drilling method. Additionally, the fatigue strength of the dissimilar-material weld joint was assessed at room temperature (R.T.), 300, 500, and 700 °C. In the finite element analysis, the numerical peak values (longitudinal 410 MPa, transverse 345 MPa) were higher than the experimental ones (longitudinal 298 MPa, transverse 245 MPa). These quantitatively large differences between the numerical and experimental results are due to assumptions about the thermal conductivity, specific heat, effects of enforced convection of the molten pool, dilution, and volume change during phase transformation caused by the actual shield gas. The fatigue limit at R.T., 300 °C, 500 °C and 700 °C was assessed to be 368, 276, 173 and 137 MPa, respectively.
Atmospheric Science Data Center
2013-04-19
Closed Large Cell Clouds in the South Pacific: data from the Multi-angle Imaging SpectroRadiometer (MISR) provide an example of very large scale closed cells, and can be contrasted with ... The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA.
SmallTool - a toolkit for realizing shared virtual environments on the Internet
NASA Astrophysics Data System (ADS)
Broll, Wolfgang
1998-09-01
With increasing graphics capabilities of computers and higher network communication speed, networked virtual environments have become available to a large number of people. While the virtual reality modelling language (VRML) provides users with the ability to exchange 3D data, there is still a lack of appropriate support to realize large-scale multi-user applications on the Internet. In this paper we will present SmallTool, a toolkit to support shared virtual environments on the Internet. The toolkit consists of a VRML-based parsing and rendering library, a device library, and a network library. This paper will focus on the networking architecture, provided by the network library - the distributed worlds transfer and communication protocol (DWTP). DWTP provides an application-independent network architecture to support large-scale multi-user environments on the Internet.
A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes
NASA Astrophysics Data System (ADS)
Tao, W. K.
2017-12-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. The use of the multi-satellite simulator to improve simulated precipitation processes will also be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. The use of the multi-satellite simulator to improve simulated precipitation processes will also be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 sq km in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using multi-scale modeling systems to study the interactions between clouds, precipitation, and aerosols will be presented. The use of the multi-satellite simulator to improve simulated precipitation processes will also be discussed.
Using Multi-Scale Modeling Systems to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. The use of the multi-satellite simulator to improve simulated precipitation processes will also be discussed.
Primordial black hole production in Critical Higgs Inflation
NASA Astrophysics Data System (ADS)
Ezquiaga, Jose María; García-Bellido, Juan; Ruiz Morales, Ester
2018-01-01
Primordial Black Holes (PBH) arise naturally from high peaks in the curvature power spectrum of near-inflection-point single-field inflation, and could constitute today the dominant component of the dark matter in the universe. In this letter we explore the possibility that a broad spectrum of PBH is formed in models of Critical Higgs Inflation (CHI), where the near-inflection point is related to the critical value of the RGE running of both the Higgs self-coupling λ(μ) and its non-minimal coupling to gravity ξ(μ). We show that, for a wide range of model parameters, a half-dome-shaped peak in the matter power spectrum arises at sufficiently small scales that it passes all the constraints from large-scale structure observations. The predicted cosmic microwave background spectrum at large scales is in agreement with Planck 2015 data, and has a relatively large tensor-to-scalar ratio that may soon be detected by B-mode polarization experiments. Moreover, the wide peak in the power spectrum gives an approximately lognormal PBH distribution in the mass range 0.01-100 M⊙, which could explain the LIGO merger events, while passing all present PBH observational constraints. The stochastic background of gravitational waves coming from the unresolved black-hole-binary mergers could also be detected by LISA or PTA. Furthermore, the parameters of the CHI model are consistent, within 2σ, with the measured Higgs parameters at the LHC and their running. Future measurements of the PBH mass spectrum could allow us to obtain complementary information about the Higgs couplings at energies well above the EW scale, and thus constrain new physics beyond the Standard Model.
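A common way to parameterize such a distribution (a conventional choice in the PBH literature, not necessarily the paper's exact fit) is the lognormal mass function

\[ f(M) = \frac{f_{\rm PBH}}{\sqrt{2\pi}\,\sigma}\,\exp\!\left[-\frac{\ln^2(M/M_c)}{2\sigma^2}\right], \]

with peak mass \(M_c\) and width \(\sigma\) chosen so that the support lies in the quoted 0.01-100 M⊙ window.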
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently being prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ≈ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the finite-element-based code MOOSE for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission-fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
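Schematically, the microscopic problem solved here is transient heat conduction with a stochastic volumetric source (generic notation, not taken from the paper):

\[ \rho c_p\,\frac{\partial T}{\partial t} = \nabla\cdot\left(k\,\nabla T\right) + \dot{q}'''(\mathbf{r},t), \]

where the local energy-deposition rate \(\dot{q}'''\) is tallied by the BCMC fission-fragment model, with its magnitude and spectrum set by the engineering-scale neutronics solution.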
Size effects on plasticity and fatigue microstructure evolution in FCC single crystals
NASA Astrophysics Data System (ADS)
El-Awady, Jaafar Abbas
In aircraft structures and engines, fatigue damage is manifest in the progressive emergence of distributed surface cracks near locations of high stress concentration. At the present time, reliable methods for the prediction of fatigue crack initiation are not available, because the phenomenon starts at the atomic scale. Initiation of fatigue cracks is associated with the formation of persistent slip bands (PSBs), which form at certain critical conditions inside metals with specific microstructure dimensions. The main objective of this research is to develop predictive computational capabilities for plasticity and fatigue damage evolution in finite volumes. To that end, a dislocation dynamics model that incorporates the influence of free and internal interfaces on dislocation motion is presented. The model is based on a self-consistent formulation of 3-D Parametric Dislocation Dynamics (PDD) with the Boundary Element Method (BEM) to describe dislocation motion, and hence microscopic plastic flow, in finite volumes. The developed computer models are benchmarked by detailed comparisons with experimental data from the Wright-Patterson Air Force Research Laboratory (WP-AFRL), through three-dimensional, large-scale simulations of compression loading on micro-scale samples of FCC single crystals. These simulation results provide an understanding of the plastic deformation of micron-size single crystals. The plastic flow characteristics as well as the stress-strain behavior of simulated micropillars are shown to be in general agreement with experimental observations. New size-scaling aspects of plastic flow and work-hardening are identified through these simulations. The flow strength versus the diameter of the micropillar follows a power law with an exponent equal to -0.69. A stronger correlation is observed between the flow strength and the average length of activated dislocation sources; this relationship is again a power law, with an exponent of -0.85. Simulation results with and without the activation of cross-slip are compared, and discontinuous hardening is observed when cross-slip is included. Experimentally observed size effects on plastic flow and work-hardening are consistent with a "weakest-link activation mechanism". In addition, the variations and periodicity of dislocation activation are analyzed using the Fast Fourier Transform (FFT). We then present models of localized plastic deformation inside persistent slip band channels. We investigate the interaction between screw dislocations as they pass one another inside channel walls in copper. The model shows the mechanisms of dislocation bowing, dipole formation and binding, and dipole destruction as screw dislocations pass one another. The dipole-passing mechanism is assessed and interpreted in terms of the fatigue saturation stress. We also present results for the effects of the wall dipole structure on the dipole-passing mechanism; the edge-dislocation dipolar walls are seen to affect the passing stress as well. It is shown that the passing stress in the middle of the channel is reduced by 11 to 23% depending on the initial configuration of the screw dislocations with respect to one another. Finally, large-scale simulations of the expansion of edge dipoles from the channel walls show that screw dislocations in the PSB channels may not meet "symmetrically", i.e., precisely in the center of the channel, but rather slightly to one side or the other. For this configuration the passing stress is lowered, in agreement with experimental observations.
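The two size-scaling results quoted above can be summarized compactly as

\[ \tau_{\rm flow} \propto D^{-0.69}, \qquad \tau_{\rm flow} \propto \bar{\lambda}^{-0.85}, \]

where \(D\) is the micropillar diameter and \(\bar{\lambda}\) is the average length of the activated dislocation sources.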
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row (CSR) format. Computation of beamlet prices, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on the CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace-step scheme is adopted to solve the MP. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality.
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
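To make the data layout concrete, the sketch below partitions a sparse DDC matrix, held in COO format on the host, into per-GPU CSR submatrices grouped by beam angle, mirroring the four-way split described above. It is a minimal host-side illustration using SciPy (function and variable names are ours, not the authors'; their actual implementation runs on GPUs with peer-to-peer transfers):

```python
import numpy as np
from scipy.sparse import coo_matrix

def split_ddc_by_beam(ddc_coo, beam_of_column, n_gpus=4):
    """Partition a sparse dose-deposition-coefficient (DDC) matrix into
    per-GPU CSR submatrices grouped by beam angle.

    ddc_coo        : scipy.sparse.coo_matrix, rows = voxels, cols = beamlets
    beam_of_column : array mapping each beamlet column to a beam-angle index
    """
    beam_of_column = np.asarray(beam_of_column)
    groups = np.array_split(np.unique(beam_of_column), n_gpus)
    ddc_csc = ddc_coo.tocsc()                  # CSC allows cheap column slicing
    submatrices = []
    for beams in groups:
        cols = np.flatnonzero(np.isin(beam_of_column, beams))
        # Each CSR submatrix would be shipped to its own GPU.
        submatrices.append(ddc_csc[:, cols].tocsr())
    return submatrices

# Toy example: 1000 voxels, 180 beamlets spread over 36 beam angles.
rng = np.random.default_rng(0)
dense = rng.random((1000, 180)) * (rng.random((1000, 180)) < 0.01)
subs = split_ddc_by_beam(coo_matrix(dense), np.repeat(np.arange(36), 5))
print([s.shape for s in subs])
```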
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class classification SVMs (Support Vector Machines), based on a binary tree, is presented to address the low learning efficiency of SVMs when processing large-scale, multi-class samples. A bottom-up method is adopted to build the binary-tree hierarchy; according to the resulting hierarchy, a sub-classifier learns from the corresponding samples at each node. During learning, several class clusters are generated after a first clustering of the training samples. Central points are extracted from those clusters that contain only one type of sample. For clusters containing two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixing, a secondary clustering is undertaken, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced samples formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee higher classification accuracy, greatly reduce the number of samples, and effectively improve learning efficiency.
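A minimal sketch of the sample-reduction idea, using k-means centers as the reduced training set; this stands in for the paper's multi-level clustering and binary-tree construction, and all names and parameters are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def reduced_training_set(X, y, n_clusters=50):
    """Replace each class's samples by cluster centers, shrinking the
    training set before SVM learning (a simplified stand-in for the
    paper's multi-level clustering scheme)."""
    Xr, yr = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        k = min(n_clusters, len(Xc))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xc)
        Xr.append(km.cluster_centers_)
        yr.append(np.full(k, label))
    return np.vstack(Xr), np.concatenate(yr)

# Toy usage: 3 classes, 30,000 samples reduced to 150 cluster centers.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 1.0, (10000, 2)) for c in (0, 4, 8)])
y = np.repeat([0, 1, 2], 10000)
Xr, yr = reduced_training_set(X, y)
clf = SVC(kernel="rbf").fit(Xr, yr)     # train on the reduced set only
print("accuracy on full set:", clf.score(X, y))
```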
[Research on non-rigid registration of multi-modal medical image based on Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional optimization problem. Finally, we used a multi-scale hierarchical refinement scheme to handle large-deformation registration. The experimental results showed that the proposed algorithm had good effects for large-deformation and multi-modal three-dimensional medical image registration.
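For context, one iteration of the classic Thirion Demons update, which the proposed method extends with gray-level conservation and structure-tensor terms, can be sketched as follows; this is a baseline 2D illustration, not the paper's multi-modal energy or its L-BFGS optimization:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, u, v, sigma=2.0, eps=1e-9):
    """One iteration of the classic (Thirion) Demons update in 2D --
    a minimal sketch of the baseline the paper improves upon."""
    # Warp the moving image with the current displacement field (u, v).
    ny, nx = fixed.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    warped = map_coordinates(moving, [yy + v, xx + u],
                             order=1, mode="nearest")

    gy, gx = np.gradient(fixed)            # gradients of the fixed image
    diff = warped - fixed
    denom = gx**2 + gy**2 + diff**2 + eps
    u -= diff * gx / denom                 # demons force, x component
    v -= diff * gy / denom                 # demons force, y component
    # Gaussian regularization keeps the deformation field smooth.
    return gaussian_filter(u, sigma), gaussian_filter(v, sigma)
```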
NASA Astrophysics Data System (ADS)
Verdon-Kidd, D. C.; Kiem, A. S.
2009-04-01
In this paper regional (synoptic) and large-scale climate drivers of rainfall are investigated for Victoria, Australia. A non-linear classification methodology known as self-organizing maps (SOM) is used to identify 20 key regional synoptic patterns, which are shown to capture a range of significant synoptic features known to influence the climate of the region. Rainfall distributions are assigned to each of the 20 patterns for nine rainfall stations located across Victoria, resulting in a clear distinction between wet and dry synoptic types at each station. The influence of large-scale climate modes on the frequency and timing of the regional synoptic patterns is also investigated. This analysis revealed that phase changes in the El Niño Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD) and/or the Southern Annular Mode (SAM) are associated with a shift in the relative frequency of wet and dry synoptic types on an annual to inter-annual timescale. In addition, the relative frequency of synoptic types is shown to vary on a multi-decadal timescale, associated with changes in the Inter-decadal Pacific Oscillation (IPO). Importantly, these results highlight the potential to utilise the link between the regional synoptic patterns derived in this study and large-scale climate modes to improve rainfall forecasting for Victoria, both in the short- (i.e. seasonal) and long-term (i.e. decadal/multi-decadal scale). In addition, the regional and large-scale climate drivers identified in this study provide a benchmark by which the performance of Global Climate Models (GCMs) may be assessed.
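A minimal sketch of the SOM classification step, using the third-party minisom package with a 4 x 5 map to mirror the 20 synoptic types; the input fields here are random placeholders, whereas real input would be daily gridded pressure (or similar) anomaly fields:

```python
import numpy as np
from minisom import MiniSom  # third-party package "minisom"

# One row per day; columns are flattened grid cells of a synoptic field.
rng = np.random.default_rng(42)
daily_fields = rng.normal(size=(3650, 100))   # 10 years x 100 grid cells

som = MiniSom(4, 5, input_len=100, sigma=1.0, learning_rate=0.5,
              random_seed=42)
som.train_random(daily_fields, num_iteration=10000)

# Assign each day to its best-matching node (synoptic type 0..19).
winners = np.array([som.winner(day) for day in daily_fields])
type_id = winners[:, 0] * 5 + winners[:, 1]
print(np.bincount(type_id, minlength=20))     # frequency of each type
```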
Patent ductus arteriosus: patho-physiology, hemodynamic effects and clinical complications.
Capozzi, Giovanbattista; Santoro, Giuseppe
2011-10-01
During fetal life, the patent arterial duct diverts placental oxygenated blood from the pulmonary artery into the aorta, bypassing the lungs. After birth, a decrease in prostacyclin and prostaglandin concentrations usually causes arterial duct closure. This process may be delayed, or may even completely fail, in preterm infants, with the arterial duct remaining patent. If that happens, blood flow bypassing the systemic circulation through the arterial duct results in pulmonary overflow and systemic hypoperfusion. When pulmonary flow is 50% higher than systemic flow, a hemodynamic "paradox" results, with an increase of left ventricular output without a subsequent increase of systemic output. Cardiac overload supports neuro-humoral effects (activation of the sympathetic nervous system and the renin-angiotensin system) that finally promote heart failure. Moreover, increased pulmonary blood flow can cause vascular congestion and pulmonary edema. However, the most dangerous effect is cerebral under-perfusion due to diastolic reverse flow, resulting in cerebral hypoxia. Finally, blood flow decreases through the abdominal aorta, reducing perfusion of the liver, gut and kidneys, and may cause hepatic failure, renal insufficiency and necrotizing enterocolitis. Conclusions: A large patent arterial duct may cause life-threatening multi-organ effects. In the preterm infant, early diagnosis and timely effective treatment are cornerstones in the prevention of cerebral damage and long-term multi-organ failure.
LARGE-SCALE PREDICTIONS OF MOBILE SOURCE CONTRIBUTIONS TO CONCENTRATIONS OF TOXIC AIR POLLUTANTS
This presentation shows concentrations and deposition of toxic air pollutants predicted by a 3-D air quality model, the Community Multi Scale Air Quality (CMAQ) modeling system. Contributions from both on-road and non-road mobile sources are analyzed.
NASA: Assessments of Selected Large-Scale Projects
2011-03-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
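A toy sketch of the spatial-parallelism pattern described above, using mpi4py to pass random-walking particles between neighboring one-dimensional subdomains via message passing; the names and the walk itself are illustrative, not MONACO code:

```python
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one spatial subdomain [rank, rank+1) on a 1-D line.
random.seed(rank)
particles = [rank + random.random() for _ in range(1000)]

for step in range(10):
    # Advance particles; those crossing a boundary are buffered for the
    # neighbor -- the non-deterministic communication pattern noted above.
    particles = [x + random.gauss(0.0, 0.05) for x in particles]
    left = [x for x in particles if x < rank]
    right = [x for x in particles if x >= rank + 1]
    particles = [x for x in particles if rank <= x < rank + 1]
    lnbr = rank - 1 if rank > 0 else MPI.PROC_NULL
    rnbr = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    # sendrecv with MPI.PROC_NULL is a no-op and returns None at the edges.
    particles += comm.sendrecv(right, dest=rnbr, source=lnbr) or []
    particles += comm.sendrecv(left, dest=lnbr, source=rnbr) or []

print(f"rank {rank}: {len(particles)} particles after 10 steps")
```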
ERIC Educational Resources Information Center
Nese, Joseph F. T.; Tindal, Gerald; Stevens, Joseph J.; Elliott, Stephen N.
2015-01-01
The stakes of large-scale testing programs have grown considerably in the past decade with the enactment of the No Child Left Behind (NCLB) and Race to the Top (RTTT) legislation. A significant component of NCLB has been the required reporting of adequate yearly progress (AYP) for student subgroups disaggregated by sex, special education status, English…
The Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey
NASA Astrophysics Data System (ADS)
Squires, Gordon K.; Lubin, L. M.; Gal, R. R.
2007-05-01
We present the motivation, design, and latest results from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey, a systematic search for structure on scales greater than 10 Mpc around 20 known galaxy clusters at z > 0.6. When complete, the survey will cover nearly 5 square degrees, all targeted at high-density regions, making it complementary and comparable to field surveys such as DEEP2, GOODS, and COSMOS. For the survey, we are using the Large Format Camera on the Palomar 5-m and SuPRIME-Cam on the Subaru 8-m to obtain optical/near-infrared imaging of an approximately 30 arcmin region around previously studied high-redshift clusters. Colors are used to identify likely member galaxies which are targeted for follow-up spectroscopy with the DEep Imaging Multi-Object Spectrograph on the Keck 10-m. This technique has been used to identify successfully the Cl 1604 supercluster at z = 0.9, a large scale structure containing at least eight clusters (Gal & Lubin 2004; Gal, Lubin & Squires 2005). We present the most recent structures to be photometrically and spectroscopically confirmed through this program, discuss the properties of the member galaxies as a function of environment, and describe our planned multi-wavelength (radio, mid-IR, and X-ray) observations of these systems. The goal of this survey is to identify and examine a statistical sample of large scale structures during an active period in the assembly history of the most massive clusters. With such a sample, we can begin to constrain large scale cluster dynamics and determine the effect of the larger environment on galaxy evolution.
NASA Astrophysics Data System (ADS)
Harris, B.; McDougall, K.; Barry, M.
2012-07-01
Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single-catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also provide a consistent tool for the creation and analysis of waterways over extensive areas. However, such analyses are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation in large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within it), including various data types, scales, qualities, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
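The delineation step that makes DEM scale matter is the assignment of flow directions. A minimal D8 sketch is below (illustrative only; production GIS tools add pit filling, flat-area resolution, and flow accumulation):

```python
import numpy as np

def d8_flow_directions(dem):
    """Assign each interior cell the D8 direction of steepest descent --
    the elementary step behind DEM-based waterway delineation."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    ny, nx = dem.shape
    direction = np.full((ny, nx), -1, dtype=int)   # -1 = pit or boundary
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            # Elevation drop per unit distance to each of the 8 neighbors.
            drops = [(dem[i, j] - dem[i + di, j + dj]) / np.hypot(di, dj)
                     for di, dj in offsets]
            best = int(np.argmax(drops))
            if drops[best] > 0:
                direction[i, j] = best
    return direction

# Toy DEM: a plane dipping toward the south-east, plus noise. Most cells
# should drain in direction index 7, i.e. toward cell (i+1, j+1).
rng = np.random.default_rng(0)
dem = -np.add.outer(np.arange(30.0), np.arange(30.0))
dem += rng.normal(0.0, 0.05, dem.shape)
d = d8_flow_directions(dem).ravel()
print(np.bincount(d[d >= 0], minlength=8))
```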
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping over large foreign areas or with large volumes of images. In this paper, considering the geometric features of optical satellite images and building on the Alternating Direction Method of Multipliers (ADMM), a widely used optimization method for constrained problems, together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery, GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-defect problem and to support qualitative and quantitative analysis of block adjustment without ground control. The test results show that the horizontal and vertical accuracies of multi-coverage and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem in adjacent areas of large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments with GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
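For reference, the scaled-form ADMM iteration for a problem \(\min_{x,z} f(x)+g(z)\) subject to \(Ax+Bz=c\) reads

\[ x^{k+1} = \arg\min_x\; f(x) + \tfrac{\rho}{2}\left\|Ax + Bz^k - c + u^k\right\|_2^2, \]
\[ z^{k+1} = \arg\min_z\; g(z) + \tfrac{\rho}{2}\left\|Ax^{k+1} + Bz - c + u^k\right\|_2^2, \]
\[ u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c. \]

How the RFM block-adjustment unknowns map onto \(x\), \(z\) and the consensus constraint is specific to the paper, but each x-update decomposes by image block, which is what makes the scheme easy to parallelize.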
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of applications in computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies on the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks.
Nadarajah, Nandakumaran; Khodabandeh, Amir; Wang, Kan; Choudhury, Mazher; Teunissen, Peter J G
2018-04-03
Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.
NASA Astrophysics Data System (ADS)
Shvarts, Dov
2017-10-01
Hydrodynamic instabilities, and the mixing that they cause, are of crucial importance in describing many phenomena, from very large scales, such as stellar explosions (supernovae), to very small scales, such as inertial confinement fusion (ICF) implosions. Such mixing causes the ejection of stellar core material in supernovae and impedes attempts at ICF ignition. The Rayleigh-Taylor instability (RTI) occurs at an accelerated interface between two fluids when the lower-density fluid accelerates the higher-density fluid. The Richtmyer-Meshkov (RM) instability occurs when a shock wave passes an interface between two fluids of different density. In the RTI, buoyancy causes ``bubbles'' of the light fluid to rise through (penetrate) the denser fluid, while ``spikes'' of the heavy fluid sink through (penetrate) the lighter fluid. With realistic multi-mode initial conditions, in the deep nonlinear regime, the mixing zone width, h, and its internal structure progress through an inverse cascade of spatial scales, reaching an asymptotic self-similar evolution: h_RT = α_RT A g t^2 for RT and h_RM = α_RM t^θ for RM, where A is the Atwood number and g the acceleration. While this characteristic behavior has been known for years, the self-similar parameters α_RT and θ_RM and their dependence on dimensionality and density ratio have continued to be intensively studied, and a relatively wide distribution of their values has emerged. This talk will describe recent theoretical advances in the description of this turbulent mixing evolution that shed light on the spread in α_RT and θ_RM. Results of new, specially designed experiments, performed recently by scientists from several laboratories using NIF, the only facility powerful enough to reach the self-similar regime, to quantitatively test this theoretical advance, will also be presented.
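As a concrete reading of the growth laws quoted above, the following sketch evaluates h_RT = α_RT A g t^2 and h_RM = α_RM t^θ for a toy density pair; the α and θ values are illustrative placeholders, since the abstract's point is precisely that their measured values vary.

```python
# Toy evaluation of the self-similar growth laws quoted above; alpha and theta
# values are illustrative placeholders, not measured results.
def atwood(rho_heavy, rho_light):
    # Atwood number A = (rho2 - rho1) / (rho2 + rho1)
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def h_rt(t, A, g, alpha_rt=0.05):
    return alpha_rt * A * g * t ** 2             # Rayleigh-Taylor mixing width

def h_rm(t, alpha_rm=1.0, theta=0.25):
    return alpha_rm * t ** theta                 # Richtmyer-Meshkov mixing width

A = atwood(3.0, 1.0)                             # density ratio 3:1 -> A = 0.5
for t in (0.5, 1.0, 2.0):
    print(f"t={t}: h_RT={h_rt(t, A, 9.81):.3f}  h_RM={h_rm(t):.3f}")
```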
Wojcik, Roza; Webb, Ian K.; Deng, Liulin; ...
2017-01-18
Understanding the biological mechanisms related to lipids and glycolipids is challenging due to the vast number of possible isomers. Mass spectrometry (MS) measurements are currently the dominant approach for studying and providing detailed information on lipid and glycolipid structures. However, difficulties in distinguishing many structural isomers (e.g., distinct acyl chain positions, double bond locations, as well as glycan isomers) inhibit the understanding of their biological roles. Here we utilized ultra-high resolution ion mobility spectrometry (IMS) separations based upon the use of traveling waves in serpentine, long-path-length, multi-pass Structures for Lossless Ion Manipulations (SLIM) to enhance isomer resolution. The multi-pass arrangement allowed separations ranging from ~16 m (1 pass) to ~470 m (32 passes) to be investigated for the distinction of lipids and glycolipids with extremely small structural differences. These ultra-high resolution SLIM IMS-MS analyses provide a foundation for exploring and better understanding isomer-specific biological and disease processes.
2017-01-01
In the present work, an aluminum metal matrix reinforced with Al2O3 nanoparticles was fabricated as a surface composite sheet using friction stir processing (FSP). The effects of processing parameters on mechanical properties, hardness, and grain microstructure were investigated. The results revealed that multi-pass FSP causes a homogeneous distribution and good dispersion of Al2O3 in the metal matrix, and consequently an increase in the hardness of the matrix composites. A finer grain structure is observed in the microstructure examination of specimens subjected to second and third passes of FSP. The improvement in grain refinement is 80% compared to the base metal. The processing parameters, particularly rotational tool speed and pass number in FSP, have a major effect on strength properties and surface hardness. The ultimate tensile strength (UTS) and the average hardness are improved by 25% and 46%, respectively, due to the presence of the reinforcing Al2O3 nanoparticles. PMID:28885575
NASA Astrophysics Data System (ADS)
Qin, Fangcheng; Li, Yongtang; Qi, Huiping; Lv, Zhenhua
2016-11-01
Isothermal and non-isothermal multi-pass compression tests of centrifugally cast 42CrMo steel were conducted on a Gleeble-3500 thermal simulation machine. The effects of compression passes and finishing temperatures on deformation behavior and microstructure evolution were investigated. It is found that the microstructure is homogeneous with equiaxed grains and that the flow stress does not change significantly with an increasing number of passes, while the peak softening coefficient first increases and then decreases between passes. Moreover, controlled temperature and accumulated static recrystallization are found to be the dominant mechanisms for grain refinement and its homogeneous distribution after five-pass deformation. As the finishing temperature increases, the flow stress decreases gradually, but dynamic recrystallization accelerates and the softening effect increases, resulting in a larger grain size and a homogeneous microstructure. The microhardness decreases sharply because sufficient softening occurs in the microstructure. When the finishing temperature is 890 °C, carbide particles precipitate in the vicinity of the grain boundaries, inhibiting dislocation motion. Thus, higher finishing temperatures (≥970 °C) should be avoided in the non-isothermal multi-pass deformation of centrifugally cast 42CrMo alloy, which is beneficial to grain refinement and property improvement.
NASA Astrophysics Data System (ADS)
Fischer, Andreas; Keller, Denise; Liniger, Mark; Rajczak, Jan; Schär, Christoph; Appenzeller, Christof
2014-05-01
Fundamental changes in the hydrological cycle are expected in a future warmer climate. This is of particular relevance for the Alpine region, as a source and reservoir of several major rivers in Europe and an area prone to extreme events such as floods. For this region, climate change assessments based on the ENSEMBLES regional climate models (RCMs) project a significant decrease in summer mean precipitation under the A1B emission scenario by the mid-to-end of this century, while winter mean precipitation is expected to rise slightly. From an impact perspective, however, projected changes in seasonal means are often insufficient to adequately address the multifaceted challenges of climate change adaptation. In this study, we revisit the full matrix of the ENSEMBLES RCM projections regarding changes in frequency and intensity, precipitation type (convective versus stratiform) and temporal structure (wet/dry spells and transition probabilities) over Switzerland and its surroundings. As proxies for rain-type changes, we rely on the models' parameterized convective and large-scale precipitation components. Part of the analysis involves a Bayesian multi-model combination algorithm to infer changes from the multi-model ensemble. The analysis suggests a summer drying that evolves in an altitude-specific manner: over low-land regions it is associated with wet-day frequency decreases of convective and large-scale precipitation, while over elevated regions it is primarily associated with a decline in large-scale precipitation only. As a consequence, almost all the models project an increase in the convective fraction at elevated Alpine altitudes. The decrease in the number of wet days during summer is accompanied by decreases (increases) in multi-day wet (dry) spells. This shift in multi-day episodes also lowers the likelihood of short dry spell occurrence in all of the models. For spring and autumn the combined multi-model projections indicate higher mean precipitation intensity north of the Alps, while a similar tendency is expected for the winter season over most of Switzerland.
NASA Astrophysics Data System (ADS)
Hullo, J.-F.; Thibault, G.; Boucheny, C.
2015-02-01
In a context of increased maintenance operations and workers' generational renewal, a nuclear owner and operator like Electricité de France (EDF) is interested in scaling up tools and methods of "as-built virtual reality" for larger buildings and wider audiences. However, acquisition and sharing of as-built data on a large scale (large and complex multi-floored buildings) challenge current scientific and technical capacities. In this paper, we first present a state of the art of scanning tools and methods for industrial plants with very complex architecture. Then, we introduce the particular characteristics of the multi-sensor scanning and visualization of the interior of the most complex building of a power plant: a nuclear reactor building. We introduce several developments that made possible a first complete survey of such a large building, from acquisition to processing and fusion of multiple data sources (3D laser scans, total-station survey, RGB panoramics, 2D floor plans, 3D CAD as-built models). In addition, we present the concepts of a smart application developed for the painless exploration of the whole dataset. The goal of this application is to help professionals, unfamiliar with the manipulation of such datasets, to take into account the spatial constraints induced by the building complexity while preparing maintenance operations. Finally, we discuss the main feedback from this large experiment, the remaining issues for the generalization of such large-scale surveys, and the future technical and scientific challenges in the field of industrial "virtual reality".
USDA-ARS?s Scientific Manuscript database
NASA’s SMAP satellite, launched in November of 2014, produces estimates of average volumetric soil moisture at 3, 9, and 36-kilometer scales. The calibration and validation process of these estimates requires the generation of an identically-scaled soil moisture product from existing in-situ networ...
Strategies for Energy Efficient Resource Management of Hybrid Programming Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dong; Supinski, Bronis de; Schulz, Martin
2013-01-01
Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
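To make the configuration-space idea concrete, here is a hedged sketch of selecting a (concurrency, frequency) pair that minimizes predicted energy = predicted power × predicted time. The toy Amdahl-style time model and cubic-in-frequency power model are stand-in assumptions; the paper's actual predictors are statistical fits to measurements, not these formulas.

```python
# Hedged sketch: pick the (threads, frequency) configuration minimizing
# predicted energy = predicted power * predicted time. Both predictors below
# are toy stand-ins (Amdahl-style time, cubic-in-frequency dynamic power).
def predicted_time(threads, freq_ghz, work=100.0, serial_frac=0.1):
    return (serial_frac + (1.0 - serial_frac) / threads) * work / freq_ghz

def predicted_power(threads, freq_ghz, p_static=20.0, p_core=5.0):
    return p_static + threads * p_core * freq_ghz ** 3

def best_config(thread_opts, freq_opts):
    configs = [(t, f) for t in thread_opts for f in freq_opts]
    return min(configs, key=lambda c: predicted_power(*c) * predicted_time(*c))

threads, freq = best_config([1, 2, 4, 8, 16], [1.2, 1.6, 2.0, 2.4])
print(f"lowest predicted energy at {threads} threads, {freq} GHz")
```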
Design and Optimization of Multi-Pixel Transition-Edge Sensors for X-Ray Astronomy Applications
NASA Technical Reports Server (NTRS)
Smith, Stephen J.; Adams, Joseph S.; Bandler, Simon R.; Chervenak, James A.; Datesman, Aaron Michael; Eckart, Megan E.; Ewin, Audrey J.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.;
2017-01-01
Multi-pixel transition-edge sensors (TESs), commonly referred to as 'hydras', are a type of position sensitive micro-calorimeter that enables very large format arrays to be designed without commensurate increase in the number of readout channels and associated wiring. In the hydra design, a single TES is coupled to discrete absorbers via varied thermal links. The links act as low pass thermal filters that are tuned to give a different characteristic pulse shape for x-ray photons absorbed in each of the hydra sub pixels. In this contribution we report on the experimental results from hydras consisting of up to 20 pixels per TES. We discuss the design trade-offs between energy resolution, position discrimination and number of pixels and investigate future design optimizations specifically targeted at meeting the readout technology considered for Lynx.
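The position-discrimination principle described above — each thermal link low-pass filters the pulse differently, so each sub-pixel yields a distinct pulse shape — can be illustrated with a small template-matching sketch. The pulse model, time constants, and noise level below are hypothetical, not a model of the actual devices.

```python
# Illustrative position discrimination for a 'hydra': each absorber's thermal
# link low-pass filters the pulse differently, so events are assigned to a
# sub-pixel by least-squares template matching. All constants are hypothetical.
import numpy as np

t = np.linspace(0.0, 5e-3, 500)                  # 5 ms record, arbitrary units

def pulse(rise, fall=2e-3):
    p = np.exp(-t / fall) - np.exp(-t / rise)    # two-exponential toy pulse
    return p / p.max()

rises = [20e-6, 60e-6, 180e-6, 540e-6]           # one rise time per sub-pixel
templates = np.array([pulse(r) for r in rises])

def classify(record):
    residuals = []
    for tmpl in templates:
        amp = (record @ tmpl) / (tmpl @ tmpl)    # best-fit amplitude
        residuals.append(np.sum((record - amp * tmpl) ** 2))
    return int(np.argmin(residuals))             # sub-pixel with smallest residual

rng = np.random.default_rng(1)
event = 0.8 * pulse(180e-6) + rng.normal(0.0, 0.01, t.size)
print(classify(event))                           # -> 2 (third sub-pixel)
```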
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1993-01-01
A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
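A minimal sketch of the coarse-coding idea may help: the large input field is represented by several mutually offset low-resolution grids, so that the joint pattern of coarse activations preserves finer position information than any single grid. Field size, block size, and offsets below are illustrative assumptions, not the paper's configuration.

```python
# Sketch: represent a large binary field by several mutually offset coarse
# grids; jointly they retain finer position information than any single grid.
import numpy as np

def coarse_code(image, block=16, offsets=((0, 0), (8, 0), (0, 8), (8, 8))):
    fields = []
    for dy, dx in offsets:
        shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
        h, w = shifted.shape
        cropped = shifted[:h - h % block, :w - w % block]
        ch, cw = cropped.shape
        # pool each block x block patch down to one coarse cell
        coarse = cropped.reshape(ch // block, block,
                                 cw // block, block).max(axis=(1, 3))
        fields.append(coarse)
    return fields

img = np.zeros((256, 256), dtype=np.uint8)
img[100:140, 60:90] = 1                          # a toy 'object'
for f in coarse_code(img):
    print(f.shape, int(f.sum()))                 # four 16x16 coarse views
```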
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Xingye; Hu, Bin; Wei, Changdong
Lanthanum zirconate (La2Zr2O7) is a promising candidate material for thermal barrier coating (TBC) applications due to its low thermal conductivity and high-temperature phase stability. In this work, a novel image-based multi-scale simulation framework combining molecular dynamics (MD) and finite element (FE) calculations is proposed to study the thermal conductivity of La2Zr2O7 coatings. Since there is no experimental data for single-crystal La2Zr2O7 thermal conductivity, a reverse non-equilibrium molecular dynamics (reverse NEMD) approach is first employed to compute the temperature-dependent thermal conductivity of single-crystal La2Zr2O7. The single-crystal data is then passed to a FE model which takes into account realistic thermal barrier coating microstructures. The predicted thermal conductivities from the FE model are in good agreement with experimental validations using both the flash laser technique and pulsed thermal imaging-multilayer analysis. The framework proposed in this work provides a powerful tool for the future design of advanced coating systems.
Multi-scale approaches for high-speed imaging and analysis of large neural populations
Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam
2017-01-01
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
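The local-averaging decimation described above is simple to state precisely. The sketch below spatially bins a movie of shape (T, H, W) by k × k averaging and temporally by averaging every m frames; the factors and shapes are illustrative.

```python
# Local-averaging decimation of a movie (T, H, W): k x k spatial binning and
# m-frame temporal binning; shapes are cropped to multiples of the factors.
import numpy as np

def decimate(movie, k=4, m=2):
    T, H, W = movie.shape
    movie = movie[:T - T % m, :H - H % k, :W - W % k]
    t, h, w = movie.shape
    return movie.reshape(t // m, m, h // k, k, w // k, k).mean(axis=(1, 3, 5))

raw = np.random.rand(50, 256, 256).astype(np.float32)
small = decimate(raw)
print(raw.shape, "->", small.shape)              # (50, 256, 256) -> (25, 64, 64)
```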
Tracing Multi-Scale Climate Change at Low Latitude from Glacier Shrinkage
NASA Astrophysics Data System (ADS)
Moelg, T.; Cullen, N. J.; Hardy, D. R.; Kaser, G.
2009-12-01
Significant shrinkage of glaciers on top of Africa's highest mountain (Kilimanjaro, 5895 m a.s.l.) has been observed between the late 19th century and the present. Multi-year data from our automatic weather station on the largest remaining slope glacier at 5873 m allow us to force and verify a process-based distributed glacier mass balance model. This generates insights into energy and mass fluxes at the glacier-atmosphere interface, their feedbacks, and how they are linked to atmospheric conditions. By means of numerical atmospheric modeling and global climate model simulations, we explore the linkages of the local climate in Kilimanjaro's summit zone to larger-scale climate dynamics - which suggests a causal connection between Indian Ocean dynamics, mesoscale mountain circulation, and glacier mass balance. Based on this knowledge, the verified mass balance model is used for backward modeling of the steady-state glacier extent observed in the 19th century, which yields the characteristics of local climate change between that time and the present (30-45% less precipitation, 0.1-0.3 hPa less water vapor pressure, 2-4 percentage units less cloud cover at present). Our multi-scale approach provides an important contribution, from a cryospheric viewpoint, to the understanding of how large-scale climate change propagates to the tropical free troposphere. Ongoing work in this context targets the millennium-scale relation between large-scale climate and glacier behavior (by downscaling precipitation), and the possible effects of regional anthropogenic activities (land use change) on glacier mass balance.
NASA Astrophysics Data System (ADS)
Sacks, L. E.; Edgar, L. A.; Edwards, C. S.; Anderson, R. B.
2016-12-01
Images acquired by the Mars Hand Lens Imager (MAHLI) and the ChemCam Remote Micro Imager (RMI) onboard the Mars Science Laboratory (MSL) Curiosity rover provide grain-scale data that are critical for interpreting sedimentary deposits. At the location informally known as Marias Pass, Curiosity used both cameras to image the nine rock targets used in this study. We used manual point counts to measure grain size distributions from those images to compare the abilities of the two cameras. The manually derived results were compared to automated grain size data obtained using pyDGS (Digital Grain Size), an open-source Python program. Grain size analyses were used to test the lacustrine and aeolian depositional hypotheses for the Murray and Stimson formations at Marias Pass. Results indicate that the MAHLI and RMI instruments, despite their different fields of view and properties, provide comparable grain size measurements. Additionally, pyDGS does not account for grains smaller than a few pixels, and thus does not report representative grain size data and should not be used on images with a large fraction of unresolved grains. Finally, the data collected at Marias Pass are consistent with the existing interpretations of the Murray and Stimson formations. The fine-grained results of the Murray formation analyses support lacustrine deposition, while the mean grain size of the Stimson formation corresponds to fine- to medium-sized sand, consistent with aeolian deposition. However, directly above the contact with the Murray formation, larger rip-up clasts of the Murray formation are present in the Stimson formation. It is possible that water was involved at this stage of erosion and re-deposition, prior to aeolian deposition. Additionally, the grain-scale analyses conducted in this study show that the Dust Removal Tool on Curiosity should be used prior to capturing images for grain-scale analysis. Two images of the target informally named Ronan, taken before and after brushing, resulted in dramatically different grain size results, suggesting that the common, thin layer of dust obscured the true grain size distribution. These grain-scale analyses at Marias Pass have important implications for the collection and processing of image data, as well as the depositional environments recorded in Gale crater. Funded by NSF Grant AST-1461200.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, William Michael; Plimpton, Steven James; Wang, Peng
2010-03-01
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are required. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it has the following advantages: large observation range, variable view angle, long-term continuous observation, and extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmosphere change, large-scale ocean change, large-scale land surface dynamic change, and solid earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth science phenomena; sensor parameter optimization and methods of Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform; and a fundamental scientific framework for Moon-based Earth observation.
Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.
Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H
2011-06-06
We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.
NASA Astrophysics Data System (ADS)
Qin, Fangcheng; Li, Yongtang; Qi, Huiping; Ju, Li
2017-01-01
Research on compact manufacturing technology for the shape and performance controllability of metallic components can simplify the manufacturing process and improve its reliability while satisfying macro/micro-structure requirements. It is not only a key path toward improving performance, saving material and energy, and green manufacturing of components used in major equipment, but also a challenging subject at the frontiers of advanced plastic forming. Providing a novel horizon for the manufacturing of critical components is significant. Focusing on high-performance large-scale components such as bearing rings, flanges, railway wheels, and thick-walled pipes, the conventional processes and their state of development are summarized. The existing problems, including multi-pass heating, waste of material and energy, high cost and high emissions, are discussed, and it is pointed out that present approaches are unable to meet the demands of manufacturing high-quality components. Thus, new techniques related to casting-rolling compound precise forming of rings, compact manufacturing of duplex-metal composite rings, compact manufacturing of railway wheels, and casting-extruding continuous forming of thick-walled pipes are introduced in detail. The corresponding research contents, such as casting the ring blank, hot ring rolling, near-solid-state pressure forming, and hot extruding, are elaborated. Some findings on through-thickness microstructure evolution and mechanical properties are also presented. The components produced by the new techniques are mainly characterized by fine and homogeneous grains. Moreover, possible directions for further development of these techniques are suggested. Finally, the key scientific problems are proposed. All of these results and conclusions have reference value and guiding significance for the integrated control of shape and performance in advanced compact manufacturing.
Multi-service small-cell cloud wired/wireless access network based on tunable optical frequency comb
NASA Astrophysics Data System (ADS)
Xiang, Yu; Zhou, Kun; Yang, Liu; Pan, Lei; Liao, Zhen-wan; Zhang, Qiang
2015-11-01
In this paper, we demonstrate a novel multi-service wired/wireless integrated access architecture of cloud radio access network (C-RAN) based on a radio-over-fiber passive optical network (RoF-PON) system, which utilizes scalable multiple-frequency millimeter-wave (MF-MMW) generation based on a tunable optical frequency comb (TOFC). In the baseband unit (BBU) pool, the generated optical comb lines are modulated into wired, RoF and WiFi/WiMAX signals, respectively. The multi-frequency RoF signals are generated by beating the optical comb line pairs in the small cell. The WiFi/WiMAX signals are demodulated after passing through the band pass filter (BPF) and band stop filter (BSF), respectively, whereas the wired signal can be received directly. The feasibility and scalability of the proposed multi-service wired/wireless integrated C-RAN are confirmed by the simulations.
Accurate Chemical Master Equation Solution Using Multi-Finite Buffers
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
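To illustrate the kind of problem the ACME method solves, here is a toy direct steady-state solution of a truncated dCME for a single birth-death process; a finite buffer of N states plays the role of the paper's multi-finite buffers, and the probability mass at the buffer boundary is a crude stand-in for its quantified truncation error. Rates and buffer size are illustrative.

```python
# Toy truncated dCME: a birth-death process (birth rate b, death rate d*n) on
# states 0..N. The steady state solves pi Q = 0 with sum(pi) = 1; mass at the
# buffer boundary indicates whether the truncation at N is adequate.
import numpy as np

def steady_state(b=10.0, d=1.0, N=60):
    Q = np.zeros((N + 1, N + 1))                 # generator matrix
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = b                      # birth: n -> n + 1
        if n > 0:
            Q[n, n - 1] = d * n                  # death: n -> n - 1
        Q[n, n] = -Q[n].sum()
    A = np.vstack([Q.T, np.ones(N + 1)])         # append normalization row
    rhs = np.zeros(N + 2)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pi

pi = steady_state()
print(pi.argmax(), pi[-1])   # peak near b/d = 10; tiny boundary mass -> N ok
```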
Corominas-Faja, Bruna; Santangelo, Elvira; Cuyàs, Elisabet; Micol, Vicente; Joven, Jorge; Ariza, Xavier; Segura-Carretero, Antonio; García, Jordi; Menendez, Javier A
2014-09-01
Aging is associated with common conditions, including cancer, diabetes, cardiovascular disease, and Alzheimer's disease. The type of multi-targeted pharmacological approach necessary to address a complex multifaceted disease such as aging might take advantage of pleiotropic natural polyphenols affecting a wide variety of biological processes. We have recently postulated that the secoiridoids oleuropein aglycone (OA) and decarboxymethyl oleuropein aglycone (DOA), two complex polyphenols present in health-promoting extra virgin olive oil (EVOO), might constitute a new family of plant-produced gerosuppressant agents. This paper describes an analysis of the biological activity spectra (BAS) of OA and DOA using PASS (Prediction of Activity Spectra for Substances) software. PASS can predict thousands of biological activities, as the BAS of a compound is an intrinsic property that is largely dependent on the compound's structure and reflects pharmacological effects, physiological and biochemical mechanisms of action, and specific toxicities. Using Pharmaexpert, a tool that analyzes the PASS-predicted BAS of substances based on thousands of "mechanism-effect" and "effect-mechanism" relationships, we illuminate hypothesis-generating pharmacological effects, mechanisms of action, and targets that might underlie the anti-aging/anti-cancer activities of the gerosuppressant EVOO oleuropeins.
Large-scale standardized phenotyping of strawberry in RosBREED
USDA-ARS?s Scientific Manuscript database
A large, multi-institutional, international, research project with the goal of bringing genomicists and plant breeders together was funded by USDA-NIFA Specialty Crop Research Initiative. Apple, cherry, peach, and strawberry are the Rosaceous crops included in the project. Many (900+) strawberry g...
Mechanical Properties of Friction Stir Welds in Al 2195-T8
NASA Technical Reports Server (NTRS)
Kinchen, David G.; Li, Zhixian; Adams, Glynn P.
1999-01-01
An extensive study of the mechanical properties of friction stir welded Al-Li 2195 has been conducted by Lockheed Martin Michoud Space Systems under contract to NASA. The study was part of a development program in which weld parameters were defined for using FSW to assemble large-scale aluminum cryogenic tanks. In excess of 300 feet of 0.320 in. gage plate material was welded and tested. The tests included room temperature and cryogenic temperature tensile tests and surface crack tension (SCT) tests, nondestructive evaluation, metallurgical studies, and photostress analysis. The results of the testing demonstrated improved mechanical properties with FSW as compared to typical fusion welding processes. Increases in ultimate tensile strength, cryogenic enhancement and elongation were observed in the tensile test results. Increased fracture toughness was observed in the SCT results. Nondestructive evaluations were conducted on all welded joints. No volumetric defects were indicated. Surface indications on the root side of the welds did not significantly affect weld strength. The results of the nondestructive evaluations were confirmed via metallurgical studies. Photostress analysis revealed strain concentrations in multi-pass and heat-repaired FSWs. Details of the tests and results are presented.
The topology of large-scale structure. VI - Slices of the universe
NASA Astrophysics Data System (ADS)
Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.
1992-03-01
Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cold dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology like those of observed slices.
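The genus statistic behind these curves can be illustrated in two dimensions, where the Euler characteristic of an excursion set is (number of connected regions) minus (number of holes). The sketch below is a didactic 2D stand-in for the paper's 3D genus measurement, applied to a synthetic smoothed Gaussian field; the hole-counting convention is a simplification.

```python
# Didactic 2D stand-in for the genus measurement: Euler characteristic of an
# excursion set = (connected regions) - (holes). Synthetic field, simplified
# hole counting (background components not touching the border).
import numpy as np
from scipy import ndimage

def euler_2d(binary):
    _, n_regions = ndimage.label(binary)
    bg, n_bg = ndimage.label(~binary)
    border_labels = set(bg[0]) | set(bg[-1]) | set(bg[:, 0]) | set(bg[:, -1])
    n_holes = n_bg - len(border_labels - {0})
    return n_regions - n_holes

rng = np.random.default_rng(2)
field = ndimage.gaussian_filter(rng.normal(size=(128, 128)), sigma=4.0)
for nu in (-1.0, 0.0, 1.0):                      # thresholds in units of sigma
    print(nu, euler_2d(field > nu * field.std()))
```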
Deeply etched MMI-based components on 4 μm thick SOI for SOA-based optical RAM cell circuits
NASA Astrophysics Data System (ADS)
Cherchi, Matteo; Ylinen, Sami; Harjanne, Mikko; Kapulainen, Markku; Aalto, Timo; Kanellos, George T.; Fitsios, Dimitrios; Pleros, Nikos
2013-02-01
We present novel deeply etched functional components, fabricated by multi-step patterning within our 4 μm thick Silicon-on-Insulator (SOI) platform based on single-mode rib waveguides and on the previously developed rib-to-strip converter. These novel components include Multi-Mode Interference (MMI) splitters with any desired splitting ratio, wavelength-sensitive 50/50 splitters with pre-filtering capability, multi-stage Mach-Zehnder Interferometer (MZI) filters for suppression of Amplified Spontaneous Emission (ASE), and MMI resonator filters. These novel building blocks enable functionalities otherwise not achievable on our SOI platform and make it possible to integrate optical RAM cell layouts by resorting to our technology for hybrid integration of Semiconductor Optical Amplifiers (SOAs). Typical SOA-based RAM cell layouts require generic splitting ratios, which are not readily achievable by a single MMI splitter. We present here a novel solution to this problem, which is very compact and versatile and suits our technology perfectly. Another useful functional element when using SOAs is the pass-band filter to suppress ASE. We pursued two complementary approaches: a suitable interleaved cascaded MZI filter, based on a novel suitably designed MMI coupler with pre-filtering capabilities, and a completely novel MMI resonator concept, to achieve larger free spectral ranges and narrower pass-band response. Simulation and design principles are presented and compared to preliminary experimental functional results, together with scaling rules and predictions of achievable RAM cell densities. When combined with our newly developed ultra-small light-turning concept, these new components are expected to pave the way for high integration density of RAM cells.
Multi-resource and multi-scale approaches for meeting the challenge of managing multiple species
Frank R. Thompson; Deborah M. Finch; John R. Probst; Glen D. Gaines; David S. Dobkin
1999-01-01
The large number of Neotropical migratory bird (NTMB) species and their diverse habitat requirements create conflicts and difficulties for land managers and conservationists. We provide examples of assessments or conservation efforts that attempt to address the problem of managing for multiple NTMB species. We advocate approaches at a variety of spatial and geographic...
Optical interconnect for large-scale systems
NASA Astrophysics Data System (ADS)
Dress, William
2013-02-01
This paper presents a switchless optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module, where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.
A Systematic Multi-Time Scale Solution for Regional Power Grid Operation
NASA Astrophysics Data System (ADS)
Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.
2017-10-01
Many aspects need to be taken into consideration in a regional grid when making scheduling plans. In this paper, a systematic multi-time-scale solution for regional power grid operation considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. On the time-scale axis, we discuss the problem from monthly, weekly, day-ahead and within-day to day-behind scheduling, and the system also covers multiple generator types, including thermal units, hydro plants, wind turbines and pumped-storage stations. The nine subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been deployed in a provincial power grid in Central China, and the operational results further verify the effectiveness of the system.
2010-09-01
[Fragmentary record: figure-list and body-text residue concerning single-pass friction stir processed NAB — grain structure refinement inside the stir zone (SZ) compared to the TMAZ, and metallographic surface preparation by grinding, polishing, and electropolishing using a Buehler ECOMET 4 Variable Speed Grinder.]
Profitability and sustainability of small - medium scale palm biodiesel plant
NASA Astrophysics Data System (ADS)
Solikhah, Maharani Dewi; Kismanto, Agus; Raksodewanto, Agus; Peryoga, Yoga
2017-06-01
The mandatory application of biodiesel at 20% blending (B20) has been in force since January 2016, creating a huge market for the biodiesel industry. Building a large-scale biodiesel plant (> 100,000 tons/year) is most favorable for biodiesel producers since it gives a lower production cost, which poses a challenge for small- to medium-scale biodiesel plants. However, current biodiesel plants in Indonesia are located mainly in Java and Sumatra, which then distribute biodiesel around Indonesia, so there is an additional cost for transportation from area to area. This factor becomes an opportunity for small- to medium-scale biodiesel plants to compete with the large ones. This paper discusses the profitability of small- to medium-scale biodiesel plants for a capacity of 50 tons/day using CPO and its derivatives. The study was conducted by performing economic analysis of scenarios for a biodiesel plant using raw materials of stearin, PFAD, and multi-feedstock. The feasibility of the scenarios was also compared with respect to the effects of transportation cost and selling price. The economic assessment shows that profitability is highly affected by the raw material price, so it is important to secure the source of raw materials and to consider a multi-feedstock design for small- to medium-scale biodiesel plants to become sustainable. It was concluded that small- to medium-scale biodiesel plants will be profitable and sustainable if they are connected to a palm oil mill, have a captive market, and are located at least 200 km from other biodiesel plants. The use of multi-feedstock could increase the IRR from 18.68% to 56.52%.
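Since the conclusions turn on IRR, a minimal sketch of the calculation may be useful: the IRR is the discount rate at which the net present value (NPV) of the cash-flow stream crosses zero, found here by bisection. The cash-flow figures are made-up placeholders, not the study's data.

```python
# IRR by bisection on the NPV of a yearly cash-flow stream; the figures below
# are made-up placeholders for a plant investment and net operating cash flows.
def npv(rate, cashflows):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    # assumes NPV is positive at lo, negative at hi, and crosses zero once
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cashflows) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

flows = [-5_000_000] + [1_200_000] * 10          # year 0 outlay, 10 years income
print(f"IRR = {irr(flows):.2%}")
```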
A correlation between the cosmic microwave background and large-scale structure in the Universe.
Boughn, Stephen; Crittenden, Robert
2004-01-01
Observations of distant supernovae and the fluctuations in the cosmic microwave background (CMB) indicate that the expansion of the Universe may be accelerating under the action of a 'cosmological constant' or some other form of 'dark energy'. This dark energy now appears to dominate the Universe and not only alters its expansion rate, but also affects the evolution of fluctuations in the density of matter, slowing down the gravitational collapse of material (into, for example, clusters of galaxies) in recent times. Additional fluctuations in the temperature of CMB photons are induced as they pass through large-scale structures and these fluctuations are necessarily correlated with the distribution of relatively nearby matter. Here we report the detection of correlations between recent CMB data and two probes of large-scale structure: the X-ray background and the distribution of radio galaxies. These correlations are consistent with those predicted by dark energy, indicating that we are seeing the imprint of dark energy on the growth of structure in the Universe.
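A pixel-space sketch of the correlation measurement may clarify the method: mask both maps to the common sky, remove their means, and form a normalized cross-correlation. The synthetic maps below stand in for the CMB and large-scale-structure data; a real analysis would work with HEALPix maps, angular binning, and covariance estimates from simulations.

```python
# Pixel-space cross-correlation of two masked maps: mask, remove means, and
# form a normalized correlation coefficient. Maps below are synthetic.
import numpy as np

def cross_corr(map_a, map_b, mask):
    good = mask.astype(bool)
    da = map_a[good] - map_a[good].mean()
    db = map_b[good] - map_b[good].mean()
    return (da * db).mean() / (da.std() * db.std())

rng = np.random.default_rng(3)
common = rng.normal(size=10_000)                 # shared 'ISW-like' signal
cmb = common + 3.0 * rng.normal(size=10_000)     # noisy temperature map
lss = common + 3.0 * rng.normal(size=10_000)     # noisy tracer map
mask = rng.random(10_000) > 0.2                  # ~80% usable sky
print(cross_corr(cmb, lss, mask))                # ~0.1: correlated signals
```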
Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling
USDA-ARS?s Scientific Manuscript database
We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...
Use of large-scale, multi-species surveys to monitor gyrfalcon and ptarmigan populations
Bart, Jonathan; Fuller, Mark; Smith, Paul; Dunn, Leah; Watson, Richard T.; Cade, Tom J.; Fuller, Mark; Hunt, Grainger; Potapov, Eugene
2011-01-01
We evaluated the ability of three large-scale, multi-species surveys in the Arctic to provide information on abundance and habitat relationships of Gyrfalcons (Falco rusticolus) and ptarmigan. The Program for Regional and International Shorebird Monitoring (PRISM) has surveyed birds widely across the arctic regions of Canada and Alaska since 2001. The Arctic Coastal Plain survey has collected abundance information on the North Slope of Alaska using fixed-wing aircraft since 1992. The Northwest Territories-Nunavut Bird Checklist has collected presence-absence information from little-known locations in northern Canada since 1995. All three surveys provide extensive information on Willow Ptarmigan (Lagopus lagopus) and Rock Ptarmigan (L. muta). For example, they show that ptarmigan are most abundant in western Alaska, next most abundant in northern Alaska and northwest Canada, and least abundant in the Canadian Archipelago. PRISM surveys were less successful in detecting Gyrfalcons, and the Arctic Coastal Plain Survey is largely outside the Gyrfalcon's breeding range. The Checklist Survey, however, reflects the expansive Gyrfalcon range in Canada. We suggest that collaboration by Gyrfalcon and ptarmigan biologists with the organizers of large scale surveys like the ones we investigated provides an opportunity for obtaining useful information on these species and their environment across large areas.
Evidence for the interaction of large scale magnetic structures in solar flares
NASA Technical Reports Server (NTRS)
Mandrini, C. H.; Demoulin, P.; Henoux, J. C.; Machado, M. E.
1991-01-01
By modeling the observed vertical magnetic field of active region AR 2372 with the potential field of an ensemble of magnetic dipoles, the likely locations of the separatrices, the surfaces that separate cells of different field-line connectivity, and of the separator, the intersection of the separatrices, are derived. Four of the five off-band H-alpha kernels of a flare that occurred less than 20 minutes before the magnetogram was obtained are shown to lie near or at the separatrices. These H-alpha kernels are connected by field lines that pass near the separator. This indicates that the flare may have resulted from the interaction, in the separator region, of large-scale magnetic structures.
GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations
Cardall, Christian Y.; Budiardja, Reuben D.
2015-06-11
Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background: The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, upon it, how to solve the heavy time consumption issue in simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4-5 is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time steps are a practical solution in a cellular automata environment. PMID:15222901
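A minimal sketch of asynchronous adaptive stepping, under toy assumptions: each 'cell' integrates its own dynamics with its own time step, scheduled through an event queue, so slowly changing cells are touched rarely — the source of the reported speedup. The per-cell ODE, error control, and step policy below are illustrative, not the paper's system.

```python
# Asynchronous per-cell stepping via an event queue: each cell advances its own
# toy ODE dx/dt = -k x with forward Euler and its own dt, grown or shrunk from
# a step-doubling error estimate. A real scheme would also reject bad steps
# and clip the final step at t_end.
import heapq

def euler(x, k, dt):
    return x - k * x * dt

def simulate(ks, t_end=10.0, tol=1e-4):
    cells = [1.0] * len(ks)
    queue = [(0.0, i, 0.01) for i in range(len(ks))]   # (time, cell, dt)
    heapq.heapify(queue)
    while queue:
        t, i, dt = heapq.heappop(queue)
        if t >= t_end:
            continue                              # this cell is finished
        full = euler(cells[i], ks[i], dt)
        half = euler(euler(cells[i], ks[i], dt / 2), ks[i], dt / 2)
        err = abs(full - half)                    # local error estimate
        dt_new = dt * (2.0 if err < 0.1 * tol else 0.5 if err > tol else 1.0)
        cells[i] = half
        heapq.heappush(queue, (t + dt, i, dt_new))
    return cells

print(simulate([0.1, 1.0, 10.0]))                 # slow cells take few, big steps
```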
Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart
Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin
2015-01-01
Background Despite modern pharmacotherapy and advanced implantable cardiac devices, overall prognosis and quality of life of HF patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546
NASA Astrophysics Data System (ADS)
Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca
2018-06-01
We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a-priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there must be room to improve the SGS modelling to further extend the inertial range properties at any fixed LES resolution.
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establishing a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-21
Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not replaced analytical methods up to now due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) using an experimentally validated multi-purpose MC code (TOPAS) and on comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients) and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC resulted in a systematic underestimation of target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for the 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, calculation time for a single beam for a typical head and neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms with accuracy comparable to state-of-the-art CPU-based MC codes.
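For orientation, a gamma analysis of the kind reported above can be sketched as a brute-force search; the global dose normalization, the 10%-of-mean-dose scoring cutoff and the finite search window below are common simplifications, not the paper's exact implementation:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dose_tol=0.01, dist_tol=1.0,
                    low_cut=0.10, search_vox=3):
    """Simplified global gamma analysis (e.g. 1%/1 mm) on 3D dose grids.

    ref, ev: reference and evaluated dose arrays; spacing: voxel size
    in mm. Only voxels above low_cut * mean dose are scored. np.roll
    wraps at the edges, acceptable for interior-dominated dose grids.
    """
    norm = dose_tol * ref.max()                 # global dose criterion
    mask = ref > low_cut * ref.mean()
    gamma2 = np.full(ref.shape, np.inf)
    r = search_vox
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                step = np.array([dz, dy, dx]) * spacing
                shifted = np.roll(ev, (dz, dy, dx), axis=(0, 1, 2))
                g2 = ((shifted - ref) / norm) ** 2 + (step @ step) / dist_tol ** 2
                gamma2 = np.minimum(gamma2, g2)   # min over search positions
    return np.mean(np.sqrt(gamma2[mask]) <= 1.0)  # fraction of passing voxels
```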
Observations of increased tropical rainfall preceded by air passage over forests.
Spracklen, D V; Arnold, S R; Taylor, C M
2012-09-13
Vegetation affects precipitation patterns by mediating moisture, energy and trace-gas fluxes between the surface and atmosphere. When forests are replaced by pasture or crops, evapotranspiration of moisture from soil and vegetation is often diminished, leading to reduced atmospheric humidity and potentially suppressing precipitation. Climate models predict that large-scale tropical deforestation causes reduced regional precipitation, although the magnitude of the effect is model and resolution dependent. In contrast, observational studies have linked deforestation to increased precipitation locally but have been unable to explore the impact of large-scale deforestation. Here we use satellite remote-sensing data of tropical precipitation and vegetation, combined with simulated atmospheric transport patterns, to assess the pan-tropical effect of forests on tropical rainfall. We find that for more than 60 per cent of the tropical land surface (latitudes 30 degrees south to 30 degrees north), air that has passed over extensive vegetation in the preceding few days produces at least twice as much rain as air that has passed over little vegetation. We demonstrate that this empirical correlation is consistent with evapotranspiration maintaining atmospheric moisture in air that passes over extensive vegetation. We combine these empirical relationships with current trends of Amazonian deforestation to estimate reductions of 12 and 21 per cent in wet-season and dry-season precipitation respectively across the Amazon basin by 2050, due to less-efficient moisture recycling. Our observation-based results complement similar estimates from climate models, in which the physical mechanisms and feedbacks at work could be explored in more detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Marquez, Andres; Choudhury, Sutanay
2012-09-01
Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code’s data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
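For readers unfamiliar with the triad census itself, a brute-force serial version fits in a few lines; the parallel shared-memory versions discussed above partition this work across nodes and use far cheaper neighborhood-based counting. The 6-bit edge code below does not collapse isomorphic triples into the standard 16 triad classes; that mapping is omitted for brevity:

```python
from itertools import combinations

def triad_census(nodes, edges):
    """Brute-force directed triad census (illustrative sketch).

    edges: set of (u, v) directed pairs. Returns counts keyed by the
    6-bit edge configuration of each node triple.
    """
    census = {}
    for a, b, c in combinations(nodes, 3):
        pairs = [(a, b), (b, a), (a, c), (c, a), (b, c), (c, b)]
        code = sum(1 << i for i, p in enumerate(pairs) if p in edges)
        census[code] = census.get(code, 0) + 1
    return census

# Toy usage: a feed-forward triangle
print(triad_census([1, 2, 3], {(1, 2), (2, 3), (1, 3)}))
```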
The Maryland Large-Scale Integrated Neurocognitive Architecture
2008-03-01
Visual input enters the network through the lateral geniculate nucleus (LGN) and is passed forward through visual brain regions (V1, V2, and V4...). University of Maryland; sponsored by the Defense Advanced Research Projects Agency, DARPA Order No. V029. Approved for public release. [Truncated report excerpt; the remaining fragment is the standard disclaimer that the views are not necessarily those of DARPA or the U.S. Government.]
PETSc Users Manual Revision 3.7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, Satish; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
PETSc Users Manual Revision 3.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
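As a minimal orientation to what the toolkit looks like in use, here is a sketch using the petsc4py Python bindings (the manual itself documents the C/Fortran interfaces; this example and its option choices are ours). Run under mpiexec, the matrix rows are distributed across ranks and all communication goes through MPI:

```python
from petsc4py import PETSc

n = 100
A = PETSc.Mat().createAIJ([n, n], nnz=3)   # sparse tridiagonal 1D Laplacian
rstart, rend = A.getOwnershipRange()       # rows owned by this MPI rank
for i in range(rstart, rend):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()   # honors command-line options such as -ksp_type cg
ksp.solve(b, x)
PETSc.Sys.Print("residual norm:", ksp.getResidualNorm())
```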
Filtering analysis of a direct numerical simulation of the turbulent Rayleigh-Benard problem
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Hussaini, M. Y.; Zang, T. A.
1990-01-01
A filtering analysis of a turbulent flow was developed which provides details of the path of the kinetic energy of the flow from its creation via thermal production to its dissipation. A low-pass spatial filter is used to split the velocity and the temperature field into a filtered component (composed mainly of scales larger than a specific size, nominally the filter width) and a fluctuation component (scales smaller than a specific size). Variables derived from these fields can fall into one of the above two ranges or be composed of a mixture of scales dominated by scales near the specific size. The filter is used to split the kinetic energy equation into three equations corresponding to the three scale ranges described above. The data from a direct simulation of the Rayleigh-Benard problem for conditions where the flow is turbulent are used to calculate the individual terms in the three kinetic energy equations. This is done for a range of filter widths. These results are used to study the spatial location and the scale range of the thermal energy production, the cascading of kinetic energy, the diffusion of kinetic energy, and the energy dissipation. These results are used to evaluate two subgrid models typically used in large-eddy simulations of turbulence. Subgrid models attempt to model the energy below the filter width that is removed by a low-pass filter.
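The scale-splitting step can be sketched as follows, with a Gaussian kernel standing in for the paper's low-pass spatial filter; the mapping from nominal filter width to kernel sigma is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_field(u, filter_width, dx=1.0):
    """Split a field into filtered (large-scale) and fluctuation parts."""
    sigma = filter_width / (2.0 * dx)     # illustrative width-to-sigma mapping
    u_bar = gaussian_filter(u, sigma)     # scales larger than the filter width
    return u_bar, u - u_bar               # fluctuation = remainder

# e.g. per-unit-mass kinetic energies of the two scale ranges:
# u_bar, u_p = split_field(u, w)
# e_large, e_small = 0.5 * u_bar**2, 0.5 * u_p**2
```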
Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks
NASA Astrophysics Data System (ADS)
Audebert, Nicolas; Le Saux, Bertrand; Lefèvre, Sébastien
2018-06-01
In this work, we investigate various methods to deal with semantic labeling of very high resolution multi-modal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to deal with multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach to leverage both a large spatial context and the high resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover errors stemming from ambiguous data, while early fusion allows for better joint feature learning, but at the cost of higher sensitivity to missing data.
Viscous decay of nonlinear oscillations of a spherical bubble at large Reynolds number
NASA Astrophysics Data System (ADS)
Smith, W. R.; Wang, Q. X.
2017-08-01
The long-time viscous decay of large-amplitude bubble oscillations is considered in an incompressible Newtonian fluid, based on the Rayleigh-Plesset equation. At large Reynolds numbers, this is a multi-scaled problem with a short time scale associated with inertial oscillation and a long time scale associated with viscous damping. A multi-scaled perturbation method is thus employed to solve the problem. The leading-order analytical solution for the bubble radius history is obtained from the Rayleigh-Plesset equation in closed form, including both viscous and surface tension effects. Some important formulae are derived, including the average energy loss rate of the bubble system during each cycle of oscillation, an explicit formula for the dependence of the oscillation frequency on the energy, and an implicit formula for the amplitude envelope of the bubble radius as a function of the energy. Our theory shows that the energy of the bubble system and the frequency of oscillation do not change on the inertial time scale at leading order, the energy loss rate on the long viscous time scale being inversely proportional to the Reynolds number. These asymptotic predictions remain valid during each cycle of oscillation whether or not compressibility effects are significant. A systematic parametric analysis is carried out using the above formulae for the energy of the bubble system, the frequency of oscillation, and the minimum/maximum bubble radii in terms of the Reynolds number, the dimensionless initial pressure of the bubble gases, and the Weber number. Our results show that the frequency and the decay rate have substantial variations over the lifetime of a decaying oscillation. The results also reveal that large-amplitude bubble oscillations are very sensitive to small changes in the initial conditions through large changes in the phase shift.
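To see the slow viscous decay numerically, the Rayleigh-Plesset equation can be integrated directly; the sketch below uses illustrative water-like parameter values (all numbers are assumptions chosen only to exhibit the structure of the equation):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative water-like parameters (assumed values, SI units)
rho, mu, sigma = 1000.0, 1e-3, 0.072       # density, viscosity, surface tension
p_inf, p_g0, R0, kappa = 101325.0, 2e5, 1e-4, 1.4

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = p_g0 * (R0 / R) ** (3 * kappa)  # polytropic gas pressure
    Rddot = ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 5e-4), [R0, 0.0],
                rtol=1e-8, atol=1e-12, dense_output=True)
# sol.y[0] traces the oscillation whose slowly decaying envelope the
# paper characterizes asymptotically at large Reynolds number.
```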
Implementing TCP/IP and a socket interface as a server in a message-passing operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hipp, E.; Wiltzius, D.
1990-03-01
The UNICOS 4.3BSD network code and socket transport interface are the basis of an explicit network server for NLTSS, a message passing operating system on the Cray YMP. A BSD socket user library provides access to the network server using an RPC mechanism. The advantages of this server methodology are its modularity and extensibility to migrate to future protocol suites (e.g. OSI) and transport interfaces. In addition, the network server is implemented in an explicit multi-tasking environment to take advantage of the Cray YMP multi-processor platform. 19 refs., 5 figs.
Mortazavi, Forough; Mortazavi, Saideh S; Khosrorad, Razieh
2015-09-01
Procrastination is a common behavior which affects different aspects of life. The procrastination assessment scale-student (PASS) evaluates academic procrastination with respect to its frequency and reasons. The aims of the present study were to translate, culturally adapt, and validate the Farsi version of the PASS in a sample of Iranian medical students. In this cross-sectional study, the PASS was translated into Farsi through the forward-backward method, and its content validity was thereafter assessed by a panel of 10 experts. The Farsi version of the PASS was subsequently distributed among 423 medical students. The internal reliability of the PASS was assessed using Cronbach's alpha. An exploratory factor analysis (EFA) was conducted on 18 items and then 28 items of the scale to find new models. The construct validity of the scale was assessed using both EFA and confirmatory factor analysis. The predictive validity of the scale was evaluated by calculating the correlation between the academic procrastination scores and the students' average scores in the previous semester. The corresponding reliability of the first and second parts of the scale was 0.781 and 0.861. An EFA on 18 items of the scale found 4 factors which jointly explained 53.2% of the variance: the model was marginally acceptable (root mean square error of approximation [RMSEA] = 0.098, standardized root mean square residual [SRMR] = 0.076, χ²/df = 4.8, comparative fit index [CFI] = 0.83). An EFA on 28 items of the scale found 4 factors which altogether explained 42.62% of the variance: the model was acceptable (RMSEA = 0.07, SRMR = 0.07, χ²/df = 2.8, incremental fit index = 0.90, CFI = 0.90). There was a negative correlation between the procrastination scores and the students' average scores (r = -0.131, P = 0.02). The Farsi version of the PASS is a valid and reliable tool to measure academic procrastination in Iranian undergraduate medical students.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-25
The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software enabling large scale without sacrificing fidelity.
NASA Astrophysics Data System (ADS)
Lei, Y.; Treuhaft, R. N.; Siqueira, P.; Torbick, N.; Lucas, R.; Keller, M. M.; Schmidt, M.; Ducey, M. J.; Salas, W.
2017-12-01
Large-scale products of forest height and disturbance are essential for understanding the global carbon distribution as well as its changes in response to natural events and human activities. To address this scientific need, both NASA's GEDI and NASA-ISRO's NISAR are going to be launched in the 2018-2021 timeframe, in parallel with DLR's current TanDEM-X and/or the proposed TanDEM-L, which offers substantial potential for global ecosystem mapping. A new, simple and efficient method of forest height mapping has been developed for combining spaceborne repeat-pass InSAR and lidar missions (e.g. NISAR and GEDI); it estimates temporal decorrelation parameters of repeat-pass InSAR and uses the lidar data as training samples. An open-access Python-based software has been developed for automated processing. As a result, a mosaic of forest height was generated for the US states of Maine and New Hampshire (11.6 million ha) using JAXA's ALOS-1 and ALOS-2 HV-pol InSAR data and a small set of lidar training samples (44,000 ha), with the height estimates validated against airborne lidar and field inventory data over both flat and mountainous areas. In addition, through estimating and correcting for the temporal decorrelation effects in the spaceborne repeat-pass InSAR coherence data and also utilizing the spaceborne single-pass InSAR phase data, forest disturbance such as selective logging is not only detected but also quantified, in subtropical forests of Australia using ALOS-1 HH-pol InSAR data (validated against NASA's Landsat), as well as in the tropics of Brazil using TanDEM-X and ALOS-2 HH-pol InSAR data (validated against field inventory data). The operational simplicity and efficiency make these methods a potential observing/processing prototype for the fusion of NISAR, GEDI and TanDEM-X/L.
Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.
Gustafsson, Lena; Perhans, Karin
2010-12-01
A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneering example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels, varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that, in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining the strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure; volume fraction, size and morphology of phase constituents; and the stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale, microstructure-based modeling approach following the ICME strategy was therefore chosen in this project. Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large-scale tests to multi-scale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multi-scale modeling, among these the reduction of product development time by alleviating costly trial-and-error iterations, and the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models of different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
Adaptive Control for Uncertain Nonlinear Multi-Input Multi-Output Systems
NASA Technical Reports Server (NTRS)
Cao, Chengyu (Inventor); Hovakimyan, Naira (Inventor); Xargay, Enric (Inventor)
2014-01-01
Systems and methods of adaptive control for uncertain nonlinear multi-input multi-output systems in the presence of significant unmatched uncertainty with assured performance are provided. The need for gain-scheduling is eliminated through the use of bandwidth-limited (low-pass) filtering in the control channel, which appropriately attenuates the high frequencies typically appearing in fast adaptation situations and preserves the robustness margins in the presence of fast adaptation.
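The bandwidth-limited control channel can be illustrated with a discrete first-order low-pass filter; the filter order, cutoff parameterization and backward-Euler discretization below are illustrative assumptions, not the patented design:

```python
import numpy as np

def lowpass_control(u_adaptive, dt, bandwidth_hz):
    """Low-pass filter an adaptive control signal, C(s) = wc / (s + wc).

    Attenuates the high frequencies that fast adaptation injects into
    the control channel, as described above.
    """
    wc = 2 * np.pi * bandwidth_hz
    alpha = wc * dt / (1 + wc * dt)     # backward-Euler smoothing gain
    u_filt = np.zeros_like(u_adaptive, dtype=float)
    for k in range(1, len(u_adaptive)):
        u_filt[k] = u_filt[k - 1] + alpha * (u_adaptive[k] - u_filt[k - 1])
    return u_filt
```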
Shanbehzadeh, Sanaz; Salavati, Mahyar; Tavahomi, Mahnaz; Khatibi, Ali; Talebian, Saeed; Khademi-Kalantari, Khosro
2017-11-01
Psychometric testing of the Persian version of the Pain Anxiety Symptom Scale 20 (PASS-20). The aim of this study was to assess the reliability and construct validity of the PASS-20 in nonspecific chronic low back pain (LBP) patients. The PASS-20 is a self-report questionnaire that assesses pain-related anxiety. The psychometric properties of this instrument had not previously been assessed in Persian-speaking chronic LBP patients. One hundred and sixty participants with chronic LBP completed the Persian version of the PASS-20, Tampa Scale of Kinesiophobia (TSK), Fear-Avoidance Beliefs Questionnaire (FABQ), Pain Catastrophizing Scale (PCS), trait form of the State-Trait Anxiety Inventory (STAI-T), Oswestry Low Back Pain Disability Index (ODI), Beck Depression Inventory (BDI-II), and Visual Analogue Scale (VAS). To evaluate test-retest reliability, 60 patients filled out the PASS-20 again 6 to 8 days after the first visit. Test-retest reliability (intraclass correlation coefficient [ICC], standard error of measurement [SEM], and minimal detectable change [MDC]), internal consistency, dimensionality, and construct validity were examined. The ICCs of the PASS-20 subscales and total score ranged from 0.71 to 0.8. The SEM for the PASS-20 total score was 7.29, and the subscale SEMs ranged from 2.43 to 2.98. The MDC for the total score was 20.14, and the subscale MDCs ranged from 6.71 to 8.23. The Cronbach alpha values for the subscales and total score ranged from 0.70 to 0.91. Significant positive correlations were found between the PASS-20 total score and the PCS, TSK, FABQ, ODI, BDI, STAI-T, and pain intensity. The Persian version of the PASS-20 showed acceptable psychometric properties for the assessment of pain-related anxiety in Persian-speaking patients with chronic LBP. Level of evidence: 3.
NASA Astrophysics Data System (ADS)
Liben-Nowell, David
With the recent explosion of popularity of commercial social-networking sites like Facebook and MySpace, the size of social networks that can be studied scientifically has passed from the scale traditionally studied by sociologists and anthropologists to the scale of networks more typically studied by computer scientists. In this chapter, I will highlight a recent line of computational research into the modeling and analysis of the small-world phenomenon - the observation that typical pairs of people in a social network are connected by very short chains of intermediate friends - and the ability of members of a large social network to collectively find efficient routes to reach individuals in the network. I will survey several recent mathematical models of social networks that account for these phenomena, with an emphasis on both the provable properties of these social-network models and the empirical validation of the models against real large-scale social-network data.
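The decentralized routing rule at the center of this line of work is simple to state in code: forward the message to whichever neighbor is closest to the target. The 1D position map below is a simplification; Kleinberg's analysis concerns grids whose long-range links are drawn with distance-dependent probability:

```python
def greedy_route(source, target, pos, neighbors):
    """Greedy small-world routing (illustrative sketch).

    pos: node -> lattice coordinate; neighbors: node -> iterable of nodes.
    Returns the path found, or None if the message gets stuck at a node
    with no neighbor closer to the target.
    """
    path = [source]
    while path[-1] != target:
        here = path[-1]
        nxt = min(neighbors[here], key=lambda v: abs(pos[v] - pos[target]))
        if abs(pos[nxt] - pos[target]) >= abs(pos[here] - pos[target]):
            return None            # local minimum: greedy routing fails
        path.append(nxt)
    return path
```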
Animal movement data: GPS telemetry, autocorrelation and the need for path-level analysis [chapter 7
Samuel A. Cushman
2010-01-01
In the previous chapter we presented the idea of a multi-layer, multi-scale, spatially referenced data-cube as the foundation for monitoring and for implementing flexible modeling of ecological pattern-process relationships in particulate, in context and to integrate these across large spatial extents at the grain of the strongest linkage between response and driving...
Global Ocean Data Quality Assessment of SARAL/AltiKa GDR products
NASA Astrophysics Data System (ADS)
Picot, Nicolas; Prandi, Pierre; desjonqueres, jean-damien
2015-04-01
The SARAL mission was successfully launched on February 5th, 2013, and cycle 1 started a few days later on March 14th. For more than 2 years, the Ka-band altimeter and dual-frequency radiometer on board have been collecting high quality ocean topography measurements. Within the first months of the mission, a first patch (P1) was developed to correct some small anomalies detected in the products and to account for in-flight calibration data. At the beginning of 2014, a second patch (P2) was produced (applied from cycle 10 pass 407 on OGDR data and from pass 566 on IGDR data), and all GDRs produced before this were reprocessed in order to deliver a consistent dataset to users. This new version of the products provides, among other changes, important improvements regarding radiometer data processing, sea-state bias and wind speed. Since the beginning of the mission, data quality assessment of OGDR, IGDR and GDR data has been routinely performed at CNES and CLS (as part of the CNES SALP project). We will present the main results of the data quality assessment over ocean based on SARAL/AltiKa GDR data reprocessed using the homogeneous P2 version. The main data quality metrics presented will include: data availability and validity; monitoring of the main altimeter and radiometer parameters and comparisons to other altimeter missions such as OSTM/Jason-2; mission performance through mono-mission crossover analysis; investigation of inter-mission biases and large-scale regional differences from multi-mission crossovers between SARAL and Jason-2; and monitoring of the global mean SLA and comparison to Jason-2. Finally, we will present the new product version standard that is currently under development on the CNES side.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medina, Socorro; Houze, Robert A.
2016-02-19
Kelvin–Helmholtz billows with horizontal scales of 3–4 km have been observed in midlatitude cyclones moving over the Italian Alps and the Oregon Cascades when the atmosphere was mostly statically stable with high amounts of shear and Ri < 0.25. In one case, data from a mobile radar located within a windward facing valley documented a layer in which the shear between down-valley flow below 1.2 km and strong upslope cross-barrier flow above was large. Several episodes of Kelvin–Helmholtz waves were observed within the shear layer. The occurrence of the waves appears to be related to the strength of the shear: when the shear attained large values, an episode of billows occurred, followed by a sharp decrease in the shear. The occurrence of large values of shear and Kelvin–Helmholtz billows over two different mountain ranges suggests that they may be important features occurring when extratropical cyclones with statically stable flow pass over mountain ranges.
Evaluation of Three Pain Assessment Scales Used for Ventilated Neonates.
Huang, Xiao-Zhi; Li, Li; Zhou, Jun; He, Fang; Zhong, Chun-Xia; Wang, Bin
2018-06-26
To compare and evaluate the reliability, validity, feasibility, clinical utility, and nurses' preference of the Premature Infant Pain Profile-Revised (PIPP-R), the Neonatal Pain, Agitation, and Sedation Scale (N-PASS), and the Neonatal Infant Acute Pain Assessment Scale (NIAPAS) used for procedural pain in ventilated neonates. Procedural pain is a common phenomenon but is underassessed and undermanaged in hospitalized neonates. Information to help clinicians select pain measurements to improve neonatal care and outcomes is still limited. This was a prospective observational study adhering to the relevant EQUATOR guidelines. A total of 1080 pain assessments were made on 90 neonates by two nurses independently, using the three scales while viewing three phases of videotaped painful (arterial blood sampling) and non-painful (diaper change) procedures. Internal consistency, inter-rater reliability, discriminant validity, concurrent validity and convergent validity of the scales were analyzed. Feasibility, clinical utility, and nurses' preference of the scales were also investigated. All three scales showed excellent inter-rater coefficients (from 0.991 to 0.992) and good internal consistency (0.733 for the PIPP-R, 0.837 for the N-PASS and 0.836 for the NIAPAS). Scores of painful and non-painful procedures on the three scales changed significantly across the phases. There was a strong correlation between the three scales, with adequate limits of agreement. The mean scores of the N-PASS for feasibility and utility were significantly higher than those of the NIAPAS, but not significantly higher than those of the PIPP-R. The N-PASS was the most preferred scale (55.9% of the nurses), followed by the NIAPAS (23.5%) and the PIPP-R (20.6%). The three scales are all reliable and valid, but the N-PASS and the NIAPAS perform better in reliability. The N-PASS appears to be a better choice for frontline nurses assessing procedural pain in ventilated neonates, based on its good feasibility, utility, and nurses' preference. This article is protected by copyright. All rights reserved.
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
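For readers unfamiliar with the distinction drawn above, the two transport models can be written side by side (standard textbook forms; the notation is ours, not the paper's):

```latex
\begin{align*}
  \partial_t u &= \nabla^{2}\!\bigl[\mu(\mathbf{x})\,u\bigr]
      && \text{(ecological diffusion)}\\
  \partial_t u &= \nabla\cdot\bigl[\mu(\mathbf{x})\,\nabla u\bigr]
      && \text{(Fickian diffusion)}
\end{align*}
```

In the ecological form the motility μ(x) sits inside both derivatives, so animals accumulate where motility is low (residence time scales roughly inversely with μ), which is what makes movement respond to local habitat information rather than to gradients; homogenization then carries the small-scale variation of μ into the coefficients of a large-scale equation.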
Principles for problem aggregation and assignment in medium scale multiprocessors
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs are studied between balanced workload and the communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
Cruise noise of the 2/9th scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A
NASA Technical Reports Server (NTRS)
Dittmar, James H.; Stang, David B.
1987-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out at high helical tip Mach numbers, points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.
Evaluating single-pass catch as a tool for identifying spatial pattern in fish distribution
Bateman, Douglas S.; Gresswell, Robert E.; Torgersen, Christian E.
2005-01-01
We evaluate the efficacy of single-pass electrofishing without blocknets as a tool for collecting spatially continuous fish distribution data in headwater streams. We compare spatial patterns in abundance, sampling effort, and length-frequency distributions from single-pass sampling of coastal cutthroat trout (Oncorhynchus clarki clarki) to data obtained from a more precise multiple-pass removal electrofishing method in two mid-sized (500–1000 ha) forested watersheds in western Oregon. Abundance estimates from single- and multiple-pass removal electrofishing were positively correlated in both watersheds, r = 0.99 and 0.86. There were no significant trends in capture probabilities at the watershed scale (P > 0.05). Moreover, among-sample variation in fish abundance was higher than within-sample error in both streams, indicating that increased precision of unit-scale abundance estimates would provide less information on patterns of abundance than increasing the fraction of habitat units sampled. In the two watersheds, respectively, single-pass electrofishing captured 78 and 74% of the estimated population of cutthroat trout with 7 and 10% of the effort. At the scale of intermediate-sized watersheds, single-pass electrofishing exhibited a sufficient level of precision to be effective in detecting spatial patterns of cutthroat trout abundance and may be a useful tool for providing the context for investigating fish-habitat relationships at multiple scales.
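The multiple-pass benchmark used above is typically based on a depletion estimator; a minimal two-pass version (Seber-Le Cren form) is sketched below for orientation. Whether this is exactly the estimator the authors used is not stated here, so treat it as illustrative:

```python
def two_pass_removal(c1, c2):
    """Seber-Le Cren two-pass removal estimates (illustrative sketch).

    c1, c2: catches on successive removal passes in a closed habitat unit.
    Returns (N_hat, p_hat): estimated abundance and capture probability.
    """
    if c2 >= c1:
        raise ValueError("declining catches required (c1 > c2)")
    p_hat = (c1 - c2) / c1          # estimated per-pass capture probability
    n_hat = c1 ** 2 / (c1 - c2)     # estimated abundance
    return n_hat, p_hat

# e.g. catches of 40 then 10 fish -> N_hat = 53.3, p_hat = 0.75
print(two_pass_removal(40, 10))
```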
NASA Astrophysics Data System (ADS)
Béal, D.; Piégay, H.; Arnaud, F.; Rollet, A.; Schmitt, L.
2011-12-01
Aerial high-resolution visible imagery allows the production of large-river bathymetry, assuming that water depth is related to water colour (Beer-Bouguer-Lambert law). In this paper we aim at monitoring Rhine River geometry changes for a diachronic study, as well as sediment transport after an artificial injection (a 25,000 m3 restoration operation). To that end, a substantial database of ground measurements of river depth is used, built on three different sources: (i) differential GPS acquisitions, (ii) sounder data and (iii) lateral profiles realized by experts. Water depth is estimated using a multiple linear regression over neo-channels built from a principal component analysis of the red, green and blue bands and the previously cited depth data. The study site is a 12 km long reach of the by-passed section of the Rhine River that forms the French-German border. This section has been heavily impacted by engineering works during the last two centuries: channelization since 1842 for navigation purposes, and the construction of a 45 km long lateral canal and four consecutive hydroelectric power plants since 1932. Several bathymetric models are produced, based on three different spatial resolutions (6, 13 and 20 cm) and five acquisitions (January, March, April, August and October) since 2008. The objectives are to find the optimal spatial resolution and to characterize seasonal effects. The best performance, obtained at the 13 cm resolution, shows an 18 cm accuracy when suspended matter had less impact on water transparency. Discussion focuses on the monitoring of the artificial reload after two flood events during winter 2010-2011. The bathymetric models produced are also useful for building 2D hydraulic model meshes.
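The regression step described above can be sketched compactly with standard tools; the function names, the choice of three components, and the omission of the usual log-radiance (Beer-Lambert) transform are our simplifications:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_depth_model(rgb_pixels, depths):
    """Depth from imagery: PCA neo-channels + multiple linear regression.

    rgb_pixels: (n, 3) red/green/blue values at surveyed points;
    depths: (n,) ground-truth depths (dGPS, sounder, expert profiles).
    """
    pca = PCA(n_components=3)
    neo = pca.fit_transform(rgb_pixels)        # decorrelated neo-channels
    model = LinearRegression().fit(neo, depths)
    return pca, model

# prediction: depth_hat = model.predict(pca.transform(new_pixels))
```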
Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks
NASA Astrophysics Data System (ADS)
Leube, P.; Nowak, W.; Sanchez-Vila, X.
2013-12-01
High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailing. Adequate direct representation of FPM requires enormous numerical resolution. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales, and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale, upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times, to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases; this is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of similar magnitude to local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
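The moment-matching idea can be illustrated simply: block-wise particle arrival times are summarized by low-order temporal moments, to which the MRMT parameters are then fitted (that fitting step is omitted here):

```python
import numpy as np

def temporal_moments(arrival_times):
    """Low-order temporal moments of particle arrival times in a block.

    Returns the mean arrival time and the central second moment, the
    quantities that (suitably normalized) characterize effective velocity
    and dispersion at the block scale.
    """
    t = np.asarray(arrival_times, dtype=float)
    m1 = t.mean()                  # first raw moment: mean arrival time
    mu2 = ((t - m1) ** 2).mean()   # second central moment: spread/tailing
    return m1, mu2
```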
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm greatly increase. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
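A minimal sketch of the sparse-similarity idea (not the authors' exact construction): restricting the affinity matrix to k nearest neighbors keeps it sparse, so the eigendecomposition stays tractable for large pixel counts. The local-scale heuristic and cluster count below are assumptions:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def sparse_spectral_segment(features, n_clusters=4, n_neighbors=10):
    """Spectral clustering on a k-nearest-neighbor sparse affinity (sketch).

    features: (n_pixels, d) feature vectors, e.g. stacked multi-scale
    responses per pixel.
    """
    W = kneighbors_graph(features, n_neighbors, mode="distance")
    W.data = np.exp(-W.data ** 2 / (2 * np.median(W.data) ** 2))  # local scale
    W = 0.5 * (W + W.T)                        # symmetrize, stays sparse
    d = np.asarray(W.sum(axis=1)).ravel()
    D = diags(1.0 / np.sqrt(d))
    L = D @ W @ D                              # normalized affinity matrix
    _, vecs = eigsh(L, k=n_clusters, which="LA")   # leading eigenvectors
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)
```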
Trends in the capture fisheries in Cuyo East Pass, Philippines
San Diego, Tee-Jay A.; Fisher, William L.
2014-01-01
Findings are presented of a comprehensive analysis of time-series catch and effort data from 2000 to 2006, collected from a multi-species, multi-gear, two-sector (municipal and commercial) capture fishery in Cuyo East Pass, Philippines. Multivariate techniques were used to determine temporal variation in species composition and gear selectivity that corresponded with annual trends in catch and effort. Distinct annual variation in species composition was found for five fisheries classified according to sector-gear combination, together with a corresponding decline in catch diversity, notable shifts in the gears used, and an erratic CPUE trend resulting from catch variation. These patterns and trends indicate the occurrence of ecosystem overfishing in Cuyo East Pass. Our approach provided a holistic representation of the fishing situation, the condition of the fisheries and the corresponding implications for the ecosystem, fitting well within the context of the ecosystem approach to fisheries management.
NASA Astrophysics Data System (ADS)
Gustafsson, G.; Potemra, T. A.; Favin, S.; Saflekos, N. A.
1981-10-01
Principal oscillations of the TRIAD satellite are studied in 150 passes and are identified as the librations of a gravity-stabilized satellite. The libration periods are T0/2 and T0/√3, where T0 is the orbit period of about 100 min. The amplitude and phase change over periods of a few days, sometimes vanishing altogether, and these attitude changes are numerically evaluated and removed. Data from three consecutive passes spanning over three hours show a magnetic profile which extends as far as 10 deg in latitude from a single region 1 Birkeland current sheet, confirming the permanent and global nature of large-scale Birkeland currents.
The Coma cluster after lunch: Has a galaxy group passed through the cluster core?
NASA Technical Reports Server (NTRS)
Burns, Jack O.; Roettiger, Kurt; Ledlow, Michael; Klypin, Anatoly
1994-01-01
We propose that the Coma cluster has recently undergone a collision with the NGC 4839 galaxy group. The ROSAT X-ray morphology, the Coma radio halo, the presence of poststarburst galaxies in the bridge between Coma and NGC 4839, the unusually high velocity dispersion for the NGC 4839 group, and the position of a large-scale galaxy filament to the NE of Coma are all used to argue that the NGC 4839 group passed through the core of Coma approximately 2 Gyr ago. We present a new Hydro/N-body simulation of the merger between a galaxy group and a rich cluster that reproduces many of the observed X-ray and optical properties of Coma/NGC 4839.
A Hybrid, Large-Scale Wireless Sensor Network for Real-Time Acquisition and Tracking
2007-06-01
A multicolor, Quantum Well Infrared Photodetector (QWIP), step-stare, large-format Focal Plane Array (FPA) is proposed and evaluated through performance analysis. [Truncated report excerpt; the remaining fragments are table-of-contents entries, e.g. "Multi-color IR Sensors - Operational Advantages" and "Quantum-Well IR Photodetector (QWIP)".]
NASA Astrophysics Data System (ADS)
Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki
2010-12-01
We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In the single-GPU case we achieved a performance of about 56 GFlops, about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory layout of the ghost zones was found to impose long data-transfer times between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 host CPU cores would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
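The fix described above, staging strided ghost-zone slices into contiguous buffers before transfer, looks roughly like the following host-side sketch (numpy stands in for the CUDA buffers actually used; the names and halo width are illustrative):

```python
import numpy as np

def pack_ghost_zones(field, width=2):
    """Copy non-contiguous ghost-zone slices into contiguous send buffers.

    field: 3D subdomain array from a domain decomposition. Slices along
    the slow axes are strided in memory; np.ascontiguousarray stands in
    for the contiguous staging buffers that make device-host transfers
    fast, since each buffer then moves in a single copy.
    """
    w = width
    return {
        "x_lo": np.ascontiguousarray(field[:, :, :w]),
        "x_hi": np.ascontiguousarray(field[:, :, -w:]),
        "y_lo": np.ascontiguousarray(field[:, :w, :]),
        "y_hi": np.ascontiguousarray(field[:, -w:, :]),
        "z_lo": np.ascontiguousarray(field[:w, :, :]),
        "z_hi": np.ascontiguousarray(field[-w:, :, :]),
    }  # one transfer (e.g. cudaMemcpy / MPI send) per contiguous buffer
```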
Political Influence on Japanese Nuclear and Security Policy: New Forces Face Large Obstacles
2014-02-01
The Fukushima incident immediately triggered a resurgence of the anti-nuclear-power movement in Japan, and quickly enlarged it to national scale. As time passes after the Fukushima incidents, anti-nuclear-power sentiment in Japan has spread well beyond the areas immediately affected by either the Fukushima disasters themselves or by other nuclear plants. [Truncated report excerpt; an intervening footnote cites "Bottom-up Activism," Asia-Pacific Issues 103 (January 2012).]
Effect of Bypass Capacitor in Common-mode Noise Reduction Technique for Automobile PCB
NASA Astrophysics Data System (ADS)
Uno, Takanori; Ichikawa, Kouji; Mabuchi, Yuichi; Nakamura, Atushi
In this letter, we study the use of a common-mode noise reduction technique for in-vehicle electronic equipment, each unit comprising a large-scale integrated circuit (LSI), a printed circuit board (PCB), wiring harnesses, and a ground plane. We improved the model circuit of the common-mode noise that flows into the wiring harness by adding the effect of bypass capacitors located near the LSI.
Multi-band filter design with less total film thickness for short-wave infrared
NASA Astrophysics Data System (ADS)
Yan, Yung-Jhe; Chien, I.-Pen; Chen, Po-Han; Chen, Sheng-Hui; Tsai, Yi-Chun; Ou-Yang, Mang
2017-08-01
A multi-band pass filter array was proposed and designed for short-wave infrared applications. The central wavelengths of the multi-band pass filters are located at about 905 nm, 950 nm, 1055 nm and 1550 nm. In the simulation of an optical interference band-pass filter, high spectral performance (a high transmittance ratio between the pass band and the stop band) relies on (1) the index gap between the selected high/low-index film materials, with a larger gap correlated with higher performance, and (2) a sufficient number of repeated periods of high/low-index thin-film layers. Once the high and low refractive index materials are determined, spectral performance is improved by increasing the number of repeated periods; consequently, the total film thickness increases rapidly. In some cases a thick total film stack is difficult to process in practice, especially when incorporating photolithography lift-off: the maximum photoresist thickness that can be lifted off bounds the total film thickness of the band-pass filter. For short-wave infrared applications in the wavelength range from 900 nm to 1700 nm, silicon was chosen as the high refractive index material. Unlike the dielectric materials used in the visible range, silicon has high absorptance in the visible range but high transmission in the short-wave infrared. In other words, designing band-pass filters with silicon as the high refractive index material may not obtain better spectral performance than conventional high-index materials like TiO2 or Ta2O5, but the material cost is reduced by about half, owing to roughly half the total film thickness compared with the conventional material TiO2. Through simulation and several experimental trials, a total film thickness below 4 μm proved practicable and reasonable. The filters were fabricated with a dual electron-gun deposition system with ion-assisted deposition after the lithography process. By repeating the lithography and deposition process four times and adding a black-matrix coating, the optical device processing was completed.
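The pass-band/stop-band trade-offs described above are usually explored with the characteristic-matrix (transfer-matrix) method; a minimal normal-incidence version is sketched below. Dispersion and absorption (both important for silicon in practice) are ignored, and the example stack in the comment is purely illustrative:

```python
import numpy as np

def transmittance(n_layers, d_layers, n_in, n_out, wavelengths):
    """Normal-incidence transfer-matrix transmittance of a thin-film stack.

    n_layers, d_layers: refractive indices and physical thicknesses (nm)
    of the layers; wavelengths in nm. Standard characteristic-matrix
    formulation, lossless and dispersion-free for brevity.
    """
    T = []
    for lam in wavelengths:
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2 * np.pi * n * d / lam          # phase thickness
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_out])            # stack + substrate
        t = 2 * n_in / (n_in * B + C)
        T.append((n_out / n_in) * abs(t) ** 2)
    return np.array(T)

# e.g. a quarter-wave Si/SiO2 stack tuned near 1550 nm (illustrative):
# T = transmittance([3.5, 1.45] * 6, [1550/(4*3.5), 1550/(4*1.45)] * 6,
#                   1.0, 1.5, np.linspace(900, 1700, 200))
```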
Seasonal dependence of large-scale Birkeland currents
NASA Technical Reports Server (NTRS)
Fujii, R.; Iijima, T.; Potemra, T. A.; Sugiura, M.
1981-01-01
Seasonal variations of large-scale Birkeland currents are examined in a study of the source mechanisms and the closure of the three-dimensional current systems in the ionosphere. Vector magnetic field data acquired by the TRIAD satellite in the Northern Hemisphere were analyzed for the statistics of single-sheet and double-sheet Birkeland currents in 555 passes during the summer and 408 passes during the winter. The single-sheet currents are observed more frequently on the dayside of the auroral zone, and more often in summer than in winter. The intensities of both the single and double dayside currents are found to be greater in summer than in winter by a factor of two, while the intensities of the double-sheet Birkeland currents on the nightside do not show a significant difference from summer to winter. Both the single- and double-sheet currents are found at higher latitudes in summer than in winter on the dayside. The results suggest that the Birkeland current intensities are controlled by the ionospheric conductivity in the polar region, and that the currents close via the polar cap when the conductivity there is sufficiently high. It is also concluded that an important source of these currents must be a voltage generator in the magnetosphere.
Brackley, R; Lucas, M C; Thomas, R; Adams, C E; Bean, C W
2018-05-01
This study assessed the usefulness of passing euthanized Atlantic salmon Salmo salar smolts through an Archimedean screw turbine to test for external damage, as compared with live, actively swimming smolts. Scale loss was the only observed effect. Severe scale loss was 5.9 times more prevalent in euthanized turbine-passed fish (45%) than in live fish (7.6%). Additionally, distinctive patterns of scale loss, consistent with grinding between the turbine helices and housing trough, were observed in 35% of euthanized turbine-passed smolts. This distinctive pattern of scale loss was not seen in live turbine-passed smolts, nor in control groups (live and euthanized smolts released downstream of the turbine), which suggests that the altered behaviour of dead fish in turbine flows generates biased injury outcomes. © 2018 The Fisheries Society of the British Isles.
Option pricing from wavelet-filtered financial series
NASA Astrophysics Data System (ADS)
de Almeida, V. T. X.; Moriconi, L.
2012-10-01
We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day.
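A minimal version of the filtering experiment, using the PyWavelets package, is sketched below; the number of retained coarse levels is an assumption standing in for the paper's coefficient-selection rule:

```python
import numpy as np
import pywt

def haar_lowpass(series, keep_levels=3):
    """Keep only coarse (large-time-scale) Haar components (sketch).

    Zeroing detail coefficients finer than keep_levels discards the vast
    majority of coefficients, in the spirit of the ~99.6% compression
    reported above for FTSE100 data.
    """
    coeffs = pywt.wavedec(np.asarray(series, dtype=float), "haar")
    for i in range(keep_levels + 1, len(coeffs)):
        coeffs[i] = np.zeros_like(coeffs[i])   # drop small-scale details
    return pywt.waverec(coeffs, "haar")
```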
NASA Astrophysics Data System (ADS)
Wang, Yang; Yu, Jianqun; Yu, Yajun
2018-05-01
To address the problems in DEM simulations of the screening process of a swing-bar sieve, we propose a real-virtual boundary method to build the geometrical model of the screen deck of a swing-bar sieve. The motion of the swing-bar sieve is modelled by planar multi-body kinematics. A coupled model of the discrete element method (DEM) with multi-body kinematics (MBK) is presented to simulate the flowing and passing of soybean particles on the screen deck. Comparison of the simulated results with experimental results for the screening process of the LA-LK laboratory-scale swing-bar sieve verifies the feasibility and validity of the proposed real-virtual boundary method and coupled DEM-MBK model. This work provides a basis for the design optimization of swing-bar sieves with circular apertures and complex motion.
Schuster, Richard; Römer, Heinrich; Germain, Ryan R
2013-01-01
Roads are a major cause of habitat fragmentation that can negatively affect many mammal populations. Mitigation measures such as crossing structures are a proposed method to reduce the negative effects of roads on wildlife, but the best methods for determining where such structures should be implemented, and how their effects might differ between species in mammal communities, are largely unknown. We investigated the effects of a major highway through south-eastern British Columbia, Canada on several mammal species to determine how the highway may act as a barrier to animal movement, and how species may differ in their crossing-area preferences. We collected track data for eight mammal species across two winters, along both the highway and pre-marked transects, and used a multi-scale modeling approach to determine the scale at which habitat characteristics best predicted preferred crossing sites for each species. We found evidence for a severe barrier effect on all investigated species. Freely available remotely sensed habitat landscape data were better than more costly, manually digitized microhabitat maps in supporting models that identified preferred crossing sites; models using both types of data, however, performed better still. Further, in 6 of 8 cases models incorporating multiple spatial scales were better at predicting preferred crossing sites than models using any single scale. While each species differed in the landscape variables associated with preferred/avoided crossing sites, we used a multi-model inference approach to identify locations along the highway where crossing structures may benefit all of the species considered. By specifically incorporating both highway and off-highway data and predictions, we were able to show that landscape context plays an important role in maximizing the efficiency of mitigation measures. Our results further highlight the need for mitigation measures along major highways to improve connectivity between mammal populations, and illustrate how multi-scale data can be used to identify preferred crossing sites for different species within a mammal community.
A review and empirical study of the composite scales of the Das–Naglieri cognitive assessment system
McCrea, Simon M
2009-01-01
Alexander Luria's model of the working brain consisting of three functional units was formulated through the examination of hundreds of focal brain-injury patients. Several psychometric instruments based on Luria's syndrome analysis and accompanying qualitative tasks have been developed since the 1970s. In the mid-1970s, JP Das and colleagues defined a specific cognitive processes model based directly on Luria's two coding units, termed simultaneous and successive, by studying diverse cross-cultural, ability, and socioeconomic strata. The cognitive assessment system is based on the PASS model of cognitive processes and consists of four composite scales of Planning-Attention-Simultaneous-Successive (PASS) devised by Naglieri and Das in 1997. Das and colleagues developed the two new scales of planning and attention to more closely model Luria's theory of higher cortical functions. In this paper a theoretical review of Luria's theory, Das and colleagues' elaboration of Luria's model, and the neural correlates of the PASS composite scales based on extant studies are summarized. A brief empirical study of the neuropsychological specificity of the PASS composite scales in a sample of 33 focal cortical stroke patients using cluster analysis is then discussed. Planning and simultaneous were sensitive to right hemisphere lesions. These findings were integrated with recent functional neuroimaging studies of the PASS scales. In sum it was found that simultaneous is strongly dependent on dual bilateral occipitoparietal interhemispheric coordination, whereas successive demonstrated left frontotemporal specificity with some evidence of interhemispheric coordination across the prefrontal cortex. Hence, support was found for the validity of the PASS composite scales as well as for the axiom of the independence of code content from code type originally specified in 1994 by Das, Naglieri, and Kirby. PMID:22110322
Investigating the Role of Large-Scale Domain Dynamics in Protein-Protein Interactions.
Delaforge, Elise; Milles, Sigrid; Huang, Jie-Rong; Bouvier, Denis; Jensen, Malene Ringkjøbing; Sattler, Michael; Hart, Darren J; Blackledge, Martin
2016-01-01
Intrinsically disordered linkers provide multi-domain proteins with degrees of conformational freedom that are often essential for function. These highly dynamic assemblies represent a significant fraction of all proteomes, and deciphering the physical basis of their interactions represents a considerable challenge. Here we describe the difficulties associated with mapping the large-scale domain dynamics and describe two recent examples where solution state methods, in particular NMR spectroscopy, are used to investigate conformational exchange on very different timescales.
ERIC Educational Resources Information Center
Warfvinge, Per
2008-01-01
The ECTS grade transfer scale is an interface grade scale to help European universities, students and employers to understand the level of student achievement. Hence, the ECTS scale can be seen as an interface, transforming local scales to a common system where A-E denote passing grades. By definition, ECTS should distribute the passing students…
NASA Astrophysics Data System (ADS)
Operto, S.; Miniussi, A.
2018-03-01
Three-dimensional frequency-domain full waveform inversion (FWI) is applied to North Sea wide-azimuth ocean-bottom cable data at low frequencies (≤ 10 Hz) to jointly update vertical wavespeed, density and quality factor Q in the visco-acoustic VTI approximation. We assess whether density and Q should be viewed as proxies that absorb artefacts resulting from approximate wave physics, or whether they carry interpretable information in the presence of saturated sediments and gas. FWI is performed in the frequency domain, where attenuation is easily accounted for. Multi-parameter frequency-domain FWI is performed efficiently with a few discrete frequencies following a multi-scale frequency continuation; however, grouping a few frequencies during each multi-scale step is necessary to mitigate the acquisition footprint and match dispersive shallow guided waves. Q and density absorb a significant part of the acquisition footprint, thereby cleaning the velocity model of this pollution. Low Q perturbations correlate with low-velocity zones associated with soft sediments and the gas cloud. However, the amplitudes of the Q perturbations vary significantly when the inversion tuning is modified. This dispersion in the Q reconstructions is not passed on to the velocity parameter, suggesting that cross-talk between first-order kinematic and second-order dynamic parameters is limited. The density model shows a good match with a well log at shallow depths. Moreover, the impedance built a posteriori from the FWI velocity and density models shows a well-focused image, albeit with local differences from the velocity model near the sea bed, where density might have absorbed elastic effects. The FWI models are finally assessed against time-domain synthetic seismogram modelling performed with the same frequency-domain modelling engine used for FWI.
Chow, Alexander K; Sherer, Benjamin A; Yura, Emily; Kielb, Stephanie; Kocjancic, Ervin; Eggener, Scott; Turk, Thomas; Park, Sangtae; Psutka, Sarah; Abern, Michael; Latchamsetty, Kalyan C; Coogan, Christopher L
2017-11-01
To evaluate urology residents' attitudes toward and experience with surgical simulation in residency education using a multi-institutional, multi-modality model. Residents from 6 area urology training programs rotated through simulation stations in 4 consecutive sessions from 2014 to 2017. Workshops included GreenLight photovaporization of the prostate, ureteroscopic stone extraction, laparoscopic peg transfer, 3-dimensional laparoscopy rope pass, transobturator sling placement, intravesical injection, high definition video system trainer, vasectomy, and Urolift. Faculty members provided teaching assistance, objective scoring, and verbal feedback. Participants completed a nonvalidated questionnaire evaluating the utility of the workshop and soliciting suggestions for improvement. Sixty-three of 75 participants (84%) (postgraduate years 1-6) completed the exit questionnaire. Median rating of exercise usefulness on a scale of 1-10 ranged from 7.5 to 9. On a scale of 0-10, cumulative median scores of the course remained high over 4 years: time limit per station (9; interquartile range [IQR] 2), faculty instruction (9, IQR 2), ease of use (9, IQR 2), face validity (8, IQR 3), and overall course (9, IQR 2). On multivariate analysis, there was no difference in ratings of the domains between postgraduate years. Sixty-seven percent (42/63) believe that simulation training should be a requirement of urology residency. Ninety-seven percent (63/65) viewed the laboratory as beneficial to their education. This workshop model is a valuable training experience for residents. Most participants believe that surgical simulation is beneficial and should be a requirement for urology residency. High ratings of usefulness for each exercise demonstrated excellent face validity provided by the course. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Fathy, Ibrahim
2016-07-01
This paper presents a statistical study of different types of large-scale geomagnetic pulsations (Pc3, Pc4, Pc5 and Pi2) detected simultaneously by two MAGDAS stations located at Fayum (geographic coordinates 29.18 N, 30.50 E) and Aswan (23.59 N, 32.51 E) in Egypt. A second-order Butterworth band-pass filter was used to filter and analyze the horizontal H-component of the geomagnetic field in one-second data. The data were collected during the solar minimum of the current solar cycle 24. We list the most energetic pulsations detected simultaneously by the two stations; in addition, the average amplitude of the pulsation signals was calculated.
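A filter of this kind is straightforward to reproduce with SciPy. In the sketch below the band limits are the conventional Pc3 range of 10-45 s periods, an assumption rather than a value taken from the paper, and the input is a synthetic stand-in for the one-second H-component record.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1.0                                          # one-second sampling
# Conventional Pc3 band: periods of 10-45 s, i.e. ~0.022-0.1 Hz (assumed).
sos = butter(2, [1 / 45, 1 / 10], btype='bandpass', fs=fs, output='sos')

rng = np.random.default_rng(1)
h = rng.standard_normal(3600)                     # one hour of synthetic H data
h_pc3 = sosfiltfilt(sos, h)                       # zero-phase Pc3-band signal
print(h_pc3[:5])
```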
Detecting recurrent gene mutation in interaction network context using multi-scale graph diffusion.
Babaei, Sepideh; Hulsman, Marc; Reinders, Marcel; de Ridder, Jeroen
2013-01-23
Delineating the molecular drivers of cancer, i.e. determining cancer genes and the pathways which they deregulate, is an important challenge in cancer research. In this study, we aim to identify pathways of frequently mutated genes by exploiting their network neighborhood encoded in the protein-protein interaction network. To this end, we introduce a multi-scale diffusion kernel and apply it to a large collection of murine retroviral insertional mutagenesis data. The diffusion strength plays the role of scale parameter, determining the size of the network neighborhood that is taken into account. As a result, in addition to detecting genes with frequent mutations in their genomic vicinity, we find genes that harbor frequent mutations in their interaction network context. We identify densely connected components of known and putatively novel cancer genes and demonstrate that they are strongly enriched for cancer related pathways across the diffusion scales. Moreover, the mutations in the clusters exhibit a significant pattern of mutual exclusion, supporting the conjecture that such genes are functionally linked. Using multi-scale diffusion kernel, various infrequently mutated genes are found to harbor significant numbers of mutations in their interaction network neighborhood. Many of them are well-known cancer genes. The results demonstrate the importance of defining recurrent mutations while taking into account the interaction network context. Importantly, the putative cancer genes and networks detected in this study are found to be significant at different diffusion scales, confirming the necessity of a multi-scale analysis.
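A minimal sketch of the central construction, a graph diffusion kernel evaluated at several diffusion strengths, on a toy network rather than the insertional-mutagenesis data:

```python
import numpy as np
from scipy.linalg import expm

# Toy protein-protein interaction network (adjacency matrix), five genes.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                    # graph Laplacian
m = np.array([3.0, 0.0, 0.0, 1.0, 0.0])           # per-gene mutation counts (toy)

for beta in (0.1, 0.5, 2.0):                      # diffusion strength = scale
    K = expm(-beta * L)                           # diffusion kernel at this scale
    print(beta, np.round(K @ m, 2))               # network-smoothed scores
```

Small beta scores each gene almost in isolation; large beta spreads mutation evidence across the whole connected component, which is the multi-scale behaviour exploited above.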
Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki
2014-01-01
The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield unit in both brain and bone images were composed of CT values of bone images and other pixels were composed of CT values of brain images. Three radiologists compared the improved multi-kernel images with bone images. The improved multi-kernel images and brain images were identically displayed on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images that need to be stored can be expected.
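The combination rule is simple enough to state in a few lines of array code. A sketch with synthetic values standing in for the two reconstructions (the 100 Hounsfield unit threshold is the one quoted above):

```python
import numpy as np

def combine_kernels(brain, bone, threshold=100.0):
    """Merge brain- and bone-kernel CT images (values in Hounsfield units).

    Pixels above the threshold in *both* images take the bone-kernel value;
    all other pixels keep the brain-kernel value.
    """
    mask = (brain > threshold) & (bone > threshold)
    return np.where(mask, bone, brain)

rng = np.random.default_rng(2)
brain = rng.normal(40.0, 10.0, (4, 4))            # stand-in soft-tissue image
bone = brain + rng.normal(0.0, 5.0, (4, 4))
brain[0, :], bone[0, :] = 700.0, 900.0            # simulated skull row
print(np.round(combine_kernels(brain, bone)))
```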
Characterization of a plasma photonic crystal using a multi-fluid plasma model
NASA Astrophysics Data System (ADS)
Thomas, W. R.; Shumlak, U.; Wang, B.; Righetti, F.; Cappelli, M. A.; Miller, S. T.
2017-10-01
Plasma photonic crystals have the potential to significantly expand the capabilities of current microwave filtering and switching technologies by providing high speed (μs) control of energy band-gap/pass characteristics in the GHz through low THz range. While photonic crystals consisting of dielectric, semiconductor, and metallic matrices have seen thousands of articles published over the last several decades, plasma-based photonic crystals remain a relatively unexplored field. Numerical modeling efforts so far have largely used the standard methods of analysis for photonic crystals (the Plane Wave Expansion Method, Finite Difference Time Domain, and ANSYS finite element electromagnetic code HFSS), none of which capture nonlinear plasma-radiation interactions. In this study, a 5N-moment multi-fluid plasma model is implemented using University of Washington's WARPXM finite element multi-physics code. A two-dimensional plasma-vacuum photonic crystal is simulated and its behavior is characterized through the generation of dispersion diagrams and transmission spectra. These results are compared with theory, experimental data, and ANSYS HFSS simulation results. This research is supported by a Grant from United States Air Force Office of Scientific Research.
Real-time multi-mode neutron multiplicity counter
Rowland, Mark S; Alvarez, Raymond A
2013-02-26
Embodiments are directed to a digital data acquisition method that collects data regarding nuclear fission at high rates and performs real-time preprocessing of large volumes of data into directly useable forms for use in a system that performs non-destructive assaying of nuclear material and assemblies for mass and multiplication of special nuclear material (SNM). Pulses from a multi-detector array are fed in parallel to individual inputs that are tied to individual bits in a digital word. Data is collected by loading a word at the individual bit level in parallel, to reduce the latency associated with current shift-register systems. The word is read at regular intervals, all bits simultaneously, with no manipulation. The word is passed to a number of storage locations for subsequent processing, thereby removing the front-end problem of pulse pileup. The word is used simultaneously in several internal processing schemes that assemble the data in a number of more directly useable forms. The detector includes a multi-mode counter that executes a number of different count algorithms in parallel to determine different attributes of the count data.
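The bit-parallel readout lends itself to a compact software illustration: each readout interval yields one word whose set bits mark the detectors that fired, and the population count of the word gives the event multiplicity. A toy sketch, not the patented implementation:

```python
from collections import Counter

# Toy stream of 16-bit readout words; each bit is one detector channel.
words = [0b0000000000000001, 0b0000000000000011,
         0b0000000000000000, 0b0001000100010001]

multiplicity = Counter()
for w in words:                       # one word per readout interval
    k = bin(w).count("1")             # detectors firing in this interval
    multiplicity[k] += 1

print(dict(multiplicity))             # histogram, e.g. {1: 1, 2: 1, 0: 1, 4: 1}
```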
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environmental monitoring and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity grow. Finding scalable computational methods for the distributed control design of large-scale networks is therefore a challenging problem. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with standard MATLAB toolboxes. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
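For readers unfamiliar with the LMI machinery, the sketch below solves its most basic instance, a Lyapunov feasibility LMI for a single stable node, using CVXPY in place of the MATLAB toolchain; the paper's distributed, uncertainty-aware conditions are considerably richer.

```python
import numpy as np
import cvxpy as cp

# Toy node dynamics x' = A x (assumed stable); not the paper's MAS model.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov LMI
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status, np.round(P.value, 3))
```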
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g., neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts in adaptive core simulation and reduced order modeling algorithms and extends them towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty of single physics models is extended to large scale multi-physics coupled problems with feedback. Moreover, a non-linear surrogate-based UQ approach is developed and compared to the performance of the KL approach and a brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross sections and thermal-hydraulic parameters. Two improvements are introduced to make DA tractable for high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT, COBRA-TF, ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling necessary for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes, and predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further reduce the uncertainty associated with these sources.
In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for an assembly-level problem (CASL progression problem 6) and a core-wide problem representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL progression problem 9), modeled and simulated with VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
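The subspace-construction step shared by these algorithms can be illustrated with the snapshot (SVD) form of a Karhunen-Loeve/POD expansion; the snapshot matrix below is a synthetic low-rank stand-in, not reactor data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Snapshot matrix: each column is one (toy) model evaluation; 200 DoF,
# 30 perturbed samples, true rank 5 plus noise.
Y = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 30))
Y += 0.01 * rng.standard_normal(Y.shape)

U, s, _ = np.linalg.svd(Y, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1        # rank for 99.9% variance
U_r = U[:, :r]                                     # reduced subspace basis

print(f"effective dimension: {r} of {Y.shape[1]} snapshots")
# Any new response y can then be analyzed as U_r.T @ y in the subspace.
```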
A scalable multi-photon coincidence detector based on superconducting nanowires.
Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K
2018-06-04
Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.
Avian movements and wetland connectivity in landscape conservation
Haig, Susan M.; Mehlman, D.W.; Oring, L.W.
1998-01-01
The current conservation crisis calls for research and management to be carried out on a long-term, multi-species basis at large spatial scales. Unfortunately, scientists, managers, and agencies often are stymied in their effort to conduct these large-scale studies because of a lack of appropriate technology, methodology, and funding. This issue is of particular concern in wetland conservation, for which the standard landscape approach may include consideration of a large tract of land but fail to incorporate the suite of wetland sites frequently used by highly mobile organisms such as waterbirds (e.g., shorebirds, wading birds, waterfowl). Typically, these species have population dynamics that require use of multiple wetlands, but this aspect of their life history has often been ignored in planning for their conservation. We outline theoretical, empirical, modeling, and planning problems associated with this issue and suggest solutions to some current obstacles. These solutions represent a tradeoff between typical in-depth single-species studies and more generic multi-species studies. They include studying within- and among-season movements of waterbirds on a spatial scale appropriate to both widely dispersing and more stationary species; multi-species censuses at multiple sites; further development and use of technology such as satellite transmitters and population-specific molecular markers; development of spatially explicit population models that consider within-season movements of waterbirds; and recognition from funding agencies that landscape-level issues cannot adequately be addressed without support for these types of studies.
Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes
NASA Astrophysics Data System (ADS)
Huang, Shaoming
2003-06-01
An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on pyrolysis of iron(II) phthalocyanine (FePc) by a two-step process is reported. Controllable growth to different lengths and selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the bases for generating such 3D aligned CNT architectures. By controlling the experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied to the development of novel nanotube-based devices.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present the detailed techniques employed for GPU implementation. The authors also utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using the multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted to solve the MP. A head and neck (H and N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. Results: The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
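The DDC partitioning step can be sketched in a few lines with SciPy sparse matrices. The even column split below is an assumption standing in for the per-beam-angle partition, and the GPU transfer itself is not shown:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(4)
n_vox, n_beamlets, n_gpus = 1000, 400, 4
ddc = sp.random(n_vox, n_beamlets, density=0.01, random_state=rng,
                format='coo')                      # DDC assembled in COO on CPU

# Assume beamlets are ordered by beam angle, so an even column split
# approximates the per-angle partition; each shard becomes CSR for one GPU.
bounds = np.linspace(0, n_beamlets, n_gpus + 1, dtype=int)
shards = [ddc.tocsc()[:, bounds[g]:bounds[g + 1]].tocsr()
          for g in range(n_gpus)]
print([s.shape for s in shards], [s.nnz for s in shards])
```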
Inference in the brain: Statistics flowing in redundant population codes
Pitkow, Xaq; Angelaki, Dora E
2017-01-01
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors. PMID:28595050
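A minimal concrete instance of the message-passing idea is sum-product on a three-node chain of binary variables; the potentials below are toy values, and nothing here models the neural population code itself:

```python
import numpy as np

# Chain x1 - x2 - x3; psi couples neighbours, phi carries local evidence.
psi = np.array([[1.0, 0.5],
                [0.5, 1.0]])                   # favours agreeing neighbours
phi = [np.array([0.9, 0.1]),                   # noisy observation at node 1
       np.array([0.5, 0.5]),                   # no information at node 2
       np.array([0.2, 0.8])]                   # noisy observation at node 3

m_fwd = [np.ones(2), None, None]               # messages passed rightwards
m_bwd = [None, None, np.ones(2)]               # messages passed leftwards
m_fwd[1] = psi.T @ (phi[0] * m_fwd[0])
m_fwd[2] = psi.T @ (phi[1] * m_fwd[1])
m_bwd[1] = psi @ (phi[2] * m_bwd[2])
m_bwd[0] = psi @ (phi[1] * m_bwd[1])

for i in range(3):                             # posterior marginals (beliefs)
    b = phi[i] * m_fwd[i] * m_bwd[i]
    print(i, np.round(b / b.sum(), 3))
```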
Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W.
2011-01-01
The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg–Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers. PMID:21731106
OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale
NASA Astrophysics Data System (ADS)
Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason
2015-03-01
The Open Microscopy Environment (OME) has built and released, under open source licenses, Bio-Formats, a Java-based tool for converting proprietary file formats, and OMERO, an enterprise data management platform. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful in imaging applications like digital pathology, high content screening and light sheet microscopy that routinely create large datasets that must be managed and analyzed.
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and long solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
NASA Astrophysics Data System (ADS)
Derakhshandeh-Haghighi, Reza; Jenabali Jahromi, Seyed Ahmad
2016-02-01
The wear behavior of an aluminum matrix composite powder with varying concentrations of nano-alumina particles, consolidated by equal-channel angular pressing (ECAP) with different numbers of passes, was determined by applying 10 and 46 N loads on a pin-on-disk machine. Optical and electron microscopy, EDX analysis, and hardness measurements were performed to characterize the worn samples. The relative density of the samples after each ECAP pass was determined using the Archimedes principle. Within the studied range of loads, the wear loss decreased with increasing number of ECAP passes.
Pattern-based, multi-scale segmentation and regionalization of EOSD land cover
NASA Astrophysics Data System (ADS)
Niesterowicz, Jacek; Stepinski, Tomasz F.
2017-10-01
The Earth Observation for Sustainable Development of Forests (EOSD) map is a 25 m resolution thematic map of Canadian forests. Because of its large spatial extent and relatively high resolution, the EOSD is difficult to analyze using standard GIS methods. In this paper we propose multi-scale segmentation and regionalization of the EOSD as new methods for analyzing it on large spatial scales. Segments, which we refer to as forest land units (FLUs), are delineated as tracts of forest characterized by cohesive patterns of EOSD categories; depending on the selected scale of a pattern, we delineated from 727 to 91,885 FLUs within the spatial extent of the EOSD. The pattern of EOSD categories within each FLU is described by 1037 landscape metrics. A shapefile containing the boundaries of all FLUs together with an attribute table listing landscape metrics make up an SQL-searchable spatial database providing detailed information on the composition and pattern of land cover types in Canadian forest. The shapefile format and an extensive attribute table covering the entire EOSD legend are designed to facilitate a broad range of investigations in which assessment of the composition and pattern of forest over large areas is needed. We calculated four such databases using different spatial scales of pattern. We illustrate the use of the FLU database by producing forest regionalization maps of two Canadian provinces, Quebec and Ontario. Such maps capture the broad-scale variability of forest at the spatial scale of the entire province. We also demonstrate how the FLU database can be used to map the variability of landscape metrics, and thus the character of the landscape, over the whole of Canada.
Mortazavi, Forough; Mortazavi, Saideh S.; Khosrorad, Razieh
2015-01-01
Background: Procrastination is a common behavior which affects different aspects of life. The procrastination assessment scale-student (PASS) evaluates academic procrastination apropos its frequency and reasons. Objectives: The aims of the present study were to translate, culturally adapt, and validate the Farsi version of the PASS in a sample of Iranian medical students. Patients and Methods: In this cross-sectional study, the PASS was translated into Farsi through the forward-backward method, and its content validity was thereafter assessed by a panel of 10 experts. The Farsi version of the PASS was subsequently distributed among 423 medical students. The internal reliability of the PASS was assessed using Cronbach's alpha. An exploratory factor analysis (EFA) was conducted first on 18 items and then on 28 items of the scale to find new models. The construct validity of the scale was assessed using both EFA and confirmatory factor analysis. The predictive validity of the scale was evaluated by calculating the correlation between the academic procrastination scores and the students' average scores in the previous semester. Results: The reliability of the first and second parts of the scale was 0.781 and 0.861, respectively. An EFA on 18 items of the scale found 4 factors which jointly explained 53.2% of the variance; the model was marginally acceptable (root mean square error of approximation [RMSEA] = 0.098, standardized root mean square residual [SRMR] = 0.076, χ2/df = 4.8, comparative fit index [CFI] = 0.83). An EFA on 28 items of the scale found 4 factors which altogether explained 42.62% of the variance; the model was acceptable (RMSEA = 0.07, SRMR = 0.07, χ2/df = 2.8, incremental fit index = 0.90, CFI = 0.90). There was a negative correlation between the procrastination scores and the students' average scores (r = -0.131, P = 0.02). Conclusions: The Farsi version of the PASS is a valid and reliable tool to measure academic procrastination in Iranian undergraduate medical students. PMID:26473078
Suprathermal electron penetration into the inner magnetosphere of Saturn
NASA Astrophysics Data System (ADS)
Thomsen, M. F.; Coates, A. J.; Roussos, E.; Wilson, R. J.; Hansen, K. C.; Lewis, G. R.
2016-06-01
For most Cassini passes through the inner magnetosphere of Saturn, the hot electron population (greater than a few hundred eV) largely disappears inside of some cutoff L shell. Anode- and actuation-angle averages of hot electron fluxes observed by the Cassini Electron Spectrometer are binned into 0.1 Rs bins in dipole L to explore the properties of this cutoff distance. The cutoff L shell is quite variable from pass to pass (on timescales as short as 10-20 h). At energies of 5797 eV, 2054 eV, and 728 eV, 90% of the inner boundary values lie between L ~ 4.7 and 8.4, with a median near L = 6.2, consistent with the range of L values over which discrete interchange injections have been observed, thus strengthening the case that the interchange process is responsible for delivering the bulk of the hot electrons seen in the inner magnetosphere. The occurrence distribution of the inner boundary is more sharply peaked on the nightside than at other local times. There is no apparent dependence of the depth of penetration on large-scale solar wind properties. It appears likely that internal processes (magnetic stress on mass-loaded flux tubes) dominate the injection of hot electrons into the inner magnetosphere.
Superconductor bearings, flywheels and transportation
NASA Astrophysics Data System (ADS)
Werfel, F. N.; Floegel-Delor, U.; Rothfeld, R.; Riedel, T.; Goebel, B.; Wippich, D.; Schirrmeister, P.
2012-01-01
This paper describes the present status of high temperature superconductors (HTS) and of bulk superconducting magnet devices, their use in bearings, in flywheel energy storage systems (FESS) and linear transport magnetic levitation (Maglev) systems. We report and review the concepts of multi-seeded REBCO bulk superconductor fabrication. The multi-grain bulks increase the averaged trapped magnetic flux density up to 40% compared to single-grain assembly in large-scale applications. HTS magnetic bearings with permanent magnet (PM) excitation were studied and scaled up to maximum forces of 10 kN axially and 4.5 kN radially. We examine the technology of the high-gradient magnetic bearing concept and verify it experimentally. A large HTS bearing is tested for stabilizing a 600 kg rotor of a 5 kWh/250 kW flywheel system. The flywheel rotor tests show the requirement for additional damping. Our compact flywheel system is compared with similar HTS-FESS projects. A small-scale compact YBCO bearing with in situ Stirling cryocooler is constructed and investigated for mobile applications. Next we show a successfully developed modular linear Maglev system for magnetic train operation. Each module levitates 0.25t at 10 mm distance during one-day operation without refilling LN2. More than 30 vacuum cryostats containing multi-seeded YBCO blocks are fabricated and are tested now in Germany, China and Brazil.
NASA Astrophysics Data System (ADS)
Marscher, Alan P.
2011-09-01
Multi-wavelength light curves of bright gamma-ray blazars (e.g., 3C 454.3) are compared with the model proposed by Marscher and Jorstad. In this scenario, much of the optical and high-energy radiation in a blazar is emitted near the 43 GHz core of the jet as seen in VLBA images, parsecs from the central engine. The main physical features are a turbulent ambient jet plasma that passes through a standing recollimation shock in the jet. The model allows for short time-scales of optical and gamma-ray variability by restricting the highest-energy electrons radiating at these frequencies to a small fraction of the turbulent cells, perhaps those with a particular orientation of the magnetic field relative to the shock front. Because of this, the volume filling factor at high frequencies is relatively low, while that of the electrons radiating below about 10 THz is near unity. Such a model is consistent with the (1) red-noise power spectra of flux variations, (2) shorter time-scales of variability at higher frequencies, (3) frequency dependence of polarization and its variability, and (4) breaks in the synchrotron spectrum by more than the radiative loss value of 0.5. Simulated light curves are generated by a numerical code that (as of May 2011) includes synchrotron radiation as well as inverse Compton scattering of seed photons from both a dust torus and a Mach disk at the jet axis. The latter source of seed photons produces more pronounced variability in gamma-ray than in optical light curves, as is often observed. More features are expected to be added to the code by the time of the presentation. This research is supported in part by NASA through Fermi grants NNX08AV65G and NNX10AO59G, and by NSF grant AST-0907893.
Parallel multiscale simulations of a brain aneurysm
Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em
2012-01-01
Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work. PMID:23734066
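The coupling pattern, two independent solvers each on its own communicator with group leaders exchanging interface data, can be sketched with mpi4py; the group split and buffer contents below are illustrative, not the actual NεκTαr/LAMMPS interface.

```python
# Run with e.g.: mpiexec -n 4 python couple.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Colour 0 = continuum solver ranks, colour 1 = atomistic solver ranks.
color = 0 if rank < size // 2 else 1
solver = comm.Split(color, key=rank)       # private communicator per solver

# Each solver advances its own physics on `solver`; only the group leaders
# exchange interface state (e.g. effective forces) through COMM_WORLD.
if solver.Get_rank() == 0:
    partner = size // 2 if color == 0 else 0
    iface = np.zeros(8)
    comm.Sendrecv(np.full(8, float(color)), dest=partner,
                  recvbuf=iface, source=partner)
    print(f"rank {rank} (solver {color}) received {iface[0]}")
```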
NASA Astrophysics Data System (ADS)
Perrier, E. M. A.; Bird, N. R. A.; Rieutord, T. B.
2010-10-01
Quantifying the connectivity of pore networks is a key issue not only for modelling fluid flow and solute transport in porous media but also for assessing the ability of soil ecosystems to filter bacteria, viruses and any type of living microorganism, as well as inert particles, which pose a contamination risk. Straining is the main mechanical component of filtration processes: it is due to size effects, when a given soil retains a conveyed entity larger than the pores through which it is attempting to pass. We postulate that the range of sizes of entities which can be trapped inside soils has to be associated with the large range of scales involved in natural soil structures, and that information on the pore size distribution has to be complemented by information on a critical filtration size (CFS) delimiting the transition between percolating and non-percolating regimes in multiscale pore networks. We show that the mass fractal dimensions which are classically used in soil science to quantify scaling laws in observed pore size distributions can also be used to build 3-D multiscale models of pore networks exhibiting such a critical transition. We extend to the 3-D case a new theoretical approach recently developed to address the connectivity of 2-D fractal networks (Bird and Perrier, 2009). Theoretical arguments based on renormalisation functions provide insight into multi-scale connectivity and a first estimation of the CFS. Numerical experiments on 3-D prefractal media confirm the qualitative theory. These results open the way towards a new methodology for estimating soil filtration efficiency from soil structural models calibrated on available multiscale data.
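A 3-D prefractal medium of the kind used in these numerical experiments can be generated by recursive random subdivision; the parameters below (base b = 3, 20 of 27 subcells kept solid, hence mass fractal dimension D = log 20 / log 3 ≈ 2.73) are chosen for illustration only, not taken from the paper.

```python
import numpy as np

def prefractal_mask(b=3, n_keep=20, levels=3, rng=None):
    """Randomized 3-D prefractal solid/pore mask.

    Each solid cell is split into b**3 subcells, of which n_keep stay
    solid; the rest become pore space. Mass fractal dimension is
    D = log(n_keep) / log(b).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    solid = np.ones((1, 1, 1), dtype=bool)
    for _ in range(levels):
        new = np.zeros(tuple(b * s for s in solid.shape), dtype=bool)
        for idx in zip(*np.nonzero(solid)):
            for s in rng.choice(b**3, n_keep, replace=False):
                i, j, k = np.unravel_index(s, (b, b, b))
                new[idx[0]*b + i, idx[1]*b + j, idx[2]*b + k] = True
        solid = new
    return solid

mask = prefractal_mask()
print(mask.shape, f"porosity = {1 - mask.mean():.3f}",
      f"D = {np.log(20) / np.log(3):.2f}")
```

Whether the pore phase percolates at a given scale, the question behind the critical filtration size, could then be tested on such masks, e.g. with scipy.ndimage.label.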
Buchan, Iris; Covvey, H. Dominic; Rakowski, Harry
1985-01-01
A program has been developed for left ventricular (LV) border tracking on ultrasound images. For each frame, forty border points at equally spaced angles around the LV center are found progressively over three passes. Pass 1 uses adaptive thresholding to find the most obvious border points. Pass 2 then uses an artificial intelligence technique of finding possible border path segments, associating a score with each and, from paths with superior scores, obtaining more of the border points. Pass 3 closes any remaining gaps by interpolation. The program tracks the LV border quite well in spite of dropout and interference from intracardiac structures, except during end-systole. Multi-level passes provide a very useful structure for border tracking, allowing increasingly slow but more sophisticated algorithms at higher levels for use when earlier passes recognise failure.
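The pass hierarchy lends itself to a compact sketch. The toy code below operates on radial intensity profiles and substitutes made-up confidence thresholds and a crude neighbour-consistency check for the paper's scored path segments; it illustrates the three-pass structure only, not the published algorithm:

```python
import numpy as np

def track_border(radial_profiles, strong=0.7, weak=0.4, tol=3):
    """Toy three-pass border search on an (n_angles x n_radii) array of
    radial edge responses. Thresholds are illustrative assumptions."""
    n_ang, _ = radial_profiles.shape
    radius = np.full(n_ang, np.nan)
    # Pass 1: keep only the most obvious edges (strong responses).
    for a in range(n_ang):
        peak = int(np.argmax(radial_profiles[a]))
        if radial_profiles[a, peak] >= strong:
            radius[a] = peak
    # Pass 2: accept weaker edges when consistent with found neighbours
    # (a crude stand-in for the scored path-segment search).
    found = ~np.isnan(radius)
    for a in range(n_ang):
        if found[a]:
            continue
        nb = [radius[(a + d) % n_ang] for d in (-1, 1) if found[(a + d) % n_ang]]
        peak = int(np.argmax(radial_profiles[a]))
        if nb and radial_profiles[a, peak] >= weak and abs(peak - np.mean(nb)) <= tol:
            radius[a] = peak
    # Pass 3: close remaining gaps by circular interpolation.
    idx = np.arange(n_ang)
    good = ~np.isnan(radius)
    radius[~good] = np.interp(idx[~good], idx[good], radius[good], period=n_ang)
    return radius
```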
ERIC Educational Resources Information Center
Rutherford, Teomara; Kibrick, Melissa; Burchinal, Margaret; Richland, Lindsey; Conley, AnneMarie; Osborne, Keara; Schneider, Stephanie; Duran, Lauren; Coulson, Andrew; Antenore, Fran; Daniels, Abby; Martinez, Michael E.
2010-01-01
This paper describes the background, methodology, preliminary findings, and anticipated future directions of a large-scale multi-year randomized field experiment addressing the efficacy of ST Math [Spatial-Temporal Math], a fully-developed math curriculum that uses interactive animated software. ST Math's unique approach minimizes the use of…
Samuel A. Cushman; Nicholas B. Elliot; David W. Macdonald; Andrew J. Loveridge
2015-01-01
Habitat loss and fragmentation are among the major drivers of population declines and extinction, particularly in large carnivores. Connectivity models provide practical tools for assessing fragmentation effects and developing mitigation or conservation responses. To be useful to conservation practitioners, connectivity models need to incorporate multiple scales and...
Multi-anode wire two dimensional proportional counter for detecting Iron-55 X-Ray Radiation
NASA Astrophysics Data System (ADS)
Weston, Michael William James
Radiation detectors in many applications use small sensor areas or large tubes which only collect one-dimensional information. Some applications require analyzing a large area and locating specific elements, such as contamination on the heat tiles of a space shuttle or features on historical artifacts. The process can be time consuming, and scanning a large area in a single pass is beneficial. The use of a two dimensional multi-wire proportional counter provides a large detection window presenting positional information in a single pass. This thesis describes the design and implementation of an experimental detector to evaluate a specific design intended for use as a handheld instrument. The main effort of this research was to custom build a detector for testing purposes. The aluminum chamber and all circuit boards were custom designed and built specifically for this application. Various software and programmable logic algorithms were designed to analyze the raw data in real time and to determine which data were useful and which could be discarded. The research presented here provides results useful for designing an improved second generation detector in the future. With the anode wire spacing chosen and the minimal collimation of the radiation source, detected events occurred all over the detection grid. The raw event data did not make it easy to determine the source position, and further data correlation was required. Many samples contained multiple wire hits, which were not useful because they falsely reported the source at many positions and at different energy levels. By narrowing the results down to only the largest signal pairs on different axes in each event, a much more accurate estimate of where the source lay above the grid was obtained. The basic principle and construction method were shown to work; however, the gas selection, geometry and anode wire construction proved to be poor. Optimizing the system for a specific application would require detailed Monte Carlo simulations. These simulation results, together with the details and techniques implemented in this thesis, would yield a final instrument of much higher accuracy.
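The largest-signal-pair reduction is simple to express in code. A minimal sketch, with a hypothetical hit format invented for illustration:

```python
def event_position(x_hits, y_hits):
    """Reduce one event to the largest-signal wire on each axis, the
    correlation step described above for rejecting spurious multi-wire
    hits. Hits are (wire_index, amplitude) pairs -- a hypothetical
    format chosen for illustration; returns the (x, y) wire pair."""
    x_wire, _ = max(x_hits, key=lambda hit: hit[1])
    y_wire, _ = max(y_hits, key=lambda hit: hit[1])
    return x_wire, y_wire

# An event with spurious low-amplitude hits on both axes:
print(event_position([(3, 0.2), (7, 1.4), (8, 0.3)],
                     [(12, 0.9), (13, 2.1)]))   # -> (7, 13)
```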
NASA Astrophysics Data System (ADS)
Tone, Tetsuya; Kohara, Kazuhiro
We have investigated ways to reduce congestion in a theme park with multi-agents. We constructed a theme park model called Digital Park 1.0 with twenty-three attractions similar in form to Tokyo Disney Sea. We consider not only congestion information (the number of visitors standing in line at each attraction) but also the advantage of a priority boarding pass, like the Fast Pass used at Tokyo Disney Sea. The congestion-information-usage ratio, which reflects the ratio of visitors who behave according to congestion information, was changed from 0% to 100% in both models, with and without the priority boarding pass. The "mean stay time of visitors" is a measure of satisfaction: the smaller the mean stay time, the larger the degree of satisfaction. Here, a short stay time means a short wait time. The results of each simulation are averaged over ten trials. The main results are as follows. (1) When the congestion-information-usage ratio increased, the mean stay time decreased. When 20% of visitors behaved according to congestion information, the mean stay time was reduced by 30%. (2) A priority boarding pass reduced congestion, and mean stay time was reduced by 15%. (3) When visitors used both congestion information and a priority boarding pass, mean stay time was further reduced. When the congestion-information-usage ratio was 20%, mean stay time was reduced by 35%. (4) When the congestion-information-usage ratio was over 50%, the congestion reduction effects reached saturation.
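A minimal queueing sketch reproduces the qualitative effect of the congestion-information-usage ratio; the one-shot service model and all parameters below are illustrative, not those of Digital Park 1.0:

```python
import numpy as np

def mean_wait(n_visitors=2000, n_attr=23, info_ratio=0.2, service=5, seed=0):
    """Toy one-shot queue model: a fraction info_ratio of visitors joins
    the currently shortest queue (i.e. uses congestion information), the
    rest choose an attraction at random; wait = queue position / service
    rate. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    queues = np.zeros(n_attr, dtype=int)
    waits = np.empty(n_visitors)
    informed = rng.random(n_visitors) < info_ratio
    for i in range(n_visitors):
        k = queues.argmin() if informed[i] else rng.integers(n_attr)
        waits[i] = queues[k] / service
        queues[k] += 1
    return waits.mean()

for r in (0.0, 0.2, 0.5, 1.0):
    print(f"info-usage ratio {r:.0%}: mean wait {mean_wait(info_ratio=r):.1f}")
```

Even this crude model shows diminishing returns as the informed fraction grows, echoing the saturation reported in result (4).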
Multi-Product Microalgae Biorefineries: From Concept Towards Reality.
't Lam, G P; Vermuë, M H; Eppink, M H M; Wijffels, R H; van den Berg, C
2018-02-01
Although microalgae are a promising biobased feedstock, industrial scale production is still far off. To enhance the economic viability of large-scale microalgae processes, all biomass components need to be valorized, requiring a multi-product biorefinery. However, this concept is still too expensive. Typically, downstream processing of industrial biotechnological bulk products accounts for 20-40% of the total production costs, while for a microalgae multi-product biorefinery the costs are substantially higher (50-60%). These costs are high due to the lack of appropriate and mild technologies to access the different product fractions such as proteins, carbohydrates, and lipids. To reduce the costs, simplified processes need to be developed for the main unit operations including harvesting, cell disruption, extraction, and possibly fractionation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Development of geopolitically relevant ranking criteria for geoengineering methods
NASA Astrophysics Data System (ADS)
Boyd, Philip W.
2016-11-01
A decade has passed since Paul Crutzen published his editorial essay on the potential for stratospheric geoengineering to cool the climate in the Anthropocene. He synthesized the effects of the 1991 Pinatubo eruption on the planet's radiative budget and used this large-scale event to broaden and deepen the debate on the challenges and opportunities of large-scale geoengineering. Pinatubo had pronounced effects, both in the short and longer term (months to years), on the ocean, land, and the atmosphere. This rich set of data on how a large-scale natural event influences many regional and global facets of the Earth System provides a comprehensive viewpoint to assess the wider ramifications of geoengineering. Here, I use the Pinatubo archives to develop a range of geopolitically relevant ranking criteria for a suite of different geoengineering approaches. The criteria focus on the spatial scales needed for geoengineering and whether large-scale dispersal is a necessary requirement for a technique to deliver significant cooling or carbon dioxide reductions. These categories in turn inform whether geoengineering approaches are amenable to participation (the "democracy of geoengineering") and whether they will lead to transboundary issues that could precipitate geopolitical conflicts. The criteria provide the requisite detail to demarcate different geoengineering approaches in the context of geopolitics. Hence, they offer another tool that can be used in the development of a more holistic approach to the debate on geoengineering.
Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame
NASA Astrophysics Data System (ADS)
Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank
2017-10-01
This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six-degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.
NASA Astrophysics Data System (ADS)
Cao, Chao
2009-03-01
Nano-scale physical phenomena and processes, especially those in electronics, have drawn great attention in the past decade. Experiments have shown that the electronic and transport properties of functionalized carbon nanotubes are sensitive to the adsorption of gas molecules such as H2, NO2, and NH3. Similar measurements have also been performed to study the adsorption of proteins on other semiconductor nano-wires. These experiments suggest that nano-scale systems can be useful for making future chemical and biological sensors. Aiming to understand the physical mechanisms underlying and governing property changes at the nano-scale, we start by investigating, via first-principles methods, the electronic structure of Pd-CNT before and after hydrogen adsorption, and continue with coherent electronic transport using non-equilibrium Green's function techniques combined with density functional theory. Once fully analyzed, our results can be used to interpret and understand experimental data, with a few difficult issues still to be addressed. Finally, we discuss a newly developed multi-scale computing architecture, OPAL, that coordinates the simultaneous execution of multiple codes. Inspired by the capabilities of this computing framework, we present a scenario of future modeling and simulation of multi-scale, multi-physics processes.
SChloro: directing Viridiplantae proteins to six chloroplastic sub-compartments.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Casadio, Rita
2017-02-01
Chloroplasts are organelles found in plants and involved in several important cell processes. Similarly to other compartments in the cell, chloroplasts have an internal structure comprising several sub-compartments, where different proteins are targeted to perform their functions. Given the relation between protein function and localization, the availability of effective computational tools to predict protein sub-organelle localizations is crucial for large-scale functional studies. In this paper we present SChloro, a novel machine-learning approach to predict protein sub-chloroplastic localization, based on targeting signal detection and membrane protein information. The proposed approach performs multi-label predictions discriminating six chloroplastic sub-compartments that include inner membrane, outer membrane, stroma, thylakoid lumen, plastoglobule and thylakoid membrane. In comparative benchmarks, the proposed method outperforms current state-of-the-art methods in both single- and multi-compartment predictions, with an overall multi-label accuracy of 74%. The results demonstrate the relevance of the approach that is eligible as a good candidate for integration into more general large-scale annotation pipelines of protein subcellular localization. The method is available as a web server at http://schloro.biocomp.unibo.it. Contact: gigi@biocomp.unibo.it.
Large area sub-micron chemical imaging of magnesium in sea urchin teeth.
Masic, Admir; Weaver, James C
2015-03-01
The heterogeneous and site-specific incorporation of inorganic ions can profoundly influence the local mechanical properties of damage tolerant biological composites. Using the sea urchin tooth as a research model, we describe a multi-technique approach to spatially map the distribution of magnesium in this complex multiphase system. Through the combined use of 16-bit backscattered scanning electron microscopy, multi-channel energy dispersive spectroscopy elemental mapping, and diffraction-limited confocal Raman spectroscopy, we demonstrate a new set of high throughput, multi-spectral, high resolution methods for the large scale characterization of mineralized biological materials. In addition, instrument hardware and data collection protocols can be modified such that several of these measurements can be performed on irregularly shaped samples with complex surface geometries and without the need for extensive sample preparation. Using these approaches, in conjunction with whole animal micro-computed tomography studies, we have been able to spatially resolve micron and sub-micron structural features across macroscopic length scales on entire urchin tooth cross-sections and correlate these complex morphological features with local variability in elemental composition. Copyright © 2015 Elsevier Inc. All rights reserved.
SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.
Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi
2018-01-01
Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor SNN activity. Our contribution provides a tool for prototyping SNNs faster than on CPU/GPU architectures but significantly cheaper than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Active phase locking of thirty fiber channels using multilevel phase dithering method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhimeng; Luo, Yongquan, E-mail: yongquan-l@sina.com; Liu, Cangli
2016-03-15
An active phase locking of a large-scale fiber array with thirty channels has been demonstrated experimentally. In the experiment, a first group of thirty phase controllers is used to compensate the phase noises between the elements, and a second group of thirty phase modulators is used to impose additional phase disturbances to mimic the phase noises in high power fiber amplifiers. A multi-level phase dithering algorithm using dual-level rectangular-wave phase modulation and time division multiplexing can achieve the same phase control as the single/multi-frequency dithering technique, but without a coherent demodulation circuit. The phase locking efficiency of the 30 fiber channels is about 98.68%, 97.82%, and 96.50% with no additional phase distortion, modulated phase distortion I (±1 rad), and phase distortion II (±2 rad), corresponding to phase errors of λ/54, λ/43, and λ/34 rms, respectively. The contrast of the coherently combined beam profile is about 89%. Experimental results reveal that the multi-level phase dithering technique has great potential in scaling to a large number of laser beams.
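The principle of dither-based phase control can be illustrated with a toy loop: each channel is probed with a ±δ rectangular-wave phase dither and corrected from the resulting power difference. This is a much-simplified, sequential stand-in for the multiplexed multi-level scheme reported above, with illustrative parameters throughout:

```python
import numpy as np

def combined_power(phases):
    """On-axis power of N unit-amplitude beams combined coherently."""
    return np.abs(np.exp(1j * phases).sum()) ** 2

def dither_lock(n=30, delta=0.1, gain=0.1, sweeps=200, seed=1):
    """Toy sequential two-level (+/- delta rad) dithering loop; not the
    paper's time-multiplexed implementation."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(-np.pi, np.pi, n)
    for _ in range(sweeps):
        for k in range(n):
            up, down = phases.copy(), phases.copy()
            up[k] += delta
            down[k] -= delta
            grad = (combined_power(up) - combined_power(down)) / (2 * delta)
            phases[k] += gain * grad / n          # normalised gradient step
        phases += rng.normal(0.0, 0.005, n)       # residual ambient noise
    return combined_power(phases) / n ** 2        # locking efficiency (max 1)

print(f"locking efficiency ~ {dither_lock():.3f}")
```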
Saura, Santiago; Rondinini, Carlo
2016-01-01
One of the biggest challenges in large-scale conservation is quantifying connectivity at broad geographic scales and for a large set of species. Because connectivity analyses can be computationally intensive, and the planning process quite complex when multiple taxa are involved, assessing connectivity at large spatial extents for many species often turns out to be intractable. As a result, assessments are often partial, focusing on only a few key species, or generic, considering a range of dispersal distances and a fixed set of areas to connect that are not directly linked to the actual spatial distribution or mobility of particular species. Using a graph theory framework, here we propose an approach to reduce computational effort and effectively consider large assemblages of species when obtaining multi-species connectivity priorities. We demonstrate the potential of the approach by identifying defragmentation priorities in the Italian road network focusing on medium and large terrestrial mammals. We show that by combining probabilistic species graphs prior to conducting the network analysis (i) it is possible to analyse connectivity once for all species simultaneously, obtaining conservation or restoration priorities that apply to the entire species assemblage; and (ii) those priorities are well aligned with the ones that would be obtained by aggregating the results of separate connectivity analyses for each of the individual species. This approach offers great opportunities to extend connectivity assessments to large assemblages of species and broad geographic scales. PMID:27768718
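The graph-combination step can be sketched with networkx; averaging per-species edge probabilities, as below, is an assumed illustrative weighting rather than the authors' exact formulation:

```python
import networkx as nx

def combine_species_graphs(species_graphs):
    """Average per-species edge (dispersal) probabilities into a single
    assemblage graph before any network analysis."""
    combined = nx.Graph()
    w = 1.0 / len(species_graphs)
    for g in species_graphs:
        for u, v, d in g.edges(data=True):
            p0 = combined.edges[u, v]["p"] if combined.has_edge(u, v) else 0.0
            combined.add_edge(u, v, p=p0 + w * d["p"])
    return combined

# Two toy species with different dispersal probabilities between patches:
g1 = nx.Graph([("A", "B", {"p": 0.8}), ("B", "C", {"p": 0.2})])
g2 = nx.Graph([("A", "B", {"p": 0.4}), ("A", "C", {"p": 0.6})])
gc = combine_species_graphs([g1, g2])
print(sorted(gc.edges(data=True)))
```

Because the combination happens before the connectivity analysis, any downstream metric (e.g. edge removal impact for road defragmentation) is computed once for the whole assemblage.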
Radar remote sensing for archaeology in Hangu Frontier Pass in Xin’an, China
NASA Astrophysics Data System (ADS)
Jiang, A. H.; Chen, F. L.; Tang, P. P.; Liu, G. L.; Liu, W. K.; Wang, H. C.; Lu, X.; Zhao, X. L.
2017-02-01
As a non-invasive tool, remote sensing can be applied to archaeology, taking advantage of large-scale coverage, timely acquisition and high spatio-temporal resolution. In archaeological research, optical approaches have been widely used. However, the capability of Synthetic Aperture Radar (SAR) for archaeological detection has not been fully explored so far. In this study, we chose the Hangu Frontier Pass of the Han Dynasty, located in Henan Province (part of the cluster of Silk Roads World Heritage sites), as the experimental site. An exploratory study to detect the historical remains was conducted. First, TanDEM-X SAR data were applied to generate a high resolution DEM of Hangu Frontier Pass, and the relationship between the pass and the derived ridge lines was analyzed. Second, temporal-averaged amplitude SAR images highlighted archaeological traces owing to the suppressed speckle noise. For instance, the processing of 20-scene PALSAR data (spanning from 2007 to 2011) enabled us to detect unknown archaeological features. Finally, the heritage remains detected by SAR data were verified by Ground Penetrating Radar (GPR) prospecting, implying the potential of space-to-ground radar remote sensing for archaeological applications.
NASA Technical Reports Server (NTRS)
Nummedal, D.
1978-01-01
There are two overflights planned for the field conference; one for the Cheney-Palouse tract of the eastern channeled scabland, the other covering the coulees and basins of the western region. The approximate flight lines are indicated on the accompanying LANDSAT images. The first flight will follow the eastern margin of this large scabland tract, passing a series of loess remnants, gravel bars and excavated rock basins. The western scablands overflight will provide a review of the structurally controlled complex pattern of large-scale erosion and deposition characteristic of the region between the upper Grand Coulee (Banks Lake) and the Pasco Basin.
NASA Astrophysics Data System (ADS)
Casadei, F.; Ruzzene, M.
2011-04-01
This work illustrates the possibility of extending the field of application of the Multi-Scale Finite Element Method (MsFEM) to structural mechanics problems that involve localized geometrical discontinuities like cracks or notches. The main idea is to construct finite elements with an arbitrary number of edge nodes that describe the actual geometry of the damage, with shape functions defined as local solutions of the differential operator of the specific problem, according to the MsFEM approach. The small-scale information is then brought to the large-scale model through the coupling of the global system matrices, which are assembled using classical finite element procedures. The efficiency of the method is demonstrated through selected numerical examples that constitute classical problems of great interest to the structural health monitoring community.
Cotter, C J; Gottwald, G A; Holm, D D
2017-09-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small, when pulled back to the mean flow.
Intensive agriculture erodes β-diversity at large scales.
Karp, Daniel S; Rominger, Andrew J; Zook, Jim; Ranganathan, Jai; Ehrlich, Paul R; Daily, Gretchen C
2012-09-01
Biodiversity is declining from unprecedented land conversions that replace diverse, low-intensity agriculture with vast expanses under homogeneous, intensive production. Despite documented losses of species richness, consequences for β-diversity, changes in community composition between sites, are largely unknown, especially in the tropics. Using a 10-year data set on Costa Rican birds, we find that low-intensity agriculture sustained β-diversity across large scales on a par with forest. In high-intensity agriculture, low local (α) diversity inflated β-diversity as a statistical artefact. Therefore, at small spatial scales, intensive agriculture appeared to retain β-diversity. Unlike in forest or low-intensity systems, however, high-intensity agriculture also homogenised vegetation structure over large distances, thereby decoupling the fundamental ecological pattern of bird communities changing with geographical distance. This ~40% decline in species turnover indicates a significant decline in β-diversity at large spatial scales. These findings point the way towards multi-functional agricultural systems that maintain agricultural productivity while simultaneously conserving biodiversity. © 2012 Blackwell Publishing Ltd/CNRS.
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data required for air quality modeling, including emission sources, air quality monitoring, meteorological data, and spatial location information, is brought into an integrated modeling environment. This allows the spatial variation in source distribution and meteorological conditions to be analyzed in greater detail. The developed modeling approach has been applied to predict the spatial concentration distributions of four air pollutants (CO, NO2, SO2 and PM2.5) for the State of California. The modeling results are compared with the monitoring data, and good agreement is achieved, demonstrating that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
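The point-source building block of such models is the classical Gaussian plume kernel. A minimal sketch, with dispersion parameters supplied by the caller since they depend on downwind distance and stability class:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian point-source concentration with ground
    reflection. q: emission rate (g/s); u: wind speed (m/s); y, z:
    crosswind and vertical coordinates (m); h: effective release
    height (m); sigma_y, sigma_z: dispersion parameters (m)."""
    lateral = np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (np.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                + np.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return q * lateral * vertical / (2.0 * np.pi * u * sigma_y * sigma_z)

# Ground-level centreline concentration with illustrative sigmas:
c = gaussian_plume(q=10.0, u=4.0, y=0.0, z=0.0, h=50.0,
                   sigma_y=80.0, sigma_z=40.0)
print(f"{c:.2e} g/m^3")
```

Summing this kernel over many sources and superposing it with box-model terms for area emissions is the essence of a multi-source, multi-box scheme.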
Topological defects in extended inflation
NASA Technical Reports Server (NTRS)
Copeland, Edmund J.; Kolb, Edward W.; Liddle, Andrew R.
1990-01-01
The production of topological defects, especially cosmic strings, in extended inflation models was considered. In extended inflation, the Universe passes through a first-order phase transition via bubble percolation, which naturally allows defects to form at the end of inflation. The correlation length, which determines the number density of the defects, is related to the mean size of bubbles when they collide. This mechanism allows a natural combination of inflation and large scale structure via cosmic strings.
Couriot, Ophélie; Hewison, A J Mark; Saïd, Sonia; Cagnacci, Francesca; Chamaillé-Jammes, Simon; Linnell, John D C; Mysterud, Atle; Peters, Wibke; Urbano, Ferdinando; Heurich, Marco; Kjellander, Petter; Nicoloso, Sandro; Berger, Anne; Sustr, Pavel; Kroeschel, Max; Soennichsen, Leif; Sandfort, Robin; Gehr, Benedikt; Morellet, Nicolas
2018-05-01
Much research on large herbivore movement has focused on the annual scale to distinguish between resident and migratory tactics, commonly assuming that individuals are sedentary at the within-season scale. However, apparently sedentary animals may occupy a number of sub-seasonal functional home ranges (sfHR), particularly when the environment is spatially heterogeneous and/or temporally unpredictable. The roe deer (Capreolus capreolus) experiences sharply contrasting environmental conditions due to its widespread distribution, but appears markedly sedentary over much of its range. Using GPS monitoring from 15 populations across Europe, we evaluated the propensity of this large herbivore to be truly sedentary at the seasonal scale in relation to variation in environmental conditions. We studied movement using net square displacement to identify the possible use of sfHR. We expected that roe deer should be less sedentary within seasons in heterogeneous and unpredictable environments, while migratory individuals should be seasonally more sedentary than residents. Our analyses revealed that, across the 15 populations, all individuals adopted a multi-range tactic, occupying between two and nine sfHR during a given season. In addition, we showed that (i) the number of sfHR was only marginally influenced by variation in resource distribution, but decreased with increasing sfHR size; and (ii) the distance between sfHR increased with increasing heterogeneity and predictability in resource distribution, as well as with increasing sfHR size. We suggest that the multi-range tactic is likely widespread among large herbivores, allowing animals to track spatio-temporal variation in resource distribution and, thereby, to cope with changes in their local environment.
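Net squared displacement itself is a one-line computation. A minimal sketch with synthetic coordinates, showing the plateau signature used to delimit sfHR:

```python
import numpy as np

def net_squared_displacement(x, y):
    """NSD of a trajectory relative to its first fix; x, y are projected
    coordinates in metres. Plateaus at distinct NSD levels indicate
    distinct sub-seasonal functional home ranges."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return (x - x[0]) ** 2 + (y - y[0]) ** 2

# A toy animal that shifts between two ranges about 1 km apart:
rng = np.random.default_rng(0)
x = np.r_[rng.normal(0, 50, 100), rng.normal(1000, 50, 100)]
y = rng.normal(0, 50, 200)
nsd = net_squared_displacement(x, y)
print(nsd[:100].mean(), nsd[100:].mean())  # low plateau, then ~1e6 m^2
```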
Farquharson, Kelly; Murphy, Kimberly A.
2016-01-01
Purpose: This paper describes methodological procedures involving execution of a large-scale, multi-site longitudinal study of language and reading comprehension in young children. Researchers in the Language and Reading Research Consortium (LARRC) developed and implemented these procedures to ensure data integrity across multiple sites, schools, and grades. Specifically, major features of our approach, as well as lessons learned, are summarized in 10 steps essential for successful completion of a large-scale longitudinal investigation in early grades. Method: Over 5 years, children in preschool through third grade were administered a battery of 35 higher- and lower-level language, listening, and reading comprehension measures (RCM). Data were collected from children, their teachers, and their parents/guardians at four sites across the United States. Substantial and rigorous effort was aimed toward maintaining consistency in processes and data management across sites for children, assessors, and staff. Conclusion: With appropriate planning, flexibility, and communication strategies in place, LARRC developed and executed a successful multi-site longitudinal research study that will meet its goal of investigating the contribution and role of language skills in the development of children's listening and reading comprehension. Through dissemination of our design strategies and lessons learned, research teams embarking on similar endeavors can be better equipped to anticipate the challenges. PMID:27064308
Wagner, Tyler; Jefferson T. Deweber,; Jason Detar,; Kristine, David; John A. Sweka,
2014-01-01
Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
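For reference, the constant-capture-probability removal estimator (the standard model behind multi-pass removal counts, not the hierarchical Bayesian model fitted in the study) can be written as a conditional maximum-likelihood fit; given p-hat, abundance follows as N = T / (1 - (1 - p)^k), with T the total catch over k passes:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def removal_estimate(catches):
    """Constant-p removal estimator via conditional MLE on the
    per-pass catch distribution. Returns (N_hat, p_hat)."""
    c = np.asarray(catches, float)
    k, T = len(c), c.sum()

    def nll(p):
        q = 1.0 - p
        cell = p * q ** np.arange(k) / (1.0 - q ** k)  # P(pass i | caught)
        return -(c * np.log(cell)).sum()

    p_hat = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded").x
    N_hat = T / (1.0 - (1.0 - p_hat) ** k)
    return N_hat, p_hat

print(removal_estimate([60, 25, 10]))  # roughly (103, 0.59)
```

The strong dependence of N_hat on the first-pass catch is consistent with the reported correlation between first-pass CPUE and three-pass density estimates.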
Theory and practice of corrosion related to ashes and deposits in a WtE boiler.
Verbinnen, Bram; De Greef, Johan; Van Caneghem, Jo
2018-03-01
Corrosion of heat-exchanging components is one of the main operational problems in Waste-to-Energy plants, limiting the electrical efficiency that can be reached. Corrosion is mainly related to the devolatilization and/or formation of chlorides, sulphates and mixtures thereof on the heat-exchanging surfaces. Theoretical considerations on this corrosion have already been put forward in the literature, but this paper for the first time combines theory with a large scale sampling campaign across several Waste-to-Energy plants. Based on the outcome of elemental and mineralogical analysis, the distribution of Cl and S in ashes sampled throughout the plant during normal operation is explained. Cl concentrations are high (15-20%) in the first empty pass and decrease in the second and third empty passes but increase again in the convective part, whereas the S concentrations show the inverse behavior, with the highest concentrations (30%) observed in the second and third empty passes. Sampling of deposits at specific places where corrosion possibly occurred gives a better insight into the mechanisms behind corrosion phenomena in real-scale WtE plants and provides practical evidence for some phenomena that were previously only assumed on the basis of theory or lab-scale experiments. More specifically, it confirms the role of oxygen content, the temperatures in the different stages of the boiler, the presence of polysulphates, Pb and Zn, and the concentrations of HCl and SO2 in the flue gas for different types of boiler corrosion. Copyright © 2017 Elsevier Ltd. All rights reserved.
Habitat structure mediates biodiversity effects on ecosystem properties
Godbold, J. A.; Bulling, M. T.; Solan, M.
2011-01-01
Much of what we know about the role of biodiversity in mediating ecosystem processes and function stems from manipulative experiments, which have largely been performed in isolated, homogeneous environments that do not incorporate habitat structure or allow natural community dynamics to develop. Here, we use a range of habitat configurations in a model marine benthic system to investigate the effects of species composition, resource heterogeneity and patch connectivity on ecosystem properties at both the patch (bioturbation intensity) and multi-patch (nutrient concentration) scale. We show that allowing fauna to move and preferentially select patches alters local species composition and density distributions, which has negative effects on ecosystem processes (bioturbation intensity) at the patch scale, but overall positive effects on ecosystem functioning (nutrient concentration) at the multi-patch scale. Our findings provide important evidence that community dynamics alter in response to localized resource heterogeneity and that these small-scale variations in habitat structure influence species contributions to ecosystem properties at larger scales. We conclude that habitat complexity forms an important buffer against disturbance and that contemporary estimates of the level of biodiversity required for maintaining future multi-functional systems may need to be revised. PMID:21227969
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.
2011-10-01
Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.
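A simplified stand-in for the staged, multi-scale context scheme, with generic per-pixel features and a logistic classifier in place of RLF and the paper's discriminative models:

```python
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LogisticRegression

def context_features(prob_map, scales=(1, 2, 4, 8)):
    """Scale-space of the previous stage's membrane-probability map:
    smoothing at growing scales summarises increasingly large context."""
    return np.stack([ndimage.gaussian_filter(prob_map, s) for s in scales], -1)

def train_series(base_feats, labels, n_stages=3):
    """Series of classifiers, each seeing base features plus multi-scale
    context from its predecessor. labels must be binary (0/1)."""
    h, w, _ = base_feats.shape
    prob = np.full((h, w), 0.5)
    stages = []
    for _ in range(n_stages):
        X = np.concatenate([base_feats, context_features(prob)], -1).reshape(h * w, -1)
        clf = LogisticRegression(max_iter=500).fit(X, labels.ravel())
        prob = clf.predict_proba(X)[:, 1].reshape(h, w)
        stages.append(clf)
    return stages, prob

rng = np.random.default_rng(0)
feats = rng.random((64, 64, 3))
labels = (feats[..., 0] > 0.5).astype(int)
stages, prob = train_series(feats, labels)
print(prob.shape)
```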
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni
2017-04-01
In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large scale distributed testbed across EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) which are made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final/intermediate products, workflows, sessions, etc.) since everything is managed on the server-side; (v) it complements, extends and interoperates with the ESGF stack; (vi) it provides a "tool" for scientists to run multi-model experiments; and finally (vii) it can drastically reduce the time-to-solution for these experiments from weeks to hours. At the time the contribution is being written, the proposed testbed represents the first concrete implementation of a distributed multi-model experiment in the ESGF/CMIP context joining server-side and parallel processing, end-to-end workflow management and cloud computing. As opposed to the current scenario based on search & discovery, data download, and client-based data analysis, the INDIGO-DataCloud architectural solution described in this contribution addresses the scientific computing & analytics requirements by providing a paradigm shift based on server-side and high performance big data frameworks jointly with two-level workflow management systems realized at the PaaS level via a cloud infrastructure.
NASA Astrophysics Data System (ADS)
El-Garaihy, W. H.; Fouad, D. M.; Salem, H. G.
2018-07-01
Multi-channel Spiral Twist Extrusion (MCSTE) is introduced as a novel severe plastic deformation (SPD) technique for producing superior mechanical properties associated with ultrafine grained structure in bulk metals and alloys. The MCSTE design is based on inserting a uniform square cross-sectioned billet within stacked disks that guarantee shear strain accumulation. In an attempt to validate the technique and evaluate its plastic deformation characteristics, a series of experiments were conducted. The influence of the number of MCSTE passes on the mechanical properties and microstructural evolution of AA1100 alloy were investigated. Four passes of MCSTE, at a relatively low twisting angle of 30 deg, resulted in increasing the strength and hardness coupled with retention of ductility. Metallographic observations indicated a significant grain size reduction of 72 pct after 4 passes of MCSTE compared with the as-received (AR) condition. Moreover, the structural uniformity increased with the number of passes, which was reflected in the hardness distribution from the peripheries to the center of the extrudates. The current study showed that the MCSTE technique could be an effective, adaptable SPD die design with a promising potential for industrial applications compared to its counterparts.
NASA Astrophysics Data System (ADS)
Brookfield, Michael E.
2008-10-01
The late Permian to late Triassic sediments of the Solway Basin consist of an originally flat-lying, laterally persistent and consistent succession of mature, dominantly fine-grained red clastics laid down in part of a very large intracontinental basin. The complete absence of body or trace fossils or palaeosols indicates a very arid (hyperarid) depositional environment for most of the sediments. At the base of the succession, thin regolith breccias and sandstones rest unconformably on basement and early Permian rift clastics. Overlying gypsiferous red silty mudstones, very fine sandstones and thick gypsum were deposited in either a playa lake or in a hypersaline estuary, and their margins. These pass upwards into thick-bedded, multi-storied, fine- to very fine-grained red quartzo-felspathic and sublithic arenites in which even medium sand is rare despite channels with clay pebbles up to 30 cm in diameter. Above, thick trough cross-bedded and parallel laminated fine-grained aeolian sandstones (deposited in extensive barchanoid dune complexes) pass up into very thick, multicoloured mudstones, and gypsum deposited in marginal marine or lacustrine sabkha environments. The latter pass up into marine Lower Jurassic shales and limestones. Thirteen non-marine clastic lithofacies are arranged into five main lithofacies associations whose facies architecture is reconstructed where possible by analysis of large exposures. The five associations can be compared with the desert pavement, arid ephemeral stream, sabkha, saline lake and aeolian sand dune environments of the arid to hyperarid areas of existing intracontinental basins such as Lake Eyre and Lake Chad. The accommodation space in such basins is controlled by gradual tectonic subsidence moderated by large fluctuations in shallow lake extent (caused by climatic change and local variation) and this promotes a large-scale layer-cake stratigraphy as exemplified in the Solway basin. Here, the dominant fine-grained mature sandstones above the local basal reg breccias suggest water-reworking of wind-transported sediment, as in the northern part of the Lake Chad basin. Growth faulting occurs in places in the Solway basin, caused by underlying evaporite movement, but these faults did not significantly affect pre-late Triassic sedimentation and did not expose pre-Permian units above the basal breccias. There is no evidence of post-early Permian rifting anywhere during deposition of the late Permian to middle Triassic British succession although the succession is often interpreted with a rift-basin model. The arid to hyperarid palaeoclimate changed little during deposition of the Solway basin succession, in contrast to Lakes Eyre and Chad; this is attributed to tectonic and palaeolatitude stability. Unlike the later Mesozoic-Cenozoic, only limited plate movements took place during the Triassic in western Europe, palaeolatitude changed little, and the Solway Basin remained in the northern latitudinal desert belt from early to mid-Triassic times. However, the influence of the early Triassic impoverished biota on environmental interpretations needs further study.
NASA Astrophysics Data System (ADS)
Jiang, Peng; Gautam, Mahesh R.; Zhu, Jianting; Yu, Zhongbo
2013-02-01
Multi-scale temporal variability of precipitation has an established relationship with floods and droughts. In this paper, we present diagnostics on the ability of 16 General Circulation Models (GCMs) from the Bias Corrected and Downscaled (BCSD) World Climate Research Program's (WCRP's) Coupled Model Inter-comparison Project Phase 3 (CMIP3) projections and 10 Regional Climate Models (RCMs) that participated in the North American Regional Climate Change Assessment Program (NARCCAP) to represent the multi-scale temporal variability determined from observed station data. Four regions (Los Angeles, Las Vegas, Tucson, and Cimarron) in the Southwest United States are selected, as they represent four different precipitation regions identified by a clustering method. We investigate how storm properties and seasonal, inter-annual, and decadal precipitation variabilities differ between GCMs/RCMs and observed records in these regions. We find that current GCMs/RCMs tend to simulate longer storm durations and lower storm intensities compared to those from observed records. Most GCMs/RCMs fail to produce the high-intensity summer storms caused by local convective heat transport associated with the summer monsoon. Both inter-annual and decadal bands are present in the GCM/RCM-simulated precipitation time series; however, these do not line up with the patterns of large-scale ocean oscillations such as the El Niño/La Niña Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO). Our results show that the studied GCMs/RCMs can capture the long-term monthly mean, as the examined data are bias-corrected and downscaled, but fail to simulate the multi-scale precipitation variability, including flood-generating extreme events, which suggests their inadequacy for studies on floods and droughts that are strongly associated with multi-scale temporal precipitation variability.
NASA Technical Reports Server (NTRS)
Wood, Paul; Gramling, Cheryl; Stone, John; Smith, Patrick; Reiter, Jenifer
2016-01-01
This paper discusses commissioning of NASA's Magnetospheric Multiscale (MMS) Mission. The mission includes four identical spacecraft with a large, complex set of instrumentation. The planning for and execution of commissioning for this mission are described. The paper concludes by discussing lessons learned.
Generation of large scale GHZ states with the interactions of photons and quantum-dot spins
NASA Astrophysics Data System (ADS)
Miao, Chun; Fang, Shu-Dong; Dong, Ping; Yang, Ming; Cao, Zhuo-Liang
2018-03-01
We present a deterministic scheme for generating large scale GHZ states in a cavity-quantum dot system. A singly charged quantum dot is embedded in a double-sided optical microcavity with partially reflective top and bottom mirrors. The GHZ-type Bell spin state can be created, and two n-spin GHZ states can be perfectly fused into a 2n-spin GHZ state with the help of n ancilla single-photon pulses. The implementation of the current scheme depends only on photon detection and requires no multi-qubit gates or multi-qubit measurements. Discussion of the effects of cavity loss, side leakage and exciton-cavity coupling strength on the fidelity of the generated states shows that the fidelity can remain high enough by controlling the system parameters. The current scheme is therefore simple and feasible in experiment.
Advances in multi-scale modeling of solidification and casting processes
NASA Astrophysics Data System (ADS)
Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang
2011-04-01
The development of the aviation, energy and automobile industries requires advanced integrated product/process R&D systems which can optimize both the product and the process design. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make product and process development efficient, economic, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications, are presented in the paper. Dendrite morphology of magnesium and aluminum alloys during solidification, simulated using phase field and cellular automaton methods, mathematical models of segregation in large steel ingots, and microstructure models of unidirectionally solidified turbine blade castings are studied and discussed. In addition, some engineering case studies, including microstructure simulation of aluminum castings for the automobile industry, segregation in large steel ingots for the energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for the aviation industry, are discussed.
Fault-tolerant Control of a Cyber-physical System
NASA Astrophysics Data System (ADS)
Roxana, Rusu-Both; Eva-Henrietta, Dulf
2017-10-01
Cyber-physical systems represent a new emerging field in automatic control. The fault-handling system is a key component, because modern, large scale processes must meet high standards of performance, reliability and safety. Fault propagation in large scale chemical processes can lead to loss of production, energy and raw materials, and even to environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.
Large scale modulation of high frequency acoustic waves in periodic porous media.
Boutin, Claude; Rallu, Antoine; Hans, Stephane
2012-12-01
This paper deals with the description of the modulation at large scale of high frequency acoustic waves in gas saturated periodic porous media. High frequencies mean local dynamics at the pore scale and therefore absence of scale separation in the usual sense of homogenization. However, although the pressure is spatially varying in the pores (according to periodic eigenmodes), the mode amplitude can present a large scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the amplitude modulation at large scale of high frequency waves. The significant difference between modulations of simple and multiple mode are evidenced and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.
Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua
2017-01-01
The capabilities of (I) learning transferable knowledge across domains and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many existing transfer learning techniques are supervised approaches, among which deep learning has demonstrated the power of learning domain-transferable knowledge with large-scale networks trained on massive amounts of labeled data. However, in many biomedical tasks both the data and the corresponding labels can be very limited, so an unsupervised transfer learning capability is urgently needed. In this paper, we propose a novel multi-scale convolutional sparse coding (MSCSC) method that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of the learned patterns, and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148
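As an illustration of the convolutional-sparse-coding machinery underlying MSCSC, the sketch below runs ISTA inference for multi-scale feature maps against fixed filter banks. It is a minimal stand-in, not the authors' implementation: the joint dictionary learning with scale-specificity constraints described in the abstract is omitted, and filter sizes and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def ista_csc(image, filters, lam=0.1, step=1e-3, n_iter=100):
    """Sparse feature maps z_k such that image ~ sum_k d_k * z_k (ISTA)."""
    maps = [np.zeros_like(image) for _ in filters]
    for _ in range(n_iter):
        recon = sum(fftconvolve(z, d, mode="same") for z, d in zip(maps, filters))
        residual = image - recon
        for k, d in enumerate(filters):
            # Gradient step: correlate residual with the filter (flipped convolution).
            grad = fftconvolve(residual, d[::-1, ::-1], mode="same")
            z = maps[k] + step * grad
            # Soft-thresholding enforces sparsity of each feature map.
            maps[k] = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return maps

# Multi-scale filter banks: one random filter per spatial scale (illustrative).
rng = np.random.default_rng(0)
filters = [rng.standard_normal((s, s)) / s for s in (5, 11, 21)]
image = rng.standard_normal((64, 64))
feature_maps = ista_csc(image, filters)
```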
NASA Astrophysics Data System (ADS)
Hamann, S.; Börner, K.; Burlacov, I.; Spies, H.-J.; Strämke, M.; Strämke, S.; Röpcke, J.
2015-12-01
A laboratory scale plasma nitriding monitoring reactor (PLANIMOR) has been designed to study the basics of active screen plasma nitriding (ASPN) processes. PLANIMOR consists of a tube reactor vessel, made of borosilicate glass, enabling optical emission spectroscopy (OES) and infrared absorption spectroscopy. The linear setup of the electrode system of the reactor has the advantages to apply the diagnostic approaches on each part of the plasma process, separately. Furthermore, possible changes of the electrical field and of the heat generation, as they could appear in down-scaled cylindrical ASPN reactors, are avoided. PLANIMOR has been used for the nitriding of steel samples, achieving similar results as in an industrial scale ASPN reactor. A compact spectrometer using an external cavity quantum cascade laser combined with an optical multi-pass cell has been applied for the detection of molecular reaction products. This allowed the determination of the concentrations of four stable molecular species (CH4, C2H2, HCN, and NH3). With the help of OES, the rotational temperature of the screen plasma could be determined.
a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level-set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the "hole" and "gap" problems in coastline extraction are solved through small-scale shrinkage, low-pass filtering and area-based sorting of regions; 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, and lie close to the coastline; 3) the computational complexity of the continuous transition between different scales is successfully reduced by SDF and level-set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens the time needed to extract the coastline, while at the same time removing non-coastline bodies and improving the identification precision of the main coastline, thus automating the process of coastline segmentation.
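A minimal sketch of the initialization step described in point 2), assuming a SAR amplitude image in which land is brighter than sea; the exponential image-sequence generation and the C-V evolution itself are not reproduced here.

```python
import numpy as np
from skimage.filters import threshold_otsu
from scipy.ndimage import distance_transform_edt

def init_sdf(sar_amplitude: np.ndarray) -> np.ndarray:
    """Otsu land/sea mask, then a signed distance function for the level set."""
    land = sar_amplitude > threshold_otsu(sar_amplitude)  # land assumed brighter
    # Signed distance: positive outside the land mask, negative inside it.
    return distance_transform_edt(~land) - distance_transform_edt(land)
```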
Forest height Mapping using the fusion of Lidar and MULTI-ANGLE spectral data
NASA Astrophysics Data System (ADS)
Pang, Y.; Li, Z.
2016-12-01
Characterizing forest ecosystems over large areas is highly complex. Light Detection and Ranging (Lidar) approaches have demonstrated a high capacity to accurately estimate forest structural parameters. A number of satellite mission concepts have been proposed that fuse Lidar with other optical imagery, allowing multi-angle spectral observations to be captured using the Bidirectional Reflectance Distribution Function (BRDF) characteristics of forests. China is developing the concept of a Chinese Terrestrial Carbon Mapping Satellite, with a multi-beam waveform Lidar as the main sensor and a multi-angle imaging system as the spatial mapping sensor. In this study, we explore the potential of fusing Lidar and multi-angle spectral data to estimate forest height across different scales. We acquired intensive airborne Lidar and multi-angle hyperspectral data at the Genhe Forest Ecological Research Station, Northeast China, and then extended the spatial scale with several long transect flights to cover more forest structures. Forest height derived from the airborne Lidar data was used as reference data, and the multi-angle hyperspectral data were used as model inputs. Our results demonstrate that the multi-angle spectral data can be used to estimate forest height with an RMSE of 1.1 m and an R2 of approximately 0.8.
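A hedged stand-in for the regression workflow implied above (not the authors' model): lidar-derived heights serve as the reference target, multi-angle spectral features as inputs, and RMSE/R2 as the reported metrics. The file names and the choice of a random forest regressor are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X = np.load("multiangle_features.npy")   # hypothetical per-pixel BRDF features
y = np.load("lidar_heights.npy")         # hypothetical lidar reference heights
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5, "R2:", r2_score(y_te, pred))
```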
CMB hemispherical asymmetry from non-linear isocurvature perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assadullahi, Hooshyar; Wands, David; Firouzjahi, Hassan
2015-04-01
We investigate whether non-adiabatic perturbations from inflation could produce an asymmetric distribution of temperature anisotropies on large angular scales in the cosmic microwave background (CMB). We use a generalised non-linear δN formalism to calculate the non-Gaussianity of the primordial density and isocurvature perturbations due to the presence of non-adiabatic, but approximately scale-invariant, field fluctuations during multi-field inflation. This local-type non-Gaussianity leads to a correlation between very long wavelength inhomogeneities, larger than our observable horizon, and smaller-scale fluctuations in the radiation and matter density. Matter isocurvature perturbations contribute primarily to low CMB multipoles and hence can lead to a hemispherical asymmetry on large angular scales, with negligible asymmetry on smaller scales. In curvaton models, where the matter isocurvature perturbation is partly correlated with the primordial density perturbation, we are unable to obtain a significant asymmetry on large angular scales while respecting current observational constraints on the observed quadrupole. However, in the axion model, where the matter isocurvature and primordial density perturbations are uncorrelated, we find it may be possible to obtain a significant asymmetry due to isocurvature modes on large angular scales. Such an isocurvature origin for the hemispherical asymmetry would naturally give rise to a distinctive asymmetry in the CMB polarisation on large scales.
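The local-type non-Gaussianity invoked here couples modes of very different wavelengths. As background, the standard local ansatz for the curvature perturbation (a textbook form, not quoted from the paper) is:

```latex
\zeta(\mathbf{x}) = \zeta_g(\mathbf{x})
  + \tfrac{3}{5}\, f_{\mathrm{NL}}\left[\zeta_g^{2}(\mathbf{x}) - \langle \zeta_g^{2} \rangle\right],
```

so that a super-horizon long-wavelength mode of the Gaussian field modulates the amplitude of small-scale power across the sky.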
USDA-ARS?s Scientific Manuscript database
Recent weather patterns have left California’s agricultural areas in severe drought. Given the reduced water availability in much of California it is critical to be able to measure water use and crop condition over large areas, but also in fine detail at scales of individual fields to support water...
Felo, Michael; Christensen, Brandon; Higgins, John
2013-01-01
The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼ 2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.
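The reported cost crossovers can be restated as a toy decision helper; the volume thresholds come from the abstract, while the function itself is merely illustrative.

```python
def select_clarification(bioreactor_volume_l: float) -> str:
    """Pick a primary clarification technology from the abstract's crossovers."""
    if bioreactor_volume_l < 1_000:
        return "multi-stage depth filtration"  # cost savings at small scale
    if bioreactor_volume_l > 5_000:
        return "disc stack centrifugation + depth filtration"  # savings at large scale
    return "either (weigh facility utilization, capital, timelines, robustness)"

print(select_clarification(2_000))
```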
Where the Wild Things Are: Observational Constraints on Black Holes' Growth
NASA Astrophysics Data System (ADS)
Merloni, Andrea
2009-12-01
The physical and evolutionary relation between growing supermassive black holes (AGN) and their host galaxies is currently the subject of intense research activity. Nevertheless, a deep theoretical understanding of this relation is hampered by the unique multi-scale nature of the combined AGN-galaxy system, which defies any purely numerical or semi-analytic approach. Various physical processes active on different physical scales have signatures in different parts of the electromagnetic spectrum; thus, observations at different wavelengths and theoretical ideas can all contribute towards a "large dynamic range" view of the AGN phenomenon, capable of conceptually "resolving" the many scales involved. As an example, I focus in this review on two major recent observational results on the cosmic evolution of supermassive black holes, highlighting the novel contribution of the COSMOS survey. First, I discuss the evidence for so-called "downsizing" in the AGN population as derived from large X-ray surveys. I then present new constraints on the evolution of the black hole-galaxy scaling relation at 1
Prominence and tornado dynamics observed with IRIS and THEMIS
NASA Astrophysics Data System (ADS)
Schmieder, Brigitte; Levens, Peter; Labrosse, Nicolas; Mein, Pierre; Lopez Ariste, Arturo; Zapior, Maciek
2017-08-01
Several prominences were observed during campaigns in September 2013 and July 2014 with the IRIS spectrometer and the vector magnetograph THEMIS (Tenerife). SDO/AIA and IRIS provided images and spectra of prominences and tornadoes corresponding to different physical conditions of the transition region between the cool plasma and the corona. The vector magnetic field was derived from THEMIS observations using the He D3 depolarisation due to the magnetic field. The inversion code (PCA) takes into account the Hanle and Zeeman effects and allows us to compute the strength and inclination of the magnetic field, which is shown to be mostly horizontal in prominences as well as in tornadoes. Movies from SDO/AIA at 304 Å and Hinode/SOT in Ca II show the highly dynamic nature of the fine structures. From spectra in the Mg II and Si IV lines provided by IRIS and Hα observed by the Multi-channel Subtractive Double Pass (MSDP) spectrograph at the Meudon Solar Tower, we derived the Doppler shifts of the fine structures and reconstructed the 3D structure of tornadoes. We conclude that the apparent rotation of AIA tornadoes is due to large-scale quasi-periodic oscillations of the plasma along more or less horizontal magnetic structures.
Novel Lattice Solutions for the LHeC
Bogacz, Alex; Bruning, Oliver; Cruz-Alaniz, E.; ...
2017-08-01
The unprecedentedly high luminosity of 10^34 cm^-2 s^-1 promised by the LHeC accelerator complex poses several beam dynamics and lattice design challenges. As part of the accelerator design process, the exploration of innovative beam dynamics solutions and their lattice implementations is the key to mitigating performance limitations due to fundamental beam phenomena such as synchrotron radiation and collective instabilities. This article presents a beam-dynamics-driven approach to accelerator design which, in particular, addresses emittance dilution due to quantum excitations and beam breakup instability in a large-scale, multi-pass Energy Recovery Linac (ERL). The use of ERL accelerator technology to provide improved beam quality and higher brightness continues to be the subject of active community interest and active accelerator development for future Electron Ion Colliders (EIC). Here, we employ the current state of thought for ERLs aiming at the energy-frontier EIC, following conceptual design options recently identified for the LHeC. The main thrust of these studies was to enhance the collider performance while limiting overall power consumption, by exploring the interplay between emittance preservation and the efficiencies promised by ERL technology. Combined with a unique design of the Interaction Region (IR) optics, this suggests that a luminosity of 10^34 cm^-2 s^-1 is indeed feasible.
NASA Astrophysics Data System (ADS)
Duro, Javier; Iglesias, Rubén; Blanco, Pablo; Albiol, David; Koudogbo, Fifamè
2015-04-01
The Wide Area Product (WAP) is a new interferometric product developed to provide measurements over large regions. Persistent Scatterer Interferometry (PSI) has largely proven its robust and precise performance in measuring ground surface deformation in different application domains. In this context, however, accurate displacement estimation over large-scale areas (more than 10,000 km2) characterized by low-magnitude motion gradients (3-5 mm/year), such as those induced by inter-seismic or Earth tidal effects, still remains an open issue. The main reason is the inclusion of low-quality and more distant persistent scatterers in order to bridge low-quality areas, such as water bodies, crop areas and forested regions. This leads to spatial propagation of errors in the PSI integration process, poor estimation and compensation of the Atmospheric Phase Screen (APS), and difficulty in handling the residual long-wavelength phase patterns originated by orbit state vector inaccuracies. Research work on generating a Wide Area Product of ground motion in preparation for the Sentinel-1 mission has been conducted in the last stages of Terrafirma as well as in other research programs. These developments propose technological updates for maintaining precision in large-scale PSI analysis. Some of the updates are based on the use of external information, such as meteorological models, and the employment of GNSS data for an improved calibration of large-scale measurements. Usually, covering wide regions implies processing over areas whose land use is chiefly devoted to livestock, horticulture, urbanization and forest. This represents an important challenge for providing continuous InSAR measurements and requires advanced phase filtering strategies to enhance the coherence. The advanced PSI processing has been performed over several areas, allowing a large-scale analysis of tectonic patterns and of motion caused by multiple hazards such as volcanic activity, landslides and floods. Several examples of the application of the PSI WAP to wide regions for measuring ground displacements related to different types of hazards, natural and human-induced, will be presented. The InSAR processing approach for measuring accurate movements at local and large scales to enable multi-hazard interpretation studies will also be discussed. The test areas show deformations related to active fault systems, landslides on mountain slopes, ground compaction over underlying aquifers, and movements in volcanic areas.
HSTDEK: Developing a methodology for construction of large-scale, multi-use knowledge bases
NASA Technical Reports Server (NTRS)
Freeman, Michael S.
1987-01-01
The primary research objective of the Hubble Space Telescope Design/Engineering Knowledgebase (HSTDEK) is to develop a methodology for constructing and maintaining large-scale knowledge bases which can be used to support multiple applications. To ensure the validity of its results, this research is being pursued in the context of a real-world system, the Hubble Space Telescope. The HSTDEK objectives are described in detail. The history and motivation of the project are briefly described. The technical challenges faced by the project are outlined.
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1-, 2- or 3-dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics-based space weather modeling and even forecasting.
NASA Astrophysics Data System (ADS)
Zigone, Dimitri; Rivet, Diane; Radiguet, Mathilde; Campillo, Michel; Voisin, Christophe; Cotte, Nathalie; Walpersdorf, Andrea; Shapiro, Nikolai M.; Cougoulat, Glenn; Roux, Philippe; Kostoglodov, Vladimir; Husker, Allen; Payero, Juan S.
2012-09-01
We investigate the triggering of seismic tremor and a slow slip event in Guerrero (Mexico) by the February 27, 2010 Maule earthquake (Mw 8.8). Triggered tremors start with the arrival of the S wave generated by the Maule earthquake and keep occurring during the passage of the ScS, SS, Love and Rayleigh waves. The Rayleigh-wave dispersion curve is imprinted on the high-frequency energy envelope of the triggered tremor, indicating a strong modulation of the tremor source by the passing surface wave. This correlation and modulation by the passing waves is progressively lost over a few hours. The tremor activity continues during the weeks and months after the earthquake. GPS time series suggest that the second sub-event of the 2009-2010 SSE in Guerrero was actually triggered by the Maule earthquake: the southward displacement of the GPS stations starts coincidentally with the earthquake and tremors. The long duration of the tremors indicates a continuing deformation process at depth, which we propose to be the second sub-event of the 2009-2010 SSE. We show a quasi-systematic correlation between the surface displacement rate measured by GPS and tremor activity, suggesting that the NVT are controlled by variations in the slip history of the SSE. Two types of tremor thus emerge: (1) those directly triggered by the passing waves, and (2) those triggered by the stress variations associated with slow slip. This indicates the prominent role of aseismic creep in the response of the Mexican subduction zone to a large teleseismic earthquake, possibly leading to large-scale stress redistribution.
Large-area photogrammetry based testing of wind turbine blades
NASA Astrophysics Data System (ADS)
Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul
2017-03-01
An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) the elimination of the need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure the entire surface of a utility-scale blade to extract distributed strain, deflection, and modal parameters. This paper addresses some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system in measuring full-field distributed strain and displacement over a large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. Blade certification and testing is typically performed using the International Electro-Technical Commission standard IEC 61400-23. For static tests, the blade is pulled in either the flap-wise or the edge-wise direction to measure deflection or distributed strain at a few limited locations on a large blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) to measure 3D displacement and extract structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.
Oxidation kinetics of Haynes 230 alloy in air at temperatures between 650 and 850 °C
NASA Astrophysics Data System (ADS)
Jian, Li; Jian, Pu; Bing, Hua; Xie, Guangyuan
Haynes 230 alloy was oxidized in air at temperatures between 650 and 850 °C. Thermogravimetry was used to measure the oxidation kinetics. The oxides formed were identified by the thin-film (small-angle) X-ray diffraction technique. Cr2O3 and MnCr2O4 were found in the oxide scale. Multi-stage oxidation kinetics was observed, and each stage follows Wagner's parabolic law. The first, slow oxidation stage corresponded to the growth of a Cr2O3 layer, controlled by the diffusion of Cr ions through the dense Cr2O3 scale. The faster second stage was a result of rapid diffusion of Mn ions through the established Cr2O3 scale to form MnCr2O4 on top of the Cr2O3 layer. A duplex oxide scale is expected. The third stage, with a rate close to that of the first stage, appeared only for oxidation in the intermediate temperature range, i.e., 750-800 °C, which can be explained by the interruption of the Mn flux that forms MnCr2O4.
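As background, Wagner's parabolic rate law, which each oxidation stage is said to follow, can be written per stage i in terms of the mass gain per unit area (standard form; the rate constants k_p,i and offsets C_i are not given in the abstract):

```latex
\left(\frac{\Delta m}{A}\right)^{2} = k_{p,i}\, t + C_i
```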
NASA Astrophysics Data System (ADS)
Rubin, Kate H. R.; Diamond-Stanic, Aleksandar M.; Coil, Alison L.; Crighton, Neil H. M.; Moustakas, John
2018-01-01
The spectroscopy of background QSO sightlines passing close to foreground galaxies is a potent technique for studying the circumgalactic medium (CGM). However, QSOs are effectively point sources, limiting their potential to constrain the size of circumgalactic gaseous structures. Here we present the first large Keck/Low-Resolution Imaging Spectrometer (LRIS) and Very Large Telescope (VLT)/Focal Reducer/Low-dispersion Spectrograph 2 (FORS2) spectroscopic survey of bright (B_AB < 22.3) background galaxies whose lines of sight probe Mg II λλ2796,2803 absorption from the CGM around close projected foreground galaxies at transverse distances 10 kpc < R⊥ < 150 kpc. Our sample of 72 projected pairs, drawn from the PRIsm MUlti-object Survey, includes 48 background galaxies that do not host bright active galactic nuclei, and both star-forming and quiescent foreground galaxies with stellar masses of 9.0 < log(M*/M☉) < 11.2 at redshifts of 0.35 < z_f/g < 0.8. We detect Mg II absorption associated with these foreground galaxies with equivalent widths of 0.25 Å < W2796 < 2.6 Å at >2σ significance in 20 individual background sightlines passing within R⊥ < 50 kpc, and place 2σ upper limits on W2796 of ≲0.5 Å in an additional 11 close sightlines. Within R⊥ < 50 kpc, W2796 is anticorrelated with R⊥, consistent with analyses of Mg II absorption detected along background QSO sightlines. Subsamples of these foreground hosts divided at log(M*/M☉) = 9.9 exhibit statistically inconsistent W2796 distributions at 30 kpc < R⊥ < 50 kpc, with the higher-M* galaxies yielding a median W2796 larger by 0.9 Å. Finally, we demonstrate that foreground galaxies with similar stellar masses exhibit the same median W2796 at a given R⊥ to within <0.2 Å toward both background galaxies and QSO sightlines drawn from the literature. Analysis of these data sets constraining the spatial coherence scale of circumgalactic Mg II absorption is presented in a companion paper.
Laser removal of graffiti from Pink Morelia Quarry
NASA Astrophysics Data System (ADS)
Penide, J.; Quintero, F.; Riveiro, A.; Sánchez-Castillo, A.; Comesaña, R.; del Val, J.; Lusquiños, F.; Pou, J.
2013-11-01
Morelia is an important city in Mexico. Its historical center reflects much of the country's culture and history, especially of the colonial period; in fact, it was designated a World Heritage Site by UNESCO. Sadly, Morelia has a serious problem with graffiti, and its historical center is the worst affected, since its delicate charm is being damaged. Hitherto, the conventional methods employed to remove graffiti from Pink Morelia Quarry (the most widely used building stone in Morelia) are quite aggressive to the appearance of the monuments, so they are not a good solution. In this work, we performed a study of graffiti removal from Pink Morelia Quarry by high-power diode laser. We carried out an extensive experimental study to find the optimal processing parameters, and compared a single-pass with a multi-pass method. We achieved an effective cleaning without producing serious side effects in the stone. In conclusion, the multi-pass method with the laser emitting in continuous wave proved to be the more effective operating mode for removing the graffiti.
Integrated Data Modeling and Simulation on the Joint Polar Satellite System Program
NASA Technical Reports Server (NTRS)
Roberts, Christopher J.; Boyce, Leslye; Smith, Gary; Li, Angela; Barrett, Larry
2012-01-01
The Joint Polar Satellite System is a modern, large-scale, complex, multi-mission aerospace program, and presents a variety of design, testing and operational challenges due to: (1) System scope: multi-mission coordination and role, responsibility and accountability challenges stemming from porous or ill-defined system and organizational boundaries (including foreign policy interactions). (2) Degree of concurrency: design, implementation, integration, verification and operation occurring simultaneously, at multiple scales in the system hierarchy. (3) Multi-decadal lifecycle: technical obsolescence, reliability and sustainment concerns, including those related to the organizational and industrial base. Additionally, these systems tend to become embedded in the broader societal infrastructure, resulting in new system stakeholders with perhaps different preferences. (4) Barriers to effective communications: process and cultural issues that emerge due to geographic dispersion and as one spans boundaries, including government/contractor, NASA/other USG, and international relationships.
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
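The spectral quality assessment function named above, the spectral angle, is straightforward to compute per pixel; a minimal sketch follows (our own generic version; the KLT plus modified H.264/AVC encoder itself is not sketched). A per-pixel cap on this angle is what a spectral-quality-controlled multi-pass encoder would enforce between passes.

```python
import numpy as np

def spectral_angle(s1: np.ndarray, s2: np.ndarray) -> float:
    """Angle in radians between an original and a reconstructed pixel spectrum."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```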
Large-Scale Coronal Heating from the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has a three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet Sun: (1) magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules; (2) the large-scale enhanced magnetic flux gives an enhanced, more active magnetic network and an increased incidence of network bright point formation; (3) the heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo
2018-04-25
Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal with existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal, as well as its second and higher harmonic frequencies, tend to be large parameters. To solve this problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using filtering and Single SideBand (SSB) modulation. This new method overcomes the limitation of the "sampling ratio", the ratio of the sampling frequency to the frequency of the target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method performs well in detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of the method.
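A minimal sketch of the frequency-exchange idea: shift a high characteristic frequency down to a small parameter via SSB modulation built from the analytic signal. All parameter values are illustrative assumptions, and the re-scaling and SR stages themselves are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000.0                        # sampling frequency, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1_200 * t)    # fault component at 1.2 kHz (illustrative)

f_shift = 1_190.0                    # shift down so the component sits near 10 Hz
analytic = hilbert(x)                # x + j*H{x}: one-sided spectrum
shifted = np.real(analytic * np.exp(-2j * np.pi * f_shift * t))
# 'shifted' now carries the fault signature near 10 Hz, i.e. as a small
# parameter suitable for classical stochastic resonance processing.
```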
Students' Multi-Modal Re-Presentations of Scientific Knowledge and Creativity
ERIC Educational Resources Information Center
Koren, Yitzhak; Klavir, Rama; Gorodetsky, Malka
2005-01-01
The paper presents the results of a project that passed on to students the opportunity to re-present their acquired knowledge via the construction of multi-modal "learning resources". These "learning resources" substituted for lectures and books and became the official learning sources in the classroom. The rationale for the…
A study of narrow gap laser welding for thick plates using the multi-layer and multi-pass method
NASA Astrophysics Data System (ADS)
Li, Ruoyang; Wang, Tianjiao; Wang, Chunming; Yan, Fei; Shao, Xinyu; Hu, Xiyuan; Li, Jianmin
2014-12-01
This paper details a new method that combines laser autogenous welding, laser wire-filling welding and hybrid laser-GMAW welding to weld 30 mm thick plate using a multi-layer, multi-pass process. A "Y"-shaped groove was used to create the joint. Research was also performed to optimize the groove size and the processing parameters. Laser autogenous welding is first used to create the backing weld. The lower, narrowest part of the groove is then welded using laser wire-filling welding. Finally, the upper part of the groove is welded using laser-GMAW hybrid welding. Additionally, the wire feeding and droplet transfer behaviors were observed by high-speed photography. The two main conclusions from this work are: for larger groove sizes, the wire is often biased towards the side walls, resulting in a lack of fusion at the joint and other defects; for smaller groove sizes, the droplet transfer behavior becomes unstable, leading to a poor weld appearance.
Multi-scale Food Energy and Water Dynamics in the Blue Nile Highlands
NASA Astrophysics Data System (ADS)
Zaitchik, B. F.; Simane, B.; Block, P. J.; Foltz, J.; Mueller-Mahn, D.; Gilioli, G.; Sciarretta, A.
2017-12-01
The Ethiopian highlands are often called the "water tower of Africa," giving rise to major transboundary rivers. Rapid hydropower development is quickly transforming these highlands into the "power plant of Africa" as well. For local people, however, they are first and foremost a land of small farms, devoted primarily to subsistence agriculture. Under changing climate, rapid national economic growth, and steadily increasing population and land pressures, these mountains and their inhabitants have become the focal point of a multi-scale food-energy-water nexus with significant implications across East Africa. Here we examine coupled natural-human system dynamics that emerge when basin and nation scale resource development strategies are superimposed on a local economy that is largely subsistence based. Sensitivity to local and remote climate shocks are considered, as is the role of Earth Observation in understanding and informing management of food-energy-water resources across scales.
Cotter, C. J.
2017-01-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small, when pulled back to the mean flow. PMID:28989316
Immobilization of Energetics on Live Fire Ranges (CU-1229). Revision 1.0
2004-07-31
Its cost ultimately may be prohibitive for large scale application in some areas, but its humic composition should aid adsorption of energetics and/or...acetonitrile) to sterile glass bottles, evaporating the solvent under a stream of nitrogen, adding a known volume of CaCl2, and sonicating/mixing until all...filtration)- Same as (a) above, except that the cleared supernatant was passed through a 0.45 µm glass fiber syringe filter prior to scintillation counting
Feasibility study of a microwave radar system for agricultural inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okelo-Odongo, S.
1994-10-03
The feasibility of an impulse radar system for agricultural inspection is investigated. This system would be able to quickly determine the quality of foodstuffs that are passed through it. A prototype was designed at the Lawrence Livermore National Laboratory, and this report discusses its evaluation. A variety of apples were used to test the system, and preliminary data suggest that this technology holds promise for successful large-scale application in food processing plants.
An Overview of Mesoscale Modeling Software for Energetic Materials Research
2010-03-01
2.9 Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) … Table 10. LAMMPS summary … Extensive reviews, lectures and workshops are available on multiscale modeling of materials applications (76-78). • Multi-phase mixtures of …
NASA Astrophysics Data System (ADS)
Rebassa-Mansergas, A.; Liu, X.-W.; Cojocaru, R.; Yuan, H.-B.; Torres, S.; García-Berro, E.; Xiang, M.-X.; Huang, Y.; Koester, D.; Hou, Y.; Li, G.; Zhang, Y.
2015-06-01
Modern large-scale surveys have allowed the identification of large numbers of white dwarfs. However, these surveys are subject to complicated target selection algorithms, which make it almost impossible to quantify to what extent observational biases affect the observed populations. The LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope) Spectroscopic Survey of the Galactic Anticentre (LSS-GAC) follows a well-defined set of criteria for selecting targets for observation. This advantage over previous surveys has been fully exploited here to identify a small yet well-characterized, magnitude-limited sample of hydrogen-rich (DA) white dwarfs. We derive preliminary LSS-GAC DA white dwarf luminosity and mass functions. The space density and average formation rate of DA white dwarfs we derive are 0.83 ± 0.16 × 10^-3 pc^-3 and 5.42 ± 0.08 × 10^-13 pc^-3 yr^-1, respectively. Additionally, using an existing Monte Carlo population synthesis code, we simulate the population of single DA white dwarfs in the Galactic anticentre under various assumptions. The synthetic populations are passed through the LSS-GAC selection criteria, taking into account all possible observational biases. This allows us to perform a meaningful comparison of the observed and simulated distributions. We find that the LSS-GAC set of criteria is highly efficient in selecting white dwarfs for spectroscopic observation (80-85 per cent) and that, overall, our simulations reproduce the observed luminosity function well. However, they fail to reproduce the excess of massive white dwarfs present in the observed mass function. A plausible explanation is that a sizable fraction of massive white dwarfs in the Galaxy are the product of white dwarf-white dwarf mergers.
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR and PROMETHEE, for future applications in agent-based computational economics (ACE) models of larger scale (i.e., over 10,000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting, and separate testing of all configurations with the -server parameter activated and deactivated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used which allows mutual comparison of all the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and can thus be recommended as the most suitable for simulations of large-scale agent-based models.
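For concreteness, a generic implementation of one of the four compared methods, TOPSIS, is sketched below (our own version, not the authors' code); the decision matrix, weights, and criterion directions are illustrative. VIKOR, WPM, and PROMETHEE differ mainly in the aggregation and ranking steps but operate on the same kind of weighted decision matrix.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rows are alternatives, columns criteria; 'benefit' marks maximisation."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
    v = m * weights                                  # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness: higher is better

scores = topsis(np.array([[250.0, 16.0, 12.0], [200.0, 16.0, 8.0], [300.0, 32.0, 16.0]]),
                weights=np.array([0.2, 0.4, 0.4]),
                benefit=np.array([False, True, True]))  # cost, benefit, benefit
print(scores.argsort()[::-1])                        # ranking of alternatives
```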
Yuan, Liang (Leon); Herman, Peter R.
2016-01-01
Three-dimensional (3D) periodic nanostructures underpin a promising research direction on the frontiers of nanoscience and technology to generate advanced materials for exploiting novel photonic crystal (PC) and nanofluidic functionalities. However, the formation of uniform and defect-free 3D periodic structures over large areas that can further integrate into multifunctional devices has remained a major challenge. Here, we introduce a laser scanning holographic method for 3D exposure in thick photoresist that combines the unique advantages of large-area 3D holographic interference lithography (HIL) with the flexible patterning of laser direct writing to form both micro- and nano-structures in a single exposure step. Phase-mask interference patterns accumulated over multiple overlapping scans are shown to stitch seamlessly and form uniform 3D nanostructure with the beam size scaled down to a small 200 μm diameter. In this way, laser scanning is presented as a facile means to embed 3D PC structure within microfluidic channels for integration into an optofluidic lab-on-chip, demonstrating a new laser HIL writing approach for creating multi-scale integrated microsystems. PMID:26922872
2013-01-01
Based Micropolar Single Crystal Plasticity: Comparison of Multi- and Single-Criterion Theories. J. Mech. Phys. Solids 2011, 59, 398–422. ALE3D … element boundaries in a multi-step constitutive evaluation (Becker, 2011). The results showed the desired effects of smoothing the deformation field … Implementation: The model was implemented in the large-scale parallel, explicit finite element code ALE3D (2012). The crystal plasticity
A holistic approach for large-scale derived flood frequency analysis
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno
2017-04-01
Spatial consistency, which has usually been disregarded because of reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequencies at large scales. A large-scale, two-component model has been established for simulating very long-term, multi-site synthetic meteorological fields and flood flows at many gauged and ungauged locations, hence reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km2 comprising Germany and parts of neighbouring countries. The model performance has been examined multi-objectively, with a focus on extremes. Using this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
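A hedged sketch of the final step of such a continuous-simulation approach: fit an extreme-value distribution to simulated annual maximum flows at a single site and read off design quantiles. The Gumbel choice and the input file are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import gumbel_r

annual_maxima = np.loadtxt("simulated_annual_max_flow.txt")  # hypothetical file
loc, scale = gumbel_r.fit(annual_maxima)
for T in (10, 100, 1000):                                    # return periods, years
    q = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)    # non-exceedance quantile
    print(f"{T}-year flood: {q:.1f} m^3/s")
```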
Large-scale Activities Associated with the 2005 Sep. 7th Event
NASA Astrophysics Data System (ADS)
Zong, Weiguo
We present a multi-wavelength study of large-scale activities associated with a significant solar event. On 2005 September 7, a flare classified as larger than X17 was observed. Combining Hα 6562.8 Å, He I 10830 Å and soft X-ray observations, three large-scale activities were found to propagate over a long distance on the solar surface. 1) The first large-scale activity emanated from the flare site, propagated westward around the solar equator and appeared as sequential brightenings; with the MDI longitudinal magnetic field map, the activity was found to propagate along the magnetic network. 2) The second large-scale activity could be well identified both in He I 10830 Å images and in soft X-ray images and appeared as a diffuse emission enhancement propagating away. This activity started later than the first one and was not centered on the flare site; moreover, a rotation was found along with the bright front propagating away. 3) The third activity was ahead of the second one and was identified as a "winking" filament. The three activities have different origins and were seldom observed together in one event; this study is therefore useful for understanding the mechanism of large-scale activities on the solar surface.
NASA Astrophysics Data System (ADS)
Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii
2017-02-01
Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a "Big Data" problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform for classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and with multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high-resolution (30 m) crop classification map for a large territory (about 28,100 km2 and 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex workflows of satellite data processing required for large-scale applications such as crop mapping. The study discusses strengths and weaknesses of the classifiers, assesses the accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark neural-network classifier developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine, covering the Kyiv region (north of Ukraine) in 2013. We found that GEE provides very good performance in terms of enabling access to remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural-network-based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
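A hedged stand-in for the classifier comparison, written against scikit-learn rather than the GEE API so it stays self-contained; the feature and label files are placeholders for a multi-temporal pixel stack, and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.load("pixel_features.npy")   # hypothetical (n_pixels, n_bands * n_dates)
y = np.load("crop_labels.npy")      # hypothetical per-pixel crop-type labels

for name, clf in [("decision tree", DecisionTreeClassifier()),
                  ("random forest", RandomForestClassifier(n_estimators=100)),
                  ("SVM", SVC()),
                  ("neural network", MLPClassifier(max_iter=500))]:
    # 5-fold cross-validated overall accuracy for each candidate classifier.
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```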
NASA Technical Reports Server (NTRS)
Groesbeck, D. E.; Huff, R. G.; Vonglahn, U. H.
1977-01-01
Small-scale circular, noncircular, single- and multi-element nozzles with flow areas as large as 122 sq cm were tested with cold airflow at exit Mach numbers from 0.28 to 1.15. The effects of multi-element nozzle shape and element spacing on jet Mach number decay were studied in an effort to reduce the noise caused by jet impingement on externally blown flap (EBF) STOL aircraft. The jet Mach number decay data are well represented by empirical relations. Jet spreading and Mach number decay contours are presented for all configurations tested.
Laser Amplifier Development for the Remote Sensing of CO2 from Space
NASA Technical Reports Server (NTRS)
Yu, Anthony W.; Abshire, James B.; Storm, Mark; Betin, Alexander
2015-01-01
Accurate global measurements of tropospheric CO2 mixing ratios are needed to study CO2 emissions and CO2 exchange with the land and oceans. NASA Goddard Space Flight Center (GSFC) is developing a pulsed integrated path differential absorption (IPDA) lidar approach to allow global measurements of atmospheric CO2 column densities from space. Our group has developed, and successfully flown, an airborne pulsed lidar instrument that uses two tunable pulsed laser transmitters, allowing simultaneous measurement of a single CO2 absorption line in the 1570 nm band, absorption of an O2 line pair in the oxygen A-band (765 nm), range, and atmospheric backscatter profiles in the same path. Both lasers are pulsed at 10 kHz, and the two absorption line regions are typically sampled at a 300 Hz rate. A space-based version of this lidar must have a much larger lidar power-area product because of the approximately 40 times longer range and faster along-track velocity compared to the airborne instrument. An initial link budget analysis indicated that for a 400 km orbit, a 1.5 m diameter telescope and a 10 second integration time, a laser energy of approximately 2 mJ is required to attain the precision needed for each measurement. To meet this energy requirement, we have pursued parallel power-scaling efforts to enable space-based lidar measurement of CO2 concentrations. These included a multiple-aperture approach consisting of multi-element large-mode-area fiber amplifiers and a single-aperture approach consisting of a multi-pass Er:Yb:phosphate-glass-based planar waveguide amplifier (PWA). In this paper we present our laser amplifier design approaches and preliminary results.
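For orientation, the usual hard-target direct-detection lidar link budget for a Lambertian surface (a standard textbook relation, not quoted from the paper; ρ is surface reflectance, T_atm one-way atmospheric transmission, η system efficiency, A_r receiver aperture area, R range) makes the power-area scaling explicit:

```latex
E_r = E_t\,\rho\,T_{\mathrm{atm}}^{2}\,\eta\,\frac{A_r}{\pi R^{2}},
```

so at fixed measurement precision the required energy-aperture product E_t A_r grows roughly as R^2, consistent with the roughly 40 times longer range driving the amplifier scaling described above.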
Coarse-Grain Bandwidth Estimation Scheme for Large-Scale Network
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Jennings, Esther H.; Sergui, John S.
2013-01-01
A large-scale network that supports a large number of users can have an aggregate data rate of hundreds of Mbps at any time. High-fidelity simulation of a large-scale network might be too complicated and memory-intensive for typical commercial-off-the-shelf (COTS) tools. Unlike a large commercial wide-area network (WAN) that shares diverse network resources among diverse users and has a complex topology that requires routing mechanisms and flow control, the ground communication links of a space network operate under the assumption of a guaranteed dedicated bandwidth allocation between specific sparse endpoints in a star-like topology. This work solved the network design problem of estimating the bandwidths of a ground network architecture option that offers different service classes to meet the latency requirements of different user data types. In this work, a top-down analysis and simulation approach was created to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. These techniques were used to estimate the WAN bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network. A new analytical approach, called the "leveling scheme," was developed to model the store-and-forward mechanism of the network data flow. The term "leveling" refers to the spreading of data across a longer time horizon without violating the corresponding latency requirement of the data type. Two versions of the leveling scheme were developed: 1. A straightforward version that simply spreads the data of each data type across the time horizon; it does not take into account the interactions among data types within a pass, or between data types across overlapping passes at a network node, and is inherently sub-optimal. 2. A two-state Markov leveling scheme that takes into account the second-order behavior of the store-and-forward mechanism and the interactions among data types within a pass. The novelty of this approach lies in the modeling of the store-and-forward mechanism of each network node. The term store-and-forward refers to the data traffic regulation technique in which data is sent to an intermediate network node where it is temporarily stored and sent at a later time to the destination node or to another intermediate node. Store-and-forward can be applied both to space-based networks that have intermittent connectivity and to ground-based networks with deterministic connectivity. For ground-based networks, the store-and-forward mechanism is used to regulate the network data flow and link resource utilization such that the user data types can be delivered to their destination nodes without violating their respective latency requirements.
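A toy version of the straightforward leveling scheme (version 1 above): each data type's per-pass volume is spread uniformly across its latency window, and the link is sized to the peak summed rate; interactions across overlapping passes are ignored, as the text notes. Function names and numbers are illustrative, not from the original analysis.

```python
def leveled_bandwidth(data_types, dt=1.0, horizon=86_400.0):
    """data_types: list of (volume_bits, arrival_time_s, latency_s) tuples."""
    n = int(horizon / dt)
    rate = [0.0] * n
    for volume, arrival, latency in data_types:
        start, stop = int(arrival / dt), min(int((arrival + latency) / dt), n)
        for i in range(start, stop):
            rate[i] += volume / latency        # spread the volume over its window
    return max(rate)                           # required link bandwidth, bps

# Example: 1 Gb of science data (1 h latency) and 10 Mb of housekeeping
# (60 s latency) arriving at the same pass.
print(leveled_bandwidth([(1e9, 0.0, 3600.0), (1e7, 0.0, 60.0)]))
```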
The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.
Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J
2017-11-01
Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, in order to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered the gold standard approach but is tedious, expensive, slow, and impractical for use with large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. We participated in the de-identification regular track of a shared task challenge by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID) to gain experience developing our own automatic de-identification tool. We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored newer techniques, including disambiguation rules, term ambiguity measurement, and a multi-pass sieve framework applied at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93%). With these encouraging results, we gained the confidence to improve the tool and use it for the real de-identification task at the UAB Informatics Institute. Copyright © 2017 Elsevier Inc. All rights reserved.
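A multi-pass sieve for de-identification can be illustrated in a few lines: passes run in order of decreasing precision, and spans tagged by an earlier pass are never overwritten by a later one. This is only a toy sketch; the patterns, categories, and name dictionary are invented for illustration and are far simpler than the submitted systems.

```python
import re

# Ordered passes, highest precision first; patterns are illustrative.
PASSES = [
    ("PHONE", re.compile(r"\b\d{3}-\d{3}-\d{4}\b")),
    ("DATE",  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")),
    ("NAME",  re.compile(r"\b(?:Smith|Jones|Garcia)\b")),  # dictionary pass
]

def deidentify(text):
    spans = []  # accepted (start, end, category) tags
    for label, pattern in PASSES:
        for m in pattern.finditer(text):
            # a later (lower-precision) pass may not overwrite earlier tags
            if all(m.end() <= s or m.start() >= e for s, e, _ in spans):
                spans.append((m.start(), m.end(), label))
    # replace from the end of the string so earlier offsets stay valid
    for start, end, label in sorted(spans, reverse=True):
        text = text[:start] + f"[{label}]" + text[end:]
    return text

print(deidentify("Pt. Smith seen 3/14/2016, call 205-555-0100."))
```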
NASA Astrophysics Data System (ADS)
Hördemann, C.; Hirschfelder, K.; Schaefer, M.; Gillner, A.
2015-09-01
The breakthrough of flexible organic electronics, and especially organic photovoltaics, is highly dependent on cost-efficient production technologies. Roll-2-Roll processes show potential as a promising solution for high-throughput, low-cost production of thin-film organic components. Solution-based material deposition and integrated laser patterning processes offer new possibilities for versatile production lines. The use of flexible polymeric substrates brings along challenges in laser patterning which have to be overcome. One main challenge when patterning transparent conductive layers on polymeric substrates is material bulges at the edges of the ablated area. Bulges can lead to short circuits in the layer system and thus to device failure. Subsequent layers therefore have to be sufficiently thick to cover and smooth the ridge. In order to minimize the bulging height, a study has been carried out on transparent conductive ITO layers on flexible PET substrates. Ablation results using different beam shapes, such as a Gaussian beam, a top-hat beam, and a donut-shaped beam, as well as multi-pass scribing and double-pulsed ablation, are compared. Furthermore, lab-scale methods for cleaning the patterned layer and eliminating bulges are contrasted with the use of additional water-based sacrificial layers in order to obtain an alternative procedure suitable for large-scale Roll-2-Roll manufacturing. In addition to this research progress, the ongoing transfer of laser processes into a Roll-2-Roll demonstrator is illustrated. By using fixed optical elements in combination with a galvanometric scanner, scribing, variable patterning, and edge deletion can be performed individually.
Strecker, Angela L; Casselman, John M; Fortin, Marie-Josée; Jackson, Donald A; Ridgway, Mark S; Abrams, Peter A; Shuter, Brian J
2011-07-01
Species present in communities are affected by the prevailing environmental conditions, and the traits that these species display may be sensitive indicators of community responses to environmental change. However, interpretation of community responses may be confounded by environmental variation at different spatial scales. Using a hierarchical approach, we assessed the spatial and temporal variation of traits in coastal fish communities in Lake Huron over a 5-year time period (2001-2005) in response to biotic and abiotic environmental factors. The association of environmental and spatial variables with trophic, life-history, and thermal traits at two spatial scales (regional basin-scale, local site-scale) was quantified using multivariate statistics and variation partitioning. We defined these two scales (regional, local) on which to measure variation and then applied this measurement framework identically in all 5 study years. With this framework, we found that there was no change in the spatial scales of fish community traits over the course of the study, although there were small inter-annual shifts in the importance of regional basin- and local site-scale variables in determining community trait composition (e.g., life-history, trophic, and thermal). The overriding effects of regional-scale variables may be related to inter-annual variation in average summer temperature. Additionally, drivers of fish community traits were highly variable among study years, with some years dominated by environmental variation and others dominated by spatially structured variation. The influence of spatial factors on trait composition was dynamic, which suggests that spatial patterns in fish communities over large landscapes are transient. Air temperature and vegetation were significant variables in most years, underscoring the importance of future climate change and shoreline development as drivers of fish community structure. Overall, a trait-based hierarchical framework may be a useful conservation tool, as it highlights the multi-scaled interactive effect of variables over a large landscape.
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology for an iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output can follow an arbitrary trajectory, even one not initially represented by the analytic reference model. To overcome the interference among subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, based on this model, a digital decentralized adaptive tracker is developed using optimal analog control and a prediction-based digital redesign technique for the sampled-data large-scale coupled system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ma, Sangback
In this paper we compare various parallel preconditioners, such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the Wavefront ordering, ILU(0) in the Multi-Color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse), and pARMS (Parallel Algebraic Recursive Multilevel Solver), for solving large sparse linear systems arising from two-dimensional PDEs (Partial Differential Equations) on structured grids. Point-SSOR is well known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the Wavefront ordering maximizes the parallelism available in the natural order, but the lengths of the wavefronts are often nonuniform. ILU(0) in the Multi-Color ordering is a simple way of achieving parallelism of order N, where N is the order of the matrix, but its convergence rate often deteriorates compared to that of the natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the Multi-Color ordering. By using the block version we expect to minimize interprocessor communications. SPAI computes the sparse approximate inverse directly by a least squares method. Finally, ARMS is a preconditioner that recursively exploits the concept of independent sets, and pARMS is the parallel version of ARMS. Experiments were conducted for Finite Difference and Finite Element discretizations of five two-dimensional PDEs with large mesh sizes up to a million on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We have used GMRES(m) as our outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communications were done using MPI (Message Passing Interface) primitives. The results show that, in general, ILU(0) in the Multi-Color ordering and ILU(0) in the Wavefront ordering outperform the other methods, but for symmetric and nearly symmetric 5-point matrices Multi-Color Block SOR gives the best performance, except for a few cases with a small number of processors.
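To reproduce the flavor of this comparison on a workstation, the sketch below solves a 5-point Laplacian system with restarted GMRES and an incomplete-LU preconditioner. SciPy's spilu is used here as a stand-in for ILU(0); it is not the multi-color or wavefront-ordered parallel variant studied in the paper, and the mesh is kept small.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100                                       # grid is n x n
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # 2D 5-point Laplacian
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=2)   # incomplete LU
M = spla.LinearOperator(A.shape, ilu.solve)         # preconditioner ~ A^{-1}

x, info = spla.gmres(A, b, M=M, restart=30)         # GMRES(m) with m = 30
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```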
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (Large Eddy Simulation) on SOMAR's finest grids, forced by the large-scale fields from the coarser grids. Three-dimensional simulations of internal tide generation, propagation, and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine-grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence, or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational cost is expected relative to traditional existing solvers.
Römer, Heinrich; Germain, Ryan R.
2013-01-01
Roads are a major cause of habitat fragmentation that can negatively affect many mammal populations. Mitigation measures such as crossing structures are a proposed method to reduce the negative effects of roads on wildlife, but the best methods for determining where such structures should be implemented, and how their effects might differ between species in mammal communities, are largely unknown. We investigated the effects of a major highway through south-eastern British Columbia, Canada, on several mammal species to determine how the highway may act as a barrier to animal movement and how species may differ in their crossing-area preferences. We collected track data for eight mammal species across two winters, along both the highway and pre-marked transects, and used a multi-scale modeling approach to determine the scale at which habitat characteristics best predicted preferred crossing sites for each species. We found evidence for a severe barrier effect on all investigated species. Freely available remotely sensed habitat landscape data were better than more costly, manually digitized microhabitat maps in supporting models that identified preferred crossing sites; however, models using both types of data were better still. Further, in 6 of 8 cases models that incorporated multiple spatial scales were better at predicting preferred crossing sites than models utilizing any single scale. While each species differed in terms of the landscape variables associated with preferred/avoided crossing sites, we used a multi-model inference approach to identify locations along the highway where crossing structures may benefit all of the species considered. By explicitly incorporating both highway and off-highway data and predictions, we were able to show that landscape context plays an important role in maximizing the efficiency of mitigation measures. Our results further highlight the need for mitigation measures along major highways to improve connectivity between mammal populations, and illustrate how multi-scale data can be used to identify preferred crossing sites for different species within a mammal community. PMID:24244912
Quantum dots for a high-throughput Pfu polymerase based multi-round polymerase chain reaction (PCR).
Sang, Fuming; Zhang, Zhizhou; Yuan, Lin; Liu, Deli
2018-02-26
Multi-round PCR is an important technique for obtaining enough target DNA from rare DNA resources and is commonly used in many fields, including forensic science, ancient DNA analysis, and cancer research. However, multi-round PCR is often aborted, largely due to the accumulation of non-specific amplification during repeated amplifications. Here, we developed a Pfu polymerase based multi-round PCR technique assisted by quantum dots (QDs). Different PCR assays, DNA polymerases (Pfu and Taq), DNA sizes, and GC contents were compared in this study. In the presence of QDs, PCR specificity could be retained even in the ninth round of amplification. Moreover, the longer and more complex the targets were, the earlier the abortion happened in multi-round PCR. However, no obvious enhancement of specificity was found in multi-round PCR using Taq DNA polymerase. Significantly, the fidelity of Pfu polymerase based multi-round PCR was not sacrificed in the presence of QDs. In addition, pre-incubation at 50 °C for an hour had no impact on multi-round PCR performance, further confirming the hot-start effect modulated by QDs in multi-round PCR. The findings of this study demonstrate that a cost-effective and promising multi-round PCR technique for large-scale and high-throughput sample analysis can be established with high specificity, sensitivity, and accuracy.
Detecting the severity of perinatal anxiety with the Perinatal Anxiety Screening Scale (PASS).
Somerville, Susanne; Byrne, Shannon L; Dedman, Kellie; Hagan, Rosemary; Coo, Soledad; Oxnam, Elizabeth; Doherty, Dorota; Cunningham, Nadia; Page, Andrew C
2015-11-01
The Perinatal Anxiety Screening Scale (PASS; Somerville et al., 2014) reliably identifies perinatal women at risk of problematic anxiety when a clinical cut-off score of 26 is used. This study aimed to identify a severity continuum of anxiety symptoms with the PASS to enhance screening, treatment, and research for perinatal anxiety. Antenatal and postnatal women (n=410) recruited from the antenatal clinics and mental health services at an obstetric hospital completed the Edinburgh Postnatal Depression Scale (EPDS), the Depression, Anxiety and Stress Scale (DASS-21), the Spielberger State-Trait Anxiety Inventory (STAI), the Beck Depression Inventory II (BDI), and the PASS. The women referred to mental health services were assessed to determine anxiety diagnoses via a diagnostic interview conducted by an experienced mental health professional from the Department of Psychological Medicine, King Edward Memorial Hospital. Three normative groups for the PASS, namely minimal anxiety, mild-moderate anxiety, and severe anxiety, were identified based on the severity of anxiety indicated on the standardised scales and the anxiety diagnoses. Two cut-off points for the normative groups were calculated using the Jacobson-Truax method (Jacobson and Truax, 1991), resulting in three severity ranges: 'minimal anxiety', 'mild-moderate anxiety', and 'severe anxiety'. The most frequent diagnoses in the study sample were adjustment disorder, mixed anxiety and depression, generalised anxiety, and post-traumatic stress disorder; this may limit the generalisability of the severity range results to other anxiety diagnoses, including obsessive-compulsive disorder and specific phobia. Severity ranges for the PASS add value to having a clinically validated cut-off score in the detection and monitoring of problematic perinatal anxiety. The PASS can now be used to identify risk of an anxiety disorder, and the severity ranges can indicate developing risk, supporting early referrals for further assessment, prioritisation of access to resources, and tracking of clinically significant deterioration, improvement, or stability in anxiety over time. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
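The Jacobson-Truax method cited above is a standard procedure for locating cut-offs between two normative distributions; its usual textbook formulation (not reproduced from this paper, with notation chosen here for illustration) is:

```latex
% Cutoff c between a population with mean M_1, SD s_1 and a population
% with mean M_0, SD s_0:
c = \frac{s_0 M_1 + s_1 M_0}{s_0 + s_1}
% Reliable change index for pre/post scores x_1, x_2, where
% S_E = s_1 \sqrt{1 - r_{xx}} and r_{xx} is test-retest reliability:
RC = \frac{x_2 - x_1}{\sqrt{2 S_E^2}}
```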
Dizon-Maspat, Jemelle; Bourret, Justin; D'Agostini, Anna; Li, Feng
2012-04-01
As the therapeutic monoclonal antibody (mAb) market continues to grow, optimizing production processes is becoming more critical to improving efficiency and reducing the cost of goods in large-scale production. With the recent trend of increasing cell culture titers from upstream process improvements, downstream capacity has become the bottleneck in many existing manufacturing facilities. Single-Pass Tangential Flow Filtration (SPTFF) is an emerging technology that is potentially useful for debottlenecking downstream capacity, especially when the pool tank size is a limiting factor. It can be integrated into an existing purification process, after a column chromatography step or a filtration step, without introducing a new unit operation. In this study, SPTFF technology was systematically evaluated for reducing process intermediate volumes from 2× to 10× with multiple mAbs, and the impact of SPTFF on product quality and process yield was analyzed. Finally, its potential fit into the typical 3-column industry-platform antibody purification process and its implementation in a commercial-scale manufacturing facility were also evaluated. Our data indicate that using SPTFF to concentrate protein pools is a simple, flexible, and robust operation, which can be implemented at various scales to improve antibody purification process capacity. Copyright © 2011 Wiley Periodicals, Inc.
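The volume reduction quoted above follows from a simple steady-state mass balance. Here is a minimal sketch assuming full protein retention (no product loss to the permeate); the function name and flow values are illustrative assumptions, not the study's process parameters.

```python
def sptff_retentate(c_feed_g_l, q_feed_l_h, q_permeate_l_h):
    """Steady-state single-pass TFF mass balance (illustrative sketch).

    With full protein retention, the volumetric concentration factor is
    VCF = Q_feed / (Q_feed - Q_permeate) and c_ret = c_feed * VCF.
    """
    vcf = q_feed_l_h / (q_feed_l_h - q_permeate_l_h)
    return c_feed_g_l * vcf, vcf

c_ret, vcf = sptff_retentate(10.0, 100.0, 80.0)   # hypothetical 5x step
print(f"VCF = {vcf:.1f}, retentate = {c_ret:.1f} g/L")
```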
Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications
2016-10-17
Research areas included: (1) finite volume schemes, the discontinuous Galerkin finite element method, and related methods for solving computational fluid dynamics (CFD) problems; (2) approximation for finite element methods; and (3) the development of methods of simulation and analysis for the study of large-scale stochastic systems. Keywords: conservation laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks.
Statistical Downscaling in Multi-dimensional Wave Climate Forecast
NASA Astrophysics Data System (ADS)
Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.
2009-04-01
Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction, and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize the multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multiple dimensions of wave climate in terms of a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as the predictand) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is used as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach under perfect-model conditions, but we also analyze the suitability of this methodology for seasonal forecasting and for long-term climate change scenario projections of wave climate.
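The analog method described above can be sketched compactly: find the historical SLP fields nearest to the current one and average their associated local wave-climate descriptors. The array shapes and toy data below are assumptions (the actual predictand in the paper is the SOM PDF, not a three-parameter vector).

```python
import numpy as np

def analog_forecast(slp_now, slp_hist, wave_hist, k=10):
    """Nearest-neighbour analog downscaling sketch: average the local
    wave-climate descriptors of the k closest historical SLP patterns.

    slp_hist:  (n_months, n_grid) flattened SLP fields (predictors)
    wave_hist: (n_months, n_params) local wave descriptors (predictand)
    """
    d = np.linalg.norm(slp_hist - slp_now, axis=1)   # Euclidean distance
    nearest = np.argsort(d)[:k]
    return wave_hist[nearest].mean(axis=0)

rng = np.random.default_rng(0)
slp_hist = rng.normal(size=(240, 500))    # 20 years of monthly SLP (toy)
wave_hist = rng.normal(size=(240, 3))     # e.g. Hs, Tm, direction (toy)
print(analog_forecast(slp_hist[-1], slp_hist[:-1], wave_hist[:-1]))
```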
Multi-granularity Bandwidth Allocation for Large-Scale WDM/TDM PON
NASA Astrophysics Data System (ADS)
Gao, Ziyue; Gan, Chaoqin; Ni, Cuiping; Shi, Qiongling
2017-12-01
WDM (wavelength-division multiplexing)/TDM (time-division multiplexing) PON (passive optical network) is viewed as a promising solution for delivering multiple services and applications, such as high-definition video, video conferencing, and data traffic. Considering real-time transmission, QoS (quality of service) requirements, and the differentiated services model, a multi-granularity dynamic bandwidth allocation (DBA) scheme in both the wavelength and time domains for large-scale hybrid WDM/TDM PON is proposed in this paper. The proposed scheme achieves load balance by using bandwidth prediction. Based on the bandwidth prediction, wavelength assignment can be realized fairly and effectively to satisfy the different demands of the various service classes. In particular, the allocation of residual bandwidth further augments the DBA and makes full use of the bandwidth resources in the network. To further improve network performance, two schemes, named extending the cycle of one free wavelength (ECoFW) and large bandwidth shrinkage (LBS), are proposed, which prevent transmission interruptions when a user employs more than one wavelength. The simulation results show the effectiveness of the proposed scheme.
By-Pass Diode Temperature Tests of a Solar Array Coupon Under Space Thermal Environment Conditions
NASA Technical Reports Server (NTRS)
Wright, Kenneth H., Jr.; Schneider, Todd A.; Vaughn, Jason A.; Hoang, Bao; Wong, Frankie
2016-01-01
Tests were performed on a 56-cell Advanced Triple Junction solar array coupon to determine the margin available for bypass diodes integrated with new, large multi-junction solar cells manufactured from a 4-inch wafer. The tests were performed under high vacuum with both cold and ambient coupon back-side temperatures. The bypass diodes were subjected to a sequence of increasing discrete current steps from 0 A to 2.0 A in steps of 0.25 A. At each current step, a temperature measurement was obtained via remote viewing by an infrared camera. This paper discusses the experimental methodology, including the calibration of the thermal imaging system, and the results.
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because those properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle-tracking codes with new multi-scale/multi-fluid global simulations that provide the first means of including small-scale processes within the global magnetospheric context. A large hurdle is having computer hardware sufficient to handle the disparate temporal and spatial scales. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude greater computing speed than CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE floating-point specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented and used to provide new insight into radiation belt dynamics.
NASA Astrophysics Data System (ADS)
Vijayanand, V. D.; Kumar, J. Ganesh; Parida, P. K.; Ganesan, V.; Laha, K.
2017-02-01
The effect of electrode size on creep deformation and rupture behavior has been assessed by carrying out creep tests at 923 K (650 °C) over the stress range 140 to 225 MPa on 316LN stainless steel weld joints fabricated with 2.5 and 4 mm diameter electrodes. The multi-pass welding technique not only changes the morphology of delta ferrite from vermicular to globular in the previous weld bead region near the weld bead interface, but also subjects that region to a thermo-mechanical heat treatment that generates an appreciable strength gradient. Electron backscatter diffraction analysis revealed significant localized strain gradients in regions adjoining the weld pass interface for the joint fabricated with the larger electrode. The larger-diameter electrode joint exhibited higher creep rupture strength than the smaller-diameter electrode joint; however, both joints had lower creep rupture strength than the base metal. Failure in the joints was associated with microstructural instability in the fusion zone, and the vermicular delta ferrite zone was more prone to creep cavitation. The larger-diameter electrode joint was found to be more resistant to failure caused by creep cavitation than the smaller-diameter electrode joint. This has been attributed to the larger strength gradient between the beads and the significant separation between the cavity-prone vermicular delta ferrite zones, which hindered cavity growth. The close proximity of cavitated zones in the smaller-electrode joint facilitated their faster coalescence, leading to a greater reduction in creep rupture strength. The failure location in the joints was found to depend on the electrode size and applied stress. The change in failure location has been assessed by performing finite element analysis of the stress distribution across the joint, incorporating the tensile and creep strengths of the different joint constituents estimated by ball indentation and impression creep testing techniques.
Multi-Pass Quadrupole Mass Analyzer
NASA Technical Reports Server (NTRS)
Prestage, John D.
2013-01-01
Analysis of the composition of planetary atmospheres is one of the most important and fundamental measurements in planetary robotic exploration. Quadrupole mass analyzers (QMAs) are the primary tool used to execute these investigations, but reductions in the size of these instruments have sacrificed mass resolving power, so that miniaturized devices do not deliver the performance of laboratory instruments, and the best present-day QMA devices remain large and expensive. An ultra-high-resolution QMA was therefore developed to resolve N2+/CO+ by trapping ions in a linear trap quadrupole filter. Because N2 and CO are resolved, the gas chromatography columns used to separate species before analysis can be eliminated, greatly simplifying gas analysis instrumentation. For highest performance, the ion trap mode is used. High-resolution (or narrow-band) mass selection is carried out in the central region, but near the DC electrodes at each end, the RF/DC field settings are adjusted to allow broadband ion passage. This prevents ion loss during ion reflection at each end. Ions are created inside the trap so that low-energy particles are selected by low-voltage settings on the end electrodes. This benefits mass resolution, since low-energy particles traverse many cycles of the RF filtering fields. Monte Carlo simulations show that ions are reflected at each end many tens of times, each time being sent back through the central section of the quadrupole where ultra-high mass filtering is carried out. An analyzer was thus produced with an electrical length orders of magnitude longer than its physical length. Since the selector fields are sized as in conventional devices, the loss of sensitivity inherent in miniaturizing quadrupole instruments is avoided. The no-loss, multi-pass QMA architecture will improve the mass resolution of planetary QMA instruments while reducing demands on the RF electronics for high-voltage/high-frequency production, since the ion transit time is no longer limited to a single pass. The QMA-based instrument will thus allow substantial reductions in the mass of flight instruments.
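For context, mass selection in a quadrupole filter is conventionally described by the Mathieu stability parameters, which show why the N2+/CO+ pair is so demanding to separate. The sketch below uses the standard textbook relations; the device dimensions and voltages are illustrative assumptions, not this instrument's values.

```python
import math

E = 1.602176634e-19     # elementary charge, C
AMU = 1.66053906660e-27 # atomic mass unit, kg

def mathieu_aq(m_amu, r0_m, f_hz, u_dc_v, v_rf_v):
    """Standard quadrupole Mathieu parameters a = 8eU/(m r0^2 w^2) and
    q = 4eV/(m r0^2 w^2); all device values below are illustrative."""
    w = 2.0 * math.pi * f_hz
    m = m_amu * AMU
    a = 8.0 * E * u_dc_v / (m * r0_m**2 * w**2)
    q = 4.0 * E * v_rf_v / (m * r0_m**2 * w**2)
    return a, q

# N2+ (28.006 amu) vs CO+ (27.995 amu): nearly identical (a, q) values,
# which is why resolving them needs a very long effective filter length.
for m_amu in (28.006, 27.995):
    print(m_amu, mathieu_aq(m_amu, 3e-3, 1.0e6, 50.0, 300.0))
```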
NASA Astrophysics Data System (ADS)
Chaytor, J. D.; Baldwin, W. E.; Danforth, W. W.; Bentley, S. J.; Miner, M. D.; Damour, M.
2017-12-01
Mudflows (channelized and unconfined debris flows) on the Mississippi River Delta Front (MRDF) are a recognized hazard to oil and gas infrastructure in the shallow Gulf of Mexico. Preconditioning of the seafloor for failure results from high sedimentation rates coupled with slope over-steepening, under-consolidation, and abundant biogenic gas production. Cyclical loading of the seafloor by waves from passing major storms appears to be a primary trigger, but the roles of smaller (more frequent) storms and background oceanographic processes are largely unconstrained. A pilot high-resolution seafloor mapping and seismic imaging study was carried out across portions of the MRDF aboard the R/V Point Sur from May 19-26, 2017, as part of a multi-agency/university effort to characterize mudflow hazards in the area. The primary objective of the cruise was to assess the suitability of seafloor mapping and shallow sub-surface imaging tools in the challenging environmental conditions found across delta fronts (e.g., variably distributed water column stratification and widespread biogenic gas in the shallow sub-surface). More than 600 km of multibeam bathymetry/backscatter/water-column data, 425 km of towed chirp data, and more than 500 km of multi-channel seismic data (boomer/mini-sparker sources, 32-channel streamer) were collected. Varied mudflow (gully, lobe) and pro-delta morphologies and structural features, some of which have been surveyed more than once, were imaged in selected survey areas from Pass a Loutre to Southwest Pass. The present location of the SS Virginia, which has been moving with one of the mudflow lobes since it was sunk in 1942, was determined to be 60 m SW of its 2006 position, suggesting movement not linked to hurricane-induced wave triggering of mudflows. Preliminary versions of these data were used to identify sediment sampling sites visited on a cruise in early June 2017 led by scientists from LSU and other university/agency partners.
Commissioning MMS: Challenges and Lessons Learned
NASA Technical Reports Server (NTRS)
Wood, Paul; Gramling, Cheryl; Reiter, Jennifer; Smith, Patrick; Stone, John
2016-01-01
This paper discusses the commissioning of NASA's Magnetospheric Multiscale (MMS) mission. The mission comprises four identical spacecraft with a large, complex set of instrumentation. The planning for and execution of commissioning for this mission are described. The paper concludes with a discussion of lessons learned.
Meng, Ran; Wu, Jin; Zhao, Feng; ...
2018-06-01
Understanding post-fire forest recovery is pivotal to the study of forest dynamics and the global carbon cycle. Field-based studies have indicated a convex response of forest recovery rate to burn severity at the individual-tree level, related to fire-induced tree mortality; however, these findings were constrained in spatial and temporal extent, and were not detectable by traditional optical remote sensing studies, largely owing to the contaminating effect of understory recovery. For this work, we examined whether the combined use of multi-sensor remote sensing techniques (i.e., 1 m simultaneous airborne imaging spectroscopy and LiDAR, and 2 m satellite multi-spectral imagery) to separate canopy recovery from understory recovery would enable quantification of the post-fire forest recovery rate spanning a large gradient in burn severity over large scales. Our study was conducted in a mixed pine-oak forest in Long Island, NY, three years after a top-killing fire. We remotely detected an initial increase and then a decline of forest recovery rate with burn severity across the burned area, with a maximum canopy-area-based recovery rate of 10% per year at the moderate forest burn severity class. More intriguingly, such remotely detected convex relationships also held at the species level, with pine trees being more resilient to high burn severity and having a higher maximum recovery rate (12% per year) than oak trees (4% per year). These results are among the first quantitative evidence of the effects of fire-adaptive strategies on post-fire forest recovery derived from relatively large spatial-temporal domains. Our study thus provides a methodological advance in linking multi-sensor remote sensing techniques to monitor forest dynamics in a spatially explicit manner over large scales, with important implications for fire-related forest management and for constraining/benchmarking fire effect schemes in ecological process models.
Regional climate model sensitivity to domain size
NASA Astrophysics Data System (ADS)
Leduc, Martin; Laprise, René
2009-05-01
Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations over very large domains have shown important departures from the driving data unless large-scale nudging is applied. The issue of domain size is studied here using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed the big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are then used to drive a set of four simulations (LBs, for little brothers) with the same model but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time-average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) as the domain gets smaller. Extracting the small-scale features with a spectral filter reveals important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow at higher levels in the atmosphere.
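The low-pass filtering step of the perfect-model approach is straightforward to emulate. Below is a minimal sketch that degrades a high-resolution 2D field by zeroing its high wavenumbers in Fourier space; the cutoff fraction and the toy field are assumptions, not the study's actual filter.

```python
import numpy as np

def lowpass_field(field, keep_frac=0.25):
    """Emulate coarse-resolution driving data (the FBB step) by zeroing
    the high wavenumbers of a 2D field in Fourier space."""
    F = np.fft.fft2(field)
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny)[:, None]          # cycles per grid point
    kx = np.fft.fftfreq(nx)[None, :]
    mask = np.sqrt(kx**2 + ky**2) <= keep_frac * 0.5   # Nyquist = 0.5
    return np.real(np.fft.ifft2(F * mask))

hi_res = np.random.default_rng(1).normal(size=(128, 128))  # toy BB field
fbb = lowpass_field(hi_res)                                # filtered field
print(f"variance retained: {fbb.var() / hi_res.var():.2%}")
```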
NASA Astrophysics Data System (ADS)
Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.
2014-06-01
In this study the effect of multi-pass warm rolling of AZ31 magnesium alloy on the texture, microstructure, grain size variation, and hardness of an as-cast sample (A) and two rolled samples (B and C), taken from different locations of the as-cast ingot, was investigated. The purpose was to enhance the formability of AZ31 alloy in order to aid manufacturability. It was observed that multi-pass warm rolling (250 °C to 350 °C) of samples B and C, with initial thicknesses of 7.76 mm and 7.73 mm, was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps with a total of 26 passes. Steps 1 to 4 consisted of 5, 2, 11, and 3 passes, respectively; the remaining steps 5 to 10 were single-pass rolls. In each discrete step a fixed roll gap was used, such that the true strain increases very slowly from 0.0067 at the first pass to 0.7118 at the 26th pass. Both samples B and C showed very similar behavior through the 26th pass and were successfully rolled up to 85% thickness reduction. However, during the 10th step (27th pass), at a true strain of 0.772, sample B developed very severe surface and edge cracks. Sample C was therefore not rolled in the 10th step and was retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size, and hardness. Sample C showed an equiaxed grain structure after 85% total reduction, which may be due to the effective involvement of dynamic recrystallization (DRX), leading to grains with relatively low misorientations with respect to the parent as-cast grains. Sample B, on the other hand, showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90% total reduction; DRX could not effectively play its role due to the heavy strain and the lack of plastic deformation systems. The as-cast sample showed a near-random texture (4.3 mrd), an average grain size of 44 μm, and a micro-hardness of 52 Hv. The grain sizes of samples B and C were 14 μm and 27 μm, respectively, and the mrd intensities of the basal texture were 5.34 and 5.46, respectively. The hardness values of samples B and C were 91 and 66 Hv, respectively, reflecting the reduction in grain size in accordance with the well-known Hall-Petch relationship.
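The Hall-Petch relationship invoked here has the standard textbook form below; the constants H_0 and k_H are material parameters that would have to be fitted to these measurements.

```latex
% Hall-Petch relation: hardness (or yield stress) rises as the inverse
% square root of the grain size d; H_0 and k_H are material constants.
H = H_0 + k_H \, d^{-1/2}
```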
NASA Astrophysics Data System (ADS)
Sun, P.; Jokipii, J. R.; Giacalone, J.
2016-12-01
Anisotropy in astrophysical turbulence has long been proposed and observed. Recent observations adopting multi-scale analysis techniques have provided a detailed description of the scale-dependent power spectrum of the magnetic field parallel and perpendicular to the scale-dependent magnetic field line at different scales in the solar wind. In previous work, we proposed a multi-scale method to synthesize a non-isotropic turbulent magnetic field with pre-determined power spectra of the fluctuating magnetic field as a function of scale. We present the transport of test particles in the resulting field using a two-scale algorithm. We find that scale-dependent turbulence anisotropy affects charged-particle transport significantly differently than either isotropic or globally anisotropic turbulence. It is important to apply this field synthesis method to the solar wind magnetic field based on spacecraft data; however, this relies on how we extract the power spectra of the turbulent magnetic field across different scales. In this study, we propose a power spectrum synthesis method based on Fourier analysis to extract the large-scale and small-scale power spectra from a single-spacecraft observation with a sufficiently long period and a high sampling frequency. We apply the method to solar wind measurements by the magnetometer onboard the ACE spacecraft and regenerate the large-scale isotropic 2D spectrum and the small-scale anisotropic 2D spectrum. We run test-particle simulations in the magnetic field generated in this way to estimate the transport coefficients and to compare with the isotropic turbulence model.
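Extracting a power spectrum from a single-spacecraft time series, as proposed above, typically starts from an averaged periodogram. Here is a minimal Welch-style sketch; the segment count, windowing, and toy data are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def power_spectrum(b, dt, nseg=8):
    """Averaged-periodogram estimate of the power spectral density of one
    magnetic-field component from a single time series (Welch-style)."""
    n = len(b) // nseg
    w = np.hanning(n)                          # taper each segment
    psd = np.zeros(n // 2 + 1)
    for i in range(nseg):
        seg = b[i * n:(i + 1) * n] * w
        psd += np.abs(np.fft.rfft(seg))**2
    psd *= 2.0 * dt / (nseg * (w**2).sum())    # one-sided normalization
    freqs = np.fft.rfftfreq(n, dt)
    return freqs, psd

rng = np.random.default_rng(2)
bx = rng.normal(size=2**16)        # toy stand-in for magnetometer data
f, p = power_spectrum(bx, dt=1.0)
print(f[1], p[1])
```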
NASA Astrophysics Data System (ADS)
Jia, Weitao; Tang, Yan; Ning, Fangkun; Le, Qichi; Cui, Jianzhong
2018-04-01
Different rolling operations of as-cast AZ31B alloy were performed under different rolling speeds (18 to 72 m min‑1) and rolling-pass conditions at 400 °C. Microstructural studies, tensile testing, and formability evaluation relevant to each rolling operation were carried out. For 1-pass rolling, the coarse average grain size (CAGS) region gradually approached the center layer as the rolling speed increased. Moreover, twins, shear bands, and coarse-grain structures were the dominant components in the microstructure of plates rolled at 18, 48, and 72 m min‑1, respectively, indicating severe deformation inhomogeneity under the high-reduction-per-pass condition. For 2-pass and 4-pass rolling, dynamic recrystallization proceeded well and the CAGS region substantially disappeared, indicating a significant improvement in deformation uniformity and, further, grain homogenization under these conditions. The degree of microstructure uniformity of the 2-pass rolled plates did not vary much with rolling speed; on this basis, shear band distribution dominated the deformation behavior during uniaxial tension of the 2-pass rolled plates. However, microstructure uniformity, accompanied by twin distribution, played the leading role in the stretching of the 4-pass rolled plates.
NASA Astrophysics Data System (ADS)
Amenomori, M.; Ayabe, S.; Cui, S. W.; Danzengluobu; Ding, L. K.; Ding, X. H.; Feng, C. F.; Feng, Z. Y.; Gao, X. Y.; Geng, Q. X.; Guo, H. W.; He, H. H.; He, M.; Hibino, K.; Hotta, N.; Hu, Haibing; Hu, H. B.; Huang, J.; Huang, Q.; Jia, H. Y.; Kajino, F.; Kasahara, K.; Katayose, Y.; Kato, C.; Kawata, K.; Labaciren; Le, G. M.; Li, J. Y.; Lu, H.; Lu, S. L.; Meng, X. R.; Mizutani, K.; Mu, J.; Munakata, K.; Nagai, A.; Nanjo, H.; Nishizawa, M.; Ohnishi, M.; Ohta, I.; Onuma, H.; Ouchi, T.; Ozawa, S.; Ren, J. R.; Saito, T.; Sakata, M.; Sasaki, T.; Shibata, M.; Shiomi, A.; Shirai, T.; Sugimoto, H.; Takita, M.; Tan, Y. H.; Tateyama, N.; Torii, S.; Tsuchiya, H.; Udo, S.; Utsugi, T.; Wang, B. S.; Wang, H.; Wang, X.; Wang, Y. G.; Wu, H. R.; Xue, L.; Yamamoto, Y.; Yan, C. T.; Yang, X. C.; Yasue, S.; Ye, Z. H.; Yu, G. C.; Yuan, A. F.; Yuda, T.; Zhang, H. M.; Zhang, J. L.; Zhang, N. J.; Zhang, X. Y.; Zhang, Y.; Zhang, Yi; Zhaxisangzhu; Zhou, X. X.; Tibet Asγ Collaboration
2005-06-01
We present the large-scale sidereal anisotropy of Galactic cosmic-ray intensity in the multi-TeV region observed with the Tibet-III air shower array during the period from 1999 through 2003. The sidereal daily variation of cosmic rays observed in this experiment shows an excess of relative intensity around 4-7 hr local sidereal time, as well as a deficit around 12 hr local sidereal time. While the amplitude of the excess is not significant when averaged over all declinations, the excess in individual declination bands becomes larger and clearer as the viewing direction moves toward the south. The maximum phase of the excess intensity changes from ~7 hr in the Northern Hemisphere to ~4 hr in the equatorial region. We also show that both the amplitude and the phase of the first harmonic vector of the daily variation are remarkably independent of primary energy in the multi-TeV region. This is the first result determining the energy and declination dependences of the full 24 hr profiles of the sidereal daily variation in the multi-TeV region with a single air shower experiment.
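A first harmonic vector of a daily variation is conveniently obtained by projecting the binned intensities onto sine and cosine. Below is a small sketch with toy data; the binning and amplitude are illustrative assumptions, not the Tibet-III analysis.

```python
import numpy as np

def first_harmonic(counts, hours):
    """Least-squares first-harmonic fit I(t) ~ A cos(2*pi*(t - t0)/24)
    to a daily variation; returns amplitude A and phase t0 in hours.
    `counts` are mean-removed relative intensities per sidereal-time bin."""
    theta = 2.0 * np.pi * hours / 24.0
    a = 2.0 * np.mean(counts * np.cos(theta))   # cosine coefficient
    b = 2.0 * np.mean(counts * np.sin(theta))   # sine coefficient
    amp = np.hypot(a, b)
    t0 = (np.arctan2(b, a) * 24.0 / (2.0 * np.pi)) % 24.0
    return amp, t0

hours = np.arange(24) + 0.5                     # bin centers
toy = 1.0 + 0.001 * np.cos(2 * np.pi * (hours - 5.0) / 24.0)
print(first_harmonic(toy - 1.0, hours))         # ~ (0.001, 5.0)
```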
NASA Astrophysics Data System (ADS)
Sweeney, C.; Kort, E. A.; Rella, C.; Conley, S. A.; Karion, A.; Lauvaux, T.; Frankenberg, C.
2015-12-01
Along with the boom in oil and natural gas production in the US, there has been a substantial effort to understand the true environmental impact of these operations on air and water quality, as well as on the net radiation balance. This multi-institution effort, funded by both governmental and non-governmental agencies, has provided a case study for the identification and verification of emissions using a multi-scale, top-down approach. This approach leverages a combination of remote sensing to identify areas that need specific focus and airborne in-situ measurements to quantify both regional and large- to mid-size single-point emitters. Ground-based networks of mobile and stationary measurements provide the bottom tier of measurements, from which process-level information can be gathered to better understand the specific sources and temporal distribution of the emitters. The motivation for this type of approach is largely driven by recent work in the Barnett Shale region in Texas as well as the San Juan Basin in New Mexico and Colorado; these studies suggest that relatively few single-point emitters dominate the regional emissions of CH4.
Large Angle Optical Access in a Sub-Kelvin Cryostat
NASA Astrophysics Data System (ADS)
Hähnle, S.; Bueno, J.; Huiting, R.; Yates, S. J. C.; Baselmans, J. J. A.
2018-05-01
The development of lens-antenna-coupled aluminum-based microwave kinetic inductance detectors (MKIDs) and on-chip spectrometers needs a dedicated cryogenic setup to measure the beam patterns of the lens-antenna system over a large angular throughput and a broad frequency range. This requires a careful design, since the MKID has to be cooled to temperatures below 300 mK to operate effectively. We developed such a cryostat with a large opening angle θ = ±37.8° and optical access with a low-pass edge at 950 GHz. The system is based on a commercial pulse-tube-cooled 3 K system with a ^4He-^3He sorption cooler to allow base temperatures below 300 mK. A careful study of the spectral and geometric throughput was performed to minimize the thermal loading on the cold stage, allowing a base temperature of 265 mK. Radio-transparent multi-layer insulation, a recent development in filter technology, was employed to efficiently block near-infrared radiation.
Multi-scale Slip Inversion Based on Simultaneous Spatial and Temporal Domain Wavelet Transform
NASA Astrophysics Data System (ADS)
Liu, W.; Yao, H.; Yang, H. Y.
2017-12-01
Finite fault inversion is a widely used method to study earthquake rupture processes. Previous studies have proposed different methods to implement finite fault inversion, including time-domain, frequency-domain, and wavelet-domain methods. Many previous studies have found that different frequency bands show different characteristics of the seismic rupture (e.g., Wang and Mori, 2011; Yao et al., 2011, 2013; Uchide et al., 2013; Yin et al., 2017). Generally, lower-frequency waveforms correspond to larger-scale rupture characteristics, while higher-frequency data are representative of smaller-scale ones. Therefore, multi-scale analysis can help us understand the earthquake rupture process thoroughly, from larger scales to smaller scales. Through the use of the wavelet transform, wavelet-domain methods can analyze both the time and frequency information of signals at different scales. Traditional wavelet-domain methods (e.g., Ji et al., 2002) implement finite fault inversion with lower- and higher-frequency signals together to recover the larger-scale and smaller-scale characteristics of the rupture process simultaneously. Here we propose an alternative strategy with a two-step procedure: first constraining the larger-scale characteristics with lower-frequency signals, and then resolving the smaller-scale ones with higher-frequency signals. We have designed synthetic tests to evaluate our strategy and compare it with the traditional one, and we have applied our strategy to study the 2015 Gorkha, Nepal earthquake using teleseismic waveforms. Both the traditional method and our two-step strategy analyze the data only at different temporal scales (i.e., different frequency bands), while the spatial distribution of model parameters also shows multi-scale characteristics. A more sophisticated strategy is to transform the slip model into different spatial scales, and then analyze the smooth slip distribution (larger scales) with lower-frequency data first and the more detailed slip distribution (smaller scales) with higher-frequency data subsequently. We are now implementing the slip inversion using both spatial- and temporal-domain wavelets. This multi-scale analysis can help us better understand frequency-dependent rupture characteristics of large earthquakes.
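The frequency-band separation underlying the two-step strategy can be illustrated with a discrete wavelet transform. The sketch below uses PyWavelets to split a toy waveform into a coarse approximation (the input to the first, larger-scale inversion step) and the residual detail (the input to the second step); the wavelet family, decomposition level, and synthetic signal are all assumptions.

```python
import numpy as np
import pywt

t = np.linspace(0, 100, 2048)
# toy "waveform": a slow component plus a faster, smaller one
waveform = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.sin(2 * np.pi * 0.8 * t)

coeffs = pywt.wavedec(waveform, "db4", level=5)
# Step 1: keep only the approximation -> larger-scale constraint
coarse = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                      "db4")[: len(waveform)]
# Step 2: the residual details carry the smaller-scale characteristics
detail = waveform - coarse
print(f"coarse rms: {coarse.std():.3f}, detail rms: {detail.std():.3f}")
```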
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2008-03-01
This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.