Video quality assessment using motion-compensated temporal filtering and manifold feature similarity
Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju
2017-01-01
A well-performing video quality assessment (VQA) method should be consistent with the human visual system to achieve good prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. Specifically, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Manifold feature learning (MFL) and phase congruency (PC) are then used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are combined into a GoF quality, and a temporal pooling strategy subsequently integrates the GoF qualities into an overall video quality. The proposed method processes temporal information appropriately through MCTF and temporal pooling, and simulates human visual perception through MFL. Experiments on a publicly available video quality database showed that, in comparison with several state-of-the-art VQA methods, the proposed method achieves better consistency with subjective video quality and predicts video quality more accurately. PMID:28445489
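As an illustration of the fusion and pooling steps described above, the sketch below combines per-GoF low-pass and high-pass quality scores and pools them over time. The weighted-product fusion, the weight value, and the percentile pooling rule are assumptions for illustration; the abstract does not specify the exact formulas.

```python
import numpy as np

def gof_quality(q_lpc: float, q_hpc: float, w: float = 0.5) -> float:
    """Fuse LPC (MFL-based) and HPC (PC-based) quality scores into one GoF score.
    A weighted product is assumed here; the paper's exact rule may differ."""
    return (q_lpc ** w) * (q_hpc ** (1.0 - w))

def pool_video_quality(gof_scores, percentile: float = 20.0) -> float:
    """Temporal pooling of per-GoF scores into an overall video quality.
    Percentile pooling (emphasizing the worst groups of frames) is assumed."""
    scores = np.sort(np.asarray(gof_scores, dtype=float))
    k = max(1, int(np.ceil(len(scores) * percentile / 100.0)))
    return float(scores[:k].mean())

# Example: three groups of frames with (LPC, HPC) quality measures in [0, 1].
gof = [gof_quality(0.92, 0.88), gof_quality(0.85, 0.60), gof_quality(0.95, 0.90)]
print(pool_video_quality(gof))
```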
Modeling a maintenance simulation of the geosynchronous platform
NASA Technical Reports Server (NTRS)
Kleiner, A. F., Jr.
1980-01-01
A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, the total number of maintenance trips for each pass, the average capability of the system, etc.
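A minimal discrete-event sketch of the pass structure described above is given below: each pass simulates one mission with random failures and a simple threshold maintenance policy, the model is re-initialized between passes, and statistics are compiled at the end. The failure distribution, the maintenance policy, and all numeric values are illustrative assumptions, not the paper's model.

```python
import random
import statistics

def simulate_pass(mission_lifetime=10.0, mean_time_to_failure=3.0,
                  maintenance_threshold=2, rng=None):
    """One pass = one mission. Failures arrive at random times; a maintenance
    trip is dispatched whenever the number of failed modules reaches the
    threshold. All distributions and numbers are illustrative assumptions."""
    rng = rng or random.Random()
    t, failed, trips, first_maintenance = 0.0, 0, 0, None
    while True:
        t += rng.expovariate(1.0 / mean_time_to_failure)   # next failure event
        if t > mission_lifetime:
            break
        failed += 1
        if failed >= maintenance_threshold:                 # maintenance event
            trips += 1
            failed = 0
            if first_maintenance is None:
                first_maintenance = t
    return trips, first_maintenance

def run_simulation(passes=1000, seed=1):
    rng = random.Random(seed)
    trips, first_times = [], []
    for _ in range(passes):                  # model is re-initialized each pass
        n_trips, t_first = simulate_pass(rng=rng)
        trips.append(n_trips)
        if t_first is not None:
            first_times.append(t_first)
    print("mean maintenance trips per mission:", statistics.mean(trips))
    print("mean time to first maintenance:", statistics.mean(first_times))

run_simulation()
```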
A practical guide to replica-exchange Wang-Landau simulations
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Li, Ying Wai; Landau, David P.
2018-04-01
This paper is based on a series of tutorial lectures about the replica-exchange Wang-Landau (REWL) method given at the IX Brazilian Meeting on Simulational Physics (BMSP 2017). It provides a practical guide for the implementation of the method. A complete example code for a model system is available online. In this paper, we discuss the main parallel features of this code after a brief introduction to the REWL algorithm. The tutorial section is mainly directed at users who have written a single-walker Wang–Landau program already but might have just taken their first steps in parallel programming using the Message Passing Interface (MPI). In the last section, we answer “frequently asked questions” from users about the implementation of REWL for different scientific problems.
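For readers who want to recall what the single-walker starting point looks like before moving to the replica-exchange version, here is a minimal single-walker Wang-Landau sketch for a 1D Ising ring. It is not the REWL example code mentioned in the paper (which is available online and MPI-parallel); the toy system, sweep length, and flatness criterion are illustrative choices.

```python
import math
import random

def wang_landau_1d_ising(n_spins=8, flatness=0.8, ln_f_final=1e-4, seed=0):
    """Single-walker Wang-Landau estimate of ln g(E) for a 1D Ising ring."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    energy = -sum(spins[i] * spins[(i + 1) % n_spins] for i in range(n_spins))
    ln_g, hist, ln_f = {}, {}, 1.0
    while ln_f > ln_f_final:
        for _ in range(5000 * n_spins):                      # a block of spin-flip trials
            i = rng.randrange(n_spins)
            d_e = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
            e_new = energy + d_e
            # accept with probability min(1, g(E_old)/g(E_new))
            if rng.random() < math.exp(min(0.0, ln_g.get(energy, 0.0) - ln_g.get(e_new, 0.0))):
                spins[i] = -spins[i]
                energy = e_new
            ln_g[energy] = ln_g.get(energy, 0.0) + ln_f      # update density of states
            hist[energy] = hist.get(energy, 0) + 1           # and the visit histogram
        counts = list(hist.values())
        if min(counts) >= flatness * sum(counts) / len(counts):  # histogram flat?
            ln_f /= 2.0                                          # refine modification factor
            hist = {}
    return ln_g

print(sorted(wang_landau_1d_ising().items()))
```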
Optimization of Spiral-Based Pulse Sequences for First Pass Myocardial Perfusion Imaging
Salerno, Michael; Sica, Christopher T.; Kramer, Christopher M.; Meyer, Craig H.
2010-01-01
While spiral trajectories have multiple attractive features such as their isotropic resolution, acquisition efficiency, and robustness to motion, there has been limited application of these techniques to first pass perfusion imaging because of potential off-resonance and inconsistent data artifacts. Spiral trajectories may also be less sensitive to dark-rim artifacts (DRA) that are caused, at least in part, by cardiac motion. By careful consideration of the spiral trajectory readout duration, flip angle strategy, and image reconstruction strategy, spiral artifacts can be abated to create high quality first pass myocardial perfusion images with high SNR. The goal of this paper was to design interleaved spiral pulse sequences for first-pass myocardial perfusion imaging, and to evaluate them clinically for image quality and the presence of dark-rim, blurring, and dropout artifacts. PMID:21590802
New Finger Biometric Method Using Near Infrared Imaging
Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul
2011-01-01
In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
Modified current follower-based immittance function simulators
NASA Astrophysics Data System (ADS)
Alpaslan, Halil; Yuce, Erkan
2017-12-01
In this paper, four immittance function simulators consisting of a single modified current follower with single Z- terminal and a minimum number of passive components are proposed. The first proposed circuit can provide +L parallel with +R and the second proposed one can realise -L parallel with -R. The third proposed structure can provide +L series with +R and the fourth proposed one can realise -L series with -R. However, all the proposed immittance function simulators need a single resistive matching constraint. Parasitic impedance effects on all the proposed immittance function simulators are investigated. A second-order current-mode (CM) high-pass filter derived from the first proposed immittance function simulator is given as an application example. Also, a second-order CM low-pass filter derived from the third proposed immittance function simulator is given as an application example. A number of simulation results based on SPICE programme and an experimental test result are given to verify the theory.
Brackney, Dana E; Lane, Susan Hayes; Dawson, Tyia; Koontz, Angie
2017-11-01
This descriptive field study examines processes used to evaluate simulation for senior-level Bachelor of Science in Nursing (BSN) students in a capstone course, discusses challenges related to simulation evaluation, and reports the relationship between faculty evaluation of student performance and National Council Licensure Examination for Registered Nurses (NCLEX-RN) first-time passing rates. Researchers applied seven terms used to rank BSN student performance (n = 41, female, ages 22-24 years) in a senior-level capstone simulation. Faculty evaluation was correlated with students' NCLEX-RN outcomes. Students evaluated as "lacking confidence" and "flawed" were less likely to pass the NCLEX-RN on the first attempt. Faculty evaluation of capstone simulation performance provided additional evidence of student preparedness for practice in the RN role, as evidenced by the relationship between the faculty assessment and NCLEX-RN success. Simulation has been broadly accepted as a powerful educational tool that may also contribute to verification of student achievement of program outcomes and readiness for the RN role.
Design of fire detection equipment based on ultraviolet detection technology
NASA Astrophysics Data System (ADS)
Liu, Zhenji; Liu, Jin; Chu, Sheng; Ping, Chao; Yuan, Xiaobing
2015-03-01
Utilizing the wide-bandgap semiconductor MgZnO, a mid-ultraviolet-band (MUV) detector was researched and developed and has passed simulation experiments under solar illumination. Based on this ultraviolet detector, a design scheme for a gunshot detection device is presented; it is composed of twelve ultraviolet detectors, a signal amplifier, a processor, an annunciator, an azimuth indicator, and a bracket. By analysing the solar-blind characteristics, ultraviolet responsivity, fire features of gunshots, and detection distance, the feasibility of this design scheme is demonstrated.
DESDynI Lidar for Solid Earth Applications
NASA Technical Reports Server (NTRS)
Sauber, Jeanne; Hofton, Michelle; Bruhn, Ronald; Lutchke, Scott; Blair, Bryan
2011-01-01
As part of NASA's DESDynI mission, global elevation profiles from contiguous 25 m footprint Lidar measurements will be made. Here we present results of a performance simulation of a single pass of the multi-beam Lidar instrument over uplifted marine terraces in southern Alaska. The significance of the Lidar simulations is that surface topography would be captured at sufficient resolution for mapping uplifted terrace features, but it will be hard to discern 1-2 m topographic change over features less than tens of meters in width. Since Lidar would penetrate most vegetation, the accurate bald-Earth elevation profiles will provide new elevation information beyond the standard 30-m DEM.
Poly(A) code analyses reveal key determinants for tissue-specific mRNA alternative polyadenylation
Weng, Lingjie; Li, Yi; Xie, Xiaohui; Shi, Yongsheng
2016-01-01
mRNA alternative polyadenylation (APA) is a critical mechanism for post-transcriptional gene regulation and is often regulated in a tissue- and/or developmental stage-specific manner. An ultimate goal for the APA field has been to be able to computationally predict APA profiles under different physiological or pathological conditions. As a first step toward this goal, we have assembled a poly(A) code for predicting tissue-specific poly(A) sites (PASs). Based on a compendium of over 600 features that have known or potential roles in PAS selection, we have generated and refined a machine-learning algorithm using multiple high-throughput sequencing-based data sets of tissue-specific and constitutive PASs. This code can predict tissue-specific PASs with >85% accuracy. Importantly, by analyzing the prediction performance based on different RNA features, we found that PAS context, including the distance between alternative PASs and the relative position of a PAS within the gene, is a key feature for determining the susceptibility of a PAS to tissue-specific regulation. Our poly(A) code provides a useful tool for not only predicting tissue-specific APA regulation, but also for studying its underlying molecular mechanisms. PMID:27095026
NASA Astrophysics Data System (ADS)
Fang, Li; Xu, Yusheng; Yao, Wei; Stilla, Uwe
2016-11-01
For monitoring glacier surface motion in polar and alpine areas, radar remote sensing is becoming a popular technology owing to its specific advantages of being independent of weather conditions and sunlight. In this paper we propose a method for glacier surface motion monitoring using phase correlation (PC) based on point-like features (PLF). We carry out experiments using repeat-pass TerraSAR X-band (TSX) and Sentinel-1 C-band (S1C) intensity images of the Taku glacier in the Juneau Icefield located in southeast Alaska. The intensity imagery is first filtered by an improved adaptive refined Lee filter, while the effect of topographic relief is removed via the SRTM-X DEM. Then, a robust phase correlation algorithm based on singular value decomposition (SVD) and an improved random sample consensus (RANSAC) algorithm is applied to sequential PLF pairs generated by correlation using a 2D sinc function template. The approach for glacier monitoring is validated by both simulated SAR data and real SAR data from two satellites. The results obtained from these three test datasets confirm the superiority of the proposed approach compared to standard correlation-like methods. By the use of the proposed adaptive refined Lee filter, we achieve a good balance between the suppression of noise and the preservation of local image textures. The presented phase correlation algorithm shows an accuracy of better than 0.25 pixels when conducting matching tests using simulated SAR intensity images with strong noise. Quantitative 3D motions and velocities of the investigated Taku glacier during a repeat-pass period are obtained, which allows a comprehensive and reliable analysis for the investigation of large-scale glacier surface dynamics.
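The core of the offset estimation is standard phase correlation; a minimal sketch for integer-pixel shifts between two co-registered patches is given below. The paper's SVD-based subpixel refinement, the 2D sinc point-like-feature templates, and the RANSAC outlier rejection are not reproduced here.

```python
import numpy as np

def phase_correlation_shift(patch_a, patch_b):
    """Estimate the integer (row, col) shift such that patch_a ~ np.roll(patch_b, shift).
    Integer-pixel only; the paper additionally uses SVD-based subpixel
    refinement and RANSAC outlier rejection, which are omitted here."""
    fa, fb = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the phase
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # map the circular peak position to a signed shift
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

# Synthetic check: a patch shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(b, a))   # expected (3, -5)
```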
A post-processing algorithm for time domain pitch trackers
NASA Astrophysics Data System (ADS)
Specker, P.
1983-01-01
This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
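The outlier-rejection pass can be sketched as follows: each pitch value is compared with the local distribution inside a sliding 80 ms window and flagged when it deviates strongly. The median statistic, the 30% relative-deviation threshold, and the frame period are illustrative assumptions; the third, reconstruction pass is not sketched.

```python
import numpy as np

def reject_pitch_outliers(f0, frame_period_ms=10.0, window_ms=80.0, max_rel_dev=0.3):
    """Second pass of the post-processor: mark F0 values that are outliers
    with respect to the local distribution in a sliding 80 ms window.
    Returns a copy of f0 with outliers set to 0 (unvoiced/unknown)."""
    f0 = np.asarray(f0, dtype=float)
    half = max(1, int(round(window_ms / frame_period_ms)) // 2)
    cleaned = f0.copy()
    for i, value in enumerate(f0):
        if value <= 0:                       # already unvoiced
            continue
        window = f0[max(0, i - half): i + half + 1]
        voiced = window[window > 0]
        if len(voiced) < 3:
            continue
        local_median = np.median(voiced)
        if abs(value - local_median) > max_rel_dev * local_median:
            cleaned[i] = 0.0                 # flag as outlier for the next pass
    return cleaned

# Example: a 120 Hz contour with two octave errors.
track = [120, 121, 240, 119, 118, 60, 117, 116]
print(reject_pitch_outliers(track))
```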
Phoenix Mars Lander: Vortices and Dust Devils at the Landing Site
NASA Astrophysics Data System (ADS)
Ellehoj, M. D.; Taylor, P. A.; Gunnlaugsson, H. P.; Gheynani, B. T.; Drube, L.; von Holstein-Rathlou, C.; Whiteway, J.; Lemmon, M.; Madsen, M. B.; Fisher, D.; Volpe, R.; Smith, P.
2008-12-01
Near continuous measurements of temperature and pressure on the Phoenix Mars Lander are used to identify the passage of vertically oriented vortex structures at the Phoenix landing site (126°W, 68°N) on Mars. Observations: During the Phoenix mission the pressure and temperature sensors frequently detected features passing over or close to the lander. Short duration (order 20 s) pressure drops of order 1-2 Pa, and often less, were observed relatively frequently, accompanied by increases in temperature. Similar features were observed from the Pathfinder mission, although in that case the reported pressure drops were often larger [1]. Statistics of the pressure drop features over the first 102 sols of the Phoenix mission show that most of the events occur between noon and 15:00 LMST, the hottest part of the sol. Dust Raising: By assuming the concept of a vortex in cyclostrophic flow as well as various assumptions about the atmosphere, we obtain a pressure drop of 1.9-3.2 Pa if dust is to be raised. We saw only a few pressure drops this large in sols 0-102. However, the features do not need to pass directly over the lander, and the core pressures could be lower than the minima we measure. Furthermore, the response time of the pressure sensor is of order 3-5 s, so it may not capture peak pressure perturbations. Thus, more dust devils may have occurred near the Phoenix site, but most of our detected vortices would be ghostly, dustless devils. Modelling: Using a Large Eddy Simulation model, we can simulate highly convective boundary layers on Mars [2]. The typical vortex has a diameter of 150 m and extends up to 1 km. Further calculations give an incidence of 11 vortex events per day that could be compatible with the LES simulations. Deeper investigation of this is planned, but the numbers are roughly compatible. If the significant pressure signatures are limited to the center of the vortex then 5 per sol might be appropriate. The Phoenix mission has collected a unique set of in situ meteorological data from the Arctic regions on Mars. Modelling work shows that vertically oriented vortices with low pressure, warm cores can develop on internal boundaries, such as those associated with cellular convection, and this is supported by observations. Simple cyclostrophic estimates of vortex wind speeds suggest that dust devils will form, but that most vortices will not be capable of lifting dust from the surface. So, at least in the first 102 sols, most of the Phoenix devils are dustless. References: [1] Ferri, F., Smith, P. H., Lemmon, M., Renno, N. O. (2003) Dust devils as observed by Mars Pathfinder. JGR, 108, E12, 5133, doi:10.1029/2000JE001421. [2] Gheynani, B. T. and Taylor, P. A. (2008) Large Eddy Simulation of vertical vortices in highly convective Martian boundary layer, Paper 10B.6, 18th Symposium on Boundary Layers and Turbulence, June 2008, Stockholm, Sweden.
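For reference, the quoted dust-raising threshold follows from cyclostrophic balance between the radial pressure gradient and the centripetal acceleration of the tangential wind; the sketch below uses an assumed Martian near-surface density and is only a back-of-envelope check, not the authors' calculation.

```latex
\[
  \frac{v^{2}}{r} \;=\; \frac{1}{\rho}\,\frac{\partial p}{\partial r}
  \qquad\Longrightarrow\qquad
  \Delta p \;\simeq\; \rho\, v_{\max}^{2}
\]
% With an assumed near-surface density \(\rho \approx 0.02\ \mathrm{kg\,m^{-3}}\),
% a core pressure drop of \(\Delta p = 1.9\text{--}3.2\ \mathrm{Pa}\) corresponds to
% peak tangential winds of \(v_{\max}=\sqrt{\Delta p/\rho}\approx 10\text{--}13\ \mathrm{m\,s^{-1}}\).
```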
NASA Astrophysics Data System (ADS)
Wang, Chengpeng; Li, Fuguo; Liu, Juncheng
2018-04-01
The objectives of this work are to study the deformation features, textures, microstructures, and dislocation configurations of ultrafine-grained copper processed by elliptical cross-section spiral equal-channel extrusion (ECSEE). The deformation patterns of simple shear and pure shear in the ECSEE process were evaluated with the analytical method of geometric strain. The influence of the main technical parameters of the ECSEE die on the effective strain distribution on the surface of ECSEE-fabricated samples was examined by finite element simulation. A high friction factor could improve the effective strain accumulation of material deformation. Moreover, the pure copper sample fabricated by ECSEE shows a strong rotated cube shear texture. The refining mechanism of dislocation deformation is dominant in copper processed by a single pass of ECSEE. The inhomogeneity of the micro-hardness distribution on the longitudinal section of the ECSEE-fabricated sample is consistent with the strain and microstructure distribution features.
NASA Astrophysics Data System (ADS)
Sukanto, H.; Budiana, E. P.; Putra, B. H. H.
2016-03-01
The objective of this research is to compare the distribution of room temperature obtained with three roof materials, namely plastic-rubber composite, clay, and asbestos. The simulation used Ansys Fluent to obtain the temperature distribution. There were two conditions in these simulations: first, air passing beside the room, and second, air passing in front of the room. Each condition was varied with air speeds of 1 m/s, 2 m/s, 3 m/s, 4 m/s, and 5 m/s for each material used. Three heat transfer modes are included in this simulation, namely radiation, convection, and conduction. Based on the ANSI/ASHRAE Standard 55-2004, the simulation results showed that the best temperature distribution was obtained with the plastic-rubber composite roof.
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features instead of exploiting the features in existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude that our approach can be developed independently and integrated into current SMT pipelines directly. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decodings.
Agnihotri, Deepak; Verma, Kesari; Tripathi, Priyanka
2016-01-01
The contiguous sequences of terms (N-grams) in documents are symmetrically distributed among different classes. This symmetrical distribution of the N-grams raises uncertainty about the class to which an N-gram belongs. In this paper, we focus on the selection of the most discriminating N-grams by reducing the effects of symmetrical distribution. In this context, a new text feature selection method, named the symmetrical strength of N-grams (SSNG), is proposed using a two-pass filtering based feature selection (TPF) approach. In the first pass of TPF, the SSNG method chooses various informative N-grams from the entire set of N-grams extracted from the corpus. Subsequently, in the second pass, the well-known chi-square (χ²) method is used to select the few most informative N-grams. Further, to classify the documents, two standard classifiers, Multinomial Naive Bayes and Linear Support Vector Machine, were applied to ten standard text data sets. For most of the datasets, the experimental results show that the performance and success rate of the SSNG method using the TPF approach are superior to those of the state-of-the-art methods, viz. mutual information, information gain, odds ratio, discriminating feature selection, and χ².
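A minimal sketch of the second (χ²) filtering pass is shown below: candidate N-grams surviving the first (SSNG) pass are scored by the χ² statistic of their document-frequency contingency table against each class, and the top-scoring ones are kept. Scoring by the maximum over classes and the toy data are illustrative choices; the SSNG scoring itself and the classifiers are not reproduced.

```python
from itertools import chain

def chi_square(n11, n10, n01, n00):
    """Chi-square statistic of a 2x2 contingency table:
    n11 = docs in class containing the n-gram, n10 = docs in class without it,
    n01 = other docs containing it, n00 = other docs without it."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den if den else 0.0

def select_ngrams(docs, labels, top_k=2):
    """docs: list of per-document n-gram lists (output of the first, SSNG pass);
    labels: class label per document. Keeps the top_k n-grams ranked by the
    maximum chi-square score over classes."""
    vocab = set(chain.from_iterable(docs))
    classes = set(labels)
    scores = {}
    for gram in vocab:
        best = 0.0
        for c in classes:
            n11 = sum(1 for d, y in zip(docs, labels) if y == c and gram in d)
            n10 = sum(1 for d, y in zip(docs, labels) if y == c and gram not in d)
            n01 = sum(1 for d, y in zip(docs, labels) if y != c and gram in d)
            n00 = sum(1 for d, y in zip(docs, labels) if y != c and gram not in d)
            best = max(best, chi_square(n11, n10, n01, n00))
        scores[gram] = best
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

docs = [["machine learning", "neural net"], ["machine learning", "stock market"],
        ["stock market", "interest rate"], ["interest rate", "neural net"]]
labels = ["tech", "finance", "finance", "tech"]
print(select_ngrams(docs, labels))
```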
Mathur, S; Symons, S P; Huynh, T J; Muthusami, P; Montanera, W; Bharatha, A
2017-01-01
Spinal epidural AVFs are rare spinal vascular malformations. When there is associated intradural venous reflux, they may mimic the more common spinal dural AVFs. Correct diagnosis and localization before conventional angiography is beneficial to facilitate treatment. We hypothesize that first-pass contrast-enhanced MRA can diagnose and localize spinal epidural AVFs with intradural venous reflux and distinguish them from other spinal AVFs. Forty-two consecutive patients with a clinical and/or radiologic suspicion of spinal AVF underwent MR imaging, first-pass contrast-enhanced MRA, and DSA at a single institute (2000-2015). MR imaging/MRA and DSA studies were reviewed by 2 independent blinded observers. DSA was used as the reference standard. On MRA, all 7 spinal epidural AVFs with intradural venous reflux were correctly diagnosed and localized with no interobserver disagreement. The key diagnostic feature was arterialized filling of an epidural venous pouch with a refluxing radicular vein arising from the arterialized epidural venous system. First-pass contrast-enhanced MRA is a reliable and useful technique for the initial diagnosis and localization of spinal epidural AVFs with intradural venous reflux and can distinguish these lesions from other spinal AVFs. © 2017 by American Journal of Neuroradiology.
Mapping a battlefield simulation onto message-passing parallel architectures
NASA Technical Reports Server (NTRS)
Nicol, David M.
1987-01-01
Perhaps the most critical problem in distributed simulation is that of mapping: without an effective mapping of workload to processors the speedup potential of parallel processing cannot be realized. Mapping a simulation onto a message-passing architecture is especially difficult when the computational workload dynamically changes as a function of time and space; this is exactly the situation faced by battlefield simulations. This paper studies an approach where the simulated battlefield domain is first partitioned into many regions of equal size; typically there are more regions than processors. The regions are then assigned to processors; a processor is responsible for performing all simulation activity associated with the regions. The assignment algorithm is quite simple and attempts to balance load by exploiting locality of workload intensity. The performance of this technique is studied on a simple battlefield simulation implemented on the Flex/32 multiprocessor. Measurements show that the proposed method achieves reasonable processor efficiencies. Furthermore, the method shows promise for use in dynamic remapping of the simulation.
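The mapping idea can be sketched as follows: partition the battlefield into more equal-size regions than processors and assign contiguous blocks of regions so that each processor receives roughly an equal share of the current workload. The greedy block-closing heuristic below is an illustrative stand-in for the paper's assignment algorithm; contiguity is what preserves locality of workload intensity.

```python
def assign_regions(workloads, n_processors):
    """Assign a 1-D sequence of region workloads to processors as contiguous
    blocks; a block is closed once it reaches the average share of the load
    still to be assigned. Illustrative heuristic, not the paper's algorithm."""
    assignment = []
    remaining = float(sum(workloads))
    current, acc = 0, 0.0
    target = remaining / n_processors
    for load in workloads:
        if acc >= target and current < n_processors - 1:
            current += 1                       # start filling the next processor
            remaining -= acc
            acc = 0.0
            target = remaining / (n_processors - current)
        assignment.append(current)
        acc += load
    return assignment

# 12 equal-size regions with spatially varying workload, mapped to 4 processors.
workloads = [3, 2, 4, 5, 6, 4, 3, 2, 5, 4, 6, 2]
mapping = assign_regions(workloads, 4)
print(mapping)                                 # region index -> processor id
print([sum(w for w, p in zip(workloads, mapping) if p == q) for q in range(4)])
```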
A decade has passed since the term “emerging” was first formally used to describe the existence of water pollutants not previously recognized; a 1998 NRC workshop ("Identifying Future Drinking Water Contaminants") and several 1999 reports by USGS were among the first to feature the...
Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow.
Kerner, Boris S; Klenov, Sergey L; Schreckenberg, Michael
2011-10-01
We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules "acceleration," "deceleration," "randomization," and "motion" of the Nagel-Schreckenberg CA model as well as "overacceleration through lane changing to the faster lane," "comparison of vehicle gap with the synchronization gap," and "speed adaptation within the synchronization gap" of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
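For orientation, the four Nagel-Schreckenberg rules that the model builds on can be sketched in a few lines; the three-phase extensions listed above (overacceleration through lane changing, synchronization-gap comparison, and speed adaptation) are deliberately omitted, and the parameter values are illustrative.

```python
import random

def nasch_step(positions, speeds, road_length, v_max=5, p_slow=0.3, rng=random):
    """One update of the Nagel-Schreckenberg cellular automaton
    (single lane, periodic road). Vehicles are kept in ring order."""
    n = len(positions)
    new_speeds = []
    for i in range(n):
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_length
        v = min(speeds[i] + 1, v_max)        # rule 1: acceleration
        v = min(v, gap)                      # rule 2: deceleration to the gap ahead
        if v > 0 and rng.random() < p_slow:  # rule 3: randomization
            v -= 1
        new_speeds.append(v)
    new_positions = [(x + v) % road_length for x, v in zip(positions, new_speeds)]
    return new_positions, new_speeds         # rule 4: motion

# 20 vehicles on a 100-cell ring, started bumper to bumper.
road_length, n_cars = 100, 20
positions, speeds = list(range(n_cars)), [0] * n_cars
rng = random.Random(42)
for _ in range(200):
    positions, speeds = nasch_step(positions, speeds, road_length, rng=rng)
print("mean speed:", sum(speeds) / n_cars)
```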
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, P. (Fermilab); Cary, J.
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for design or performance optimization to all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.
Jensen, Katrine; Bjerrum, Flemming; Hansen, Henrik Jessen; Petersen, René Horsleben; Pedersen, Jesper Holst; Konge, Lars
2017-06-01
The societies of thoracic surgery are working to incorporate simulation and competency-based assessment into specialty training. One challenge is the development of a simulation-based test, which can be used as an assessment tool. The study objective was to establish validity evidence for a virtual reality simulator test of a video-assisted thoracoscopic surgery (VATS) lobectomy of a right upper lobe. Participants with varying experience in VATS lobectomy were included. They were familiarized with a virtual reality simulator (LapSim ® ) and introduced to the steps of the procedure for a VATS right upper lobe lobectomy. The participants performed two VATS lobectomies on the simulator with a 5-min break between attempts. Nineteen pre-defined simulator metrics were recorded. Fifty-three participants from nine different countries were included. High internal consistency was found for the metrics with Cronbach's alpha coefficient for standardized items of 0.91. Significant test-retest reliability was found for 15 of the metrics (p-values <0.05). Significant correlations between the metrics and the participants VATS lobectomy experience were identified for seven metrics (p-values <0.001), and 10 metrics showed significant differences between novices (0 VATS lobectomies performed) and experienced surgeons (>50 VATS lobectomies performed). A pass/fail level defined as approximately one standard deviation from the mean metric scores for experienced surgeons passed none of the novices (0 % false positives) and failed four of the experienced surgeons (29 % false negatives). This study is the first to establish validity evidence for a VATS right upper lobe lobectomy virtual reality simulator test. Several simulator metrics demonstrated significant differences between novices and experienced surgeons and pass/fail criteria for the test were set with acceptable consequences. This test can be used as a first step in assessing thoracic surgery trainees' VATS lobectomy competency.
A trial of e-simulation of sudden patient deterioration (FIRST2ACT WEB) on student learning.
Bogossian, Fiona E; Cooper, Simon J; Cant, Robyn; Porter, Joanne; Forbes, Helen
2015-10-01
High-fidelity simulation pedagogy is of increasing importance in health professional education; however, face-to-face simulation programs are resource intensive and impractical to implement across large numbers of students. To investigate undergraduate nursing students' theoretical and applied learning in response to the e-simulation program FIRST2ACT WEB™, and explore predictors of virtual clinical performance. Multi-center trial of FIRST2ACT WEB™, accessible to students in five Australian universities and colleges across 8 campuses. A population of 489 final-year nursing students in programs of study leading to license to practice. Participants proceeded through three phases: (i) pre-simulation: briefing and assessment of clinical knowledge and experience; (ii) e-simulation: three interactive e-simulation clinical scenarios which included video recordings of patients with deteriorating conditions, interactive clinical tasks, pop-up responses to tasks, and timed performance; and (iii) post-simulation feedback and evaluation. Descriptive statistics were followed by bivariate analysis to detect any associations, which were further tested using standard regression analysis. Of 409 students who commenced the program (83% response rate), 367 undergraduate nursing students completed the web-based program in its entirety, yielding a completion rate of 89.7%; 38.1% of students achieved passing clinical performance across three scenarios, and the proportion achieving passing clinical knowledge increased from 78.15% pre-simulation to 91.6% post-simulation. Knowledge was the main independent predictor of clinical performance in responding to a virtual deteriorating patient, R² = 0.090, F(7, 352) = 4.962, p < 0.001. The use of web-based technology allows simulation activities to be accessible to a large number of participants, and completion rates indicate that 'Net Generation' nursing students were highly engaged with this mode of learning. The web-based e-simulation program FIRST2ACT WEB™ effectively enhanced knowledge, virtual clinical performance, and self-assessed knowledge, skills, confidence, and competence in final-year nursing students. Copyright © 2015 Elsevier Ltd. All rights reserved.
Østergaard, Mia L; Nielsen, Kristina R; Albrecht-Beste, Elisabeth; Konge, Lars; Nielsen, Michael B
2018-01-01
This study aimed to develop a test with validity evidence for abdominal diagnostic ultrasound with a pass/fail-standard to facilitate mastery learning. The simulator had 150 real-life patient abdominal scans of which 15 cases with 44 findings were selected, representing level 1 from The European Federation of Societies for Ultrasound in Medicine and Biology. Four groups of experience levels were constructed: Novices (medical students), trainees (first-year radiology residents), intermediates (third- to fourth-year radiology residents) and advanced (physicians with ultrasound fellowship). Participants were tested in a standardized setup and scored by two blinded reviewers prior to an item analysis. The item analysis excluded 14 diagnoses. Both internal consistency (Cronbach's alpha 0.96) and inter-rater reliability (0.99) were good and there were statistically significant differences (p < 0.001) between all four groups, except the intermediate and advanced groups (p = 1.0). There was a statistically significant correlation between experience and test scores (Pearson's r = 0.82, p < 0.001). The pass/fail-standard failed all novices (no false positives) and passed all advanced (no false negatives). All intermediate participants and six out of 14 trainees passed. We developed a test for diagnostic abdominal ultrasound with solid validity evidence and a pass/fail-standard without any false-positive or false-negative scores. • Ultrasound training can benefit from competency-based education based on reliable tests. • This simulation-based test can differentiate between competency levels of ultrasound examiners. • This test is suitable for competency-based education, e.g. mastery learning. • We provide a pass/fail standard without false-negative or false-positive scores.
Yang, Yang; Kramer, Christopher M.; Shaw, Peter W.; Meyer, Craig H.; Salerno, Michael
2015-01-01
Purpose: To design and evaluate 2D L1-SPIRiT accelerated spiral pulse sequences for first-pass myocardial perfusion imaging with whole heart coverage capable of measuring 8 slices at 2 mm in-plane resolution at heart rates up to 125 beats per minute (BPM). Methods: Combinations of 5 different spiral trajectories and 4 k-t sampling patterns were retrospectively simulated in 25 fully sampled datasets and reconstructed with L1-SPIRiT to determine the best combination of parameters. Two candidate sequences were prospectively evaluated in 34 human subjects to assess in-vivo performance. Results: A dual density broad transition spiral trajectory with either angularly uniform or golden angle in time k-t sampling pattern had the largest structural similarity (SSIM) and smallest root mean square error (RMSE) from the retrospective simulation, and the L1-SPIRiT reconstruction had well-preserved temporal dynamics. In vivo data demonstrated that both of the sampling patterns could produce high quality perfusion images with whole-heart coverage. Conclusion: First-pass myocardial perfusion imaging using accelerated spirals with optimized trajectory and k-t sampling pattern can produce high quality 2D perfusion images with whole-heart coverage at heart rates up to 125 BPM. PMID:26538511
ms2: A molecular simulation tool for thermodynamic properties
NASA Astrophysics Data System (ADS)
Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran
2011-11-01
This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work.
Program summary
Program title: ms2
Catalogue identifier: AEJF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Special Licence supplied by the authors
No. of lines in distributed program, including test data, etc.: 82 794
No. of bytes in distributed program, including test data, etc.: 793 705
Distribution format: tar.gz
Programming language: Fortran90
Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.)
Operating system: Unix/Linux, Windows
Has the code been vectorized or parallelized?: Yes. Message Passing Interface (MPI) protocol
Scalability: Excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations.
RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules.
Classification: 7.7, 7.9, 12
External routines: Message Passing Interface (MPI)
Nature of problem: Calculation of application oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures.
Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism.
Restrictions: No. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 2000 molecules or less.
Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories.
Additional comments: Sample makefiles for multiple operation platforms are provided. Documentation is provided with the installation package and is available at http://www.ms-2.de.
Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
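As a reminder of the Green-Kubo route to transport properties mentioned above, the self-diffusion coefficient is obtained from the velocity autocorrelation function; other transport coefficients follow from analogous current autocorrelations. The notation below is standard and is not taken from the ms2 documentation.

```latex
\[
  D \;=\; \frac{1}{3N}\int_{0}^{\infty}
          \Bigl\langle \sum_{i=1}^{N} \mathbf{v}_i(t)\cdot\mathbf{v}_i(0) \Bigr\rangle \, dt
\]
% i.e. the time integral of the velocity autocorrelation function; shear
% viscosity and thermal conductivity follow from the analogous autocorrelations
% of the off-diagonal pressure tensor and of the heat flux, respectively.
```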
ERIC Educational Resources Information Center
Lobuts, John, Jr.
2011-01-01
This autobiography is an attempt to write and share the author's personal story, first as a learner, then as a teacher. It also attempts to share the educational gifts initially bestowed and then passed on from one generation to the next. The writer will talk about how games and simulations were first inherited and learned, then employed in…
NASA Astrophysics Data System (ADS)
Gueudré, C.; Marrec, L. Le; Chekroun, M.; Moysan, J.; Chassignole, B.; Corneloup, G.
2011-06-01
Multipass welds made in austenitic stainless steel, in the primary circuit of nuclear power plants with pressurized water reactors, are characterized by an anisotropic and heterogeneous structure that disturbs ultrasonic propagation and challenges ultrasonic non-destructive testing. Simulation in this type of structure is now possible thanks to the MINA code, which models the grain orientations by taking the welding process into account, and the ATHENA code, which simulates the ultrasonic propagation exactly. We study the case where the order of the weld passes is unknown, in order to estimate the possibility of reconstructing this important parameter from ultrasonic measurements. The first results are presented.
Yang, Guowei; You, Shengzui; Bi, Meihua; Fan, Bing; Lu, Yang; Zhou, Xuefang; Li, Jing; Geng, Hujun; Wang, Tianshu
2017-09-10
Free-space optical (FSO) communication utilizing a modulating retro-reflector (MRR) is an innovative way to convey information between the traditional optical transceiver and the semi-passive MRR unit that reflects optical signals. The reflected signals experience turbulence-induced fading in the double-pass channel, which is very different from that in the traditional single-pass FSO channel. In this paper, we consider the corner cube reflector (CCR) as the retro-reflective device in the MRR. A general geometrical model of the CCR is established based on the ray tracing method to describe the ray trajectory inside the CCR. This ray tracing model could treat the general case that the optical beam is obliquely incident on the hypotenuse surface of the CCR with the dihedral angle error and surface nonflatness. Then, we integrate this general CCR model into the wave-optics (WO) simulation to construct the double-pass beam propagation simulation. This double-pass simulation contains the forward propagation from the transceiver to the MRR through the atmosphere, the retro-reflection of the CCR, and the backward propagation from the MRR to the transceiver, which can be realized by a single-pass WO simulation, the ray tracing CCR model, and another single-pass WO simulation, respectively. To verify the proposed CCR model and double-pass WO simulation, the effective reflection area, the incremental phase, and the reflected beam spot on the transceiver plane of the CCR are analyzed, and the numerical results are in agreement with the previously published results. Finally, we use the double-pass WO simulation to investigate the double-pass channel in the MRR FSO systems. The histograms of the turbulence-induced fading in the forward and backward channels are obtained from the simulation data and are fitted by gamma-gamma (ΓΓ) distributions. As the two opposite channels are highly correlated, we model the double-pass channel fading by the product of two correlated ΓΓ random variables (RVs).
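A minimal sketch of the resulting channel model is given below: each single-pass fading sample is drawn as the product of a large-scale and a small-scale gamma factor (the standard unit-mean ΓΓ construction), and the forward and backward passes are coupled through a shared large-scale factor as one simple way to produce two highly correlated ΓΓ variates. The shared-factor correlation mechanism and the parameter values are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def gamma_gamma(alpha, beta, size, rng):
    """Draw unit-mean gamma-gamma fading samples as the product of a
    large-scale and a small-scale gamma factor (standard construction)."""
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=size)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=size)
    return x * y

def double_pass_fading(alpha, beta, size, rng):
    """Correlated forward/backward fading for the retro-reflected link.
    The two passes share the same large-scale factor and have independent
    small-scale factors -- one simple (assumed) way to obtain highly
    correlated, but not identical, channels."""
    large = rng.gamma(shape=alpha, scale=1.0 / alpha, size=size)
    forward = large * rng.gamma(shape=beta, scale=1.0 / beta, size=size)
    backward = large * rng.gamma(shape=beta, scale=1.0 / beta, size=size)
    return forward, backward, forward * backward   # double-pass gain

rng = np.random.default_rng(7)
fwd, bwd, rt = double_pass_fading(alpha=4.0, beta=1.9, size=100_000, rng=rng)
print("corr(fwd, bwd) =", np.corrcoef(fwd, bwd)[0, 1])
print("mean round-trip gain =", rt.mean())
```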
Regional climate model sensitivity to domain size
NASA Astrophysics Data System (ADS)
Leduc, Martin; Laprise, René
2009-05-01
Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.
Design analysis tracking and data relay satellite simulation system
NASA Technical Reports Server (NTRS)
1974-01-01
The design and development of the equipment necessary to simulate the S-band multiple access link between user spacecraft, the Tracking and Data Relay Satellite, and a ground control terminal are discussed. The core of the S-band multiple access concept is the use of an Adaptive Ground Implemented Phased Array. The array contains thirty channels and provides the multiplexing and demultiplexing equipment required to demonstrate the ground implemented beam forming feature. The system provided will make it possible to demonstrate the performance of a desired user and ten interfering sources attempting to pass data through the multiple access system.
Analysis of Borderline Substitution/Electron Transfer Pathways from Direct ab initio MD Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamataka, H; Aida, M A.; Dupuis, Michel
Ab initio molecular dynamics simulations were carried out for the borderline reaction pathways in the reaction of CH2O•− with CH3Cl. The simulations reveal distinctive features of three types of mechanisms passing through the SN2-like transition state (TS): (i) direct formation of SN2 products, (ii) direct formation of ET products, and (iii) a two-step formation of ET products via the SN2 valley. The direct formation of the ET product through the SN2-like TS appears to be more favorable at higher temperatures. The two-step process depends on the amount of energy that goes into the C-C stretching mode.
Simulation-based validation and arrival-time correction for Patlak analyses of Perfusion-CT scans
NASA Astrophysics Data System (ADS)
Bredno, Jörg; Hom, Jason; Schneider, Thomas; Wintermark, Max
2009-02-01
Blood-brain-barrier (BBB) breakdown is a hypothesized mechanism for hemorrhagic transformation in acute stroke. The Patlak analysis of a Perfusion Computed Tomography (PCT) scan measures the BBB permeability, but the method yields higher estimates when applied to the first pass of the contrast bolus compared to a delayed phase. We present a numerical phantom that simulates vascular and parenchymal time-attenuation curves to determine the validity of permeability measurements obtained with different acquisition protocols. A network of tubes represents the major cerebral arteries ipsi- and contralateral to an ischemic event. These tubes branch off into smaller segments that represent capillary beds. Blood flow in the phantom is freely defined and simulated as non-Newtonian tubular flow. Diffusion of contrast in the vessels and permeation through vessel walls is part of the simulation. The phantom allows us to compare the results of a permeability measurement to the simulated vessel wall status. A Patlak analysis reliably detects areas with BBB breakdown for acquisitions of 240s duration, whereas results obtained from the first pass are biased in areas of reduced blood flow. Compensating for differences in contrast arrival times reduces this bias and gives good estimates of BBB permeability for PCT acquisitions of 90-150s duration.
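For reference, the Patlak model referred to above relates tissue enhancement to the arterial input through a linear late-phase relation; the notation below is a common convention and is not taken verbatim from the cited work.

```latex
\[
  C_{t}(t) \;=\; K \int_{0}^{t} C_{a}(\tau)\,d\tau \;+\; v_{0}\,C_{a}(t)
  \quad\Longleftrightarrow\quad
  \frac{C_{t}(t)}{C_{a}(t)} \;=\; K\,\frac{\int_{0}^{t} C_{a}(\tau)\,d\tau}{C_{a}(t)} \;+\; v_{0},
\]
% so the permeability-related influx constant K is the slope of C_t/C_a plotted
% against the normalized integral of the arterial curve, and the intercept v_0
% absorbs the intravascular contribution. The arrival-time correction discussed
% in the abstract compensates for delays between the arterial and tissue curves
% before this fit is performed.
```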
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooney, K; Zhao, T; Green, O
Purpose: To assess the performance of the deformable image registration algorithm used for MRI-guided adaptive radiation therapy using image feature analysis. Methods: MR images were collected from five patients treated on the MRIdian (ViewRay, Inc., Oakwood Village, OH), a three-head Cobalt-60 therapy machine with a 0.35 T MR system. The images were acquired immediately prior to treatment with a uniform 1.5 mm resolution. Treatment sites were as follows: head/neck, lung, breast, stomach, and bladder. Deformable image registration was performed using the ViewRay software between the first fraction MRI and the final fraction MRI, and the DICE similarity coefficient (DSC) for the skin contours was reported. The SIFT and Harris feature detection and matching algorithms identified point features in each image separately, then found matching features in the other image. The target registration error (TRE) was defined as the vector distance between matched features on the two image sets. Each deformation was evaluated based on comparison of average TRE and DSC. Results: Image feature analysis produced between 2000–9500 points for evaluation on the patient images. The average (± standard deviation) TRE for all patients was 3.3 mm (±3.1 mm), and the passing rate of TRE<3 mm was 60% on the images. The head/neck patient had the best average TRE (1.9 mm±2.3 mm) and the best passing rate (80%). The lung patient had the worst average TRE (4.8 mm±3.3 mm) and the worst passing rate (37.2%). DSC was not significantly correlated with either TRE (p=0.63) or passing rate (p=0.55). Conclusions: Feature matching provides a quantitative assessment of deformable image registration, with a large number of data points for analysis. The TRE of matched features can be used to evaluate the registration of many objects throughout the volume, whereas DSC mainly provides a measure of gross overlap. We have a research agreement with ViewRay Inc.
ERIC Educational Resources Information Center
Bacchus, Mohammed Kazim
2008-01-01
Publication of this piece is intended as a tribute to the late Professor M. Kazim Bacchus who passed away in March 2007. The paper provides a general discussion concerning the social and educational challenges faced by small nation states in an age characterised by globalisation. The analysis first identifies some of the basic features of small…
Yang, Yang; Kramer, Christopher M; Shaw, Peter W; Meyer, Craig H; Salerno, Michael
2016-11-01
To design and evaluate two-dimensional (2D) L1-SPIRiT accelerated spiral pulse sequences for first-pass myocardial perfusion imaging with whole heart coverage capable of measuring eight slices at 2 mm in-plane resolution at heart rates up to 125 beats per minute (BPM). Combinations of five different spiral trajectories and four k-t sampling patterns were retrospectively simulated in 25 fully sampled datasets and reconstructed with L1-SPIRiT to determine the best combination of parameters. Two candidate sequences were prospectively evaluated in 34 human subjects to assess in vivo performance. A dual density broad transition spiral trajectory with either angularly uniform or golden angle in time k-t sampling pattern had the largest structural similarity and smallest root mean square error from the retrospective simulation, and the L1-SPIRiT reconstruction had well-preserved temporal dynamics. In vivo data demonstrated that both of the sampling patterns could produce high quality perfusion images with whole-heart coverage. First-pass myocardial perfusion imaging using accelerated spirals with optimized trajectory and k-t sampling pattern can produce high quality 2D perfusion images with whole-heart coverage at heart rates up to 125 BPM. Magn Reson Med 76:1375-1387, 2016. © 2015 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Yue, Xian-hua; Liu, Chun-fang; Liu, Hui-hua; Xiao, Su-fen; Tang, Zheng-hua; Tang, Tian
2018-02-01
The main goal of this study is to investigate the microstructure and electrical properties of Al-Zr-La alloys under different hot compression deformation temperatures. In particular, a Gleeble 3500 thermal simulator was used to carry out multi-pass hot compression tests. For five-pass hot compression deformation, the last-pass deformation temperatures were 240, 260, 300, 340, 380, and 420°C, respectively, where the first-pass deformation temperature was 460°C. The experimental results indicated that increasing the hot compression deformation temperature with each pass resulted in improved electrical conductivity of the alloy. Consequently, the flow stress was reduced after deformation of the samples subjected to the same number of passes. In addition, the dislocation density gradually decreased and the grain size increased after hot compression deformation. Furthermore, the dynamic recrystallization behavior was effectively suppressed during the hot compression process because spherical Al3Zr precipitates pinned the dislocation movement effectively and prevented grain boundary sliding.
Chemical compatibility screening results of plastic packaging to mixed waste simulants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigrey, P.J.; Dickens, T.G.
1995-12-01
We have developed a chemical compatibility program for evaluating transportation packaging components for transporting mixed waste forms. We have performed the first phase of this experimental program to determine the effects of simulant mixed wastes on packaging materials. This effort involved the screening of 10 plastic materials in four liquid mixed waste simulants. The testing protocol involved exposing the respective materials to approximately 3 kGy of gamma radiation followed by 14-day exposures to the waste simulants at 60 °C. The seal materials or rubbers were tested using VTR (vapor transport rate) measurements while the liner materials were tested using specific gravity as a metric. For these tests, a screening criterion of approximately 1 g/m^2/hr for VTR and a specific gravity change of 10% was used. It was concluded that while all seal materials passed exposure to the aqueous simulant mixed waste, EPDM and SBR had the lowest VTRs. In the chlorinated hydrocarbon simulant mixed waste, only Viton passed the screening tests. In both the simulant scintillation fluid mixed waste and the ketone mixture simulant mixed waste, none of the seal materials met the screening criteria. It is anticipated that those materials with the lowest VTRs will be evaluated in the comprehensive phase of the program. For specific gravity testing of liner materials, the data showed that while all materials with the exception of polypropylene passed the screening criteria, Kel-F, HDPE, and XLPE were found to offer the greatest resistance to the combination of radiation and chemicals.
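As an illustration of the pass/fail logic described above, the helper below applies the stated screening thresholds (a VTR of roughly 1 g/m^2/hr for seal materials and a 10% specific-gravity change for liner materials). The function name and the example measurement values are hypothetical; only the thresholds come from the text.

```python
def passes_screening(material, vtr=None, sg_before=None, sg_after=None,
                     vtr_limit=1.0, sg_change_limit=0.10):
    """Apply the screening criteria quoted above: seal materials are judged by
    vapor transport rate (g/m^2/hr), liner materials by the relative change in
    specific gravity. Hypothetical helper; only the thresholds come from the text."""
    if vtr is not None:                              # seal material
        ok = vtr <= vtr_limit
        return ok, f"{material}: VTR {vtr:.2f} g/m^2/hr ({'pass' if ok else 'fail'})"
    change = abs(sg_after - sg_before) / sg_before   # liner material
    ok = change <= sg_change_limit
    return ok, f"{material}: specific-gravity change {change:.1%} ({'pass' if ok else 'fail'})"

# Example calls with made-up measurement values
print(passes_screening("EPDM (aqueous simulant)", vtr=0.4))
print(passes_screening("Polypropylene (aqueous simulant)", sg_before=0.905, sg_after=1.02))
```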
Low-energy spectral features of supernova (anti)neutrinos in inverted hierarchy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogli, G. L.; Marrone, A.; Tamborra, I.
2008-11-01
In the dense supernova core, self-interactions may align the flavor polarization vectors of ν and ν̄ and induce collective flavor transformations. Different alignment Ansätze are known to describe approximately the phenomena of synchronized or bipolar oscillations and the split of ν energy spectra. We discuss another phenomenon observed in some numerical experiments in inverted hierarchy, showing features akin to a low-energy split of ν spectra. The phenomenon appears to be approximately described by another alignment Ansatz which, in the considered scenario, reduces the (nonadiabatic) dynamics of all energy modes to only two ν plus two ν̄ modes. The associated spectral features, however, appear to be fragile when passing from single to multiangle simulations.
NASA Astrophysics Data System (ADS)
Lee, Taek-Soo; Frey, Eric C.; Tsui, Benjamin M. W.
2015-04-01
This paper presents two 4D mathematical observer models for the detection of motion defects in 4D gated medical images. Their performance was compared with results from human observers in detecting a regional motion abnormality in simulated 4D gated myocardial perfusion (MP) SPECT images. The first 4D mathematical observer model extends the conventional channelized Hotelling observer (CHO) based on a set of 2D spatial channels and the second is a proposed model that uses a set of 4D space-time channels. Simulated projection data were generated using the 4D NURBS-based cardiac-torso (NCAT) phantom with 16 gates/cardiac cycle. The activity distribution modelled uptake of 99mTc MIBI with normal perfusion and a regional wall motion defect. An analytical projector was used in the simulation and the filtered backprojection (FBP) algorithm was used in image reconstruction followed by spatial and temporal low-pass filtering with various cut-off frequencies. Then, we extracted 2D image slices from each time frame and reorganized them into a set of cine images. For the first model, we applied 2D spatial channels to the cine images and generated a set of feature vectors that were stacked for the images from different slices of the heart. The process was repeated for each of the 1,024 noise realizations, and CHO and receiver operating characteristic (ROC) analysis methodologies were applied to the ensemble of the feature vectors to compute areas under the ROC curves (AUCs). For the second model, a set of 4D space-time channels was developed and applied to the sets of cine images to produce space-time feature vectors to which the CHO methodology was applied. The AUC values of the second model showed better agreement (Spearman's rank correlation (SRC) coefficient = 0.8) with human observer results than those from the first model (SRC coefficient = 0.4). The agreement with human observers indicates that the proposed 4D mathematical observer model provides a good predictor of the performance of human observers in detecting regional motion defects in 4D gated MP SPECT images. The result supports the use of the observer model in the optimization and evaluation of 4D image reconstruction and compensation methods for improving the detection of motion abnormalities in 4D gated MP SPECT images.
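A minimal sketch of the first (2D-channel) observer is shown below: channel features are obtained by projecting each image onto a small set of difference-of-Gaussians channels, a Hotelling template is formed from the class means and average covariance, and the AUC is estimated nonparametrically from the decision variables. Channel widths, image sizes, and the synthetic noise data are placeholders; the 4D variant would simply replace the spatial channels with space-time channels.

```python
import numpy as np

def dog_channels(n, n_ch=4):
    """Difference-of-Gaussians spatial channels (a common CHO channel choice;
    the widths used here are arbitrary illustrative values)."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    r2 = x**2 + y**2
    sig = 2.0 * 1.8 ** np.arange(n_ch + 1)
    g = [np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2) for s in sig]
    return np.stack([g[i + 1] - g[i] for i in range(n_ch)])

def cho_auc(present, absent, channels):
    """Channel features -> Hotelling template -> decision variables -> AUC."""
    C = channels.reshape(len(channels), -1)
    f1 = present.reshape(len(present), -1) @ C.T      # defect-present features
    f0 = absent.reshape(len(absent), -1) @ C.T        # defect-absent features
    S = 0.5 * (np.cov(f1.T) + np.cov(f0.T))           # average class covariance
    w = np.linalg.solve(S, f1.mean(0) - f0.mean(0))   # Hotelling template
    t1, t0 = f1 @ w, f0 @ w
    return np.mean(t1[:, None] > t0[None, :])         # nonparametric (Wilcoxon) AUC

rng = np.random.default_rng(0)
n = 32
signal = np.zeros((n, n)); signal[13:19, 13:19] = 0.3  # toy "defect" signal
absent = rng.normal(size=(400, n, n))
present = absent + signal
print("AUC:", round(cho_auc(present, absent, dog_channels(n)), 3))
```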
White noise analysis of Phycomyces light growth response system. I. Normal intensity range.
Lipson, E D
1975-01-01
The Wiener-Lee-Schetzen method for the identification of a nonlinear system through white Gaussian noise stimulation was applied to the transient light growth response of the sporangiophore of Phycomyces. In order to cover a moderate dynamic range of light intensity I, the input variable was defined to be log I. The experiments were performed in the normal range of light intensity, centered about I0 = 10^-6 W/cm^2. The kernels of the Wiener functionals were computed up to second order. Within the range of a few decades the system is reasonably linear with log I. The main nonlinear feature of the second-order kernel corresponds to the property of rectification. Power spectral analysis reveals that the slow dynamics of the system are of at least fifth order. The system can be represented approximately by a linear transfer function, including a first-order high-pass (adaptation) filter with a 4 min time constant and an underdamped fourth-order low-pass filter. Accordingly a linear electronic circuit was constructed to simulate the small-scale response characteristics. In terms of the adaptation model of Delbrück and Reichardt (1956, in Cellular Mechanisms in Differentiation and Growth, Princeton University Press), kernels were deduced for the dynamic dependence of the growth velocity (output) on the "subjective intensity", a presumed internal variable. Finally the linear electronic simulator above was generalized to accommodate the large-scale nonlinearity of the adaptation model and to serve as a tool for deeper tests of the model. PMID:1203444
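The linearized description above (a first-order high-pass adaptation stage with a 4 min time constant in series with an underdamped fourth-order low-pass) can be sketched as a transfer function with scipy.signal. The low-pass corner frequency and damping ratio below are assumptions, since only the filter orders and the adaptation time constant are stated.

```python
import numpy as np
from scipy import signal

tau = 4 * 60.0                                   # adaptation time constant: 4 min, in seconds
hp_num, hp_den = [tau, 0.0], [tau, 1.0]          # first-order high-pass  s*tau / (s*tau + 1)

# Underdamped fourth-order low-pass built as two identical second-order sections.
# The corner frequency and damping ratio are assumptions; the text gives only the order.
w0, zeta = 2 * np.pi / 120.0, 0.4
sec = [1.0, 2 * zeta * w0, w0**2]
lp_num, lp_den = [w0**4], np.polymul(sec, sec)

# Cascade the two stages by multiplying numerator and denominator polynomials
system = signal.TransferFunction(np.polymul(hp_num, lp_num), np.polymul(hp_den, lp_den))

t = np.linspace(0, 1800, 2000)
t, y = signal.step(system, T=t)                  # growth-velocity response to a step in log(I)
print("peak of the step response at t = %.0f s" % t[np.argmax(y)])
```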
Thomas, Frank; Carpenter, Judi; Rhoades, Carol; Holleran, Renee; Snow, Gregory
2010-04-01
This exploratory study examined novice intubators and the effect difficult airway factors have on pre- and posttraining oral-tracheal simulation intubation success rates. Using a two-level, full-factorial design of experimentation (DOE) involving a combination of six airway factors (curved vs. straight laryngoscope blade, trismus, tongue edema, laryngeal spasm, pharyngeal obstruction, or cervical immobilization), 64 airway scenarios were prospectively randomized to 12 critical care nurses to evaluate pre- and posttraining first-pass intubation success rates on a simulator. Scenario variables and intubation outcomes were analyzed using a generalized linear mixed-effects model to determine two-way main and interactive effects. Interactive effects between the six study factors were nonsignificant (p = 0.69). For both pre- and posttraining, main effects showed the straight blade (p = 0.006), tongue edema (p = 0.0001), and laryngeal spasm (p = 0.004) significantly reduced success rates, while trismus (p = 0.358), pharyngeal obstruction (p = 0.078), and cervical immobilization did not significantly change the success rate. First-pass intubation success rate on the simulator significantly improved (p = 0.005) from pre- (19%) to posttraining (36%). Design of experimentation is useful in analyzing the effect difficult airway factors and training have on simulator intubation success rates. Future quality improvement DOE simulator research studies should be performed to help clarify the relationship between simulator factors and patient intubation rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fave, Xenia, E-mail: xjfave@mdanderson.org; Fried, David; Mackin, Dennis
Purpose: Increasing evidence suggests radiomics features extracted from computed tomography (CT) images may be useful in prognostic models for patients with nonsmall cell lung cancer (NSCLC). This study was designed to determine whether such features can be reproducibly obtained from cone-beam CT (CBCT) images taken using medical Linac onboard-imaging systems in order to track them through treatment. Methods: Test-retest CBCT images of ten patients previously enrolled in a clinical trial were retrospectively obtained and used to determine the concordance correlation coefficient (CCC) for 68 different texture features. The volume dependence of each feature was also measured using the Spearman rank correlation coefficient. Features with a high reproducibility (CCC > 0.9) that were not due to volume dependence in the patient test-retest set were further examined for their sensitivity to differences in imaging protocol, level of scatter, and amount of motion by using two phantoms. The first phantom was a texture phantom composed of rectangular cartridges to represent different textures. Features were measured from two cartridges, shredded rubber and dense cork, in this study. The texture phantom was scanned with 19 different CBCT imagers to establish the features’ interscanner variability. The effect of scatter on these features was studied by surrounding the same texture phantom with scattering material (rice and solid water). The effect of respiratory motion on these features was studied using a dynamic-motion thoracic phantom and a specially designed tumor texture insert of the shredded rubber material. The differences between scans acquired with different Linacs and protocols, varying amounts of scatter, and with different levels of motion were compared to the mean intrapatient difference from the test-retest image set. Results: Of the original 68 features, 37 had a CCC > 0.9 that was not due to volume dependence. When the Linac manufacturer and imaging protocol were kept consistent, 4–13 of these 37 features passed our criteria for reproducibility more than 50% of the time, depending on the manufacturer-protocol combination. Almost all of the features changed substantially when scatter material was added around the phantom. For the dense cork, 23 features passed in the thoracic scans and 11 features passed in the head scans when the differences between one and two layers of scatter were compared. Using the same test for the shredded rubber, five features passed the thoracic scans and eight features passed the head scans. Motion substantially impacted the reproducibility of the features. With 4 mm of motion, 12 features from the entire volume and 14 features from the center slice measurements were reproducible. With 6–8 mm of motion, three features (Laplacian of Gaussian filtered kurtosis, gray-level nonuniformity, and entropy) from the entire volume and seven features (coarseness, high gray-level run emphasis, gray-level nonuniformity, sum-average, information measure correlation, scaled mean, and entropy) from the center-slice measurements were considered reproducible. Conclusions: Some radiomics features are robust to the noise and poor image quality of CBCT images when the imaging protocol is consistent, relative changes in the features are used, and patients are limited to those with less than 1 cm of motion.
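For reference, the test-retest reproducibility metric used above can be computed as Lin's concordance correlation coefficient; a minimal sketch follows, with hypothetical feature values standing in for the patient data.

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between test and retest
    measurements of one feature across patients."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Made-up test-retest values of one texture feature for ten patients
test   = [0.82, 0.61, 0.95, 0.70, 0.55, 0.88, 0.66, 0.73, 0.91, 0.60]
retest = [0.80, 0.64, 0.93, 0.72, 0.52, 0.90, 0.69, 0.70, 0.89, 0.63]
ccc = concordance_cc(test, retest)
print(f"CCC = {ccc:.3f} -> {'reproducible' if ccc > 0.9 else 'not reproducible'} (CCC > 0.9 criterion)")
```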
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2018-05-01
Methods for increasing the adaptability of gas-turbine aircraft engines (GTE) to interference by extending the capabilities of their automatic control systems (ACS) are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes adapted to the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms that provide detection of compressor pre-stall modes for the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequencies. The method used is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme providing the best quality of filtering. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor by detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
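A small sketch of the filter construction described above: a windowed-sinc FIR low-pass sets the upper band edge, the high-pass is obtained from a second low-pass prototype by spectral inversion of its impulse response, and the two are cascaded into a band-pass. The sampling rate, band edges, and tap count are placeholders, and the FIR form here stands in for the paper's second-order realization.

```python
import numpy as np
from scipy import signal

fs = 10_000.0                        # sampling rate (placeholder)
f_lo, f_hi = 800.0, 1600.0           # pass-band edges (placeholders)
ntaps = 201                          # odd length -> symmetric FIR

# Known low-pass prototypes (windowed-sinc FIR)
h_lp_hi = signal.firwin(ntaps, f_hi, fs=fs)      # cutoff at the upper band edge
h_lp_lo = signal.firwin(ntaps, f_lo, fs=fs)      # cutoff at the lower band edge

# High-pass by spectral inversion of the second low-pass:
# negate the impulse response and add 1 at its centre sample.
h_hp = -h_lp_lo
h_hp[ntaps // 2] += 1.0

# Band-pass = low-pass stage followed by high-pass stage (cascade -> convolution)
h_bp = np.convolve(h_lp_hi, h_hp)

# Quick check: an in-band tone passes, an out-of-band tone is suppressed
t = np.arange(0, 0.2, 1 / fs)
x = np.sin(2 * np.pi * 1200 * t) + np.sin(2 * np.pi * 200 * t)
y = signal.lfilter(h_bp, [1.0], x)
print("filtered-signal RMS (mostly the 1200 Hz component):", round(float(np.std(y[2 * ntaps:])), 3))
```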
Analysis of borderline substitution/electron transfer pathways from direct ab initio MD simulations
NASA Astrophysics Data System (ADS)
Yamataka, Hiroshi; Aida, Misako; Dupuis, Michel
2002-02-01
Ab initio molecular dynamics simulations were carried out for the borderline reaction pathways in the reaction of the CH2O radical anion with CH3Cl. The simulations reveal distinctive features of three types of mechanisms passing through the SN2-like transition state (TS): (i) a direct formation of SN2 products, (ii) a direct formation of ET products, and (iii) a two-step formation of ET products via the SN2 valley. The direct formation of the ET product through the SN2-like TS appears to be more favorable at higher temperatures. The two-step process depends on the amount of energy that goes into the C-C stretching mode.
Liu, J.; Xia, J.; Luo, Y.; Chen, C.; Li, X.; Huang, Y.
2007-01-01
The geotechnical integrity of critical infrastructure can be seriously compromised by the presence of fractures or crevices. Non-destructive techniques to accurately detect fractures in critical infrastructure such as dams and highways could be of significant benefit to the geotechnical industry. This paper investigates the application of shallow seismic and georadar methods to the detection of a vertical discontinuity using numerical simulations. The objective is to address the kinematical analysis of a vertical discontinuity, determine the resulting wave field characteristics, and provide the basis for determining the existence of vertical discontinuities based on the recorded signals. Simulation results demonstrate that: (1) A reflection from a vertical discontinuity produces a hyperbolic feature on a seismic or georadar profile; (2) In order for a reflection from a vertical discontinuity to be produced, a reflecting horizon below the discontinuity must exist, the offset between source and receiver (x0) must be non-zero, on the same side of the vertical discontinuity; (3) The range of distances from the vertical discontinuity where a reflection event is observed is proportional to its length and to x0; (4) Should the vertical crevice (or fracture) pass through a reflecting horizon, dual hyperbolic features can be observed on the records, and this can be used as a determining factor that the vertical crevice passes through the interface; and (5) diffractions from the edges of the discontinuity can be recorded with relatively smaller amplitude than reflections and their ranges are not constrained by the length of discontinuity. If the length of discontinuity is short enough, diffractions are the dominant feature. Real-world examples show that the shallow seismic reflection method and the georadar method are capable of recording the hyperbolic feature, which can be interpreted as vertical discontinuity. Thus, these methods show some promise as effective non-destructive detection methods for locating vertical discontinuities (e.g., fractures or crevices) in infrastructure such as dams and highway pavement. © 2007 Elsevier B.V. All rights reserved.
Aspen: A microsimulation model of the economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, N.; Pryor, R.J.; Quint, T.
1996-10-01
This report presents Aspen, a new agent-based microeconomic simulation model of the U.S. economy being developed at Sandia National Laboratories. The model is notable because it allows a large number of individual economic agents to be modeled at a high level of detail and with a great degree of freedom. Some features of Aspen are (a) a sophisticated message-passing system that allows individual pairs of agents to communicate, (b) the use of genetic algorithms to simulate the learning of certain agents, and (c) a detailed financial sector that includes a banking system and a bond market. Results from runs of the model are also presented.
The role of first formant information in simulated electro-acoustic hearing.
Verschuur, Carl; Boland, Conor; Frost, Emily; Constable, Jack
2013-06-01
Cochlear implant (CI) recipients with residual hearing show improved performance with the addition of low-frequency acoustic stimulation (electro-acoustic stimulation, EAS). The present study sought to determine whether a synthesized first formant (F1) signal provided benefit to speech recognition in simulated EAS hearing and to compare such benefit with that from other low-frequency signals. A further aim was to determine if F1 amplitude or frequency was more important in determining benefit and if F1 benefit varied with formant bandwidth. In two experiments, sentence recordings from a male speaker were processed via a simulation of a partial insertion CI, and presented to normal hearing listeners in combination with various low-frequency signals, including a tone tracking fundamental frequency (F0), low-pass filtered speech, and signals based on F1 estimation. A simulated EAS benefit was found with F1 signals, and was similar to the benefit from F0 or low-pass filtered speech. The benefit did not differ significantly with the narrowing or widening of the F1 bandwidth. The benefit from low-frequency envelope signals was significantly less than the benefit from any low-frequency signal containing fine frequency information. Results indicate that F1 provides a benefit in simulated EAS hearing but low frequency envelope information is less important than low frequency fine structure in determining such benefit.
A numerical relativity scheme for cosmological simulations
NASA Astrophysics Data System (ADS)
Daverio, David; Dirian, Yves; Mitsou, Ermis
2017-12-01
Cosmological simulations involving the fully covariant gravitational dynamics may prove relevant in understanding relativistic/non-linear features and, therefore, in taking better advantage of the upcoming large scale structure survey data. We propose a new 3 + 1 integration scheme for general relativity in the case where the matter sector contains a minimally-coupled perfect fluid field. The original feature is that we completely eliminate the fluid components through the constraint equations, thus remaining with a set of unconstrained evolution equations for the rest of the fields. This procedure does not constrain the lapse function and shift vector, so it holds in arbitrary gauge and also works for arbitrary equation of state. An important advantage of this scheme is that it allows one to define and pass an adaptation of the robustness test to the cosmological context, at least in the case of pressureless perfect fluid matter, which is the relevant one for late-time cosmology.
NASA Astrophysics Data System (ADS)
Mahalik, S. S.; Kundu, M.
2016-12-01
Linear resonance (LR) absorption of an intense 800 nm laser light in a nano-cluster requires a long laser pulse >100 fs when the Mie-plasma frequency (ω_M) of electrons in the expanding cluster matches the laser frequency (ω). For a short duration of the pulse, the condition for LR is not satisfied. In this case, it was shown by a model and particle-in-cell (PIC) simulations [Phys. Rev. Lett. 96, 123401 (2006)] that electrons absorb laser energy by anharmonic resonance (AHR) when the position-dependent frequency Ω[r(t)] of an electron in the self-consistent anharmonic potential of the cluster satisfies Ω[r(t)] = ω. However, AHR remains a matter of debate and is still obscure in multi-particle plasma simulations. Here, we identify the AHR mechanism in a laser-driven cluster using molecular dynamics (MD) simulations. By analyzing the trajectory of each MD electron and extracting its Ω[r(t)] in the self-generated anharmonic plasma potential, it is found that an electron is outer-ionized only when AHR is met. An anharmonic oscillator model, introduced here, brings out most of the features of MD electrons while passing the AHR. Thus, we not only bridge the gap between PIC simulations, analytical models, and MD calculations for the first time but also unequivocally prove that the AHR process is a universal dominant collisionless mechanism of absorption in the short pulse regime or in the early time of longer pulses in clusters.
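The essence of the anharmonic-resonance picture can be illustrated with a toy driven oscillator whose effective frequency falls with excursion amplitude (harmonic inside a charged sphere, Coulombic outside). The sketch below integrates such an oscillator and reports when Ω[r(t)] first sweeps down to the drive frequency; all parameters are arbitrary illustration values, not the MD model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy anharmonic-resonance picture (not the paper's MD model): an electron in the
# potential of a uniformly charged sphere is harmonic inside the radius R and
# Coulombic outside, so its effective frequency Omega falls once its excursion
# exceeds R. A strong drive below the small-amplitude frequency can pull the
# electron outward until Omega[r(t)] sweeps down to the drive frequency.
Q, R = 1.0, 1.0
Omega0 = np.sqrt(Q / R**3)               # small-amplitude (Mie-like) frequency
omega, E0 = 0.6 * Omega0, 0.8            # drive below resonance, strong field

def force(x):
    r = abs(x)
    return -Q * x / R**3 if r < R else -Q * x / r**3

def rhs(t, y):
    x, v = y
    return [v, force(x) + E0 * np.sin(omega * t)]

sol = solve_ivp(rhs, (0, 400), [0.05, 0.0], max_step=0.05, dense_output=True)
t = np.linspace(0, 400, 4000)
x = sol.sol(t)[0]
r = np.abs(x)
Omega = np.where(r < R, Omega0, np.sqrt(Q / np.maximum(r, R) ** 3))   # effective frequency
hit = np.flatnonzero(Omega <= omega)
print("Omega[r(t)] first reaches the drive frequency at t =",
      round(float(t[hit[0]]), 1) if hit.size else "never (drive too weak)")
```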
3D gut-liver chip with a PK model for prediction of first-pass metabolism.
Lee, Dong Wook; Ha, Sang Keun; Choi, Inwook; Sung, Jong Hwan
2017-11-07
Accurate prediction of first-pass metabolism is essential for improving the time and cost efficiency of the drug development process. Here, we have developed a microfluidic gut-liver co-culture chip that aims to reproduce the first-pass metabolism of oral drugs. This chip consists of two separate layers for gut (Caco-2) and liver (HepG2) cell lines, where cells can be co-cultured in both 2D and 3D forms. Both cell lines were maintained well in the chip, verified by confocal microscopy and measurement of hepatic enzyme activity. We investigated the PK profile of paracetamol in the chip, and a corresponding PK model was constructed and used to predict PK profiles for different chip design parameters. Simulation results implied that a larger absorption surface area and a higher metabolic capacity are required to reproduce the in vivo PK profile of paracetamol more accurately. Our study suggests the possibility of reproducing the human PK profile on a chip, contributing to accurate prediction of the pharmacological effects of drugs.
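A minimal compartmental sketch of the kind of first-pass PK model described above is given below: drug is absorbed from the gut lumen into a gut compartment, carried to a liver compartment where part of it is cleared metabolically, and the remainder reaches a systemic compartment. All rate constants are invented placeholders, not the values fitted to the chip data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal gut -> liver -> systemic first-pass sketch. All rate constants are
# placeholder values for illustration, not the parameters fitted to the chip data.
ka   = 1.2    # absorption from the gut lumen across the gut layer (1/h)
Q    = 0.8    # flow from the gut compartment to the liver compartment (1/h)
CLh  = 0.5    # hepatic metabolic clearance in the liver compartment (1/h)
kout = 0.3    # elimination from the systemic compartment (1/h)

def rhs(t, y):
    lumen, gut, liver, systemic = y
    return [-ka * lumen,
            ka * lumen - Q * gut,
            Q * gut - (CLh + Q) * liver,
            Q * liver - kout * systemic]

sol = solve_ivp(rhs, (0, 24), [100.0, 0.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 24, 200)
systemic = sol.sol(t)[3]
print("Cmax (arbitrary units): %.1f at t = %.1f h" % (systemic.max(), t[systemic.argmax()]))
```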
Simulating Isotope Enrichment by Gaseous Diffusion
NASA Astrophysics Data System (ADS)
Reed, Cameron
2015-04-01
A desktop-computer simulation of isotope enrichment by gaseous diffusion has been developed. The simulation incorporates two non-interacting point-mass species whose members pass through a cascade of cells containing porous membranes and retain constant speeds as they reflect off the walls of the cells and the spaces between holes in the membranes. A particular feature is periodic forward recycling of enriched material to cells further along the cascade along with simultaneous return of depleted material to preceding cells. The number of particles, the mass ratio, the initial fractional abundance of the lighter species, and the time between recycling operations can be chosen by the user. The simulation is simple enough to be understood on the basis of two-dimensional kinematics, and demonstrates that the fractional abundance of the lighter-isotope species increases along the cascade. The logic of the simulation will be described and results of some typical runs will be presented and discussed.
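The sketch below reproduces only the cascade bookkeeping of such a simulation, not the two-dimensional kinematics: each particle attempts to pass its cell's membrane with a probability scaled by its mean thermal speed (proportional to 1/sqrt(mass)), enriched material drifts forward, and a small depleted fraction is returned to the preceding cell. The mass ratio, initial abundance, and probabilities are demonstration values chosen to make the enrichment gradient visible.

```python
import numpy as np

# Cascade bookkeeping only (no 2-D kinematics): the light species passes the
# membrane more often because of its higher mean thermal speed.
rng = np.random.default_rng(0)
mass_light, mass_heavy = 1.0, 4.0
n_cells, n_particles, n_steps = 12, 50_000, 4000

light = rng.random(n_particles) < 0.3              # initial abundance of the light species
cell = np.zeros(n_particles, dtype=int)            # every particle starts in cell 0

p_base, p_return = 0.02, 0.05                      # forward-pass / backward-return probabilities
p_fwd = np.where(light, p_base * np.sqrt(mass_heavy / mass_light), p_base)

for _ in range(n_steps):
    u = rng.random(n_particles)
    fwd = u < p_fwd                                # enriched stream moves forward
    back = (~fwd) & (u < p_fwd + p_return)         # depleted material returns one cell
    cell = np.clip(cell + fwd.astype(int) - back.astype(int), 0, n_cells - 1)

for c in (0, n_cells // 2, n_cells - 1):
    sel = cell == c
    print(f"cell {c:2d}: {sel.sum():6d} particles, light fraction = {light[sel].mean():.3f}")
```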
2016-07-12
This color view from NASA's Juno spacecraft is made from some of the first images taken by JunoCam after the spacecraft entered orbit around Jupiter on July 5th (UTC). The view shows that JunoCam survived its first pass through Jupiter's extreme radiation environment, and is ready to collect images of the giant planet as Juno begins its mission. The image was taken on July 10, 2016 at 5:30 UTC, when the spacecraft was 2.7 million miles (4.3 million kilometers) from Jupiter on the outbound leg of its initial 53.5-day capture orbit. The image shows atmospheric features on Jupiter, including the Great Red Spot, and three of Jupiter's four largest moons. JunoCam will continue to image Jupiter during Juno's capture orbits. The first high-resolution images of the planet will be taken on August 27 when the Juno spacecraft makes its next close pass to Jupiter. http://photojournal.jpl.nasa.gov/catalog/PIA20707
Overtaking collision effects in a cw double-pass proton linac
Tao, Yue; Qiang, Ji; Hwang, Kilean
2017-12-22
The recirculating superconducting proton linac has the advantage of reducing the number of cavities in the accelerator and the corresponding construction and operational costs. Beam dynamics simulations were done recently in a double-pass recirculating proton linac using a single proton beam bunch. For continuous wave (cw) operation, the high-energy proton bunch during the second pass through the linac will overtake and collide with the low-energy bunch during the first pass at a number of locations of the linac. These collisions might cause proton bunch emittance growth and beam quality degradation. Here, we study the collisional effects due to Coulomb space-charge forces between the high-energy bunch and the low-energy bunch. Our results suggest that these effects on the proton beam quality would be small and might not cause significant emittance growth or beam blowup through the linac. A 10 mA, 500 MeV cw double-pass proton linac is feasible without using extra hardware for phase synchronization.
Atmospheric Boundary Layer Dynamics Near Ross Island and Over West Antarctica.
NASA Astrophysics Data System (ADS)
Liu, Zhong
The atmospheric boundary layer dynamics near Ross Island and over West Antarctica has been investigated. The study consists of two parts. The first part involved the use of data from ground-based remote sensing equipment (sodar and RASS), radiosondes, pilot balloons, automatic weather stations, and NOAA AVHRR satellite imagery. The second part involved the use of a high resolution boundary layer model coupled with a three-dimensional primitive equation mesoscale model to simulate the observed atmospheric boundary layer winds and temperatures. Turbulence parameters were simulated with an E-epsilon turbulence model driven by observed winds and temperatures. The observational analysis, for the first time, revealed that the airflow passing through the Ross Island area is supplied mainly by enhanced katabatic drainage from Byrd Glacier and secondarily drainage from Mulock and Skelton glaciers. The observed diurnal variation of the blocking effect near Ross Island is dominated by the changes in the upstream katabatic airflow. The synthesized analysis over West Antarctica found that the Siple Coast katabatic wind confluence zone consists of two superimposed katabatic airflows: a relatively warm and more buoyant katabatic flow from West Antarctica overlies a colder and less buoyant katabatic airflow from East Antarctica. The force balance analysis revealed that, inside the West Antarctic katabatic wind zone, the pressure gradient force associated with the blocked airflow against the Transantarctic Mountains dominates; inside the East Antarctic katabatic wind zone, the downslope buoyancy force due to the cold air overlying the sloping terrain is dominant. The analysis also shows that these forces are in geostrophic balance with the Coriolis force. An E-epsilon turbulence closure model is used to simulate the diurnal variation of sodar backscatter. The results show that the model is capable of qualitatively capturing the main features of the observed sodar backscatter. To improve the representation of the atmospheric boundary layer, a second-order turbulence closure model coupled with the input from a mesoscale model was applied to the springtime Siple Coast katabatic wind confluence zone. The simulation was able to capture the main features of the confluence zone, which were not well resolved by the mesoscale model.
Experimental and Computational Study of Trapped Vortex Combustor Sector Rig with Tri-pass Diffuser
NASA Technical Reports Server (NTRS)
Hendricks, Robert C.; Shouse, D. T.; Roquemore, W. M.; Burrus, D. L.; Duncan, B. S.; Ryder, R. C.; Brankovic, A.; Liu, N.-S.; Gallagher, J. R.; Hendricks, J. A.
2001-01-01
The Trapped Vortex Combustor (TVC) potentially offers numerous operational advantages over current production gas turbine engine combustors. These include lower weight, lower pollutant emissions, effective flame stabilization, high combustion efficiency, excellent high altitude relight capability, and operation in the lean burn or RQL (Rich burn/Quick mix/Lean burn) modes of combustion. The present work describes the operational principles of the TVC, and provides detailed performance data on a configuration featuring a tri-pass diffusion system. Performance data include EINOx (NO(sub x) emission index) results for various fuel-air ratios and combustor residence times, combustion efficiency as a function of combustor residence time, and combustor lean blow-out (LBO) performance. Computational fluid dynamics (CFD) simulations using liquid spray droplet evaporation and combustion modeling are performed and related to flow structures observed in photographs of the combustor. The CFD results are used to understand the aerodynamics and combustion features under different fueling conditions. Performance data acquired to date are favorable in comparison to conventional gas turbine combustors. Further testing over a wider range of fuel-air ratios, fuel flow splits, and pressure ratios is in progress to explore the TVC performance. In addition, alternate configurations for the upstream pressure feed, including bi-pass diffusion schemes, as well as variations on the fuel injection patterns, are currently in test and evaluation phases.
Collection of Calibration and Validation Data for an Airport Landside Dynamic Simulation Model.
1980-04-01
movements. The volume of skiers passing through Denver is sufficiently large to warrant the installation of special check-in counters for passengers with...Terminal, only seven sectors were used. Training Procedures MIA was the first of the three airports surveyed. A substantial amount of knowledge and
Science and Policy Interactions: A Case Study with Acid Rain
Management of air pollution has a long history in the United States. A succession of laws, beginning with the first Federal law passed in 1955, has led to substantial reductions in emissions and improvements in air quality. These laws were stimulated originally by acute local effects on ...
NASA Astrophysics Data System (ADS)
Kotova, S. P.; Mayorova, A. M.; Samagin, S. A.
2018-05-01
Techniques for forming vortex light fields using a modal-type liquid crystal spatial modulator are proposed. Light passing through the modulator, or reflected from it, acquires an orbital angular momentum as a result of a jump in the phase-delay profile, produced by special configurations of the contact electrodes and predetermined applied voltages. The features of the generated vortex beams and the capabilities for their control were simulated.
Simulation-Based Training for Colonoscopy
Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars
2015-01-01
The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards and the consequences of these were explored. The consultants performed significantly faster and scored higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
Efficiently passing messages in distributed spiking neural network simulation.
Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan
2013-01-01
Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
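A minimal mpi4py illustration of one spike-exchange step is sketched below, contrasting a collective exchange (allgather) with a point-to-point exchange (sendrecv). It shows only the message-passing pattern; it is not the MVAPICH-specific implementation benchmarked in the paper, and the spike generation is a random stand-in.

```python
# Run with e.g.: mpirun -n 4 python spike_exchange.py
# Minimal mpi4py illustration of one spike-exchange step; spike generation is a
# random stand-in and this is not the MVAPICH-tuned scheme from the paper.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                              # neurons simulated on this rank
rng = np.random.default_rng(rank)

for step in range(5):
    # ids of local neurons that spiked this timestep
    spikes = np.flatnonzero(rng.random(n_local) < 0.02) + rank * n_local

    # Mechanism 1: collective exchange -- every rank receives every spike list
    all_spikes = comm.allgather(spikes)

    # Mechanism 2: point-to-point -- send only to the neighbouring rank
    # (a stand-in for sending only to ranks that own postsynaptic targets)
    dest, src = (rank + 1) % size, (rank - 1) % size
    received = comm.sendrecv(spikes, dest=dest, source=src)

    if rank == 0:
        total = sum(len(s) for s in all_spikes)
        print(f"step {step}: {total} spikes gathered collectively, "
              f"{len(received)} received point-to-point from rank {src}")
```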
Chemical compatibility screening test results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigrey, P.J.; Dickens, T.G.
1997-12-01
A program for evaluating packaging components that may be used in transporting mixed-waste forms has been developed and the first phase has been completed. This effort involved the screening of ten plastic materials in four simulant mixed-waste types. These plastics were butadiene-acrylonitrile copolymer rubber, cross-linked polyethylene (XLPE), epichlorohydrin rubber, ethylene-propylene rubber (EPDM), fluorocarbon (Viton or Kel-F), polytetrafluoroethylene, high-density polyethylene (HDPE), isobutylene-isoprene copolymer rubber (butyl), polypropylene, and styrene-butadiene rubber (SBR). The selected simulant mixed wastes were (1) an aqueous alkaline mixture of sodium nitrate and sodium nitrite; (2) a chlorinated hydrocarbon mixture; (3) a simulant liquid scintillation fluid; and (4) a mixture of ketones. The testing protocol involved exposing the respective materials to 286,000 rads of gamma radiation followed by 14-day exposures to the waste types at 60 °C. The seal materials were tested using vapor transport rate (VTR) measurements while the liner materials were tested using specific gravity as a metric. For these tests, a screening criterion of 0.9 g/hr/m^2 for VTR and a specific gravity change of 10% was used. Based on this work, it was concluded that while all seal materials passed exposure to the aqueous simulant mixed waste, EPDM and SBR had the lowest VTRs. In the chlorinated hydrocarbon simulant mixed waste, only Viton passed the screening tests. In both the simulant scintillation fluid mixed waste and the ketone mixture simulant mixed waste, none of the seal materials met the screening criteria. For specific gravity testing of liner materials, the data showed that while all materials with the exception of polypropylene passed the screening criteria, Kel-F, HDPE, and XLPE offered the greatest resistance to the combination of radiation and chemicals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigrey, P.J.; Dickens, T.G.; Dickman, P.T.
1997-08-01
Based on regulatory requirements for Type A and B radioactive material packaging, a Testing Program was developed to evaluate the effects of mixed wastes on plastic materials which could be used as liners and seals in transportation containers. The plastics evaluated in this program were butadiene-acrylonitrile copolymer (Nitrile rubber), cross-linked polyethylene, epichlorohydrin, ethylene-propylene rubber (EPDM), fluorocarbons, high-density polyethylene (HDPE), butyl rubber, polypropylene, polytetrafluoroethylene, and styrene-butadiene rubber (SBR). These plastics were first screened in four simulant mixed wastes. The liner materials were screened using specific gravity measurements and seal materials by vapor transport rate (VTR) measurements. For the screening of liner materials, Kel-F, HDPE, and XLPE were found to offer the greatest resistance to the combination of radiation and chemicals. The tests also indicated that while all seal materials passed exposure to the aqueous simulant mixed waste, EPDM and SBR had the lowest VTRs. In the chlorinated hydrocarbon simulant mixed waste, only Viton passed the screening tests. In both the simulant scintillation fluid mixed waste and the ketone mixture waste, none of the seal materials met the screening criteria. Those materials which passed the screening tests were subjected to further comprehensive testing in each of the simulant wastes. The materials were exposed to four different radiation doses followed by exposure to a simulant mixed waste at three temperatures and four different exposure times (7, 14, 28, 180 days). Materials were tested by measuring specific gravity, dimensional, hardness, stress cracking, VTR, compression set, and tensile properties. The second phase of this Testing Program, involving comprehensive testing, has been completed for plastic liner materials and is currently in progress for seal materials.
Simulation study of pedestrian flow in a station hall during the Spring Festival travel rush
NASA Astrophysics Data System (ADS)
Wang, Lei; Zhang, Qian; Cai, Yun; Zhang, Jianlin; Ma, Qingguo
2013-05-01
The Spring Festival is the most important festival in China. How can passengers go home smoothly and quickly during the Spring Festival travel rush, especially when emergencies such as severe winter weather happen? By modifying the social force model, we simulated the pedestrian flow in a station hall. The simulation revealed that casualties occurred when passengers escaped in a panic induced by crowd turbulence. The results suggest that passenger numbers, ticket-checking patterns, baggage volumes, and anxiety can affect the speed of passing through the waiting corridor. Our approach is useful for understanding the features of crowd movement and can serve to reproduce mass events. Therefore, it not only provides a realistic model of pedestrian flow but is also important for better preparation of emergency management.
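For orientation, a generic (unmodified) social force update is sketched below: a driving term relaxes each pedestrian toward a desired velocity pointing at the ticket gate, and exponential repulsion acts between pedestrians. The parameter values and geometry are illustrative, not those of the modified model used in the study.

```python
import numpy as np

def social_force_step(pos, vel, goal, dt=0.05, v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.3):
    """One Helbing-style update: relaxation toward the desired velocity plus
    exponential repulsion between pedestrians (generic textbook-style parameters)."""
    n = len(pos)
    to_goal = goal - pos
    desired = v0 * to_goal / np.linalg.norm(to_goal, axis=1, keepdims=True)
    f = (desired - vel) / tau                          # driving force
    for i in range(n):                                 # pairwise repulsion
        d = pos[i] - np.delete(pos, i, axis=0)
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        f[i] += (A * np.exp((2 * radius - dist) / B) * d / dist).sum(axis=0)
    vel = vel + f * dt
    pos = pos + vel * dt
    return pos, vel

rng = np.random.default_rng(0)
n = 30
pos = rng.uniform([0.0, 0.0], [5.0, 10.0], size=(n, 2))   # waiting-corridor area
vel = np.zeros((n, 2))
goal = np.tile([20.0, 5.0], (n, 1))                       # ticket-gate location
for _ in range(400):
    pos, vel = social_force_step(pos, vel, goal)
print("mean distance to the gate:", round(float(np.linalg.norm(goal - pos, axis=1).mean()), 2))
```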
To Pass or Not to Pass: Modeling the Movement and Affordance Dynamics of a Pick and Place Task
Lamb, Maurice; Kallen, Rachel W.; Harrison, Steven J.; Di Bernardo, Mario; Minai, Ali; Richardson, Michael J.
2017-01-01
Humans commonly engage in tasks that require or are made more efficient by coordinating with other humans. In this paper we introduce a task dynamics approach for modeling multi-agent interaction and decision making in a pick and place task where an agent must move an object from one location to another and decide whether to act alone or with a partner. Our aims were to identify and model (1) the affordance related dynamics that define an actor's choice to move an object alone or to pass it to their co-actor and (2) the trajectory dynamics of an actor's hand movements when moving to grasp, relocate, or pass the object. Using a virtual reality pick and place task, we demonstrate that both the decision to pass or not pass an object and the movement trajectories of the participants can be characterized in terms of a behavioral dynamics model. Simulations suggest that the proposed behavioral dynamics model exhibits features observed in human participants including hysteresis in decision making, non-straight line trajectories, and non-constant velocity profiles. The proposed model highlights how the same low-dimensional behavioral dynamics can operate to constrain multiple (and often nested) levels of human activity and suggests that knowledge of what, when, where and how to move or act during pick and place behavior may be defined by these low dimensional task dynamics and, thus, can emerge spontaneously and in real-time with little a priori planning. PMID:28701975
Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
Measurement and simulation of passive fast-ion D-alpha emission from the DIII-D tokamak
Bolte, Nathan G.; Heidbrink, William W.; Pace, David; ...
2016-09-14
Spectra of passive fast-ion D-alpha (FIDA) light from beam ions that charge exchange with background neutrals are measured and simulated. The fast ions come from three sources: ions that pass through the diagnostic sightlines on their first full orbit, an axisymmetric confined population, and ions that are expelled into the edge region by instabilities. A passive FIDA simulation (P-FIDASIM) is developed as a forward model for the spectra of the first-orbit fast ions and consists of an experimentally-validated beam deposition model, an ion orbit-following code, a collisional-radiative model, and a synthetic spectrometer. Model validation consists of the simulation of 86 experimental spectra that are obtained using 6 different neutral beam fast-ion sources and 13 different lines of sight. Calibrated spectra are used to estimate the neutral density throughout the cross-section of the tokamak. The resulting 2D neutral density shows the expected increase toward each X-point with average neutral densities of 8 × 10^9 cm^-3 at the plasma boundary and 1 × 10^11 cm^-3 near the wall. Here, fast ions that are on passing orbits are expelled by the sawtooth instability more readily than trapped ions. In a sample discharge, approximately 1% of the fast-ion population is ejected into the high neutral density region per sawtooth crash.
Daryasafar, Navid; Baghbani, Somaye; Moghaddasi, Mohammad Naser; Sadeghzade, Ramezanali
2014-01-01
We design a broadband band-pass filter with a notch band whose structure uses coupled transmission lines, based on new models of those coupled lines. To develop and validate the new models, previous models are first simulated in the ADS program. Their equations, and consequently their basic parameters, are then modified; the dependencies among these parameters and their effect on the frequency response are examined; and the results of these changes are brought together to design the new filter.
Passive bottom reflection-loss estimation using ship noise and a vertical line array.
Muzi, Lanfranco; Siderius, Martin; Verlinden, Christopher M
2017-06-01
An existing technique for passive bottom-loss estimation from natural marine surface noise (generated by waves and wind) is adapted to use noise generated by ships. The original approach, based on beamforming of the noise field recorded by a vertical line array of hydrophones, is retained; however, additional processing is needed in order for the field generated by a passing ship to show features that are similar to those of the natural surface-noise field. A necessary requisite is that the ship position, relative to the array, varies over as wide a range of steering angles as possible, ideally passing directly over the array to ensure coverage of the steepest angles. The methodology is illustrated through simulation and applied to data from a field experiment conducted offshore of San Diego, CA in 2009.
Fully automated motion correction in first-pass myocardial perfusion MR image sequences.
Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2008-11-01
This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and the manual gold standard. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, a robustness and a computation speed adequate for use in a clinical environment.
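The core idea, running ICA on the perfusion frame stack and recombining the resulting components into a time-varying reference image, can be sketched with scikit-learn's FastICA as follows. Synthetic data stands in for a real dataset, and the component count and orientation of the decomposition are simplifying choices rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Sketch of the idea only: run ICA on the perfusion frame stack to separate a few
# components with distinct time-intensity behaviour, then recombine them into a
# time-varying reference image that mimics the contrast dynamics.
rng = np.random.default_rng(0)
n_frames, h, w = 100, 64, 64
t = np.linspace(0, 1, n_frames)[:, None]
bolus = np.exp(-((t - 0.3) / 0.1) ** 2)               # synthetic enhancement time course
region = np.zeros(h * w); region[: h * w // 4] = 1.0  # pixels that "enhance"
frames = rng.normal(scale=0.5, size=(n_frames, h * w)) + bolus * region

ica = FastICA(n_components=3, random_state=0, max_iter=500)
sources = ica.fit_transform(frames)                   # component time courses, (frames, 3)
spatial_maps = ica.mixing_.T                          # per-component pixel weights, (3, pixels)

# Time-varying reference: recombine the components frame by frame
reference = ica.inverse_transform(sources).reshape(n_frames, h, w)
print("reference stack:", reference.shape, "| spatial maps:", spatial_maps.shape)
```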
Cluster analysis of multiple planetary flow regimes
NASA Technical Reports Server (NTRS)
Mo, Kingtse; Ghil, Michael
1988-01-01
A modified cluster analysis method developed for the classification of quasi-stationary events into a few planetary flow regimes and for the examination of transitions between these regimes is described. The method was applied first to a simple deterministic model and then to a 500-mbar data set for the Northern Hemisphere (NH), for which cluster analysis was carried out in the subspace of the first seven empirical orthogonal functions (EOFs). Stationary clusters were found in the low-frequency band of more than 10 days, while transient clusters were found in the band-pass frequency window between 2.5 and 6 days. In the low-frequency band, three pairs of clusters were determined by EOFs 1, 2, and 3, respectively; they exhibited well-known regional features, such as blocking, the Pacific/North American pattern, and wave trains. Both model and low-pass data exhibited strong bimodality.
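The EOF-subspace step can be sketched as follows: compute EOFs of the anomaly field by SVD, retain the leading seven principal components, and cluster in that subspace. Random data stands in for the 500-mbar heights, and ordinary k-means replaces the modified cluster method of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the EOF-subspace step: EOFs of an anomaly field via SVD, then
# clustering in the subspace of the first seven EOFs.
rng = np.random.default_rng(0)
n_time, n_space = 1200, 500
anom = rng.normal(size=(n_time, n_space))     # stand-in for 500-mbar height anomalies
anom -= anom.mean(axis=0)                     # remove the time mean at each point

U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt[:7]                                 # leading spatial patterns (EOFs 1-7)
pcs = U[:, :7] * s[:7]                        # corresponding principal-component time series
print("EOF patterns:", eofs.shape,
      "| variance fraction of EOFs 1-7:", np.round(s[:7] ** 2 / (s**2).sum(), 3))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print("cluster sizes in EOF space:", np.bincount(labels))
```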
Diffusion bonded boron/aluminum spar-shell fan blade
NASA Technical Reports Server (NTRS)
Carlson, C. E. K.; Cutler, J. L.; Fisher, W. J.; Memmott, J. V. W.
1980-01-01
Design and process development tasks intended to demonstrate composite blade application in large high by-pass ratio turbofan engines are described. Studies on a 3.0 aspect ratio spar and shell construction fan blade indicate a potential weight savings for a first stage fan rotor of 39% when a hollow titanium spar is employed. An alternate design which featured substantial blade internal volume filled with titanium honeycomb inserts achieved a 14% potential weight savings over the B/M rotor system. This second configuration requires a smaller development effort and entails less risk to translate a design into a successful product. The feasibility of metal joining large subsonic spar and shell fan blades was demonstrated. Initial aluminum alloy screening indicates a distinct preference for AA6061 aluminum alloy for use as a joint material. The simulated airfoil pressings established the necessity of rigid air surfaces when joining materials of different compressive rigidities. The two aluminum alloy matrix choices both were successfully formed into blade shells.
ERIC Educational Resources Information Center
Fox, Barry
Differences between reporting and classificatory functions in writing were examined in the responses of grade 10 and grade 12 students: 60 who were successful English students, and 60 on the borderline of passing in each of the grades. The reporting tasks required students to write compositions describing their first day in a high school or some…
An empirical comparison of SPM preprocessing parameters to the analysis of fMRI data.
Della-Maggiore, Valeria; Chau, Wilkin; Peres-Neto, Pedro R; McIntosh, Anthony R
2002-09-01
We present the results from two sets of Monte Carlo simulations aimed at evaluating the robustness of some preprocessing parameters of SPM99 for the analysis of functional magnetic resonance imaging (fMRI). Statistical robustness was estimated by implementing parametric and nonparametric simulation approaches based on the images obtained from an event-related fMRI experiment. Simulated datasets were tested for combinations of the following parameters: basis function, global scaling, low-pass filter, high-pass filter and autoregressive modeling of serial autocorrelation. Based on single-subject SPM analysis, we derived the following conclusions that may serve as a guide for initial analysis of fMRI data using SPM99: (1) The canonical hemodynamic response function is a more reliable basis function to model the fMRI time series than the HRF with time derivative. (2) Global scaling should be avoided since it may significantly decrease the power depending on the experimental design. (3) The use of a high-pass filter may be beneficial for event-related designs with fixed interstimulus intervals. (4) When dealing with fMRI time series with short interstimulus intervals (<8 s), the use of a first-order autoregressive model is recommended over a low-pass filter (HRF) because it reduces the risk of inferential bias while providing a relatively good power. For datasets with interstimulus intervals longer than 8 seconds, temporal smoothing is not recommended since it decreases power.
Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B
2011-09-01
Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, and conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation. In particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.
Dopant profile modeling by rare event enhanced domain-following molecular dynamics
Beardmore, Keith M.; Jensen, Niels G.
2002-01-01
A computer-implemented molecular dynamics-based process simulates a distribution of ions implanted in a semiconductor substrate. The properties of the semiconductor substrate and ion dose to be simulated are first initialized, including an initial set of splitting depths that contain an equal number of virtual ions implanted in each substrate volume determined by the splitting depths. A first ion with selected velocity is input onto an impact position of the substrate that defines a first domain for the first ion during a first timestep, where the first domain includes only those atoms of the substrate that exert a force on the ion. A first position and velocity of the first ion is determined after the first timestep and a second domain of the first ion is formed at the first position. The first ion is split into first and second virtual ions if the first ion has passed through a splitting interval. The process then follows each virtual ion until all of the virtual ions have come to rest. A new ion is input to the surface and the process repeats until all of the ion dose has been input. The resulting ion rest positions form the simulated implant distribution.
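The splitting bookkeeping (not the molecular dynamics itself) can be sketched as below: each ion is followed until it stops, and every time it passes a new splitting depth its statistical weight is halved and a second virtual ion is spawned with the same state. The transport model here is a trivial biased random walk, and the depths, energies, and stopping rule are invented placeholders.

```python
import numpy as np

# Bookkeeping sketch of the rare-event splitting described above. The transport
# model is a trivial biased random walk with stochastic energy loss; only the
# splitting logic (halve the weight and spawn a second virtual ion at each new
# splitting depth) mirrors the description.
rng = np.random.default_rng(0)
split_depths = [5.0, 10.0, 15.0, 20.0]                   # nm, placeholder intervals

def follow(depth, energy, weight, next_split, profile):
    """Advance one (virtual) ion until it stops, splitting at each new depth."""
    while energy > 0:
        depth = max(depth + rng.normal(0.8, 0.6), 0.0)   # biased step into the substrate
        energy -= rng.uniform(0.5, 1.5)                  # stochastic energy loss
        while next_split < len(split_depths) and depth >= split_depths[next_split]:
            next_split += 1
            weight *= 0.5                                # each copy carries half the weight
            follow(depth, energy, weight, next_split, profile)   # the second virtual ion
    profile.append((depth, weight))                      # rest position and final weight

profile = []
for _ in range(2000):                                    # implanted ions
    follow(0.0, rng.uniform(8.0, 12.0), 1.0, 0, profile)

depths, weights = np.array(profile).T
hist, _ = np.histogram(depths, bins=30, range=(0, 30), weights=weights)
print("weighted rest-depth histogram (first 10 of 30 bins):", np.round(hist[:10], 1))
```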
Integration of OpenMC methods into MAMMOTH and Serpent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerby, Leslie; DeHart, Mark; Tumulak, Aaron
OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.
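As a hedged illustration of the Functional Expansion Tally idea mentioned above (a generic Legendre version, not the actual OpenMC, MAMMOTH, or Serpent implementation, which have their own basis and normalization conventions), the sketch below estimates expansion coefficients from sampled event positions and reconstructs a continuous profile that can be passed between codes far more compactly than a fine mesh tally:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def fet_coefficients(x, weights, order):
    """Legendre FET coefficients on x in [-1, 1]:
    c_n = (2n+1)/2 * <P_n(x)>, so that f(x) ~ sum_n c_n P_n(x)."""
    coeffs = np.zeros(order + 1)
    for n in range(order + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0                          # select P_n
        coeffs[n] = (2 * n + 1) / 2.0 * np.average(legval(x, basis), weights=weights)
    return coeffs

# synthetic "collision" sites along a normalized axial coordinate
rng = np.random.default_rng(0)
x = 2.0 * rng.beta(3.0, 2.0, 50_000) - 1.0      # smooth, skewed axial distribution
w = np.ones_like(x)

c = fet_coefficients(x, w, order=8)
grid = np.linspace(-1.0, 1.0, 5)
print("reconstructed density:", np.round(legval(grid, c), 3))   # approximates the Beta-shaped truth
```

A handful of coefficients stands in for a continuous spatial field, which is what makes FET-based multiphysics data transfer between a Monte Carlo code and MOOSE efficient.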
HYDRA : High-speed simulation architecture for precision spacecraft formation simulation
NASA Technical Reports Server (NTRS)
Martin, Bryan J.; Sohl, Garett.
2003-01-01
Hydra - the Hierarchical Distributed Reconfigurable Architecture - is a scalable simulation architecture that provides flexibility and ease of use while taking advantage of modern computation and communication hardware. It also provides the ability to implement distributed- or workstation-based simulations and high-fidelity real-time simulation from a common core. Originally designed to serve as a research platform for examining fundamental challenges in formation flying simulation for future space missions, it is also finding use in other missions and applications, all of which can take advantage of the underlying Object-Oriented structure to easily produce distributed simulations. Hydra automates the process of connecting disparate simulation components (Hydra Clients) through a client-server architecture that uses high-level descriptions of the data associated with each client to find and forge desirable connections (Hydra Services) at run time. Services communicate through the use of Connectors, which abstract messaging to provide single-interface access to any desired communication protocol, from shared-memory message passing to TCP/IP, ACE, and CORBA. Hydra shares many features with the HLA, while providing more flexibility in connectivity services and behavior overriding.
Galactic Cosmic Ray Simulation at the NASA Space Radiation Laboratory
NASA Technical Reports Server (NTRS)
Norbury, John W.; Slaba, Tony C.; Rusek, Adam
2015-01-01
The external Galactic Cosmic Ray (GCR) spectrum is significantly modified when it passes through spacecraft shielding and astronauts. One approach for simulating the GCR space radiation environment at ground based accelerators would use the modified spectrum, rather than the external spectrum, in the accelerator beams impinging on biological targets. Two recent workshops have studied such GCR simulation. The first workshop was held at NASA Langley Research Center in October 2014. The second workshop was held at the NASA Space Radiation Investigators' workshop in Galveston, Texas in January 2015. The results of these workshops will be discussed in this paper.
NASA Astrophysics Data System (ADS)
Kriechbaumer, Thomas; Blackburn, Kim; Gill, Andrew; Breckon, Toby; Everard, Nick; Wright, Ros; Rivas Casado, Monica
2014-05-01
Fragmentation of aquatic habitats can lead to the extinction of migratory fish species with severe negative consequences at the ecosystem level and thus opposes the target of good ecological status of rivers defined in the EU Water Framework Directive (WFD). In the UK, the implementation of the EU WFD requires investments in fish pass facilities of estimated 532 million GBP (i.e. 639 million Euros) until 2027 to ensure fish passage at around 3,000 barriers considered critical. Hundreds of passes have been installed in the past. However, monitoring studies of fish passes around the world indicate that on average less than half of the fish attempting to pass such facilities are actually successful. There is a need for frameworks that allow the rapid identification of facilities that are biologically effective and those that require enhancement. Although there are many environmental characteristics that can affect fish passage success, past research suggests that variations in hydrodynamic conditions, reflected in water velocities, velocity gradients and turbulences, are the major cues that fish use to seek migration pathways in rivers. This paper presents the first steps taken in the development of a framework for the rapid field-based quantification of the hydraulic conditions downstream of fish passes and the assessment of the attractivity of fish passes for salmonids and coarse fish in UK rivers. For this purpose, a small-sized remote control platform carrying an acoustic Doppler current profiler (ADCP), a GPS unit, a stereo camera and an inertial measurement unit has been developed. The large amount of data on water velocities and depths measured by the ADCP within relatively short time is used to quantify the spatial and temporal distribution of water velocities. By matching these hydraulic features with known preferences of migratory fish, it is attempted to identify likely migration routes and aggregation areas at barriers as well as hydraulic features that may distract fish away from fish pass entrances. The initial steps of the framework development have focused on the challenge of precise spatial data referencing in areas with limited sky view to navigation satellites. Platform tracking with a motorised Total Station, various satellite-based positioning solutions and simultaneous localisation and mapping (SLAM) based on stereo images have been tested. The effect of errors in spatial data referencing on ADCP-derived maps of flow features and bathymetry will be quantified through simultaneous deployment of these navigation technologies and the ADCP. This will inform the selection of a cost-effective platform positioning system in practice. Further steps will cover the quantification of uncertainties in ADCP data caused by highly turbulent flows and the identification of suitable ADCP data sampling strategies at fish passes. The final framework for fish pass assessment can contribute to an improved understanding of the interaction of fish and the complex hydraulic river environment.
Simulation of double-pass stimulated Raman backscattering
NASA Astrophysics Data System (ADS)
Wu, Z.; Chen, Q.; Morozov, A.; Suckewer, S.
2018-04-01
Experiments on Stimulated Raman Backscattering (SRBS) in plasma have demonstrated significantly higher energy conversion in a double-pass amplifier, where the laser pulses go through the plasma twice, compared with a single-pass amplifier with double the plasma length of a single pass. In this paper, an improved understanding of recent experimental results is presented by considering in detail the effects of plasma heating in the modeling of SRBS. Our simulation results show that the low efficiency of single-pass amplifiers can be attributed to Landau damping and the frequency shift of Langmuir waves. In double-pass amplifiers, these issues can be avoided, to some degree, because pump-induced heating could be reduced, while the plasma cools down between the passes. Therefore, double-pass amplifiers yield considerably enhanced energy transfer from the pump to the seed, and hence higher output pulse intensity.
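The competition between three-wave energy transfer and Langmuir-wave damping can be illustrated with a drastically simplified, space-independent envelope model. This is not the paper's simulation (which resolves space, pump depletion, plasma heating, and frequency shifts); the coupling constant, damping values, and initial amplitudes below are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

def three_wave(t, y, K, nu):
    """Toy resonant three-wave envelopes: pump a1, seed a2, Langmuir wave a3 (damped at rate nu)."""
    a1, a2, a3 = y
    return [-K * a2 * a3,
             K * a1 * a3,
             K * a1 * a2 - nu * a3]

def seed_gain(nu, K=1.0, t_end=4.0):
    sol = solve_ivp(three_wave, (0.0, t_end), [1.0, 0.01, 0.0],
                    args=(K, nu), max_step=0.05, rtol=1e-8)
    return sol.y[1, -1] ** 2 / 0.01 ** 2        # seed intensity amplification

for nu in (0.0, 0.5, 2.0):
    print(f"Langmuir damping nu = {nu:3.1f} -> seed gain {seed_gain(nu):8.1f}")
```

Increasing the Langmuir-wave damping rate suppresses the mediator of the pump-to-seed transfer, so the seed gain drops, which is the qualitative mechanism the abstract invokes for the low single-pass efficiency.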
Statistics of Epidemics in Networks by Passing Messages
NASA Astrophysics Data System (ADS)
Shrestha, Munik Kumar
Epidemic processes are common out-of-equilibrium phenomena of broad interdisciplinary interest. In this thesis, we show how the message-passing approach can be a helpful tool for simulating epidemic models in disordered media such as networks, and in particular for estimating the probability that a given node will become infectious at a particular time. The sort of dynamics we consider are stochastic, where randomness can arise from the stochastic events or from the randomness of network structures. As in belief propagation, variables or messages in the message-passing approach are defined on the directed edges of a network. However, unlike belief propagation, where the posterior distributions are updated according to Bayes' rule, in the message-passing approach we write differential equations for the messages over time. It takes correlations between neighboring nodes into account while preventing causal signals from backtracking to their immediate source, and thus avoids "echo chamber effects" where a pair of adjacent nodes each amplify the probability that the other is infectious. In our first results, we develop a message-passing approach to threshold models of behavior popular in sociology. These are models, first proposed by Granovetter, where individuals have to hear about a trend or behavior from some number of neighbors before adopting it themselves. In the thermodynamic limit of large random networks, we provide an exact analytic scheme for calculating the time dependence of the probabilities, and thus the whole dynamics of bootstrap percolation, a simple model known in statistical physics for exhibiting a discontinuous phase transition. As an application, we apply a similar model to financial networks, studying when bankruptcies spread due to the sudden devaluation of shared assets in overlapping portfolios. We predict that although diversification may be good for individual institutions, it can create dangerous systemic effects, and as a result financial contagion gets worse with too much diversification. We also predict that the financial system exhibits "robust yet fragile" behavior, with regions of the parameter space where contagion is rare but catastrophic whenever it occurs. In further results, we develop a message-passing approach to recurrent-state epidemics like susceptible-infectious-susceptible and susceptible-infectious-recovered-susceptible, where nodes can return to previously inhabited states and multiple waves of infection can pass through the population. Given that message-passing has previously been applied exclusively to models with one-way state changes like susceptible-infectious and susceptible-infectious-recovered, we develop message-passing for recurrent epidemics based on a new class of differential equations and demonstrate that our approach is simple and efficiently approximates results obtained from Monte Carlo simulation, and that the accuracy of message-passing is often superior to the pair approximation (which also takes second-order correlations into account).
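For concreteness, the sketch below implements one standard continuous-time message-passing (dynamic message passing) scheme for the one-way SIR model on a small graph; the thesis derives its own equations for threshold and recurrent-state models, which are not reproduced here, and the rates and network are invented:

```python
import numpy as np

def dmp_sir(adj, p_i0, beta, mu, dt=0.01, t_max=10.0):
    """Dynamic message passing for continuous-time SIR on an undirected network.
    Messages live on directed edges (k -> i):
      theta[k,i] : prob. that k has not yet transmitted to i
      phi[k,i]   : prob. that k is infectious and has not yet transmitted to i
    Node marginal: P_S^i(t) = P_S^i(0) * prod_k theta[k,i]."""
    edges = [(k, i) for k in adj for i in adj[k]]
    theta = {e: 1.0 for e in edges}
    phi = {(k, i): p_i0[k] for (k, i) in edges}
    ps0 = {n: 1.0 - p_i0[n] for n in adj}
    p_r = {n: 0.0 for n in adj}

    for _ in range(int(t_max / dt)):
        # cavity probability that k is still susceptible, excluding neighbour i
        ps_cav = {}
        for (k, i) in edges:
            prod = ps0[k]
            for j in adj[k]:
                if j != i:
                    prod *= theta[(j, k)]
            ps_cav[(k, i)] = prod

        d_theta = {e: -beta * phi[e] for e in edges}
        d_phi = {}
        for (k, i) in edges:
            gain = 0.0
            for j in adj[k]:
                if j != i:
                    denom = theta[(j, k)]
                    gain += beta * phi[(j, k)] * (ps_cav[(k, i)] / denom if denom > 1e-12 else 0.0)
            d_phi[(k, i)] = -(beta + mu) * phi[(k, i)] + gain

        for n in adj:                                           # marginal recovery update
            p_s = ps0[n] * np.prod([theta[(k, n)] for k in adj[n]])
            p_r[n] += dt * mu * (1.0 - p_s - p_r[n])

        for e in edges:                                         # Euler step for the messages
            theta[e] += dt * d_theta[e]
            phi[e] += dt * d_phi[e]

    out = {}
    for n in adj:
        p_s = ps0[n] * np.prod([theta[(k, n)] for k in adj[n]])
        out[n] = (p_s, 1.0 - p_s - p_r[n], p_r[n])
    return out

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}         # 5-node path
p_i0 = {n: (1.0 if n == 0 else 0.0) for n in adj}               # node 0 starts infectious
for n, (s, i, r) in dmp_sir(adj, p_i0, beta=1.0, mu=0.5).items():
    print(f"node {n}: S={s:.3f} I={i:.3f} R={r:.3f}")
```

Note how the cavity product excludes the receiving neighbour: this is exactly the "no backtracking" rule that prevents the echo-chamber amplification described above.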
Correia, Vanda; Araújo, Duarte; Cummins, Alan; Craig, Cathy M
2012-06-01
This study used a virtual, simulated 3 vs. 3 rugby task to investigate whether gaps opening in particular running channels promote different actions by the ball carrier player and whether an effect of rugby expertise is verified. We manipulated emergent gaps in three different locations: Gap 1 in the participant's own running channel, Gap 2 in the first receiver's running channel, and Gap 3 in the second receiver's running channel. Recreational, intermediate, professional, and nonrugby players performed the task. They could (i) run with the ball, (ii) make a short pass, or (iii) make a long pass. All actions were digitally recorded. Results revealed that the emergence of gaps in the defensive line with respect to the participant's own position significantly influenced action selection. Namely, "run" was most often the action performed in Gap 1, "short pass" in Gap 2, and "long pass" in Gap 3 trials. Furthermore, a strong positive relationship between expertise and task achievement was found.
Unsupervised Feature Learning With Winner-Takes-All Based STDP
Ferré, Paul; Mamalet, Franck; Thorpe, Simon J.
2018-01-01
We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods. PMID:29674961
Power spectrum model of visual masking: simulations and empirical data.
Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M
2013-06-01
In the study of the spatial characteristics of the visual channels, the power spectrum model of visual masking is one of the most widely used. When the task is to detect a signal masked by visual noise, this classical model assumes that the signal and the noise are previously processed by a bank of linear channels and that the power of the signal at threshold is proportional to the power of the noise passing through the visual channel that mediates detection. The model also assumes that this visual channel will have the highest ratio of signal power to noise power at its output. According to this, there are masking conditions where the highest signal-to-noise ratio (SNR) occurs in a channel centered in a spatial frequency different from the spatial frequency of the signal (off-frequency looking). Under these conditions the channel mediating detection could vary with the type of noise used in the masking experiment and this could affect the estimation of the shape and the bandwidth of the visual channels. It is generally believed that notched noise, white noise and double bandpass noise prevent off-frequency looking, and high-pass, low-pass and bandpass noises can promote it independently of the channel's shape. In this study, by means of a procedure that finds the channel that maximizes the SNR at its output, we performed numerical simulations using the power spectrum model to study the characteristics of masking caused by six types of one-dimensional noise (white, high-pass, low-pass, bandpass, notched, and double bandpass) for two types of channel's shape (symmetric and asymmetric). Our simulations confirm that (1) high-pass, low-pass, and bandpass noises do not prevent the off-frequency looking, (2) white noise satisfactorily prevents the off-frequency looking independently of the shape and bandwidth of the visual channel, and interestingly we proved for the first time that (3) notched and double bandpass noises prevent off-frequency looking only when the noise cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for six masking noises used in the simulations and in the experiments.
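A stripped-down numerical version of the channel-selection step can make the off-frequency-looking mechanism concrete. The sketch below uses invented Gaussian channels of fixed linear bandwidth and a small internal-noise floor (not the paper's channel shapes or parameters) and simply picks the channel with the highest output signal-to-noise ratio for white versus low-pass noise:

```python
import numpy as np

f = np.linspace(0.1, 12.0, 2400)             # spatial frequency axis (c/deg)
f_sig = 4.0                                   # signal spatial frequency
centers = np.linspace(1.0, 10.0, 91)          # candidate channel centres
sigma = 1.0                                   # channel bandwidth (constant, linear units)

def best_channel(noise_spectrum):
    """Centre frequency of the channel with the highest output SNR."""
    snr = []
    df = f[1] - f[0]
    for fc in centers:
        h2 = np.exp(-((f - fc) ** 2) / sigma ** 2)            # |H(f)|^2 of the channel
        sig_out = np.exp(-((f_sig - fc) ** 2) / sigma ** 2)    # signal power gain
        noise_out = np.sum(noise_spectrum * h2) * df           # noise power at the output
        snr.append(sig_out / noise_out)
    return centers[int(np.argmax(snr))]

white = np.ones_like(f)                                        # flat noise spectrum
low_pass = np.where(f <= f_sig, 1.0, 0.0) + 0.01               # low-pass noise + small floor

print(f"white noise    -> detecting channel at {best_channel(white):.1f} c/deg")
print(f"low-pass noise -> detecting channel at {best_channel(low_pass):.1f} c/deg")
```

With white noise the best channel sits at the signal frequency, while with low-pass noise the maximum-SNR channel shifts above it, i.e. off-frequency looking, which is the behaviour the simulations in the paper examine systematically.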
Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.
2003-01-01
Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition when absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. The sampling interval should not be broadened to increase the signal-to-noise ratio in a photon-noise-limited system when high levels of accuracy are desired. It is possible, using this simulation method, to select optimum combinations of band-pass, sampling interval, and signal-to-noise ratio values for a particular application that maximize identification accuracy and minimize the volume of imaging data.
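The simulation methodology, convolving library spectra to a sensor band pass and sampling interval, adding noise, and tabulating how often the right material is identified, can be sketched as follows. The two synthetic "minerals", the least-squares matching rule, and all instrument parameters are invented and much simpler than the Tetracorder-based study:

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(2.0, 2.5, 2001)                  # wavelength grid (micrometres)

def absorption(center, depth, width):
    """Flat continuum with one Gaussian absorption feature."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

def to_sensor(spectrum, band_fwhm, sampling):
    """Convolve to a Gaussian band pass and resample at a given interval (micrometres)."""
    sig = band_fwhm / 2.3548
    kernel_wl = np.arange(-4 * sig, 4 * sig + 1e-9, wl[1] - wl[0])
    kernel = np.exp(-0.5 * (kernel_wl / sig) ** 2)
    kernel /= kernel.sum()
    smooth = np.convolve(spectrum, kernel, mode="same")
    centres = np.arange(wl[0], wl[-1], sampling)
    return np.interp(centres, wl, smooth)

libA = absorption(2.20, 0.3, 0.01)                # two library "minerals" with nearby features
libB = absorption(2.21, 0.3, 0.01)

def id_rate(snr, band_fwhm, sampling, trials=500):
    a, b = to_sensor(libA, band_fwhm, sampling), to_sensor(libB, band_fwhm, sampling)
    hits = 0
    for _ in range(trials):
        noisy = a + rng.normal(0.0, 1.0 / snr, a.size)   # observe mineral A with noise
        hits += int(np.sum((noisy - a) ** 2) < np.sum((noisy - b) ** 2))
    return hits / trials

for sampling in (0.01, 0.04):
    r50, r200 = (id_rate(snr, band_fwhm=0.02, sampling=sampling) for snr in (50, 200))
    print(f"sampling {sampling*1000:.0f} nm: correct-ID rate at SNR 50 / 200 = {r50:.2f} / {r200:.2f}")
```

Tabulating the correct-identification rate over a grid of band pass, sampling, and SNR values is the same bookkeeping the study performs, only with Tetracorder replacing the toy matcher and real spectral libraries replacing the two synthetic minerals.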
Dynamic phantom for radionuclide cardiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nickles, R.J.
1979-06-01
A flow-based phantom has been developed to verify analysis routines most frequently employed in clinical radionuclide cardiology. Ejection-fraction studies by first-pass or equilibrium techniques are simulated, as well as assessment of shunts and cardiac output. This hydraulic phantom, with its valve-selectable dysfunctions, offers a greater role in training than in quality control, as originally intended.
Gulzari, Usman Ali; Sajid, Muhammad; Anjum, Sheraz; Agha, Shahrukh; Torres, Frank Sill
2016-01-01
The Mesh topology is one of the most promising architectures for on-chip communication due to its regular and simple structure. However, the performance of the Mesh degrades greatly as the network size increases, due to its small bisection width and large network diameter. To overcome this limitation, many researchers have presented modified Mesh designs that add extra links to improve performance in terms of network latency and power consumption. We previously presented the Cross-By-Pass-Mesh as an improved version of the Mesh topology obtained by the intelligent addition of extra links. This paper presents an efficient topology named Cross-By-Pass-Torus that further increases the performance of the Cross-By-Pass-Mesh. The proposed design merges the best features of the Cross-By-Pass-Mesh and the Torus to reduce the network diameter, minimize the average number of hops between nodes, increase the bisection width, and enhance the overall performance of the network. In this paper, the architectural design of the topology is presented and analyzed against similar 2D topologies in terms of average latency, throughput, and power consumption. To verify the actual behavior of the proposed topology, synthetic traffic traces and five different real embedded application workloads are applied to the proposed and competitor network topologies. The simulation results indicate that the Cross-By-Pass-Torus is an efficient candidate among its predecessor and competitor topologies, offering lower average latency and increased throughput at a slight cost in network power and energy for on-chip communication.
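As background for the diameter and hop-count arguments above (and only for the plain Mesh and Torus baselines; the Cross-By-Pass link patterns are not modeled here), the sketch below computes network diameter and average hop count by breadth-first search:

```python
from collections import deque
from itertools import product

def build(k, torus=False):
    """Adjacency list of a k x k mesh; wrap-around links turn it into a torus."""
    adj = {(x, y): [] for x, y in product(range(k), repeat=2)}
    for x, y in adj:
        for dx, dy in ((1, 0), (0, 1)):
            nx, ny = x + dx, y + dy
            if torus:
                nx, ny = nx % k, ny % k
            if (nx, ny) in adj and (nx, ny) != (x, y):
                adj[(x, y)].append((nx, ny))
                adj[(nx, ny)].append((x, y))
    return adj

def hop_stats(adj):
    """Network diameter and average hop count via BFS from every node."""
    dists = []
    for src in adj:
        seen = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    q.append(v)
        dists += [d for n, d in seen.items() if n != src]
    return max(dists), sum(dists) / len(dists)

for k in (4, 8):
    for name, torus in (("mesh ", False), ("torus", True)):
        diam, avg = hop_stats(build(k, torus))
        print(f"{k}x{k} {name}: diameter {diam:2d}, average hops {avg:.2f}")
```

The wrap-around links roughly halve the diameter and noticeably reduce the average hop count, which is the baseline advantage the Cross-By-Pass-Torus builds on with its additional by-pass links.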
The analysis of initial Juno magnetometer data using a sparse magnetic field representation
NASA Astrophysics Data System (ADS)
Moore, Kimberly M.; Bloxham, Jeremy; Connerney, John E. P.; Jørgensen, John L.; Merayo, José M. G.
2017-05-01
The Juno spacecraft, now in polar orbit about Jupiter, passes much closer to Jupiter's surface than any previous spacecraft, presenting a unique opportunity to study the largest and most accessible planetary dynamo in the solar system. Here we present an analysis of magnetometer observations from Juno's first perijove pass (PJ1; to within 1.06 RJ of Jupiter's center). We calculate the residuals between the vector magnetic field observations and that calculated using the VIP4 spherical harmonic model and fit these residuals using an elastic net regression. The resulting model demonstrates how effective Juno's near-surface observations are in improving the spatial resolution of the magnetic field within the immediate vicinity of the orbit track. We identify two features resulting from our analyses: the presence of strong, oppositely signed pairs of flux patches near the equator and weak, possibly reversed-polarity patches of magnetic field over the polar regions. Additional orbits will be required to assess how robust these intriguing features are.
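A minimal sketch of the elastic-net step, fitting residuals with a sparsity-promoting penalized regression, is shown below using scikit-learn; the basis functions stand in for the actual spherical-harmonic design matrix, and every number is synthetic:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# Synthetic stand-in: residual field sampled along an "orbit track", expressed in a
# large basis (random sinusoids here instead of spherical harmonics).
n_obs, n_basis = 400, 200
t = np.linspace(0.0, 1.0, n_obs)
design = np.column_stack([np.sin(2 * np.pi * (k + 1) * t + rng.uniform(0, np.pi))
                          for k in range(n_basis)])

true_coef = np.zeros(n_basis)
true_coef[rng.choice(n_basis, 8, replace=False)] = rng.normal(0, 5, 8)   # sparse truth
residuals = design @ true_coef + rng.normal(0, 0.5, n_obs)               # noisy residuals

model = ElasticNet(alpha=0.1, l1_ratio=0.7, max_iter=20_000)
model.fit(design, residuals)

print("non-zero coefficients kept:", int(np.sum(model.coef_ != 0)))
print("RMS misfit:", float(np.sqrt(np.mean((model.predict(design) - residuals) ** 2))))
```

The combined L1/L2 penalty keeps only the basis terms the near-surface data actually constrain, which is the property that lets a single perijove pass sharpen the field locally without destabilizing the rest of the model.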
[Features of bemithyl pharmacokinetics upon inhalation administration].
Kurpiakova, A F; Geĭbo, D S; Bykov, V N; Nikiforov, A S
2014-01-01
A comparative study of bemithyl pharmacokinetics was carried out upon its inhalation, intragastric and intravenous administration. The main drug metabolites were identified and the pharmacokinetic parameters were calculated. The obtained results suggest that the inhalation administration of bemithyl is a promising replacement for oral administration, which is related to high bioavailability of the drug and the absence of the effect of "first pass" through the liver.
Cassini's Final Titan Radar Swath
2017-08-11
During its final targeted flyby of Titan on April 22, 2017, Cassini's radar mapper got the mission's last close look at the moon's surface. On this 127th targeted pass by Titan (unintuitively named "T-126"), the radar was used to take two images of the surface, shown at left and right. Both images are about 200 miles (300 kilometers) in width, from top to bottom. Objects appear bright when they are tilted toward the spacecraft or have rough surfaces; smooth areas appear dark. At left are the same bright, hilly terrains and darker plains that Cassini imaged during its first radar pass of Titan, in 2004. Scientists do not see obvious evidence of changes in this terrain over the 13 years since the original observation. At right, the radar looked once more for Titan's mysterious "magic island" (PIA20021) in a portion of one of the large hydrocarbon seas, Ligeia Mare. No "island" feature was observed during this pass. Scientists continue to work on what the transient feature might have been, with waves and bubbles being two possibilities. In between the two parts of its imaging observation, the radar instrument switched to altimetry mode, in order to make a first-ever (and last-ever) measurement of the depths of some of the lakes that dot the north polar region. For the measurements, the spacecraft pointed its antenna straight down at the surface and the radar measured the time delay between echoes from the lakes' surface and bottom. A graph is available at https://photojournal.jpl.nasa.gov/catalog/PIA21626
Ahn, Heejung; Kim, Hyun-Young
2015-05-01
This study involved designing high-fidelity simulations reflecting the Korean nursing education environment and evaluating them in terms of nursing students' learning outcomes and perceptions of the simulation design features. A quantitative design was used in two separate phases. For the first phase, five nursing experts participated in verifying the appropriateness of two simulation scenarios that reflected the intended learning objectives. For the second phase, 69 nursing students in the third year of a bachelor's degree at a nursing school participated in evaluating the simulations and were randomized according to their previous course grades. The first phase verified the two simulation scenarios using a questionnaire. The second phase evaluated students' perceptions of the simulation design, self-confidence, and critical thinking skills using a quasi-experimental post-test design. ANCOVA was used to compare the experimental and control groups, and correlation coefficient analysis was used to determine the correlation among them. We created two simulation scenarios to integrate cognitive and psychomotor skills according to the learning objectives and clinical environment in Korea. The experimental group had significantly higher scores on self-confidence in the first scenario. The positive correlations between perceptions of the simulation design features, self-confidence, and critical thinking skill scores were statistically significant. Students with a more positive perception of the design features of the simulations had better learning outcomes. Based on this result, simulations need to be designed and implemented with more differentiation in order to be perceived more appropriately by students. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lamb wave based damage detection using Matching Pursuit and Support Vector Machine classifier
NASA Astrophysics Data System (ADS)
Agarwal, Sushant; Mitra, Mira
2014-03-01
In this paper, the suitability of using Matching Pursuit (MP) and a Support Vector Machine (SVM) for damage detection from the Lamb wave response of a thin aluminium plate is explored. The Lamb wave response of a thin aluminium plate with or without damage is simulated using the finite element method. Simulations are carried out at different frequencies for various kinds of damage. The procedure is divided into two parts - signal processing and machine learning. Firstly, MP is used for denoising and to maintain the sparsity of the dataset. In this study, MP is extended by using a combination of time-frequency functions as the dictionary and is deployed in two stages. Selection of a particular type of atom leads to the extraction of important features while maintaining the sparsity of the waveform. The resultant waveform is then passed as input data to the SVM classifier. The SVM is used to detect the location of the potential damage from the reduced data. The study demonstrates that the SVM is a robust classifier in the presence of noise and more efficient than an Artificial Neural Network (ANN). Out-of-sample data is used for the validation of the trained and tested classifier. The trained classifiers are found to detect the damage successfully with more than a 95% detection rate.
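A loose sketch of the two-stage pipeline is given below, using scikit-learn's orthogonal matching pursuit as a stand-in for the paper's MP with a combined time-frequency dictionary, and an invented toy "Lamb wave" record in which damage adds a weak delayed echo; none of the parameters correspond to the actual study:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256)

def gabor_dictionary():
    """Gabor-like time-frequency atoms (unit norm), a stand-in for the MP dictionary."""
    atoms = []
    for f in (10, 20, 40, 80):
        for c in np.linspace(0.1, 0.9, 9):
            a = np.exp(-((t - c) / 0.05) ** 2) * np.cos(2 * np.pi * f * (t - c))
            atoms.append(a / np.linalg.norm(a))
    return np.column_stack(atoms)               # shape (256, 36)

D = gabor_dictionary()

def simulated_response(damaged):
    """Toy record: an incident wave packet plus, if damaged, a weak later echo."""
    sig = np.exp(-((t - 0.3) / 0.05) ** 2) * np.cos(2 * np.pi * 40 * (t - 0.3))
    if damaged:
        sig += 0.3 * np.exp(-((t - 0.6) / 0.05) ** 2) * np.cos(2 * np.pi * 40 * (t - 0.6))
    return sig + rng.normal(0.0, 0.2, t.size)

def sparse_features(sig, n_atoms=6):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_atoms)
    omp.fit(D, sig)                              # sparse coding: atoms act as features
    return omp.coef_                             # sparse coefficient vector (36 values)

X = np.array([sparse_features(simulated_response(d)) for d in ([0] * 60 + [1] * 60)])
y = np.array([0] * 60 + [1] * 60)

scores = cross_val_score(SVC(kernel="rbf", C=10.0, gamma="scale"), X, y, cv=5)
print("cross-validated damage-detection accuracy:", np.round(scores.mean(), 3))
```

The sparse step keeps only a handful of atom coefficients per record, so the classifier sees a low-dimensional, denoised representation, which mirrors the paper's motivation for putting MP in front of the SVM.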
NASA Technical Reports Server (NTRS)
Alsdorf, Douglas E.; Vonfrese, Ralph R. B.
1994-01-01
The FORTRAN programs supplied in this document provide a complete processing package for statistically extracting residual core, external field, and lithospheric components in Magsat observations. To process the individual passes: (1) orbits are separated into dawn and dusk local times and by altitude, (2) passes are selected based on the variance of the magnetic field observations after a least-squares fit of the core field is removed from each pass over the study area, and (3) spatially adjacent passes are processed with a Fourier correlation coefficient filter to separate coherent and non-coherent features between neighboring tracks. In the second stage of map processing: (1) data from the passes are normalized to a common altitude and gridded into dawn and dusk maps with least-squares collocation, (2) dawn and dusk maps are correlated with a Fourier correlation coefficient filter to separate coherent and non-coherent features; the coherent features are averaged to produce a total field grid, (3) total field grids from all altitudes are continued to a common altitude, correlation filtered for coherent anomaly features, and subsequently averaged to produce the final total field grid for the study region, and (4) the total field map is differentially reduced to the pole.
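The correlation-filter step can be illustrated with a one-dimensional toy (a crude stand-in, not the package's actual Fourier correlation coefficient filter): two adjacent passes share a coherent anomaly but carry independent noise, and only wavenumber components that agree between the tracks are retained:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1000.0, 512)                        # along-track distance (km)

coherent = (40.0 * np.exp(-((x - 400.0) / 60.0) ** 2)     # shared lithospheric-like anomaly
            + 10.0 * np.sin(2 * np.pi * x / 500.0))
track_a = coherent + rng.normal(0.0, 8.0, x.size)         # adjacent passes: same anomaly,
track_b = coherent + rng.normal(0.0, 8.0, x.size)         # independent noise / external fields

A, B = np.fft.rfft(track_a), np.fft.rfft(track_b)
similarity = 2.0 * np.real(A * np.conj(B)) / (np.abs(A) ** 2 + np.abs(B) ** 2 + 1e-12)
keep = similarity > 0.5                                   # retain only coherent wavenumbers

clean_a = np.fft.irfft(np.where(keep, A, 0.0), n=x.size)

rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
print(f"residual vs. true anomaly: raw {rms(track_a - coherent):.1f} nT,"
      f" filtered {rms(clean_a - coherent):.1f} nT")
```

The same logic, keeping only the part of the spectrum that is correlated between neighboring tracks (or between dawn and dusk maps), is what isolates lithospheric anomalies from track-dependent external-field contamination in the real package.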
Design of dual band FSS by using quadruple L-slot technique
NASA Astrophysics Data System (ADS)
Fauzi, Noor Azamiah Md; Aziz, Mohamad Zoinol Abidin Abd.; Said, Maizatul Alice Meor; Othman, Mohd Azlishah; Ahmad, Badrul Hisham; Malek, Mohd Fareq Abd
2015-05-01
This paper presents a new design of a dual band frequency selective surface (FSS) for band pass microwave transmission applications. FSS can be used on energy saving glass to improve the transmission of wireless communication signals through the glass. Microwave signals are attenuated when propagating through structures such as buildings; therefore, some wireless communication systems cannot operate at optimum performance. The aim of this paper is to design, simulate, and analyze a new dual band FSS structure for microwave transmission. The design is based on a quadruple L slot combined with a cross slot to produce pass bands at 900 MHz and 2.4 GHz. The vertical pair of inverse L slots provides the pass band at 2.4 GHz, while the horizontal pair of inverse L slots provides the pass band at 900 MHz. The design is simulated and analyzed using Computer Simulation Technology (CST) Microwave Studio (MWS) software. The transmission (S21) and reflection (S11) characteristics of the dual band FSS were simulated and analyzed. The bandwidth of the first band is 118.91 MHz, covering the frequency range from 833.4 MHz to 952.31 MHz, while the bandwidth of the second band is 358.84 MHz, covering the frequency range from 2.1475 GHz to 2.5063 GHz. The resonance/center frequencies of this design are obtained at 900 MHz with a 26.902 dB return loss and at 2.37 GHz with a 28.506 dB return loss. This FSS is suitable as a microwave filter for GSM900 and WLAN 2.4 GHz applications.
NASA Astrophysics Data System (ADS)
Qin, Fangcheng; Li, Yongtang; Qi, Huiping; Lv, Zhenhua
2016-11-01
The isothermal and non-isothermal multi-pass compression tests of centrifugal casting 42CrMo steel were conducted on a Gleeble-3500 thermal simulation machine. The effects of the number of compression passes and the finishing temperature on deformation behavior and microstructure evolution were investigated. It is found that the microstructure is homogeneous with equiaxed grains, and that the flow stress does not change significantly as the number of passes increases, while the peak inter-pass softening coefficient first increases and then decreases. Moreover, controlled temperature and accumulated static recrystallization are found to be the dominant mechanisms for grain refinement and its homogeneous distribution after five passes of deformation. As the finishing temperature increases, the flow stress decreases gradually, but the dynamic recrystallization accelerates and the softening effect increases, resulting in a larger grain size and a homogeneous microstructure. The microhardness decreases sharply because sufficient softening occurs in the microstructure. When the finishing temperature is 890 °C, carbide particles precipitate in the vicinity of the grain boundaries, thus inhibiting dislocation motion. Thus, higher finishing temperatures (≥970 °C) should be avoided in non-isothermal multi-pass deformation of centrifugal casting 42CrMo alloy, which is beneficial to grain refinement and property improvement.
A relative quantitative assessment of myocardial perfusion by first-pass technique: animal study
NASA Astrophysics Data System (ADS)
Chen, Jun; Zhang, Zhang; Yu, Xuefang; Zhou, Kenneth J.
2015-03-01
The purpose of this study is to quantitatively assess myocardial perfusion by the first-pass technique in a swine model. Numerous techniques based on the analysis of Computed Tomography (CT) Hounsfield Unit (HU) density have emerged. Although these methods have been proposed to assess haemodynamically significant coronary artery stenosis, they have notable limitations, and there is still a need to develop new techniques. Experiments were performed on five (5) closed-chest swine. Balloon catheters were placed into the coronary artery to simulate different degrees of luminal stenosis. Myocardial Blood Flow (MBF) was measured using the color microsphere technique. Fractional Flow Reserve (FFR) was measured using a pressure wire. CT examinations were performed twice during the first-pass phase under adenosine-stress conditions. CT HU Density (HUDCT) and CT HU Density Ratio (HUDRCT) were calculated using the acquired CT images. Our study shows that HUDRCT has a good correlation (y = 0.07245 + 0.09963x, r2 = 0.898) with MBF and FFR. In receiver operating characteristic (ROC) curve analyses, HUDRCT provides excellent diagnostic performance for the detection of significant ischemia during adenosine stress as defined by FFR, indicated by an Area Under the Curve (AUC) of 0.927. HUDRCT has the potential to be developed as a useful indicator for quantitative assessment of myocardial perfusion.
DC-pass filter design with notch filters superposition for CPW rectenna at low power level
NASA Astrophysics Data System (ADS)
Rivière, J.; Douyère, A.; Alicalapa, F.; Luk, J.-D. Lan Sun
2016-03-01
In this paper the challenging coplanar waveguide direct current (DC) pass filter is designed, analysed, fabricated and measured. As the ground plane and the conductive line are etched on the same plane, this technology allows the connection of series and shunt elements to the active devices without via holes through the substrate. Indeed, this study presents the first step in the optimization of a complete rectenna in coplanar waveguide (CPW) technology: key element of a radio frequency (RF) energy harvesting system. The measurement of the proposed filter shows good performance in the rejection of F0=2.45 GHz and F1=4.9 GHz. Additionally, a harmonic balance (HB) simulation of the complete rectenna is performed and shows a maximum RF-to-DC conversion efficiency of 37% with the studied DC-pass filter for an input power of 10 µW at 2.45 GHz.
Evaluation of total knee mechanics using a crouching simulator with a synthetic knee substitute.
Lowry, Michael; Rosenbaum, Heather; Walker, Peter S
2016-05-01
Mechanical evaluation of total knees is frequently required for aspects such as wear, strength, kinematics, contact areas, and force transmission. In order to carry out such tests, we developed a crouching simulator, based on the Oxford-type machine, with novel features including a synthetic knee including ligaments. The instrumentation and data processing methods enabled the determination of contact area locations and interface forces and moments, for a full flexion-extension cycle. To demonstrate the use of the simulator, we carried out a comparison of two different total knee designs, cruciate retaining and substituting. The first part of the study describes the simulator design and the methodology for testing the knees without requiring cadaveric knee specimens. The degrees of freedom of the anatomic hip and ankle joints were reproduced. Flexion-extension was obtained by changing quadriceps length, while variable hamstring forces were applied using springs. The knee joint was represented by three-dimensional printed blocks on to which the total knee components were fixed. Pretensioned elastomeric bands of realistic stiffnesses passed through holes in the block at anatomical locations to represent ligaments. Motion capture of the knees during flexion, together with laser scanning and computer modeling, was used to reconstruct contact areas on the bearing surfaces. A method was also developed for measuring tibial component interface forces and moments as a comparative assessment of fixation. The method involved interposing Tekscan pads at locations on the interface. Overall, the crouching machine and the methodology could be used for many different mechanical measurements of total knee designs, adapted especially for comparative or parametric studies. © IMechE 2016.
NASA Astrophysics Data System (ADS)
Jin, Young-Gwan; Son, Il-Heon; Im, Yong-Taek
2010-06-01
Experiments with a square specimen made of commercially pure aluminum alloy (AA1050) were conducted to investigate deformation behaviour during a multi-pass Equal Channel Angular Pressing (ECAP) for routes A, Bc, and C up to four passes. Three-dimensional finite element numerical simulations of the multi-pass ECAP were carried out in order to evaluate the influence of processing routes and number of passes on local flow behaviour by applying a simplified saturation model of flow stress under an isothermal condition. Simulation results were investigated by comparing them with the experimentally measured data in terms of load variations and microhardness distributions. Also, transmission electron microscopy analysis was employed to investigate the microstructural changes. The present work clearly shows that the three-dimensional flow characteristics of the deformed specimen were dependent on the strain path changes due to the processing routes and number of passes that occurred during the multi-pass ECAP.
Using support vector machines to identify literacy skills: Evidence from eye movements.
Lou, Ya; Liu, Yanping; Kaakinen, Johanna K; Li, Xingshan
2017-06-01
Is inferring readers' literacy skills possible by analyzing their eye movements during text reading? This study used Support Vector Machines (SVM) to analyze eye movement data from 61 undergraduate students who read a multiple-paragraph, multiple-topic expository text. Forward fixation time, first-pass rereading time, second-pass fixation time, and regression path reading time on different regions of the text were provided as features. The SVM classification algorithm assisted in distinguishing high-literacy-skilled readers from low-literacy-skilled readers with 80.3 % accuracy. Results demonstrate the effectiveness of combining eye tracking and machine learning techniques to detect readers with low literacy skills, and suggest that such approaches can be potentially used in predicting other cognitive abilities.
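A minimal sketch of such a classifier is shown below, with synthetic per-reader features (the four reading-time measures named above) whose values and group separation are invented purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def reader_features(n, mean_shift):
    """Synthetic per-reader features (ms): forward fixation, first-pass rereading,
    second-pass fixation, regression-path time. Values are invented for illustration."""
    base = np.array([250.0, 120.0, 180.0, 400.0])
    return rng.normal(base * mean_shift, base * 0.25, size=(n, 4))

X = np.vstack([reader_features(30, 1.00),       # high-literacy readers
               reader_features(31, 1.25)])      # low-literacy readers: longer times
y = np.array([0] * 30 + [1] * 31)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```

Cross-validation on a sample of this size (61 readers, matching the study) is essential, since a flexible kernel classifier can easily overfit a handful of eye-movement features.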
The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier
2013-02-14
[Presentation excerpt; only fragments are recoverable: slides on "Baseline Simulation Results" and "Sensitivity Studies" report the percent error of peak water level (and of high-water marks, with MAPE) at LAWMA (Amerada Pass), Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass, and compare drag-coefficient (CD) formulations and wave-coupled runs to identify which configuration produced the best results. The tabulated values themselves are not recoverable.]
NASA Astrophysics Data System (ADS)
Flamant, C.; Drobinski, P.; Nance, L.; Banta, R.; Darby, L.; Dusek, J.; Hardesty, M.; Pelon, J.; Richard, E.
2002-04-01
This paper examines the three-dimensional structure and dynamics of southerly hybrid gap/mountain flow through the Wipp valley (Wipptal), Austria, observed on 30 October 1999 using high-resolution observations and model simulations. The observations were obtained during a shallow south föhn event documented in the framework of the Mesoscale Alpine Programme (MAP). Three important data sources were used: the airborne differential-absorption lidar LEANDRE 2, the ground-based Doppler lidar TEACO2 and in situ measurements from the National Oceanic and Atmospheric Administration P-3 aircraft. This event was simulated down to 2 km horizontal resolution using the non-hydrostatic mesoscale model Meso-NH. The structure and dynamics of the flow were realistically simulated. The combination of high-resolution observations and numerical simulations provided a comprehensive three-dimensional picture of the flow through the Wipptal: in the gap entrance region (Brenner Pass, Austria), the low-level jet was not solely due to the channelling of the southerly synoptic flow through the elevated gap. Part of the Wipptal flow originated as a mountain wave at the valley head wall of the Brenner Pass. Downstream of the pass, the shallow föhn flow had the characteristics of a downslope windstorm as it rushed down towards the Inn valley (Inntal) and the City of Innsbruck, Austria. Downhill of the Brenner Pass, the strongest flow was observed over a small obstacle along the western side wall (the Nösslachjoch), rather than channelled in the deeper part of the valley just to the east. Further north, the low-level jet was observed in the centre of the valley. Approximately halfway between Brenner Pass and Innsbruck, where the along-axis direction of the valley changes from north to north-north-west, the low-level jet was observed to be deflected to the eastern side wall of the Wipptal. Interaction between the Stubaier Alpen (the largest and highest topographic feature to the west of the Wipptal) and the south-westerly synoptic flow was found to be the primary mechanism responsible for the deflection. The along- and cross-valley structure and dynamics of the flow were observed to be highly variable due to the influence of surrounding mountains, localized steep slopes within the valley and outflows from tributaries (the Gschnitztal and the Stubaital) to the west of the Wipptal. For that shallow föhn case, observations and simulations provided a large body of evidence that downslope flow created thinning/thickening fluid and accelerations/decelerations reminiscent of mountain wave/hydraulic theory. Along the Wipptal, two hydraulic-jump-like transitions were observed and simulated, (i) on the lee slope of the Nösslachjoch and (ii) in the Gschnitztal exit region. A hydraulic solution of the flow was calculated in the framework of reduced-gravity shallow-water theory. The down-valley evolution of the Froude number computed using LEANDRE 2, P-3 flight level and TEACO2 measurements confirmed that these transitions were associated with super- to subcritical transitions.
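For reference, the criticality diagnostic used in the reduced-gravity shallow-water framing is the Froude number Fr = U / sqrt(g' h), with reduced gravity g' = g Δθ/θ0. The sketch below evaluates it for two invented flow states on either side of a hydraulic-jump-like transition; the numbers are illustrative, not the MAP case values:

```python
import math

def froude(u, h, dtheta=6.0, theta0=290.0, g=9.81):
    """Froude number for a reduced-gravity shallow-water layer.
    u: layer-mean wind speed (m/s), h: layer depth (m),
    dtheta/theta0: potential-temperature jump across the capping inversion."""
    g_prime = g * dtheta / theta0
    return u / math.sqrt(g_prime * h)

# thin, fast flow rushing down the lee slope vs. deeper, slower flow after the jump
print(f"downslope:  Fr = {froude(u=18.0, h=400.0):.2f}")    # > 1, supercritical
print(f"after jump: Fr = {froude(u=7.0, h=1200.0):.2f}")    # < 1, subcritical
```

A drop of Fr through 1 between two adjacent along-valley segments is the shallow-water signature of the super- to subcritical transitions identified downstream of the Nösslachjoch and near the Gschnitztal exit.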
NASA Astrophysics Data System (ADS)
Kong, Yun; Wang, Tianyang; Li, Zheng; Chu, Fulei
2017-09-01
Planetary transmission plays a vital role in wind turbine drivetrains, and its fault diagnosis has been an important and challenging issue. Owing to the complicated and coupled vibration sources, time-variant vibration transfer path, and heavy background noise masking effect, the vibration signal of the planet gear in wind turbine gearboxes exhibits several unique characteristics: complex frequency components, a low signal-to-noise ratio, and weak fault features. In this sense, the periodic impulsive components induced by a localized defect are hard to extract, and the fault detection of planet gears in wind turbines remains a challenging research problem. Aiming to extract the fault feature of the planet gear effectively, we propose a novel feature extraction method based on spectral kurtosis and the time wavelet energy spectrum (SK-TWES) in this paper. Firstly, the spectral kurtosis (SK) and kurtogram of the raw vibration signals are computed and exploited to select the optimal filtering parameter for the subsequent band-pass filtering. Then, band-pass filtering is applied to extract the periodic transient impulses using the optimal frequency band, in which the corresponding SK value is maximal. Finally, the time wavelet energy spectrum analysis is performed on the filtered signal, selecting the Morlet wavelet as the mother wavelet, which possesses a high similarity to the impulsive components. The experimental signals collected from the wind turbine gearbox test rig demonstrate that the proposed method is effective at feature extraction and fault diagnosis for a planet gear with a localized defect.
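A compact sketch of the SK-guided band selection is shown below on a synthetic vibration signal (a steady gear-mesh tone, weak repetitive fault impulses ringing a resonance, and noise). It uses an STFT-based spectral kurtosis and a Butterworth band-pass rather than the paper's kurtogram and Morlet energy spectrum, and all signal parameters are invented:

```python
import numpy as np
from scipy.signal import stft, butter, filtfilt
from scipy.stats import kurtosis

fs = 20_000                                      # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)

# Toy vibration: gear-mesh tone + fault impulses ringing a 6 kHz resonance + noise
mesh = 0.8 * np.sin(2 * np.pi * 300 * t)
impulses = np.zeros_like(t)
impulses[::fs // 8] = 1.0                        # fault repetition rate 8 Hz
ring = np.convolve(impulses,
                   np.exp(-t[:200] * 2000) * np.sin(2 * np.pi * 6000 * t[:200]),
                   mode="full")[: t.size]
signal = mesh + 2.0 * ring + rng.normal(0, 0.1, t.size)

# Spectral kurtosis: kurtosis of the STFT magnitude in each frequency bin
f, _, Z = stft(signal, fs=fs, nperseg=128)
sk = kurtosis(np.abs(Z), axis=1, fisher=True)
f_opt = f[np.argmax(sk)]
print(f"band with maximum spectral kurtosis: about {f_opt:.0f} Hz")

# Band-pass around the selected band, then check the impulsiveness of the output
low, high = max(f_opt - 1000.0, 200.0), min(f_opt + 1000.0, fs / 2 - 200.0)
b, a = butter(4, [low, high], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, signal)
print(f"kurtosis raw: {kurtosis(signal):.1f}, band-passed: {kurtosis(filtered):.1f}")
```

The steady mesh tone has low kurtosis and the broadband noise only moderate kurtosis, so the maximum lands on the resonance band excited by the defect; filtering there exposes the periodic impulses that the Morlet energy spectrum then characterizes in the paper.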
A novel broadband bi-mode active frequency selective surface
NASA Astrophysics Data System (ADS)
Xu, Yang; Gao, Jinsong; Xu, Nianxi; Shan, Dongzhi; Song, Naitao
2017-05-01
A novel broadband bi-mode active frequency selective surface (AFSS) is presented in this paper. The proposed structure is composed of a periodic array of convoluted square patches and Jerusalem Crosses. According to simulation results, the frequency response of AFSS definitely exhibits a mode switch feature between band-pass and band-stop modes when the diodes stay in ON and OFF states. In order to apply a uniform bias to each PIN diode, an ingenious biasing network based on the extension of Wheatstone bridge is adopted in prototype AFSS. The test results are in good agreement with the simulation results. A further physical mechanism of the bi-mode AFSS is shown by contrasting the distribution of electric field on the AFSS patterns for the two working states.
Lattice Boltzmann Simulation of Electroosmotic Micromixing by Heterogeneous Surface Charge
NASA Astrophysics Data System (ADS)
Tang, G. H.; Wang, F. F.; Tao, W. Q.
Microelectroosmotic flow is usually restricted to the low Reynolds number regime, and mixing in these microfluidic systems becomes problematic due to the negligible inertial effects. To gain an improved understanding of mixing enhancement in microchannels patterned with heterogeneous surface charge, the lattice Boltzmann method has been employed to obtain the electric potential distribution in the electrolyte, the flow field, and the species concentration distribution. The simulation results show that heterogeneous surfaces can significantly disturb the streamlines, leading to apparently substantial improvements in mixing. However, the introduction of such a feature can reduce the mass flow rate in the channel. The reduction in flow rate effectively prolongs the available mixing time as the flow passes through the channel, and the observed mixing enhancement by heterogeneous surfaces partly results from this longer mixing time.
A scene model of exosolar systems for use in planetary detection and characterisation simulations
NASA Astrophysics Data System (ADS)
Belu, A.; Thiébaut, E.; Ollivier, M.; Lagache, G.; Selsis, F.; Vakili, F.
2007-12-01
Context: Instrumental projects that will improve the direct optical finding and characterisation of exoplanets have advanced sufficiently to trigger organized investigation and development of corresponding signal processing algorithms. The first step is the availability of field-of-view (FOV) models. These can then be submitted to various instrumental models, which in turn produce simulated data, enabling the testing of processing algorithms. Aims: We aim to set the specifications of a physical model for typical FOVs of these instruments. Methods: The dynamic range in resolution and flux between the various sources present in such a FOV imposes a multiscale, independent-layer approach. From a review of the current literature and through extrapolations from currently available data and models, we derive the features of each source type in the field of view likely to pass the instrumental filter at the exo-Earth level. Results: Stellar limb darkening is shown to cause bias in leakage calibration if unaccounted for. The occurrence of perturbing background stars or galaxies in the typical FOV is unlikely. We extract galactic interstellar medium background emissions for current target lists. The galactic background can be considered uniform over the FOV, and it should show no significant drift with parallax. Our model specifications have been embedded into a Java simulator, soon to be made open-source. We have also designed an associated FITS input/output format standard that we present here. Work supported in part by the ESA/ESTEC contract 18701/04/NL/HB, led by Thales Alenia Space.
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm^2 uniform grid, a 2 detector/cm^2 uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high density dose planes were 2%-5% higher than respective %/DTA composite analysis on average (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower than with global maximum normalization on average (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors, as well. Conclusions: Dose plane QA analysis can be greatly affected by choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density.
Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
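One standard way to attach such a binomial confidence interval to a measured pass rate is the Wilson score interval sketched below; the paper's own interval construction may differ in detail, and the pass rate and detector counts here are only examples:

```python
import math

def wilson_interval(pass_rate, n_points, z=1.96):
    """95% Wilson score confidence interval for a QA pass rate measured with
    n_points sampled detector locations (binomial model of point pass/fail)."""
    p, n = pass_rate, n_points
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

observed = 0.92                          # e.g. 92% of points pass the chosen criteria
for n in (100, 400, 1500):               # sparse diode array vs. dense EPID-like sampling
    lo, hi = wilson_interval(observed, n)
    print(f"n = {n:4d} sampled points: 95% CI on pass rate = [{lo:.3f}, {hi:.3f}]")
```

The interval narrows roughly as 1/sqrt(n), which is why the same nominal 92% pass rate is far less certain when reported from a low-density array than from a densely sampled plane.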
Research on simulated infrared image utility evaluation using deep representation
NASA Astrophysics Data System (ADS)
Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin
2018-01-01
Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on the fidelity and authenticity of the simulated IR images. For the evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually adopt a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large numbers of IR images. Then, we present the evaluation model for simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, the experiments illustrate that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed-data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2≤γ≤0.3, which makes this an effective data augmentation method for real IR images.
A statistical model of the wave field in a bounded domain
NASA Astrophysics Data System (ADS)
Hellsten, T.
2017-02-01
Numerical simulations of plasma heating with radiofrequency waves often require repetitive calculations of wave fields as the plasma evolves. To enable effective simulations, benchmarked formulas of the power deposition have been developed. Here, a statistical model applicable to waves with short wavelengths is presented, which gives the expected amplitude of the wave field as a superposition of four wave fields with weight coefficients depending on the single-pass damping, a_s. The weight coefficient for the wave field coherent with that calculated in the absence of reflection agrees with the coefficient for strong single-pass damping of an earlier developed heuristic model, for which the weight coefficients were obtained empirically using a full wave code to calculate the wave field and power deposition. Antennas launching electromagnetic waves into bounded domains are often designed to produce localised wave fields and power depositions in the limit of strong single-pass damping. The reflection of the waves changes the coupling, which partly destroys the localisation of the wave field. This explains the apparent paradox arising from the earlier developed heuristic formula that only a fraction a_s^2(2 - a_s), and not a_s, of the power is absorbed with a profile corresponding to the power deposition for the first pass of the rays. A method to account for the change in the coupling spectrum caused by reflection for modelling the wave field with ray tracing in bounded media is proposed, which should be applicable to wave propagation in non-uniform media in more general geometries.
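A quick numerical reading of the quoted fraction, taking the heuristic formula a_s^2(2 - a_s) at face value:

```python
# Fraction of the power absorbed with the first-pass deposition profile,
# a_s**2 * (2 - a_s), compared with the single-pass damping a_s itself.
for a_s in (0.1, 0.3, 0.5, 0.8, 1.0):
    print(f"a_s = {a_s:.1f} -> fraction with first-pass profile = {a_s**2 * (2 - a_s):.3f}")
```

For weak single-pass damping the first-pass-shaped fraction is far smaller than a_s, which quantifies how strongly reflection delocalises the deposition in that regime.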
A biomimetic algorithm for the improved detection of microarray features
NASA Astrophysics Data System (ADS)
Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.
2007-02-01
One of the major difficulties of microarray technology relates to the processing of large and - importantly - error-loaded images of the dots on the chip surface. Whatever the source of these errors, those obtained in the first stage of data acquisition - segmentation - are passed down to the subsequent processes, with deleterious results. As it has been demonstrated recently that biological systems have evolved algorithms that are mathematically efficient, this contribution attempts to test an algorithm that mimics a bacterial "patented" algorithm for the search of available space and nutrients to find, zero in on, and eventually delimit the features present on the microarray surface.
Webb Telescope Tested for Space, Ready for Science
2018-01-10
NASA’s James Webb Space Telescope is a civilization scale mission, set to look back to the first galaxies formed after the Big Bang and help answer the question “are we alone in the universe?” After passing a key test at Johnson Space Center designed to simulate the cold vacuum of space, Webb is ready for the next step ahead of a launch in 2019
NASA Astrophysics Data System (ADS)
Bolte, Nathan; Heidbrink, W. W.; Pace, D. C.; van Zeeland, M. A.; Chen, X.
2015-11-01
A new fast-ion diagnostic method uses passive emission of D-alpha radiation to determine fast-ion losses quantitatively. The passive fast-ion D-alpha simulation (P-FIDAsim) forward models the Doppler-shifted spectra of first-orbit fast ions that charge exchange with edge neutrals. Simulated spectra are up to 80% correlated with experimental spectra. Calibrated spectra are used to estimate the 2D neutral density profile by inverting simulated spectra. The inferred neutral density shows the expected increase toward each x-point and an average value of 8 × 10^9 cm^-3 at the plasma boundary and 1 × 10^11 cm^-3 near the wall. Measuring and simulating first-orbit spectra effectively "calibrates" the system, allowing for the quantification of more general fast-ion losses. Sawtooth crashes are estimated to eject 1.2% of the fast-ion inventory, in good agreement with a 1.7% loss estimate made by TRANSP. Sightlines sensitive to passing ions observe larger sawtooth losses than sightlines sensitive to trapped ions. Supported by US DOE under SC-G903402, DE-FC02-04ER54698.
Gheza, Federico; Raimondi, Paolo; Solaini, Leonardo; Coccolini, Federico; Baiocchi, Gian Luca; Portolani, Nazario; Tiberio, Guido Alberto Massimo
2018-04-11
Outside the US, FLS certification is not required and its teaching methods are not well standardized. Although the FLS was designed as a "stand-alone" training system, most academic institutions offer support to residents during training. We present the first systematic application of FLS in Italy. Our aim was to evaluate the role of mentoring/coaching on FLS training in terms of the passing rate and global performance, in the search for resource optimization. Sixty residents in general surgery, obstetrics & gynecology, and urology were selected for enrollment in a randomized controlled trial, practicing FLS with the goal of passing a simulated final exam. The control group practiced exclusively with video material from SAGES, whereas the interventional group was supported by a mentor. Forty-six subjects met the requirements and completed the trial; for the other 14 subjects no results are available for comparison. One subject in each group failed the exam, resulting in a passing rate of 95.7%, with no obvious difference between groups. Subgroup analysis did not reveal any difference between the groups for individual FLS tasks. We confirm that methods other than video instruction and deliberate FLS practice are not essential to pass the final exam. Based on these results, we suggest the introduction of the FLS system even where a trained tutor is not available. This trial is the first single-institution application of the FLS in Italy and one of the few experiences outside the US. Trial Number: NCT02486575 ( https://www.clinicaltrials.gov ).
NASA Astrophysics Data System (ADS)
Zhu, Chen-Xi; Wang, Chi-Chuan
2018-01-01
This study proposes a numerical model for a plate heat exchanger that is capable of handling supercritical CO2. The plate heat exchangers under investigation include Z-type (1-pass), U-type (1-pass), and 1-2 pass configurations. The plate spacing is 2.9 mm with a plate thickness of 0.8 mm, and the plate is 600 mm wide and 218 mm high with a 60-degree chevron angle. The proposed model takes into account the influence of the drastic changes in CO2 properties. The simulation is first compared with existing data for water-to-water plate heat exchangers, with good agreement. The flow distribution, pressure drop, and heat transfer performance of supercritical CO2 in plate heat exchangers are then investigated. It is found that the flow velocity increases consecutively from the entrance plate toward the last plate for the Z-type arrangement, and this holds for both the water side and the CO2 side. However, the flow distribution of the U-type arrangement on the water side shows the opposite trend. Conversely, the flow distribution for the U-type arrangement on the CO2 side depends on the specific flow ratio (C*). A lower C*, such as 0.1, may reverse the distribution, i.e., the flow velocity increases moderately along the plate channels as in the Z-type, while a large C* of 1 resembles the typical distribution in the water channel. The flow distribution on the CO2 side at the first and last plates shows a pronounced drop/surge phenomenon, whereas the channels on the water side do not reveal this kind of behavior. The performance of the 2-pass plate heat exchanger, in terms of heat transfer rate, is better than that of the 1-pass design only when C* is comparatively small (C* < 0.5). A multi-pass design is more effective when the dominant thermal resistance falls on the CO2 side.
A satellite-based personal communication system for the 21st century
NASA Technical Reports Server (NTRS)
Sue, Miles K.; Dessouky, Khaled; Levitt, Barry; Rafferty, William
1990-01-01
Interest in personal communications (PCOMM) has been stimulated by recent developments in satellite and terrestrial mobile communications. A personal access satellite system (PASS) concept was developed at the Jet Propulsion Laboratory (JPL) which has many attractive user features, including service diversity and a handheld terminal. Significant technical challenges addressed in formulating the PASS space and ground segments are discussed. PASS system concept and basic design features, high risk enabling technologies, an optimized multiple access scheme, alternative antenna coverage concepts, the use of non-geostationary orbits, user terminal radiation constraints, and user terminal frequency reference are covered.
Radar remote sensing for archaeology in Hangu Frontier Pass in Xin’an, China
NASA Astrophysics Data System (ADS)
Jiang, A. H.; Chen, F. L.; Tang, P. P.; Liu, G. L.; Liu, W. K.; Wang, H. C.; Lu, X.; Zhao, X. L.
2017-02-01
As a non-invasive tool, remote sensing can be applied to archaeology, taking advantage of its large-scale coverage, timely acquisition, high spatial-temporal resolution, etc. In archaeological research, optical approaches have been widely used; however, the capability of Synthetic Aperture Radar (SAR) for archaeological detection has not been fully exploited so far. In this study, we chose the Hangu Frontier Pass of the Han Dynasty, located in Henan Province (included in the Silk Roads World Heritage cluster), as the experimental site and conducted an exploratory study to detect its historical remains. First, TanDEM-X SAR data were used to generate a high-resolution DEM of the Hangu Frontier Pass, and the relationship between the pass and the derived ridge lines was analyzed. Second, temporally averaged SAR amplitude images highlighted archaeological traces owing to the suppressed speckle noise; for instance, the processing of 20 scenes of PALSAR data (spanning 2007 to 2011) enabled us to detect previously unknown archaeological features. Finally, the heritage remains detected by SAR data were verified by Ground Penetrating Radar (GPR) prospecting, demonstrating the potential of space-to-ground radar remote sensing for archaeological applications.
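The speckle-suppression step described above (temporal averaging of co-registered amplitude scenes) can be illustrated with a minimal sketch. This is generic NumPy code, not the processing chain used in the study; the stack shape, the Rayleigh-distributed synthetic amplitudes, and the scene count of 20 are illustrative assumptions.

```python
# A minimal sketch: temporal averaging of a co-registered SAR amplitude stack to
# suppress speckle. `stack` is a hypothetical array of shape (n_scenes, rows, cols).
import numpy as np

def temporal_average(stack):
    """Incoherently average co-registered amplitude scenes along the time axis."""
    return stack.mean(axis=0)

# Synthetic Rayleigh-distributed amplitudes standing in for 20 co-registered scenes.
rng = np.random.default_rng(0)
stack = rng.rayleigh(scale=1.0, size=(20, 256, 256))
averaged = temporal_average(stack)
# Averaging N looks reduces speckle standard deviation by roughly 1/sqrt(N).
print(stack[0].std(), averaged.std())
```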
Reconsidering Simulations in Science Education at a Distance: Features of Effective Use
ERIC Educational Resources Information Center
Blake, C.; Scanlon, E.
2007-01-01
This paper proposes a reconsideration of the use of computer simulations in science education. We discuss three studies of the use of science simulations for undergraduate distance learning students. The first, "The Driven Pendulum," is a computer-based experiment on the behaviour of a pendulum. The second simulation, "Evolve," is…
LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.
Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu
2005-01-01
Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
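The final rescoring step described above, a log-linear combination of word scores, can be sketched as follows; the score names, values, and weights are hypothetical and are not taken from the workshop systems.

```python
# Hypothetical scores and weights illustrating log-linear combination of word scores.
def log_linear_combine(log_scores, weights):
    """Weighted sum of log-probability scores from several component models."""
    return sum(w * s for w, s in zip(weights, log_scores))

# e.g. first-pass lattice (acoustic) score, landmark/pronunciation score, LM score
scores = (-120.4, -8.7, -15.2)
weights = (1.0, 0.6, 0.9)  # in practice tuned on held-out data
print(log_linear_combine(scores, weights))
```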
Schmidt, Robert L; Howard, Kirsten; Hall, Brian J; Layfield, Lester J
2012-12-01
Sample adequacy is an important aspect of overall fine-needle aspiration cytology (FNAC) performance. FNAC effectiveness is augmented by an increasing number of needle passes, but additional passes are associated with higher costs and a greater risk of adverse events. The objective of this study was to compare the impact of several different sampling policies on FNAC effectiveness and adverse event rates using discrete event simulation. We compared 8 different sampling policies in 12 different sampling environments. All sampling policies were effective when the per-pass adequacy rate was high (>80%). Rapid on-site evaluation (ROSE) improved FNAC effectiveness when the per-pass adequacy rate was low, but ROSE is unlikely to be cost-effective in sampling environments in which the per-pass adequacy is high. Alternative ROSE assessors (eg, cytotechnologists) may be a cost-effective alternative to pathologists when the per-pass adequacy rate is moderate (60%-80%) or when the number of needle passes is limited.
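The dependence of overall adequacy on the per-pass adequacy rate, which drives the conclusions above, can be illustrated with a minimal sketch (not the authors' discrete event simulation), assuming each pass is adequate independently with probability p.

```python
# If each needle pass is adequate independently with probability p, the chance of at
# least one adequate sample in n passes is 1 - (1 - p)**n.
def adequacy_after_passes(p, n):
    return 1.0 - (1.0 - p) ** n

for p in (0.5, 0.7, 0.9):
    print(p, [round(adequacy_after_passes(p, n), 3) for n in (1, 2, 3, 4)])
# Extra passes (or on-site evaluation) matter most when the per-pass rate p is low.
```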
M and D SIG progress report: Laboratory simulations of LDEF impact features
NASA Technical Reports Server (NTRS)
Horz, Friedrich; Bernhard, R. P.; See, Thomas H.; Atkinson, Dale R.; Allbrooks, Martha K.
1991-01-01
Reported here are impact simulations into pure Teflon and aluminum targets. These experiments will allow first order interpretations of impact features on the Long Duration Exposure Facility (LDEF), and they will serve as guides for dedicated experiments that employ the real LDEF blankets, both unexposed and exposed, for a refined understanding of the Long Duration Exposure Facility's collisional environment.
NASA Astrophysics Data System (ADS)
Watanabe, Koji; Matsuno, Kenichi
This paper presents a new method for simulating flows driven by a body traveling without restriction on its motion or limit on the size of the region. In the present method, named the 'Moving Computational Domain Method', the whole computational domain, including the bodies inside it, moves in physical space without any limit on region size. Since the whole grid of the computational domain moves according to the movement of the body, the flow solver has to be constructed on a moving grid system, and it is important for the solver to satisfy the physical and geometric conservation laws simultaneously on the moving grid. For this purpose, the Moving-Grid Finite-Volume Method is employed as the flow solver. The present Moving Computational Domain Method thus makes it possible to simulate flow driven by any kind of body motion in a region of any size while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve, and the distinctive flow field driven by the car at the hairpin curve is demonstrated in detail. The results show the promising features of the method.
A data fusion approach for track monitoring from multiple in-service trains
NASA Astrophysics Data System (ADS)
Lederman, George; Chen, Siheng; Garrett, James H.; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo
2017-10-01
We present a data fusion approach for enabling data-driven rail-infrastructure monitoring from multiple in-service trains. A number of researchers have proposed using vibration data collected from in-service trains as a low-cost method to monitor track geometry. The majority of this work has focused on developing novel features to extract information about the tracks from data produced by individual sensors on individual trains. We extend this work by presenting a technique to combine extracted features from multiple passes over the tracks from multiple sensors aboard multiple vehicles. There are a number of challenges in combining multiple data sources, like different relative position coordinates depending on the location of the sensor within the train. Furthermore, as the number of sensors increases, the likelihood that some will malfunction also increases. We use a two-step approach that first minimizes position offset errors through data alignment, then fuses the data with a novel adaptive Kalman filter that weights data according to its estimated reliability. We show the efficacy of this approach both through simulations and on a data-set collected from two instrumented trains operating over a one-year period. Combining data from numerous in-service trains allows for more continuous and more reliable data-driven monitoring than analyzing data from any one train alone; as the number of instrumented trains increases, the proposed fusion approach could facilitate track monitoring of entire rail-networks.
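As a rough illustration of the weighting idea behind the adaptive filtering described above, the scalar sketch below down-weights a measurement by assigning it a larger noise variance R in a Kalman update; it is a toy example, not the paper's filter, and all numbers are invented.

```python
# A scalar Kalman measurement update: data judged less reliable get a larger noise
# variance R and therefore a smaller gain.
def kalman_update(x, P, z, R):
    K = P / (P + R)          # gain shrinks as the measurement noise R grows
    x_new = x + K * (z - x)  # innovation weighted by the gain
    P_new = (1.0 - K) * P
    return x_new, P_new

x, P = 0.0, 1.0                                      # fused track-feature estimate and variance
measurements = [(0.8, 0.1), (2.5, 5.0), (0.9, 0.1)]  # (value, R); the outlier carries a large R
for z, R in measurements:
    x, P = kalman_update(x, P, z, R)
print(x, P)
```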
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Yousef W., E-mail: yujwni01@heh.regionh.d; Eiberg, Jonas P.; Logager, Vibeke B.
The purpose of this investigation was to determine if addition of infragenicular steady-state (SS) magnetic resonance angiography (MRA) to first-pass imaging improves diagnostic performance compared with first-pass imaging alone in patients with peripheral arterial disease (PAD) undergoing whole-body (WB) MRA. Twenty consecutive patients with PAD referred to digital-subtraction angiography (DSA) underwent WB-MRA. Using a bolus-chase technique, first-pass WB-MRA was performed from the supra-aortic vessels to the ankles. The blood-pool contrast agent gadofosveset trisodium was used at a dose of 0.03 mmol/kg body weight. Ten minutes after injection of the contrast agent, high-resolution (0.7-mm isotropic voxels) SS-MRA of the infragenicular arteries was performed. Using DSA as the 'gold standard,' sensitivities and specificities for detecting significant arterial stenoses (≥50% luminal narrowing) with first-pass WB-MRA, SS-MRA, and combined first-pass and SS-MRA were calculated. Kappa statistics were used to determine intermodality agreement between MRA and DSA. Overall sensitivity and specificity for detecting significant arterial stenoses with first-pass WB-MRA was 0.70 (95% confidence interval 0.61 to 0.78) and 0.97 (0.94 to 0.99), respectively. In first-pass WB-MRA, the lowest sensitivity was in the infragenicular region, with a value of 0.42 (0.23 to 0.63). Combined analysis of first-pass WB-MRA and SS-MRA increased sensitivity to 0.81 (0.60 to 0.93) in the infragenicular region, with specificity of 0.94 (0.88 to 0.97). Sensitivity and specificity for detecting significant arterial stenoses with isolated infragenicular SS-MRA was 0.47 (0.27 to 0.69) and 0.86 (0.78 to 0.91), respectively. Intermodality agreement between MRA and DSA in the infragenicular region was moderate for first-pass WB-MRA (κ = 0.49), fair for SS-MRA (κ = 0.31), and good for combined first-pass/SS-MRA (κ = 0.71). Addition of infragenicular SS-MRA to first-pass WB-MRA improves diagnostic performance.
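For reference, the reported statistics (sensitivity, specificity, and Cohen's kappa against the DSA gold standard) follow from a 2x2 confusion matrix as in the sketch below; the counts shown are hypothetical and are not the study data.

```python
# Sensitivity, specificity, and Cohen's kappa from a 2x2 confusion matrix
# (rows: test result, columns: gold standard).
def diagnostic_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

print(diagnostic_stats(tp=17, fp=6, fn=4, tn=93))
```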
Improved First Pass Spiral Myocardial Perfusion Imaging with Variable Density Trajectories
Salerno, Michael; Sica, Christopher; Kramer, Christopher M.; Meyer, Craig H.
2013-01-01
Purpose To develop and evaluate variable-density (VD) spiral first-pass perfusion pulse sequences for improved efficiency and off-resonance performance and to demonstrate the utility of an apodizing density compensation function (DCF) to improve SNR and reduce dark-rim artifact caused by cardiac motion and Gibbs Ringing. Methods Three variable density spiral trajectories were designed, simulated, and evaluated in 18 normal subjects, and in 8 patients with cardiac pathology on a 1.5T scanner. Results By utilizing a density compensation function (DCF) which intentionally apodizes the k-space data, the side-lobe amplitude of the theoretical PSF is reduced by 68%, with only a 13% increase in the FWHM of the main-lobe as compared to the same data corrected with a conventional VD DCF, and has an 8% higher resolution than a uniform density spiral with the same number of interleaves and readout duration. Furthermore, this strategy results in a greater than 60% increase in measured SNR as compared to the same VD spiral data corrected with a conventional DCF (p<0.01). Perfusion defects could be clearly visualized with minimal off-resonance and dark-rim artifacts. Conclusion VD spiral pulse sequences using an apodized DCF produce high-quality first-pass perfusion images with minimal dark-rim and off-resonance artifacts, high SNR and CNR and good delineation of resting perfusion abnormalities. PMID:23280884
Chen, Xiao; Salerno, Michael; Yang, Yang; Epstein, Frederick H.
2014-01-01
Purpose Dynamic contrast-enhanced MRI of the heart is well-suited for acceleration with compressed sensing (CS) due to its spatiotemporal sparsity; however, respiratory motion can degrade sparsity and lead to image artifacts. We sought to develop a motion-compensated CS method for this application. Methods A new method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was developed to accelerate first-pass cardiac MRI, even in the presence of respiratory motion. This method divides the images into regions, tracks the regions through time, and applies matrix low-rank sparsity to the tracked regions. BLOSM was evaluated using computer simulations and first-pass cardiac datasets from human subjects. Using rate-4 acceleration, BLOSM was compared to other CS methods such as k-t SLR that employs matrix low-rank sparsity applied to the whole image dataset, with and without motion tracking, and to k-t FOCUSS with motion estimation and compensation that employs spatial and temporal-frequency sparsity. Results BLOSM was qualitatively shown to reduce respiratory artifact compared to other methods. Quantitatively, using root mean squared error and the structural similarity index, BLOSM was superior to other methods. Conclusion BLOSM, which exploits regional low rank structure and uses region tracking for motion compensation, provides improved image quality for CS-accelerated first-pass cardiac MRI. PMID:24243528
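A minimal sketch of the low-rank building block used in this family of reconstructions is shown below: singular value soft-thresholding applied to a region's Casorati matrix (pixels x time frames). It is illustrative only; the region tracking, data-consistency, and iteration logic of BLOSM are not reproduced, and the threshold and data are arbitrary assumptions.

```python
# Singular value soft-thresholding on one tracked region's Casorati matrix.
import numpy as np

def svt(region_stack, tau):
    """Soft-threshold the singular values of a (pixels x frames) matrix."""
    U, s, Vt = np.linalg.svd(region_stack, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

frames = np.random.default_rng(1).normal(size=(64, 40))  # 64 pixels, 40 time frames
low_rank = svt(frames, tau=5.0)
print(np.linalg.matrix_rank(low_rank))  # rank drops after thresholding
```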
Trace-Driven Debugging of Message Passing Programs
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert; Lopez, Louis; Bailey, David (Technical Monitor)
1998-01-01
In this paper we report on features added to a parallel debugger to simplify the debugging of parallel message passing programs. These features include replay, setting consistent breakpoints based on interprocess event causality, a parallel undo operation, and communication supervision. These features all use trace information collected during the execution of the program being debugged. We used a number of different instrumentation techniques to collect traces. We also implemented trace displays using two different trace visualization systems. The implementation was tested on an SGI Power Challenge cluster and a network of SGI workstations.
Hovgaard, Lisette Hvid; Andersen, Steven Arild Wuyts; Konge, Lars; Dalsgaard, Torur; Larsen, Christian Rifbjerg
2018-03-30
The use of robotic surgery for minimally invasive procedures has increased considerably over the last decade. Robotic surgery has potential advantages compared to laparoscopic surgery but also requires new skills. Using virtual reality (VR) simulation to facilitate the acquisition of these new skills could potentially benefit the training of robotic surgical skills and also be a crucial step in developing a robotic surgical training curriculum. The study's objective was to establish validity evidence for a simulation-based test of procedural competency in the vaginal cuff closure procedure that can be used in a future simulation-based, mastery learning training curriculum. Eleven novice gynaecological surgeons without prior robotic experience and 11 experienced gynaecological robotic surgeons (>30 robotic procedures) were recruited. After familiarization with the VR simulator, participants completed the module 'Guided Vaginal Cuff Closure' six times. Validity evidence was investigated for 18 preselected simulator metrics. The internal consistency was assessed using Cronbach's alpha, and a composite score was calculated based on the metrics with significant discriminative ability between the two groups. Finally, a pass/fail standard was established using the contrasting groups' method. The experienced surgeons significantly outperformed the novice surgeons on 6 of the 18 metrics. The internal consistency was 0.58 (Cronbach's alpha). The experienced surgeons' mean composite score for all six repetitions was significantly better than the novice surgeons' (76.1 vs. 63.0, respectively, p < 0.001). A pass/fail standard of 75/100 was established. Four novice surgeons passed this standard (false positives) and three experienced surgeons failed (false negatives). Our study has gathered validity evidence for a simulation-based test of procedural robotic surgical competency in the vaginal cuff closure procedure and established a credible pass/fail standard for future proficiency-based training.
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First, the Laplacian Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. The detail (high-frequency) subimages are then equalized by contrast limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by CLAHE and mathematical morphology, respectively, and the result is processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is evaluated with a contrast evaluation criterion, the signal-to-noise ratio (SNR), and the contrast improvement index (CII).
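A minimal single-level sketch of the described pipeline is given below using OpenCV, with a Gaussian low-pass band standing in for one pyramid level; the file name, filter sizes, CLAHE settings, and structuring element are illustrative assumptions rather than the paper's parameters.

```python
# One-level sketch: split into low-pass and detail bands, CLAHE on the detail band,
# morphological opening on the low-pass band, then recombine.
import cv2
import numpy as np

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input file

# 1. Low-pass band (stands in for one pyramid level) and signed detail band.
low = cv2.GaussianBlur(img, (0, 0), sigmaX=4)
detail = img.astype(np.int16) - low.astype(np.int16)

# 2. Enhance the detail band with CLAHE; smooth the low-pass band with morphology.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
detail_u8 = cv2.normalize(detail, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
detail_eq = clahe.apply(detail_u8)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
low_open = cv2.morphologyEx(low, cv2.MORPH_OPEN, kernel)

# 3. Recombine the modified bands (saturating 8-bit addition).
enhanced = cv2.addWeighted(low_open, 1.0, detail_eq, 1.0, 0.0)
cv2.imwrite("mammogram_enhanced.png", enhanced)
```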
Estimating normal mixture parameters from the distribution of a reduced feature vector
NASA Technical Reports Server (NTRS)
Guseman, L. F.; Peters, B. C., Jr.; Swasdee, M.
1976-01-01
A FORTRAN computer program was written and tested. The measurements consisted of 1000 randomly chosen vectors representing 1, 2, 3, 7, and 10 subclasses in equal portions. In the first experiment, the vectors are computed from the input means and covariances. In the second experiment, the vectors are 16 channel measurements. The starting covariances were constructed as if there were no correlation between separate passes. The biases obtained from each run are listed.
Evolution of a localized Langmuir packet in the solar wind and on auroral field lines
NASA Technical Reports Server (NTRS)
Roth, I.; Muschietti, L.; Brown, E. F.; Gray, P. C.
1994-01-01
Langmuir emissions in space are reported to be clumpy and intermittent. The high-frequency wave power appears concentrated in spatial packets, whether amidst the solar wind or on auroral field lines. Due to the plasma motion relative to the spacecraft, determining the source of the wave free energy in the three-dimensional electron distribution function has always been difficult, since the unstable features pass by the detector in presumably too short a time to be measured. The range of unstable phase velocities and the growth rates have generally been estimated rather than determined by unequivocal measurements. The analysis of wave-particle interactions in a space environment has recently taken a new turn with the development of wave correlators on board rockets and satellites. Such instruments seek to identify correlations between the phase of the wave field and the fluxes of energetic particles. The data interpretation is complex, however, and must be backed by a detailed theoretical understanding of the wave-particle interaction, including the phase relation for inhomogeneous packets. To this end, Langmuir packets interacting with fast electrons can be studied in the appropriate regime by means of particle-in-cell simulations, provided that one succeeds in reducing the level of the fluctuations, enhancing the signal-to-noise ratio, and incorporating the appropriate boundary conditions. The first results of such simulations are presented here as a test and expansion of previous analysis.
Three-dimensional structure of clumpy outflow from supercritical accretion flow onto black holes
NASA Astrophysics Data System (ADS)
Kobayashi, Hiroshi; Ohsuga, Ken; Takahashi, Hiroyuki R.; Kawashima, Tomohisa; Asahina, Yuta; Takeuchi, Shun; Mineshige, Shin
2018-03-01
We perform global three-dimensional (3D) radiation-hydrodynamic (RHD) simulations of outflow from supercritical accretion flow around a 10 M⊙ black hole. We only solve the outflow part, starting from the axisymmetric 2D simulation data in a nearly steady state but with small perturbations in a sinusoidal form added in the azimuthal direction. The mass accretion rate onto the black hole is ~10^2 L_E/c^2 in the underlying 2D simulation data, and the outflow rate is ~10 L_E/c^2 (with L_E and c being the Eddington luminosity and the speed of light, respectively). We first confirm the emergence of clumpy outflow, which was discovered by the 2D RHD simulations, above the photosphere located at a few hundred Schwarzschild radii (r_S) from the central black hole. As prominent 3D features we find that the clumps have the shape of a torn sheet, rather than a cut string, and that they are rotating around the central black hole with a sub-Keplerian velocity at a distance of ~10^3 r_S from the center. The typical clump size is ~30 r_S or less in the radial direction, and is more elongated in the angular directions, ~hundreds of r_S at most. The sheet separation ranges from 50 to 150 r_S. We expect stochastic time variations when clumps pass across the line of sight of a distant observer. Variation timescales are estimated to be several seconds for a black hole with a mass of ten to several tens of M⊙, in rough agreement with the observations of some ultra-luminous X-ray sources.
Track-monitoring from the dynamic response of an operational train
NASA Astrophysics Data System (ADS)
Lederman, George; Chen, Siheng; Garrett, James; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo
2017-03-01
We explore a data-driven approach for monitoring rail infrastructure from the dynamic response of a train in revenue-service. Presently, track inspection is performed either visually or with dedicated track geometry cars. In this study, we examine a more economical approach where track inspection is performed by analyzing vibration data collected from an operational passenger train. The high frequency with which passenger trains travel each section of track means that faults can be detected sooner than with dedicated inspection vehicles, and the large number of passes over each section of track makes a data-driven approach statistically feasible. We have deployed a test-system on a light-rail vehicle and have been collecting data for the past two years. The collected data underscores two of the main challenges that arise in train-based track monitoring: the speed of the train at a given location varies from pass to pass and the position of the train is not known precisely. In this study, we explore which feature representations of the data best characterize the state of the tracks despite these sources of uncertainty (i.e., in the spatial domain or frequency domain), and we examine how consistently change detection approaches can identify track changes from the data. We show the accuracy of these different representations, or features, and different change detection approaches on two types of track changes, track replacement and tamping (a maintenance procedure to improve track geometry), and two types of data, simulated data and operational data from our test-system. The sensing, signal processing, and data analysis we propose in the study could facilitate safer trains and more cost-efficient maintenance in the future. Moreover, the proposed approach is quite general and could be extended to other parts of the infrastructure, including bridges.
NASA Astrophysics Data System (ADS)
Saijyo, Katsuya; Nishiwaki, Kazuie; Yoshihara, Yoshinobu
The CFD simulations were performed incorporating the low-temperature oxidation reactions. Analyses were made of the first auto-ignition location in the case of premixed-charge compression auto-ignition in a laminar flow field and in the case of auto-ignition in the end gas during an S.I. engine combustion process. In the latter simulation, the spatially filtered transport equations were solved to represent fluctuating temperatures in a turbulent flow, in consideration of the strong non-linearity of the reaction equations with respect to temperature. It is suggested that the first auto-ignition does not always occur at the higher-temperature locations, and that the difference in the locations of the first auto-ignition depends on the time period during which the local end-gas temperature passes through the region of shorter ignition delay, including the NTC region.
Roberts, William L; McKinley, Danette W; Boulet, John R
2010-05-01
Due to the high-stakes nature of medical exams it is prudent for test agencies to critically evaluate test data and control for potential threats to validity. For the typical multiple station performance assessments used in medicine, it may take time for examinees to become comfortable with the test format and administrative protocol. Since each examinee in the rotational sequence starts with a different task (e.g., simulated clinical encounter), those who are administered non-scored pretest material on their first station may have an advantage compared to those who are not. The purpose of this study is to investigate whether pass/fail rates are different across the sequence of pretest encounters administered during the testing day. First-time takers were grouped by the sequential order in which they were administered the pretest encounter. No statistically significant difference in fail rates was found between examinees who started with the pretest encounter and those who encountered the pretest encounter later in the sequence. Results indicate that current examination administration protocols do not present a threat to the validity of test score interpretations.
Tough2{_}MP: A parallel version of TOUGH2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu; Ding, Chris
2003-04-09
TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface (MPI) is adopted for communication among processors. Numerical performance of the current version of the code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations of three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we review the development of TOUGH2_MP and discuss its basic features, modules, and their applications.
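The message-passing pattern such a domain-decomposed simulator relies on can be sketched with mpi4py as below: each rank owns a slab of the grid and exchanges one layer of ghost cells with its neighbours every step. This is a generic illustration, not TOUGH2_MP code, and the array sizes are arbitrary.

```python
# Each MPI rank owns a slab of cells and swaps one ghost layer with its neighbours.
# Run with e.g. `mpiexec -n 4 python halo.py`.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(10, float(rank))   # cells owned by this rank
ghost_lo = np.zeros(1)
ghost_hi = np.zeros(1)
lo = rank - 1 if rank > 0 else MPI.PROC_NULL
hi = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send the upper boundary cell while receiving the lower ghost cell, and vice versa.
comm.Sendrecv(sendbuf=local[-1:], dest=hi, recvbuf=ghost_lo, source=lo)
comm.Sendrecv(sendbuf=local[:1], dest=lo, recvbuf=ghost_hi, source=hi)
print(rank, ghost_lo[0], ghost_hi[0])
```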
NASA Astrophysics Data System (ADS)
Berchem, J.; Marchaudon, A.; Bosqued, J.; Escoubet, C. P.; Dunlop, M.; Owen, C. J.; Reme, H.; Balogh, A.; Carr, C.; Fazakerley, A. N.; Cao, J. B.
2005-12-01
Synoptic measurements from the DOUBLE STAR and CLUSTER spacecraft offer a unique opportunity to evaluate global models in simulating the complex topology and dynamics of the dayside merging region. We compare observations from the DOUBLE STAR TC-1 and CLUSTER spacecraft on May 8, 2004 with the predictions from a three-dimensional magnetohydrodynamic (MHD) simulation that uses plasma and magnetic field parameters measured upstream of the bow shock by the WIND spacecraft. Results from the global simulation are consistent with the large-scale features observed by CLUSTER and TC-1. We discuss topological changes and plasma flows at the dayside magnetospheric boundary inferred from the simulation results. The simulation shows that the DOUBLE STAR spacecraft passed through the dawn side merging region as the IMF rotated. In particular, the simulation indicates that at times TC-1 was very close to the merging region. In addition, we found that the bifurcation of the merging region in the simulation results is consistent with predictions by the antiparallel merging model. However, because of the draping of the magnetosheath field lines over the magnetopause, the positions and shape of the merging region differ significantly from those predicted by the model.
Wing galaxies: A formation mechanism of the clumpy irregular galaxy Markarian 297
NASA Technical Reports Server (NTRS)
Taniguchi, Yoshiaki; Noguchi, Masafumi
1990-01-01
In order to contribute to an understanding of collision-induced starburst activity, the authors present a detailed case study of the starburst galaxy Markarian 297 (= NGC 6052 = Arp 209; hereafter Mrk 297). This galaxy is classified as a clumpy irregular galaxy (CIG) according to its morphological properties (cf. Heidmann, 1987). Two major clumps and many small clumps are observed across the entire region of Mrk 297 (Hecquet, Coupinot, and Maucherat 1987). The overall morphology of Mrk 297 is highly chaotic, and thus it seems difficult to determine possible orbits of a galaxy-galaxy collision. However, the authors serendipitously found a possible orbit during a course of numerical simulations of a radial-penetration collision between galaxies. A radial-penetration collision means that an intruder penetrates a target galaxy radially, passing by its nucleus. This kind of collision is known to explain a formation mechanism of ripples around disk galaxies (Wallin and Struck-Marcell 1988). Here, the authors show that the radial-penetration collision between galaxies successfully explains both the overall morphological and the kinematical properties of Mrk 297. The authors made two kinds of numerical simulations for Mrk 297. One is N-body (1×10^4 particles) simulations in which the effects of the self-gravity of the stellar disk are taken into account. These simulations are used to study the detailed morphological features of Mrk 297. The response of gas clouds is also investigated in order to estimate star formation rates in such collisions. The other is test-particle simulations, which are utilized to obtain a rough picture of Mrk 297 and to analyze its velocity field. The techniques of the numerical simulations are the same as those in Noguchi (1988) and Noguchi and Ishibashi (1986). In the present model, an intruding galaxy with the same mass as the target galaxy moves on a rectilinear orbit which passes through the center of the target.
Research on metallic material defect detection based on bionic sensing of human visual properties
NASA Astrophysics Data System (ADS)
Zhang, Pei Jiang; Cheng, Tao
2018-05-01
The human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them. Exploiting this property, this paper proposes a bionic-sensing visual inspection model, based on the human visual attention mechanism and simulating human visual imaging features, for detecting defects in metallic materials in the mechanical field. First, alongside the biologically salient low-level features, empirically marked defect labels are used as the intermediate features of the simulated visual perception. An SVM is then trained on the high-level features of visual defects in the metallic material. Finally, according to the weights of each component, a defect detection model for metallic materials that simulates human visual characteristics is obtained.
Differential Equations and Computational Simulations
1999-06-18
[Garbled excerpt from the proceedings' table of contents; recoverable entries include "The limit cycle of two species predator-prey model with general functional response" (Junping Chen and Dadi Yang, p. 34), an analysis of a two-species nonlinear competition system with periodic coefficients (p. 286), "Oscillation of first order delay ..." (X. H. Tang and J. S. Yu), and a fragment defining the divergence operator of a vector field in terms of the Levi-Civita connection.]
Single-pass incremental force updates for adaptively restrained molecular dynamics.
Singh, Krishna Kant; Redon, Stephane
2018-03-30
Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force update algorithms to efficiently simulate a system using ARMD. We assessed the different algorithms through speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to perform simulation of a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.
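The single-pass incremental idea can be sketched as follows (a toy Lennard-Jones example, not the LAMMPS implementation): when only a subset of particles has moved, each affected pair's old contribution is subtracted and its new contribution added, so pairs between two restrained particles are never revisited. The positions, pair list, and active set below are invented for illustration.

```python
# Toy Lennard-Jones pair forces with an incremental update over moved particles.
import numpy as np

def pair_force(ri, rj):
    d = ri - rj
    r2 = np.dot(d, d)
    return 24.0 * (2.0 / r2**7 - 1.0 / r2**4) * d   # LJ force on i due to j (eps = sigma = 1)

def incremental_update(forces, old_pos, new_pos, active, pairs):
    for i, j in pairs:
        if i in active or j in active:               # skip pairs of two restrained particles
            df = pair_force(new_pos[i], new_pos[j]) - pair_force(old_pos[i], old_pos[j])
            forces[i] += df
            forces[j] -= df
    return forces

pos0 = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
forces = np.zeros_like(pos0)
pairs = [(0, 1), (1, 2), (0, 2)]
for i, j in pairs:                                   # full force pass on the first step
    f = pair_force(pos0[i], pos0[j])
    forces[i] += f
    forces[j] -= f
pos1 = pos0.copy()
pos1[2, 0] = 2.8                                     # only particle 2 moves this step
forces = incremental_update(forces, pos0, pos1, active={2}, pairs=pairs)
print(forces)
```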
Impaired P600 in neuroleptic naive patients with first-episode schizophrenia.
Papageorgiou, C; Kontaxakis, V P; Havaki-Kontaxaki, B J; Stamouli, S; Vasios, C; Asvestas, P; Matsopoulos, G K; Kontopantelis, E; Rabavilas, A; Uzunoglu, N; Christodoulou, G N
2001-09-17
Deficits of working memory (WM) are recognized as an important pathological feature in schizophrenia. Since the P600 component of event-related potentials has been hypothesized to represent aspects of second-pass parsing processes of information processing and is related to WM, the present study focuses on the P600 elicited during a WM test in drug-naive first-episode schizophrenic patients (FES) compared to healthy controls. We examined 16 drug-naive first-episode schizophrenic patients and 23 healthy controls matched for age and sex. Compared with controls, schizophrenic patients showed reduced P600 amplitude over the left temporoparietal region and increased P600 amplitude over the left occipital region. With regard to latency, the patients exhibited significant prolongation over the right temporoparietal region. The obtained pattern of differences correctly classified 89.20% of patients. Memory performance of patients was also significantly impaired relative to controls. Our results suggest that the second-pass parsing process of information processing, as indexed by the P600 elicited during a WM test, is impaired in FES. Moreover, these findings lend support to the view that auditory WM in schizophrenia involves or affects a circuitry including temporoparietal and occipital brain areas.
Form-To-Expectation Matching Effects on First-Pass Eye Movement Measures During Reading
Farmer, Thomas A.; Yan, Shaorong; Bicknell, Klinton; Tanenhaus, Michael K.
2015-01-01
Recent EEG/MEG studies suggest that when contextual information is highly predictive of some property of a linguistic signal, expectations generated from context can be translated into surprisingly low-level estimates of the physical form-based properties likely to occur in subsequent portions of the unfolding signal. Whether form-based expectations are generated and assessed during natural reading, however, remains unclear. We monitored eye movements while participants read phonologically typical and atypical nouns in noun-predictive contexts (Experiment 1), demonstrating that when a noun is strongly expected, fixation durations on first-pass eye movement measures, including first fixation duration, gaze duration, and go-past times, are shorter for nouns with category typical form-based features. In Experiments 2 and 3, typical and atypical nouns were placed in sentential contexts normed to create expectations of variable strength for a noun. Context and typicality interacted significantly at gaze duration. These results suggest that during reading, form-based expectations that are translated from higher-level category-based expectancies can facilitate the processing of a word in context, and that their effect on lexical processing is graded based on the strength of category expectancy. PMID:25915072
NASA Astrophysics Data System (ADS)
Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Baek, Seong-Min
2013-11-01
The purpose of this study is to present a new quality assurance (QA) method to ensure effective evaluation of the accuracy of respiratory-gated radiotherapy (RGR). This involves quantitatively analyzing the patient's respiratory cycle and respiration-induced tumor motion, as reproduced in our in-house developed respiration-simulating phantom, and performing a subsequent comparative analysis of dose distributions using the gamma-index method. We therefore designed a respiration-simulating phantom capable of reproducing the patient's respiratory cycle and respiration-induced tumor motion and evaluated the accuracy of RGR by estimating gamma pass rates. We applied gamma-index passing criteria with accepted error ranges of 3% (dose difference) and 3 mm (distance to agreement) between the dose distribution calculated by the treatment planning system (TPS) and the actual dose distribution delivered by RGR. The pass rate clearly increased as the chosen gating width was narrowed. When respiration-induced tumor motion was 12 mm or less, pass rates of 85% and above were achieved for the 30-70% respiratory phase, and pass rates of 90% and above were achieved for the 40-60% respiratory phase. However, a respiratory cycle with a very small fluctuation range of pass rates failed to prove reliable in evaluating the accuracy of RGR. Therefore, accurate and reliable outcomes of radiotherapy will be obtainable only by establishing a novel QA system combining the respiration-simulating phantom, gamma-index analysis, and a quantitative analysis of diaphragmatic motion, enabling an indirect measurement of tumor motion.
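For orientation, a simplified one-dimensional version of the gamma-index comparison with 3%/3 mm criteria is sketched below; clinical gamma analysis is performed on 2D/3D dose grids, and the Gaussian profiles used here are synthetic stand-ins for the TPS-calculated and measured distributions.

```python
# Simplified 1D gamma index with global 3% dose / 3 mm distance criteria.
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_tol=0.03, dist_tol=3.0):
    """Return the gamma value at each reference point and the overall pass rate."""
    d_norm = dose_tol * ref_dose.max()
    gammas = np.empty_like(ref_dose)
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dd = (eval_dose - di) / d_norm              # dose-difference term
        dx = (x - xi) / dist_tol                    # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    pass_rate = 100.0 * np.mean(gammas <= 1.0)
    return gammas, pass_rate

x = np.linspace(0, 100, 201)                        # positions in mm
ref = np.exp(-0.5 * ((x - 50) / 10) ** 2)           # reference (TPS) profile
meas = np.exp(-0.5 * ((x - 51) / 10) ** 2) * 1.02   # measured profile, shifted and scaled
print(gamma_1d(ref, meas, x)[1])
```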
Nelissen, Ellen; Ersdal, Hege; Ostergaard, Doris; Mduma, Estomih; Broerse, Jacqueline; Evjen-Olsen, Bjørg; van Roosmalen, Jos; Stekelenburg, Jelle
2014-03-01
To evaluate "Helping Mothers Survive Bleeding After Birth" (HMS BAB) simulation-based training in a low-resource setting. Educational intervention study. Rural referral hospital in Northern Tanzania. Clinicians, nurse-midwives, medical attendants, and ambulance drivers involved in maternity care. In March 2012, health care workers were trained in HMS BAB, a half-day simulation-based training, using a train-the-trainer model. The training focused on basic delivery care, active management of third stage of labor, and treatment of postpartum hemorrhage, including bimanual uterine compression. Evaluation questionnaires provided information on course perception. Knowledge, skills, and confidence of facilitators and learners were tested before and after training. Four master trainers trained eight local facilitators, who subsequently trained 89 learners. After training, all facilitators passed the knowledge test, but pass rates for the skills test were low (29% pass rate for basic delivery and 0% pass rate for management of postpartum hemorrhage). Evaluation revealed that HMS BAB training was considered acceptable and feasible, although more time should be allocated for training, and teaching materials should be translated into the local language. Knowledge, skills, and confidence of learners increased significantly immediately after training. However, overall pass rates for skills tests of learners after training were low (3% pass rate for basic delivery and management of postpartum hemorrhage). The HMS BAB simulation-based training has potential to contribute to education of health care providers. We recommend a full day of training and validation of the facilitators to improve the training. © 2013 Nordic Federation of Societies of Obstetrics and Gynecology.
NASA Astrophysics Data System (ADS)
Zhang, Gaohui; Zhao, Guozhong; Zhang, Shengbo
2012-12-01
The terahertz transmission characteristics of bilayer metallic meshes are studied based on the finite-difference time-domain method. The bilayer well-shaped grid, the array of complementary square metallic pills, and the cross wire-hole array were investigated. The results show that the bilayer well-shaped grid achieves a high-pass filter function, the bilayer array of complementary square metallic pills achieves a low-pass filter function, and the bilayer cross wire-hole array achieves a band-pass filter function. A dielectric medium needs to be deposited between the two metallic microstructures, and the medium thickness influences the terahertz transmission characteristics of the metallic microstructures. Simulation results show that with increasing medium thickness the cut-off frequencies of the high-pass and low-pass filters move to lower frequency, while the bilayer cross wire-hole array possesses two transmission peaks which display a competition effect.
Energy considerations in the Community Atmosphere Model (CAM)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile; ...
2015-06-30
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding-sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core, the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference in the magnitude of the fixers using the correct and incorrect energy definitions is very small.
NASA Astrophysics Data System (ADS)
Cao, Duc; Moses, Gregory; Delettrez, Jacques; Collins, Timothy
2014-10-01
A design process is presented for the nonlocal thermal transport iSNB (implicit Schurtz, Nicolai, and Busquet) model to provide reliable nonlocal thermal transport in polar-drive ICF simulations. Results from the iSNB model are known to be sensitive to changes in the SNB "mean free path" formula, and the latter's original form required modification to obtain realistic preheat levels. In the presented design process, SNB mean free paths are first modified until the model can match temperatures from Goncharov's thermal transport model in 1D temperature relaxation simulations. Afterwards, the same mean free paths are tested in a 1D polar-drive surrogate simulation to match adiabats from Goncharov's model. After passing the two previous steps, the model can then be run in a full 2D polar-drive simulation. This research is supported by the University of Rochester Laboratory for Laser Energetics.
Drummond-Braga, Bernardo; Peleja, Sebastião Berquó; Macedo, Guaracy; Drummond, Carlos Roberto S A; Costa, Pollyana H V; Garcia-Zapata, Marco T; Oliveira, Marcelo Magaldi
2016-12-01
Neurosurgery simulation has gained attention recently due to changes in the medical system. First-year neurosurgical residents in low-income countries usually perform their first craniotomy on a real subject. Development of high-fidelity, cheap, and widely available simulators is a challenge in residency training. An original model for the first steps of craniotomy, with cerebrospinal fluid leak avoidance practice, using a coconut is described. The coconut is a drupe from Cocos nucifera L. (coconut tree). The green coconut has 4 layers, and some similarity can be seen between these layers and the human skull. The materials used in the simulation are the same as those used in the operating room. The coconut is placed on the head holder support with the face up. The burr holes are made until the endocarp is reached. The mesocarp is dissected, and the conductor is passed from one hole to the other with the Gigli saw. The hook handle for the wire saw is positioned, and the mesocarp and endocarp are cut. After sawing the 4 margins, the mesocarp is detached from the endocarp. Four burr holes are made from the endocarp to the endosperm. Careful dissection of the endosperm is done, avoiding liquid albumen leak. The Gigli saw is passed through the trephine holes. Hooks are placed, and the endocarp is cut. After cutting the 4 margins, it is dissected from the endosperm and removed. The main goal of the procedure is to remove the endocarp without fluid leakage. The coconut model for learning the first steps of craniotomy and cerebrospinal fluid leak avoidance has some limitations. It is most realistic when trying to remove the endocarp without damage to the endosperm. It is also cheap and can be widely used in low-income countries. However, the coconut does not have anatomic landmarks, and the mesocarp makes the model less realistic because it has fibers that make the procedure more difficult and different from a real craniotomy. The model has a potential pedagogic neurosurgical application for freshman residents before they perform a real craniotomy for the first time. Further validation is necessary to confirm this hypothesis. Copyright © 2016 Elsevier Inc. All rights reserved.
Latif, Rana K; VanHorne, Edgar M; Kandadai, Sunitha Kanchi; Bautista, Alexander F; Neamtu, Aurel; Wadhwa, Anupama; Carter, Mary B; Ziegler, Craig H; Memon, Mohammed Faisal; Akça, Ozan
2016-01-20
Lung isolation skills, such as correct insertion of a double-lumen endobronchial tube and bronchial blocker, are essential in anesthesia training; however, how to teach novices these skills is underexplored. Our aims were to determine (1) whether novices can be trained to a basic proficiency level in lung isolation skills, (2) whether video-didactic and simulation-based trainings are comparable in teaching basic lung isolation skills, and (3) whether novice learners' lung isolation skills decay over time without practice. First, five board-certified anesthesiologists, each with experience of more than 100 successful lung isolations, were tested on the Human Airway Anatomy Simulator (HAAS) to establish the expert proficiency skill level. Thirty senior medical students, who were naive to bronchoscopy and lung isolation techniques (novices), were randomized to video-didactic or simulation-based training to learn lung isolation skills. Before and after training, the novices' performances were scored for correct placement using pass/fail scoring and a 5-point Global Rating Scale (GRS), and the time of insertion was recorded. Fourteen novices were retested 2 months later to assess skill decay. Experts' and novices' double-lumen endobronchial tube and bronchial blocker passing rates were similar after training (P > 0.99). There were no differences between the video-didactic and simulation-based methods. Novices' time of insertion decayed within 2 months without practice. Novices could be trained to a basic proficiency level in lung isolation skills. The video-didactic and simulation-based methods we utilized were found equally successful in training novices in lung isolation skills. The acquired skills partially decayed without practice.
GEANT4 Tuning For pCT Development
NASA Astrophysics Data System (ADS)
Yevseyeva, Olga; de Assis, Joaquim T.; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, João A. P.; Díaz, Katherin S.; Hormaza, Joel M.; Lopes, Ricardo T.
2011-08-01
Proton beams in medical applications deal with relatively thick targets like the human head or trunk. Thus, the fidelity of proton computed tomography (pCT) simulations as a tool for proton therapy planning depends, in the general case, on the accuracy of the results obtained for proton interaction with thick absorbers. As shown previously, GEANT4 simulations of proton energy spectra after passing through thick absorbers do not agree well with existing experimental data. Moreover, the spectra simulated for the Bethe-Bloch domain showed an unexpected sensitivity to the choice of low-energy electromagnetic models during code execution. These observations were made with GEANT4 version 8.2 during our simulations for pCT. This work describes in more detail the simulations of proton passage through aluminum absorbers of varied thickness. The simulations were done by modifying only the geometry in the Hadrontherapy Example, and for all available choices of the electromagnetic physics models. As the most probable reasons for these effects are some specific feature of the code, or some specific implicit parameters in the GEANT4 manual, we continued our study with version 9.2 of the code. Some improvements in comparison with our previous results were obtained. The simulations were performed considering further applications for pCT development.
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
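The core operation described, convolving a coronagraphic PSF with an astrophysical scene, can be sketched generically with SciPy as below; this is not the crispy API, and the scene contents, PSF, and flux levels are placeholder assumptions (a real run would use time-varying, wavelength-dependent PSFs and then pass the focal plane to the detector model).

```python
# Convolve a placeholder PSF with a toy scene (star + faint planet + smooth background).
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(42)
scene = np.zeros((256, 256))
scene[128, 128] = 1.0                       # residual starlight
scene[128, 160] = 1e-8                      # a faint planet
scene += 1e-10 * rng.random(scene.shape)    # background standing in for dust and field objects

psf = np.ones((15, 15)) / 225.0             # placeholder PSF; a real one is time-varying
focal_plane = fftconvolve(scene, psf, mode="same")
print(focal_plane.shape, focal_plane.max())
```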
Large signal design - Performance and simulation of a 3 W C-band GaAs power MMIC
NASA Astrophysics Data System (ADS)
White, Paul M.; Hendrickson, Mary A.; Chang, Wayne H.; Curtice, Walter R.
1990-04-01
This paper describes a C-band GaAs power MMIC amplifier that achieved a gain of 17 dB and 1 dB compressed CW power output of 34 dBm across a 4.5-6.25-GHz frequency range, without design iteration. The first-pass design success was achieved due to the application of a harmonic balance simulator to define the optimum output load, using a large-signal FET model determined statistically on a well controlled foundry-ready process line. The measured performance was close to that predicted by a full harmonic balance circuit analysis.
Ship electric propulsion simulator based on networking technology
NASA Astrophysics Data System (ADS)
Zheng, Huayao; Huang, Xuewu; Chen, Jutao; Lu, Binquan
2006-11-01
In response to recent shipbuilding trends, a novel electric propulsion simulator (EPS) has been developed at the Marine Simulation Center of SMU. The architecture, software functions, and FCS network technology of the EPS and the integrated power system (IPS) are described. A dedicated physical model was built for the ship's POD propeller. The POD power is supplied from a simulated 6.6 kV medium-voltage main switchboard, and its control can be realized in local or remote mode. Through a LAN, simulated feature information from the EPS is passed to the physical POD model, which reflects the real thruster working status in different sea conditions. The software includes a vessel-propeller mathematical module, the thruster control system, distribution and emergency integrated management, a double closed-loop control system, vessel static-water resistance and dynamics software, and the instructor's main control software. The monitoring and control system is realized by a real-time data collection system and CAN bus technology. During construction, most devices, such as monitor panels and intelligent meters, were developed in the laboratory based on embedded microcomputer systems with CAN interfaces linking them to the network. They have also been used successfully in practice and are expected to be suitable for the future demands of ship digitalization.
NASA Astrophysics Data System (ADS)
Padhy, S.; Furumura, T.; Maeda, T.
2017-12-01
The Okinawa Trough is a young continental back-arc basin located behind the Ryukyu subduction zone in southwestern Japan, where the Philippine Sea Plate dives beneath the trough, resulting in localized mantle upwelling and crustal thinning of the overriding Eurasian Plate. The attenuation structure of the plates and surrounding mantle associated with such a complex tectonic environment is poorly documented in this region. Here we present seismological evidence for these features based on high-resolution waveform analyses and 3D finite difference method (FDM) simulation. We analyzed regional broadband waveforms of in-slab events (M>4, H>100 km) recorded by F-net (NIED). Using band-passed (0.5-8 Hz), mean-squared envelopes, we parameterized coda decay in terms of rise-time (time from the P-arrival to the maximum amplitude in the P-coda), decay-time (time from the maximum amplitude to the theoretical S-arrival), and energy-ratio, defined as the ratio of energy in the P-coda to that in the direct P wave. The following key features are observed. First, there is a striking difference in S-excitation along paths traversing and not traversing the trough: events from SW Japan not crossing the trough show clear S waves, while those occurring in the trough show very weak S waves at a station close to the volcanic front. Second, some trough events exhibit spindle-shaped seismograms with strong P-coda excitation, obscuring the development of S waves, at back-arc stations; these waveforms are characterized by high decay-time (>10 s) and high energy-ratio (>>1.0), suggesting strong forward scattering along the ray paths. Third, some trough events show weak S-excitation characterized by low decay-time (<5 s) and low energy-ratio (<1.0) at fore-arc stations, suggesting high intrinsic absorption. To investigate the mechanism of the observed anomalies, we will conduct FDM simulations for a suite of models comprising the key subduction features, such as the localized mantle upwelling and crustal thinning expected in the region. The simulation results are expected to help resolve rift-induced crust and upper mantle anomalies in the trough, where we observed the maximum waveform distortion in broadband records, and to enhance understanding of tectonic processes related to back-arc rifting in the region.
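The coda parameterization used here (rise-time, decay-time, and the ratio of P-coda to direct-P energy, all measured on band-passed, squared envelopes) can be illustrated with a short sketch. The code below is a minimal stand-in for that processing, not the authors' analysis chain; the band edges, window lengths, and the synthetic trace are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def coda_parameters(trace, fs, t_p, t_s, direct_win=2.0, band=(0.5, 8.0)):
    """Rise-time, decay-time and P-coda/direct-P energy ratio from a
    band-passed, squared trace (illustrative definitions only)."""
    b, a = butter(4, [band[0] / (0.5 * fs), band[1] / (0.5 * fs)], btype="band")
    env = filtfilt(b, a, trace) ** 2          # squared band-passed trace (stand-in for an MS envelope)

    i_p, i_s = int(t_p * fs), int(t_s * fs)   # P and theoretical S arrival indices
    i_direct = min(i_p + int(direct_win * fs), i_s)   # end of assumed direct-P window

    i_max = i_p + np.argmax(env[i_p:i_s])     # envelope maximum in the P coda
    rise_time = (i_max - i_p) / fs            # P arrival -> maximum
    decay_time = (i_s - i_max) / fs           # maximum -> theoretical S arrival
    energy_ratio = env[i_direct:i_s].sum() / env[i_p:i_direct].sum()
    return rise_time, decay_time, energy_ratio

# synthetic example: a direct P pulse followed by a weaker, slowly decaying coda
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
trace = rng.normal(scale=0.01, size=t.size)
trace += np.exp(-0.5 * ((t - 10) / 0.3) ** 2)                       # direct P at 10 s
trace += 0.3 * np.exp(-(t - 12) / 8.0) * (t > 12) * rng.normal(size=t.size)
print(coda_parameters(trace, fs, t_p=10.0, t_s=35.0))
```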
A VLSI implementation of DCT using pass transistor technology
NASA Technical Reports Server (NTRS)
Kamath, S.; Lynn, Douglas; Whitaker, Sterling
1992-01-01
A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in a real-time fashion, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent is achieved after several iterations of simulation and re-sizing.
A simple numerical model for membrane oxygenation of an artificial lung machine
NASA Astrophysics Data System (ADS)
Subraveti, Sai Nikhil; Sai, P. S. T.; Viswanathan Pillai, Vinod Kumar; Patnaik, B. S. V.
2015-11-01
Optimal design of membrane oxygenators will have far-reaching ramifications in the development of artificial heart-lung systems. In the present CFD study, we simulate the gas exchange between the venous blood and the air that passes through the hollow fiber membranes on a benchmark device. The gas exchange between the tube-side fluid and the shell-side venous liquid is modeled by solving the mass and momentum conservation equations. The fiber bundle was modeled as a porous block with a bundle porosity of 0.6. The resistance offered by the fiber bundle was estimated by the standard Ergun correlation. The present numerical simulations are validated against available benchmark data. The effects of bundle porosity, bundle size, Reynolds number, non-Newtonian constitutive relation, upstream velocity distribution, etc., on the pressure drop and oxygen saturation levels are investigated. To emulate the features of gas transfer past the alveoli, the effect of pulsatility on membrane oxygenation is also investigated.
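As a rough sketch of how a fiber bundle's flow resistance can be estimated from the standard Ergun correlation mentioned above, the snippet below evaluates the pressure gradient across a porous block of porosity 0.6. The fiber diameter, blood-like fluid properties, and superficial velocity are assumed values for illustration only.

```python
def ergun_pressure_gradient(u, eps, d, mu, rho):
    """Pressure gradient (Pa/m) across a porous/packed bed via the Ergun correlation."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / (eps ** 3 * d ** 2) * u
    inertial = 1.75 * rho * (1.0 - eps) / (eps ** 3 * d) * u ** 2
    return viscous + inertial

# assumed illustrative values: superficial velocity 1 cm/s, fiber diameter 300 um,
# blood-like viscosity 3.5 mPa.s and density 1060 kg/m^3, bundle porosity 0.6
dp_dx = ergun_pressure_gradient(u=0.01, eps=0.6, d=300e-6, mu=3.5e-3, rho=1060.0)
print(f"Ergun pressure gradient: {dp_dx:.1f} Pa/m")
```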
FDTD simulation of transmittance characteristics of one-dimensional conducting electrodes.
Lee, Kilbock; Song, Seok Ho; Ahn, Jinho
2014-03-24
We investigated transparent conducting electrodes consisting of periodic one-dimensional Ag or Al grids with widths from 25 nm to 5 μm via the finite-difference time-domain method. To retain high transmittance, two grid configurations with opening ratios of 90% and 95% were simulated. Polarization-dependent characteristics of the transmission spectra revealed that the overall transmittance of micron-scale grid electrodes may be estimated by the sum of light power passing through the uncovered area and the light power penetrating the covered metal layer. However, several dominant physical phenomena significantly affect the transmission spectra of the nanoscale grids: Rayleigh anomaly, transmission decay in TE polarized mode, and localized surface plasmon resonance. We conclude that, for applications of transparent electrodes, the critical feature sizes of conducting 1D grids should not be less than the wavelength scale in order to maintain uniform and predictable transmission spectra and low electrical resistivity.
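The area-weighted estimate described for the micron-scale grids can be written down directly; the sketch below assumes a fixed metal-layer transmittance value instead of computing it with FDTD, so the numbers are purely illustrative.

```python
def grid_transmittance(opening_ratio, t_metal):
    """Micron-scale 1D grid: overall transmittance approximated as the open-area
    fraction passing fully plus the covered fraction attenuated by the metal layer."""
    return opening_ratio * 1.0 + (1.0 - opening_ratio) * t_metal

# assumed metal-layer transmittance of 1% for a thin opaque film (illustrative value)
for opening in (0.90, 0.95):
    print(opening, grid_transmittance(opening, t_metal=0.01))
```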
NASA Technical Reports Server (NTRS)
Norbury, John W.; Slaba, Tony C.; Rusek, Adam; Durante, Marco; Reitz, Guenther
2015-01-01
An international collaboration on Galactic Cosmic Ray (GCR) simulation is being formed to make recommendations on how to best simulate the GCR spectrum at ground based accelerators. The external GCR spectrum is significantly modified when it passes through spacecraft shielding and astronauts. One approach for simulating the GCR space radiation environment at ground based accelerators would use the modified spectrum, rather than the external spectrum, in the accelerator beams impinging on biological targets. Two recent workshops have studied such GCR simulation. The first workshop was held at NASA Langley Research Center in October 2014. The second workshop was held at the NASA Space Radiation Investigators' workshop in Galveston, Texas in January 2015. The anticipated outcome of these and other studies may be a report or journal article, written by an international collaboration, making accelerator beam recommendations for GCR simulation. This poster describes the status of GCR simulation at the NASA Space Radiation Laboratory and encourages others to join the collaboration.
Improved first-pass spiral myocardial perfusion imaging with variable density trajectories.
Salerno, Michael; Sica, Christopher; Kramer, Christopher M; Meyer, Craig H
2013-11-01
To develop and evaluate variable-density spiral first-pass perfusion pulse sequences for improved efficiency and off-resonance performance and to demonstrate the utility of an apodizing density compensation function (DCF) to improve signal-to-noise ratio (SNR) and reduce dark-rim artifact caused by cardiac motion and Gibbs Ringing. Three variable density spiral trajectories were designed, simulated, and evaluated in 18 normal subjects, and in eight patients with cardiac pathology on a 1.5T scanner. By using a DCF, which intentionally apodizes the k-space data, the sidelobe amplitude of the theoretical point spread function (PSF) is reduced by 68%, with only a 13% increase in the full-width at half-maximum of the main-lobe when compared with the same data corrected with a conventional variable-density DCF, and has an 8% higher resolution than a uniform density spiral with the same number of interleaves and readout duration. Furthermore, this strategy results in a greater than 60% increase in measured SNR when compared with the same variable-density spiral data corrected with a conventional DCF (P < 0.01). Perfusion defects could be clearly visualized with minimal off-resonance and dark-rim artifacts. Variable-density spiral pulse sequences using an apodized DCF produce high-quality first-pass perfusion images with minimal dark-rim and off-resonance artifacts, high SNR and contrast-to-noise ratio, and good delineation of resting perfusion abnormalities. Copyright © 2012 Wiley Periodicals, Inc.
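The effect of an apodizing density compensation function on the point spread function can be illustrated in one dimension: weighting the k-space samples with a smooth window lowers the PSF sidelobes (less Gibbs ringing) at the cost of a slightly wider main lobe. This is a generic 1D sketch, not the spiral trajectory or DCF designed in the paper; the Hann window and the sampling are assumptions.

```python
import numpy as np

n = 256                                    # k-space samples along one dimension
uniform_dcf = np.ones(n)                   # conventional (flat) density compensation
apodized_dcf = np.hanning(n)               # apodizing DCF (assumed Hann window)

def relative_sidelobe(weights):
    """Largest PSF sidelobe relative to the main-lobe peak for given k-space weights."""
    psf = np.abs(np.fft.fftshift(np.fft.ifft(weights, 8 * n)))
    center = psf.argmax()
    right = psf[center:]
    first_min = np.argmax(np.diff(right) > 0)   # first local minimum past the main lobe
    return right[first_min:].max() / psf[center]

for name, w in [("uniform", uniform_dcf), ("apodized", apodized_dcf)]:
    print(f"{name:9s} relative sidelobe level: {relative_sidelobe(w):.3f}")
```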
NASA Astrophysics Data System (ADS)
Pei, Youbin; Xiang, Nong; Shen, Wei; Hu, Youjun; Todo, Y.; Zhou, Deng; Huang, Juan
2018-05-01
Kinetic-MagnetoHydroDynamic (MHD) hybrid simulations are carried out to study fast ion driven toroidal Alfvén eigenmodes (TAEs) on the Experimental Advanced Superconducting Tokamak (EAST). The first part of this article presents the linear benchmark between two kinetic-MHD codes, namely MEGA and M3D-K, based on a realistic EAST equilibrium. Parameter scans show that the frequency and the growth rate of the TAE given by the two codes agree with each other. The second part of this article discusses the resonance interaction between the TAE and fast ions simulated by the MEGA code. The results show that the TAE exchanges energy with the co-current passing particles with the parallel velocity |v∥| ≈ V_A0/3 or |v∥| ≈ V_A0/5, where V_A0 is the Alfvén speed on the magnetic axis. The TAE destabilized by the counter-current passing ions is also analyzed and found to have a much smaller growth rate than the co-current ion driven TAE. One of the reasons for this is found to be that the overlapping region of the TAE spatial location and the counter-current ion orbits is narrow, and thus the wave-particle energy exchange is not efficient.
Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.
Hamilton, Sarah J; Mueller, J L; Alsaker, M
2017-02-01
Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise.
HyperPASS, a New Aeroassist Tool
NASA Technical Reports Server (NTRS)
Gates, Kristin; McRonald, Angus; Nock, Kerry
2005-01-01
A new software tool designed to perform aeroassist studies has been developed by Global Aerospace Corporation (GAC). The Hypersonic Planetary Aeroassist Simulation System (HyperPASS) [1] enables users to perform guided aerocapture, guided ballute aerocapture, aerobraking, orbit decay, or unguided entry simulations at any of six target bodies (Venus, Earth, Mars, Jupiter, Titan, or Neptune). HyperPASS is currently being used for trade studies to investigate (1) aerocapture performance with alternate aeroshell types, varying flight path angle and entry velocity, different g-load and heating limits, and angle of attack and angle of bank variations; (2) variable, attached ballute geometry; (3) railgun-launched projectile trajectories; and (4) preliminary orbit decay evolution. After completing a simulation, there are numerous visualization options with which data can be plotted, saved, or exported to various formats. Several analysis examples will be described.
NASA Technical Reports Server (NTRS)
Hermance, J. F. (Principal Investigator)
1982-01-01
The two stages of analysis of MAGSAT magnetic data which are designed to evaluate electromagnetic induction effects are described. The first stage consists of comparison of data from contiguous orbit passes over large scale geologic boundaries, such as ocean-land interfaces, at several levels of magnetic disturbance. The purpose of these comparisons is to separate induction effects from effects of lithospheric magnetization. The procedure for reducing the data includes: (1) identifying and subtracting quiet time effects; (2) modelling and subtracting first order ring current effects; and (3) projecting an orbit track onto a map as a nearly straight line so it can serve as an axis on which to plot the corresponding orbit pass data in the context of geography. The second stage consists of comparison of MAGSAT data with standard hourly observatory data. The purpose is to constrain the time evolution of ionospheric and magnetospheric current systems. Qualitative features of the ground based dataset are discussed. Methods for reducing the ground based data are described.
First-principles molecular dynamics simulation study on electrolytes for use in redox flow battery
NASA Astrophysics Data System (ADS)
Choe, Yoong-Kee; Tsuchida, Eiji; Tokuda, Kazuya; Ootsuka, Jun; Saito, Yoshihiro; Masuno, Atsunobu; Inoue, Hiroyuki
2017-11-01
Results of first-principles molecular dynamics simulations carried out to investigate structural aspects of electrolytes for use in a redox flow battery are reported. The electrolytes studied here are aqueous sulfuric acid solutions, whose properties are of importance for dissolving redox couples in a redox flow battery. The simulation results indicate that structural features of the acid solutions depend on the concentration of sulfuric acid. Such dependency arises from the increase of proton dissociation from sulfuric acid.
Feature-oriented regional modeling and simulations in the Gulf of Maine and Georges Bank
NASA Astrophysics Data System (ADS)
Gangopadhyay, Avijit; Robinson, Allan R.; Haley, Patrick J.; Leslie, Wayne G.; Lozano, Carlos J.; Bisagni, James J.; Yu, Zhitao
2003-03-01
The multiscale synoptic circulation system in the Gulf of Maine and Georges Bank (GOMGB) region is presented using a feature-oriented approach. Prevalent synoptic circulation structures, or 'features', are identified from previous observational studies. These features include the buoyancy-driven Maine Coastal Current, the Georges Bank anticyclonic frontal circulation system, the basin-scale cyclonic gyres (Jordan, Georges and Wilkinson), the deep inflow through the Northeast Channel (NEC), the shallow outflow via the Great South Channel (GSC), and the shelf-slope front (SSF). Their synoptic water-mass ( T- S) structures are characterized and parameterized in a generalized formulation to develop temperature-salinity feature models. A synoptic initialization scheme for feature-oriented regional modeling and simulation (FORMS) of the circulation in the coastal-to-deep region of the GOMGB system is then developed. First, the temperature and salinity feature-model profiles are placed on a regional circulation template and then objectively analyzed with appropriate background climatology in the coastal region. Furthermore, these fields are melded with adjacent deep-ocean regional circulation (Gulf Stream Meander and Ring region) along and across the SSF. These initialization fields are then used for dynamical simulations via the primitive equation model. Simulation results are analyzed to calibrate the multiparameter feature-oriented modeling system. Experimental short-term synoptic simulations are presented for multiple resolutions in different regions with and without atmospheric forcing. The presented 'generic and portable' methodology demonstrates the potential of applying similar FORMS in many other regions of the Global Coastal Ocean.
A Compact Band-Pass Filter with High Selectivity and Second Harmonic Suppression
Hadarig, Ramona Cosmina; de Cos Gomez, Maria Elena; Las-Heras, Fernando
2013-01-01
The design of a novel band-pass filter with narrow-band features based on an electromagnetic resonator at 6.4 GHz is presented. A prototype is manufactured and characterized in terms of transmission and reflection coefficients. The selective passband and suppression of the second harmonic make the filter suitable for use in the C-band frequency range for radar systems and satellite/terrestrial applications. To avoid substantial interference in these kinds of applications, passive components with narrow-band features and small dimensions are required. Between 3.6 GHz and 4.2 GHz the band-pass filter with harmonic suppression should have an attenuation of at least 35 dB, whereas for the passband, less than 10% is sufficient. PMID:28788412
NASA Technical Reports Server (NTRS)
Pearson, Richard (Inventor); Lynch, Dana H. (Inventor); Gunter, William D. (Inventor)
1995-01-01
A method and apparatus for passing light bundles through a multiple pass sampling cell is disclosed. The multiple pass sampling cell includes a sampling chamber having first and second ends positioned along a longitudinal axis of the sampling cell. The sampling cell further includes an entrance opening, located adjacent the first end of the sampling cell at a first azimuthal angular position. The entrance opening permits a light bundle to pass into the sampling cell. The sampling cell also includes an exit opening at a second azimuthal angular position. The exit opening permits a light bundle to pass out of the sampling cell after the light bundle has followed a predetermined path.
Muon Simulation at the Daya Bay SIte
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mengyun, Guan; Jun, Cao; Changgen, Yang
2006-05-23
With a fairly high-resolution mountain profile, we simulated the underground muon background at the Daya Bay site. To obtain the sea-level muon flux parameterization, a modification of the standard Gaisser formula was introduced according to the world muon data. The MUSIC code was used to transport muons through the mountain rock. To carry out the simulation, we first generate a statistical sample of sea-level muon events according to the sea-level muon flux distribution formula; then we calculate the slant depth of muons passing through the mountain using an interpolation method based on the digitized data of the mountain; finally we transport muons through the rock to obtain the underground muon sample, from which we derive the muon flux, mean energy, energy distribution, and angular distribution.
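The flux parameterization above builds on the standard Gaisser formula; for reference, that unmodified formula can be evaluated as in the short sketch below. The modification adopted for the Daya Bay work is not reproduced here, and the energies and zenith angle are arbitrary example values.

```python
def gaisser_flux(e_gev, cos_theta):
    """Standard Gaisser sea-level muon flux, dN/(dE dOmega), in
    cm^-2 s^-1 sr^-1 GeV^-1 (valid for E of several GeV and above, small zenith angles)."""
    pion_term = 1.0 / (1.0 + 1.1 * e_gev * cos_theta / 115.0)    # pion parent contribution
    kaon_term = 0.054 / (1.0 + 1.1 * e_gev * cos_theta / 850.0)  # kaon parent contribution
    return 0.14 * e_gev ** -2.7 * (pion_term + kaon_term)

for e in (10.0, 100.0, 1000.0):                                  # GeV, vertical muons
    print(e, gaisser_flux(e, cos_theta=1.0))
```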
Auroral displays near the 'foot' of the field line of the ATS-5 satellite
NASA Technical Reports Server (NTRS)
Akasofu, S.-I.; Deforest, S.; Mcilwain, C.
1974-01-01
Summary of an extensive correlative study of ATS-5 particle and magnetic field data with all-sky photographs from Great Whale River which is near the 'foot' of the field lines passing through the ATS-5 satellite. In particular, an effort is made to identify specific particle features with specific auroral displays during substorms, such as a westward traveling surge, poleward expansive motion, and drifting patches. It is found that, in early evening hours, the first encounter of ATS-5 with hot plasma is associated with the equatorward shift of the diffuse aurora, but not necessarily with westward traveling surges (even when the satellite is embedded in the plasma sheet). In the midnight sector, an injection corresponds very well to the initial brightening of an auroral arc. Specific features of morning sector auroras are difficult to correlate with specific particle features.
Adaptive pattern recognition by mini-max neural networks as a part of an intelligent processor
NASA Technical Reports Server (NTRS)
Szu, Harold H.
1990-01-01
In this decade and progressing into the 21st century, NASA will have missions including the Space Station and Earth-related planetary sciences. To support these missions, a high degree of sophistication in machine automation and an increasing data processing throughput rate are necessary. Meeting these challenges requires intelligent machines, designed to support the necessary automation in remote and hazardous space environments. There are two approaches to designing these intelligent machines. One of these is the knowledge-based expert system approach, namely AI. The other is a non-rule approach based on parallel and distributed computing for adaptive fault-tolerance, namely Neural or Natural Intelligence (NI). The union of AI and NI is the solution to the problem stated above. The NI segment of this unit extracts features automatically by applying Cauchy simulated annealing to a mini-max cost energy function. The features discovered by NI can then be passed to the AI system for further processing, and vice versa. This passing increases reliability, for AI can follow the NI-formulated algorithm exactly, and can provide the context knowledge base as the constraints of neurocomputing. The mini-max cost function that solves for the unknown features can furthermore give us a top-down architectural design of neural networks by means of a Taylor series expansion of the cost function. A typical mini-max cost function consists of the sample variance of each class in the numerator, and the separation of the class centers in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
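As a minimal numerical sketch of the kind of mini-max cost described here, the snippet below puts the within-class sample variances in the numerator and the separation of the class centers in the denominator. The two toy Gaussian classes and the exact ratio form are assumptions for illustration, not the paper's energy function.

```python
import numpy as np

def minimax_cost(class_a, class_b):
    """Intraclass scatter divided by interclass center separation: minimizing it
    clusters each class tightly while pushing the class centers apart."""
    var_a = class_a.var(axis=0).sum()
    var_b = class_b.var(axis=0).sum()
    separation = np.sum((class_a.mean(axis=0) - class_b.mean(axis=0)) ** 2)
    return (var_a + var_b) / separation

rng = np.random.default_rng(1)
a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))   # toy class A features
b = rng.normal(loc=[3.0, 1.0], scale=0.5, size=(50, 2))   # toy class B features
print(f"mini-max cost: {minimax_cost(a, b):.3f}")
```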
NASA Astrophysics Data System (ADS)
Huang, Haiping
2017-05-01
Revealing hidden features in unlabeled data is called unsupervised feature learning, which plays an important role in pretraining a deep neural network. Here we provide a statistical mechanics analysis of the unsupervised learning in a restricted Boltzmann machine with binary synapses. A message passing equation to infer the hidden feature is derived, and furthermore, variants of this equation are analyzed. A statistical analysis by replica theory describes the thermodynamic properties of the model. Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine. Continuous phase transition is also confirmed depending on the embedded feature strength in the data. The mean-field result under the replica symmetric assumption agrees with that obtained by running message passing algorithms on single instances of finite sizes. Interestingly, in an approximate Hopfield model, the entropy crisis is absent, and a continuous phase transition is observed instead. We also develop an iterative equation to infer the hyper-parameter (temperature) hidden in the data, which in physics corresponds to iteratively imposing Nishimori condition. Our study provides insights towards understanding the thermodynamic properties of the restricted Boltzmann machine learning, and moreover important theoretical basis to build simplified deep networks.
A loop-based neural architecture for structured behavior encoding and decoding.
Gisiger, Thomas; Boukadoum, Mounir
2018-02-01
We present a new type of artificial neural network that generalizes on anatomical and dynamical aspects of the mammal brain. Its main novelty lies in its topological structure which is built as an array of interacting elementary motifs shaped like loops. These loops come in various types and can implement functions such as gating, inhibitory or executive control, or encoding of task elements to name a few. Each loop features two sets of neurons and a control region, linked together by non-recurrent projections. The two neural sets do the bulk of the loop's computations while the control unit specifies the timing and the conditions under which the computations implemented by the loop are to be performed. By functionally linking many such loops together, a neural network is obtained that may perform complex cognitive computations. To demonstrate the potential offered by such a system, we present two neural network simulations. The first illustrates the structure and dynamics of a single loop implementing a simple gating mechanism. The second simulation shows how connecting four loops in series can produce neural activity patterns that are sufficient to pass a simplified delayed-response task. We also show that this network reproduces electrophysiological measurements gathered in various regions of the brain of monkeys performing similar tasks. We also demonstrate connections between this type of neural network and recurrent or long short-term memory network models, and suggest ways to generalize them for future artificial intelligence research. Copyright © 2017 Elsevier Ltd. All rights reserved.
Studies of particle wake potentials in plasmas
NASA Astrophysics Data System (ADS)
Ellis, Ian N.; Graziani, Frank R.; Glosli, James N.; Strozzi, David J.; Surh, Michael P.; Richards, David F.; Decyk, Viktor K.; Mori, Warren B.
2011-09-01
A detailed understanding of electron stopping and scattering in plasmas with variable values for the number of particles within a Debye sphere is still not at hand. Presently, there is some disagreement in the literature concerning the proper description of these processes. Theoretical models assume electrostatic (Coulomb force) interactions between particles and neglect magnetic effects. Developing and validating proper descriptions requires studying the processes using first-principle plasma simulations. We are using the particle-particle particle-mesh (PPPM) code ddcMD and the particle-in-cell (PIC) code BEPS to perform these simulations. As a starting point in our study, we examine the wake of a particle passing through a plasma in 3D electrostatic simulations performed with ddcMD and BEPS. In this paper, we compare the wakes observed in these simulations with each other and predictions from collisionless kinetic theory. The relevance of the work to Fast Ignition is discussed.
NASA Astrophysics Data System (ADS)
Kang, Yongjoon; Park, Gitae; Jeong, Seonghoon; Lee, Changhee
2018-01-01
A large fraction of reheated weld metal is formed during multi-pass welding, which significantly affects the mechanical properties (especially toughness) of welded structures. In this study, the low-temperature toughness of the simulated reheated zone in multi-pass weld metal was evaluated and compared to that of the as-deposited zone using microstructural analyses. Two kinds of high-strength steel welds with different hardenabilities were produced by single-pass, bead-in-groove welding, and both welds were thermally cycled to peak temperatures above Ac3 using a Gleeble simulator. When the weld metals were reheated, their toughness deteriorated in response to the increase in the fraction of detrimental microstructural components, i.e., grain boundary ferrite and coalesced bainite in the weld metals with low and high hardenabilities, respectively. In addition, toughness deterioration occurred in conjunction with an increase in the effective grain size, which was attributed to the decrease in nucleation probability of acicular ferrite; the main cause for this decrease changed depending on the hardenability of the weld metal.
Computational simulation of weld microstructure and distortion by considering process mechanics
NASA Astrophysics Data System (ADS)
Mochizuki, M.; Mikami, Y.; Okano, S.; Itoh, S.
2009-05-01
Highly precise fabrication of welded materials is in great demand, and so microstructure and distortion controls are essential. Furthermore, consideration of process mechanics is important for intelligent fabrication. In this study, the microstructure and hardness distribution in multi-pass weld metal are evaluated by computational simulations under the conditions of multiple heat cycles and phase transformation. Because conventional CCT diagrams of weld metal are not available even for single-pass weld metal, new diagrams for multi-pass weld metals are created. The weld microstructure and hardness distribution are precisely predicted when using the created CCT diagram for multi-pass weld metal and calculating the weld thermal cycle. Weld distortion is also investigated by using numerical simulation with a thermal elastic-plastic analysis. In conventional evaluations of weld distortion, the average heat input has been used as the dominant parameter; however, it is difficult to consider the effect of molten pool configurations on weld distortion based only on the heat input. Thus, the effect of welding process conditions on weld distortion is studied by considering molten pool configurations, determined by temperature distribution and history.
Satellite observations of mesoscale features in lower Cook Inlet and Shelikof Strait, Gulf of Alaska
NASA Technical Reports Server (NTRS)
Schumacher, James D.; Barber, Willard E.; Holt, Benjamin; Liu, Antony K.
1991-01-01
The Seasat satellite launched in Summer 1978 carried a synthetic aperture radar (SAR). Although Seasat failed after 105 days in orbit, it provided observations that demonstrate the potential to examine and monitor upper oceanic processes. Seasat made five passes over lower Cook Inlet and Shelikof Strait, Alaska, during Summer 1978. SAR images from the passes show oceanographic features, including a meander in a front, a pair of mesoscale eddies, and internal waves. These features are compared with contemporary and representative images from a satellite-borne Advanced Very High Resolution Radiometer (AVHRR) and Coastal Zone Color Scanner (CZCS), with water property data, and with current observations from moored instruments. The results indicate that SAR data can be used to monitor mesoscale oceanographic features.
Characterization and Simulation of Transient Vibrations Using Band Limited Temporal Moments
Smallwood, David O.
1994-01-01
A method is described to characterize shocks (transient time histories) in terms of the Fourier energy spectrum and the temporal moments of the shock passed through a contiguous set of band-pass filters. The product model is then used to generate realizations of a random process as simulations that, in the mean, will have the same energy and moments as the characterization of the transient event.
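A band-limited temporal moment, taken here as a time-weighted integral of the squared, band-pass-filtered time history, can be computed in a few lines. This is a generic sketch under assumed definitions; the filter order, band edges, and the synthetic transient are illustrative choices, not those of the reference.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_limited_moments(x, fs, band, orders=(0, 1, 2)):
    """Temporal moments m_i = sum(t**i * y(t)**2) * dt of the time history x
    after passing it through a Butterworth band-pass filter (illustrative)."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    t = np.arange(x.size) / fs
    dt = 1.0 / fs
    return {i: np.sum(t ** i * y ** 2) * dt for i in orders}

fs = 10000.0
t = np.arange(0, 0.5, 1 / fs)
shock = np.exp(-40 * t) * np.sin(2 * np.pi * 400 * t)   # synthetic decaying transient
print(band_limited_moments(shock, fs, band=(200.0, 800.0)))
```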
Zhang, Z; Liu, X J; Liu, Y Z; Lu, P; Crawley, J C; Lahiri, A
1990-08-01
A new technique has been developed for measuring right ventricular function by nonimaging first pass ventriculography. The right ventricular ejection fraction (RVEF) obtained by non-imaging first pass ventriculography was compared with that obtained by gamma camera first pass and equilibrium ventriculography. The data has demonstrated that the correlation of RVEFs obtained by the nonimaging nuclear cardiac probe and by gamma camera first pass ventriculography in 15 subjects was comparable (r = 0.93). There was also a good correlation between RVEFs obtained by the nonimaging nuclear probe and by equilibrium gated blood pool studies in 33 subjects (r = 0.89). RVEF was significantly reduced in 15 patients with right ventricular and/or inferior myocardial infarction compared to normal subjects (28 +/- 9% v. 45 +/- 9%). The data suggests that nonimaging probes may be used for assessing right ventricular function accurately.
Binary classification of items of interest in a repeatable process
Abell, Jeffrey A; Spicer, John Patrick; Wincek, Michael Anthony; Wang, Hui; Chakraborty, Debejyo
2015-01-06
A system includes host and learning machines. Each machine has a processor in electrical communication with at least one sensor. Instructions for predicting a binary quality status of an item of interest during a repeatable process are recorded in memory. The binary quality status includes passing and failing binary classes. The learning machine receives signals from the at least one sensor and identifies candidate features. Features are extracted from the candidate features, each more predictive of the binary quality status. The extracted features are mapped to a dimensional space having a number of dimensions proportional to the number of extracted features. The dimensional space includes most of the passing class and excludes at least 90 percent of the failing class. Received signals are compared to the boundaries of the recorded dimensional space to predict, in real time, the binary quality status of a subsequent item of interest.
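The mapping described (a region in feature space that encloses most of the passing class while excluding at least 90 percent of the failing class) can be mocked up as a simple axis-aligned bounding box. This is only a schematic reading of the patent text; the synthetic data and the percentile-based box are assumptions.

```python
import numpy as np

def fit_pass_region(passing, low_pct=1.0, high_pct=99.0):
    """Axis-aligned box covering most of the passing class (per-feature percentiles)."""
    return np.percentile(passing, low_pct, axis=0), np.percentile(passing, high_pct, axis=0)

def predict_pass(x, box):
    lo, hi = box
    return np.all((x >= lo) & (x <= hi), axis=1)   # inside the box -> predicted pass

rng = np.random.default_rng(2)
passing = rng.normal(0.0, 1.0, size=(500, 3))      # synthetic extracted features, passing items
failing = rng.normal(4.0, 1.5, size=(100, 3))      # synthetic extracted features, failing items

box = fit_pass_region(passing)
excluded = 1.0 - predict_pass(failing, box).mean()
print(f"failing samples excluded by the pass region: {excluded:.0%}")
```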
Parallel multiscale simulations of a brain aneurysm
Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em
2012-01-01
Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work. PMID:23734066
NASA Astrophysics Data System (ADS)
Stanke, J.; Trauth, D.; Feuerhack, A.; Klocke, F.
2017-09-01
Die roll is a morphological feature of fine blanked sheared edges. The die roll reduces the functional part of the sheared edge. To compensate for the die roll, thicker sheet metal strips and secondary machining must be used. In order to avoid this, the influence of various fine blanking process parameters on the die roll has been studied experimentally and numerically, but there is still a lack of knowledge on the effects of some factors, and especially of factor interactions, on the die roll. Recent advances in the field of artificial intelligence motivate the hybrid use of the finite element method and artificial neural networks to account for these non-considered parameters. Therefore, a set of simulations using a validated finite element model of fine blanking is first used to train an artificial neural network. Then the artificial neural network is trained with thousands of experimental trials. Thus, the objective of this contribution is to develop an artificial neural network that reliably predicts the die roll. To this end, the setup of a fully parameterized 2D FE model is presented that will be used for batch training of an artificial neural network. The FE model enables an automatic variation of the edge radii of the blank punch and die plate, the counter and blank holder forces, the sheet metal thickness and part diameter, the V-ring height and position, the cutting velocity, as well as material parameters covered by the Hensel-Spittel model for 16MnCr5 (1.7131, AISI/SAE 5115). The FE model is validated using experimental trials. The result of this contribution is an FE model suitable for performing 9,623 simulations and for passing the simulated die roll width and height automatically to an artificial neural network.
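A stripped-down version of the FE-to-network hand-off described above might look like the following: process parameters in, die roll height out, using a small feed-forward regressor. The synthetic relationship, parameter ranges, and the scikit-learn model are assumptions for illustration; they are not the contribution's actual network or training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 2000
# assumed process parameters: punch edge radius, die edge radius, counter force,
# blank holder force, sheet thickness (units arbitrary in this toy example)
X = rng.uniform([0.01, 0.01, 10.0, 10.0, 2.0],
                [0.20, 0.20, 80.0, 80.0, 8.0], size=(n, 5))
# toy "simulated" die roll height standing in for batched FE results
y = (0.05 * X[:, 0] + 0.08 * X[:, 1] + 0.002 * X[:, 4] ** 2
     - 0.0005 * X[:, 2] + rng.normal(scale=0.002, size=n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X[:1600], y[:1600])
print("hold-out R^2:", round(model.score(X[1600:], y[1600:]), 3))
```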
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong
2018-04-01
The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validations. The emphasis is on testing the models' ability to simulate subtle differences observed at different radar sites when the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also revealed common deficiencies in the CRM simulations, which underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validations with multiple radars and models also enable quantitative comparisons in CRM sensitivity studies using different large-scale forcing, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests have better correlations than radar/model comparisons, indicating robustness of model performance in this aspect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.
Biofilm formation and control in a simulated spacecraft water system - Two-year results
NASA Technical Reports Server (NTRS)
Schultz, John R.; Taylor, Robert D.; Flanagan, David T.; Carr, Sandra E.; Bruce, Rebekah J.; Svoboda, Judy V.; Huls, M. H.; Sauer, Richard L.; Pierson, Duane L.
1991-01-01
The ability of iodine to maintain microbial water quality in a simulated spacecraft water system is being studied. An iodine level of about 2.0 mg/L is maintained by passing ultrapure influent water through an iodinated ion exchange resin. Six liters are withdrawn daily and the chemical and microbial quality of the water is monitored regularly. Stainless steel coupons used to monitor biofilm formation are being analyzed by culture methods, epifluorescence microscopy, and scanning electron microscopy. Results from the first two years of operation show a single episode of high bacterial colony counts in the iodinated system. This growth was apparently controlled by replacing the iodinated ion exchange resin. Scanning electron microscopy indicates that the iodine has limited but not completely eliminated the formation of biofilm during the first two years of operation. Significant microbial contamination has been present continuously in a parallel noniodinated system since the third week of operation.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshikawa, M., E-mail: yosikawa@prc.tsukuba.ac.jp; Nagasu, K.; Shimamura, Y.
2014-11-15
A multi-pass Thomson scattering (TS) has the advantage of enhancing scattered signals. We constructed a multi-pass TS system for a polarisation-based system and an image relaying system modelled on the GAMMA 10 TS system. We undertook Raman scattering experiments both for the multi-pass setting and for checking the optical components. Moreover, we applied the system to the electron temperature measurements in the GAMMA 10 plasma for the first time. The integrated scattering signal was magnified by approximately three times by using the multi-pass TS system with four passes. The electron temperature measurement accuracy is improved by using this multi-pass system.
Taira, Breena R; Orue, Aristides; Stapleton, Edward; Lovato, Luis; Vangala, Sitaram; Tinoco, Lucia Solorzano; Morales, Orlando
2016-01-01
Project Strengthening Emergency Medicine, Investing in Learners in Latin America (SEMILLA) created a novel, language- and resource-appropriate course on the resuscitation of cardiac arrest for Nicaraguan resident physicians. We hypothesized that participation in the Project SEMILLA resuscitation program would significantly improve the physicians' management of simulated code scenarios. Thirteen Nicaraguan resident physicians were evaluated while managing simulated cardiac arrest scenarios before, immediately after, and at 6 months after participating in the Project SEMILLA resuscitation program. This project was completed in 2014 in Leon, Nicaragua. The Cardiac Arrest Simulation Test (CASTest), a validated scoring system, was used to evaluate performance on a standardized simulated cardiac arrest scenario. Mixed effect logistic regression models were constructed to assess outcomes. On the pre-course simulation exam, only 7.7% of subjects passed the test. Immediately post-course, the subjects achieved a 30.8% pass rate, and at 6 months after the course, the pass rate was 46.2%. Compared with pre-test scores, the odds of passing the CASTest at 6 months after the course were 21.7 times higher (95% CI 4.2 to 112.8, P<0.001). Statistically significant improvement was also seen in the number of critical items completed (OR=3.75, 95% CI 2.71-5.19), total items completed (OR=4.55, 95% CI 3.4-6.11), and the number of "excellent" scores on a Likert scale (OR=2.66, 95% CI 1.85-3.81). Nicaraguan resident physicians demonstrated improved ability to manage simulated cardiac arrest scenarios after participation in the Project SEMILLA resuscitation course and retained these skills.
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.
Hines, Michael; Kumar, Sameer; Schürmann, Felix
2011-01-01
For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method, either implemented via non-blocking MPI_Isend or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was the one using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods were the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase Multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores; these two had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximizing load balance requires that the distribution of cells on processors should not reflect the neural net architecture; instead, cells should be randomly distributed so that sets of cells which burst fire together are on different processors, with their targets on as large a set of processors as possible.
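For readers unfamiliar with the baseline being compared against, an MPI_Allgather spike exchange amounts to every rank sharing the (cell id, spike time) pairs it generated in the last interval with every other rank. The sketch below uses mpi4py's object-level allgather as a minimal stand-in; it is not the Blue Gene/P implementation benchmarked here, and the file name in the comment is hypothetical.

```python
# run with, e.g.: mpiexec -n 4 python spike_allgather.py   (assumed file name)
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# pretend this rank's cells fired a random number of spikes during the last interval
random.seed(rank)
local_spikes = [(rank * 1000 + gid, round(random.uniform(0.0, 1.0), 3))
                for gid in range(random.randint(0, 5))]

# collective exchange: every rank receives every rank's spike list
all_spikes = comm.allgather(local_spikes)
flat = [spike for per_rank in all_spikes for spike in per_rank]

if rank == 0:
    print(f"{len(flat)} spikes exchanged among {size} ranks")
```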
Assessment and Calibration of a Crimp Tool Equipped with Ultrasonic Analysis Features
NASA Technical Reports Server (NTRS)
Yost, William T. (Inventor); Perey, Daniel F. (Inventor); Cramer, K. Elliott (Inventor)
2013-01-01
A method is provided for calibrating ultrasonic signals passed through a crimp formed with respect to a deformable body via an ultrasonically-equipped crimp tool (UECT). The UECT verifies a crimp quality using the ultrasonic signals. The method includes forming the crimp, transmitting a first signal, e.g., a pulse, to a first transducer of the UECT, and converting the first signal, using the first transducer, into a second signal which defines an ultrasonic pulse. This pulse is transmitted through the UECT into the crimp. A second transducer converts the second signal into a third signal, which may be further conditioned, and the ultrasonic signals are calibrated using the third signal or its conditioned variant. An apparatus for calibrating the ultrasonic signals includes a pulse module (PM) electrically connected to the first and second transducers, and an oscilloscope or display electrically connected to the PM for analyzing an electrical output signal therefrom.
Eye movement assessment of selective attentional capture by emotional pictures.
Nummenmaa, Lauri; Hyönä, Jukka; Calvo, Manuel G
2006-05-01
The eye-tracking method was used to assess attentional orienting to and engagement on emotional visual scenes. In Experiment 1, unpleasant, neutral, or pleasant target pictures were presented simultaneously with neutral control pictures in peripheral vision under instruction to compare pleasantness of the pictures. The probability of first fixating an emotional picture, and the frequency of subsequent fixations, were greater than those for neutral pictures. In Experiment 2, participants were instructed to avoid looking at the emotional pictures, but these were still more likely to be fixated first and gazed longer during the first-pass viewing than neutral pictures. Low-level visual features cannot explain the results. It is concluded that overt visual attention is captured by both unpleasant and pleasant emotional content. 2006 APA, all rights reserved
Linear fixed-field multipass arcs for recirculating linear accelerators
Morozov, V. S.; Bogacz, S. A.; Roblin, Y. R.; ...
2012-06-14
Recirculating Linear Accelerators (RLA's) provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituting linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dog-bone RLA capable of transporting two beam passes with momenta different by a factor of two. Finally, we present the results of tracking simulations of the two passes and lay out the path to end-to-end design and simulation of a complete dog-bone RLA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, H.L.; Spronsen, G. van; Klaus, E.H.
A simulation model of the dynamics of a by-pass pig and related two-phase flow behavior along with field trials of the pig in a dry-gas pipeline have revealed significant gains in use of a by-pass pig in modifying gas and liquid production rates. The method can widen the possibility of applying two-phase flow pipeline transportation to cases in which separator or slug-catcher capacity is limited by practicality or cost. Pigging two-phase pipelines normally generates large liquid slug volumes in front of the pig. These require large separators or slug catchers. Using a high by-pass pig to disperse the liquid and reduce the maximum liquid production rate before pig arrival has been investigated by Shell Exploration and Production companies. A simulation model of the dynamics of the pig and related two-phase flow behavior in the pipeline was used to predict the performance of by-pass pigs. Field trials in a dry-gas pipeline were carried out to provide friction data and to validate the model. The predicted mobility of the high by-pass pig in the pipeline and risers was verified and the beneficial effects due to the by-pass concept exceeded the prediction of the simplified model.
Face aging effect simulation model based on multilayer representation and shearlet transform
NASA Astrophysics Data System (ADS)
Li, Yuancheng; Li, Yan
2017-09-01
In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and shearlet transform. The face is divided into three layers: the global layer of the face, the local features layer, and the texture layer, and an aging model is established separately for each layer. First, the training samples are classified according to different age groups, and we use an active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting support vector machine regression based on the radial basis function. We use AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we obtain the final simulated aging images of the human face with a fusion algorithm. Experiments are carried out on the FG-NET dataset, and the experimental results show that the simulated face images differ little from the original images and achieve a good face aging simulation effect.
The Serpent Strikes: Simulation in a Large First-Year Course.
ERIC Educational Resources Information Center
Schrag, Philip G.
1989-01-01
A year-long simulation of a single case supplements a traditional civil procedure course at Georgetown University. Experience with the approach suggests that design features can reduce the burdens on the instructor without reducing course effectiveness, making the approach feasible even with larger classes. (MSE)
Ball lightning passage through a glass without breaking it
NASA Astrophysics Data System (ADS)
Bychkov, Vladimir L.; Nikitin, Anatoly I.; Ivanenko, Ilia P.; Nikitina, Tamara F.; Velichko, Alexander M.; Nosikov, Igor A.
2016-12-01
Throughout the long history of ball lightning (BL) theory, two concepts have competed. According to the first, BL is a high-frequency electrical discharge burning in the air under the action of an alternating electric field or a continuous current generated by an external source of energy. According to the second, BL is a material body that stores energy within itself. Data banks of BL observations give evidence that BL can pass through glass, leaving no traces on it. Supporters of the first concept consider this to be proof of the correctness of the "electric field" nature of BL. Representing BL as a material body with an internal source of energy explains most of its features, but has difficulty explaining BL penetration through glass. We describe the results of a study of a glass pane through which BL passed freely, as observed by one of the authors; the study proved the presence of traces left by the BL. With the help of optical and scanning microscopes and laser beam probing of the glass, which experienced the action of a 20 cm BL, we found traces in it: a region of 1-2 mm, at the center of which a cavity of 0.24 mm diameter is located. This gives evidence for a "material" nature of BL. The possibility of BL passing through small holes, and its ability to "make" such holes, poses a number of difficult issues to researchers, as indicated in the article.
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations
Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut
2015-01-01
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26238484
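The cost argument made here (hardware price plus lifetime energy cost per unit of trajectory produced) can be made concrete with a small back-of-the-envelope calculation. The sketch below is illustrative only; the node prices, power draws, throughputs, electricity price, and cooling overhead are assumed placeholder values, not figures from the paper.

```python
def cost_per_microsecond(hw_price_eur, node_watts, ns_per_day,
                         years=5, eur_per_kwh=0.20, cooling_overhead=0.5):
    """Rough lifetime cost (EUR) per microsecond of trajectory for one node.

    All inputs are illustrative assumptions:
    hw_price_eur     -- purchase price of the node
    node_watts       -- average power draw under load
    ns_per_day       -- simulation throughput for some benchmark system
    cooling_overhead -- extra energy fraction spent on cooling (assumed 50%)
    """
    hours = years * 365 * 24
    energy_cost = node_watts / 1000.0 * hours * (1.0 + cooling_overhead) * eur_per_kwh
    total_ns = ns_per_day * 365 * years
    return (hw_price_eur + energy_cost) / (total_ns / 1000.0)  # EUR per microsecond

# Hypothetical comparison: CPU-only node vs. the same node plus a consumer GPU
print(cost_per_microsecond(3000, 350, 40))   # CPU-only node
print(cost_per_microsecond(3400, 550, 110))  # CPU + consumer-class GPU
```

With these made-up numbers the GPU-equipped node produces trajectory at roughly half the lifetime cost per microsecond, which is the qualitative conclusion the abstract reaches.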
NASA Astrophysics Data System (ADS)
Abedian, A.; Poursina, M.; Golestanian, H.
2007-05-01
Radial forging is an open die forging process used for reducing the diameter of shafts, tubes, stepped shafts and axles, and creating internal profiles for tubes such as rifling of gun barrels. In this work, a comprehensive study of multi-pass hot radial forging of short hollow and solid products is presented using 2-D axisymmetric finite element simulation. The workpiece is modeled as an elastic-viscoplastic material. A mixture of Coulomb law and constant limit shear is used to model the die-workpiece and mandrel-workpiece contacts. Thermal effects are also taken into account. Three-pass radial forging of solid cylinders and tube products is considered. Temperature, stress, strain and metal flow distributions are obtained in each pass through thermo-mechanical simulation. The numerical results are compared with available experimental data and are in good agreement with them.
Molecular Dynamic Studies of Particle Wake Potentials in Plasmas
NASA Astrophysics Data System (ADS)
Ellis, Ian; Graziani, Frank; Glosli, James; Strozzi, David; Surh, Michael; Richards, David; Decyk, Viktor; Mori, Warren
2010-11-01
Fast Ignition studies require a detailed understanding of electron scattering, stopping, and energy deposition in plasmas with variable values for the number of particles within a Debye sphere. Presently there is disagreement in the literature concerning the proper description of these processes. Developing and validating proper descriptions requires studying the processes using first-principle electrostatic simulations and possibly including magnetic fields. We are using the particle-particle particle-mesh (P^3M) code ddcMD to perform these simulations. As a starting point in our study, we examined the wake of a particle passing through a plasma. In this poster, we compare the wake observed in 3D ddcMD simulations with that predicted by Vlasov theory and those observed in the electrostatic PIC code BEPS where the cell size was reduced to 0.03 λD.
The life-cycle of upper-tropospheric jet streams identified with a novel data segmentation algorithm
NASA Astrophysics Data System (ADS)
Limbach, S.; Schömer, E.; Wernli, H.
2010-09-01
Jet streams are prominent features of the upper-tropospheric atmospheric flow. Through the thermal wind relationship these regions with intense horizontal wind speed (typically larger than 30 m/s) are associated with pronounced baroclinicity, i.e., with regions where extratropical cyclones develop due to baroclinic instability processes. Individual jet streams are non-stationary elongated features that can extend over more than 2000 km in the along-flow and 200-500 km in the across-flow direction, respectively. Their lifetime can vary between a few days and several weeks. In recent years, feature-based algorithms have been developed that allow compiling synoptic climatologies and typologies of upper-tropospheric jet streams based upon objective selection criteria and climatological reanalysis datasets. In this study a novel algorithm to efficiently identify jet streams using an extended region-growing segmentation approach is introduced. This algorithm iterates over a 4-dimensional field of horizontal wind speed from ECMWF analyses and decides at each grid point whether all prerequisites for a jet stream are met. In a single pass the algorithm keeps track of all adjacencies of these grid points and creates the 4-dimensional connected segments associated with each jet stream. In addition to the detection of these sets of connected grid points, the algorithm analyzes the development over time of the distinct 3-dimensional features each segment consists of. Important events in the development of these features, for example mergings and splittings, are detected and analyzed on a per-grid-point and per-feature basis. The output of the algorithm consists of the actual sets of grid-points augmented with information about the particular events, and of the so-called event graphs, which are an abstract representation of the distinct 3-dimensional features and events of each segment. This technique provides comprehensive information about the frequency of upper-tropospheric jet streams, their preferred regions of genesis, merging, splitting, and lysis, and statistical information about their size, amplitude and lifetime. The presentation will introduce the technique, provide example visualizations of the time evolution of the identified 3-dimensional jet stream features, and present results from a first multi-month "climatology" of upper-tropospheric jets. In the future, the technique can be applied to longer datasets, for instance reanalyses and output from global climate model simulations - and provide detailed information about key characteristics of jet stream life cycles.
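The core of the described approach (thresholding a 4-dimensional wind-speed field and collecting connected grid points into segments) can be sketched with standard connected-component labelling. This is a minimal illustration, not the authors' single-pass algorithm or its event-graph analysis; the 30 m/s threshold is taken from the text, and `scipy.ndimage.label` stands in for the region-growing step.

```python
import numpy as np
from scipy import ndimage

def jet_segments(wind_speed, threshold=30.0):
    """Label 4-D connected regions (time, level, lat, lon) where wind speed exceeds the threshold."""
    mask = wind_speed >= threshold
    # 4-D connectivity: grid points are adjacent if they differ along a single axis
    structure = ndimage.generate_binary_structure(rank=4, connectivity=1)
    labels, n_segments = ndimage.label(mask, structure=structure)
    return labels, n_segments

# Toy example on random data shaped (time, level, lat, lon)
speed = np.random.rayleigh(scale=15.0, size=(8, 5, 60, 120))
labels, n = jet_segments(speed)
print(n, "candidate jet segments")
```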
n-body simulations using message passing parallel computers.
NASA Astrophysics Data System (ADS)
Grama, A. Y.; Kumar, V.; Sameh, A.
The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
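For readers unfamiliar with the underlying tree code, a compact serial 2-D Barnes-Hut sketch is given below; the parallel domain partitioning and communication strategies that are the actual subject of the abstract are not shown. G = 1, the opening angle theta = 0.5, and distinct body positions are assumed for simplicity.

```python
import numpy as np

class Cell:
    """A square cell of the Barnes-Hut quadtree."""
    def __init__(self, center, size):
        self.center = np.asarray(center, dtype=float)
        self.size = float(size)        # side length of the cell
        self.mass = 0.0
        self.com = np.zeros(2)         # running centre of mass
        self.children = None           # list of 4 sub-cells once subdivided
        self.body = None               # (position, mass) when the cell holds one body

    def insert(self, pos, m):
        pos = np.asarray(pos, dtype=float)
        if self.mass == 0.0:           # empty leaf: store the body directly
            self.body, self.mass, self.com = (pos, m), m, pos.copy()
            return
        if self.children is None:      # occupied leaf: split and push the old body down
            self._subdivide()
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m
        self._child_for(pos).insert(pos, m)

    def _subdivide(self):
        h = self.size / 2.0
        self.children = [Cell(self.center + 0.5 * h * np.array([sx, sy]), h)
                         for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
        old_pos, old_m = self.body
        self.body = None
        self._child_for(old_pos).insert(old_pos, old_m)

    def _child_for(self, pos):
        return self.children[2 * int(pos[0] > self.center[0]) + int(pos[1] > self.center[1])]

def acceleration(cell, pos, theta=0.5, eps=1e-3):
    """Approximate acceleration at pos from all mass in the cell (G = 1)."""
    if cell is None or cell.mass == 0.0:
        return np.zeros(2)
    d = cell.com - pos
    r = np.sqrt(d @ d) + eps
    # Treat the cell as a single pseudo-particle if it is far enough away
    if cell.children is None or cell.size / r < theta:
        return cell.mass * d / r**3
    return sum(acceleration(c, pos, theta, eps) for c in cell.children)

# Toy usage: 1000 unit-mass bodies in the unit square
bodies = np.random.rand(1000, 2)
root = Cell(center=(0.5, 0.5), size=1.0)
for b in bodies:
    root.insert(b, 1.0)
acc = np.array([acceleration(root, b) for b in bodies])
```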
Hydrogen production by high-temperature water splitting using electron-conducting membranes
Lee, Tae H.; Wang, Shuangyan; Dorris, Stephen E.; Balachandran, Uthamalingam
2004-04-27
A device and method for separating water into hydrogen and oxygen is disclosed. A first substantially gas impervious solid electron-conducting membrane for selectively passing hydrogen is provided and spaced from a second substantially gas impervious solid electron-conducting membrane for selectively passing oxygen. When steam is passed between the two membranes at disassociation temperatures the hydrogen from the disassociation of steam selectively and continuously passes through the first membrane and oxygen selectively and continuously passes through the second membrane, thereby continuously driving the disassociation of steam producing hydrogen and oxygen.
Pass-Band Characteristics of an L-Shaped Waveguide in a Diamond Structure Photonic Crystal
NASA Astrophysics Data System (ADS)
Chen, Shibin; Ma, Jingcun; Yao, Yunshi; Liu, Xin; Lin, Ping
2018-06-01
The conduction characteristics of an L-shaped waveguide in a diamond-structure photonic crystal are investigated in this paper. The waveguides were fabricated with titanium dioxide ceramic via 3-D printing and sintering. The effects of the position and size of line defects on the transmission characteristics are first simulated using a finite-difference time-domain method. The simulated results show that, when the length of the rectangular defect equals the lattice constant, multiple extended modes are generated. When the centers of the single unit cell of the diamond structure and the line defect waveguide coincide, higher transmission efficiency in the line defect can be achieved. In addition, the corner of the L-shaped waveguide was optimized to reduce reflection loss at the turning point using a large-diameter arc transition. Our experimental results indicate that L-shaped waveguides with an optimized photonic band gap structure and high-K materials can produce a pass-band between 13.8 GHz and 14.4 GHz and increase transmission efficiency. The computed results agree with the experimental results. Our results may help the integration of microwave devices in the future and possibly enable new applications of photonic crystals.
NASA Astrophysics Data System (ADS)
Benítez, P.; Mohedano, R.; Buljan, M.; Miñano, J. C.; Sun, Y.; Falicoff, W.; Vilaplana, J.; Chaves, J.; Biot, G.; López, J.
2011-12-01
A novel HCPV nonimaging concentrator concept with high concentration (>500×) is presented. It uses the combination of a commercial concentrator GaInP/GaInAs/Ge 3J cell and a Back-Point-Contact (BPC) concentrator silicon cell for efficient spectral utilization, and external confinement techniques for recovering the 3J cell's reflection. The primary optical element (POE) is a flat Fresnel lens and the secondary optical element (SOE) is a free-form RXI-type concentrator with a band-pass filter embedded in it, both POE and SOE performing Köhler integration to produce light homogenization. The band-pass filter sends the IR photons in the 900-1200 nm band to the silicon cell. Computer simulations predict that four-terminal designs could achieve ˜46% added cell efficiencies using commercial 39% 3J and 26% Si cells. A first proof-of-concept receiver prototype has been manufactured using a simpler optical architecture (with a lower concentration, ˜100×, and lower simulated added efficiency), and experimental measurements have shown up to 39.8% 4J receiver efficiency using a 3J with peak efficiency of 36.9%.
Tang, T S; Sohal, P S; Garg, A K
2013-06-01
The purpose of this single-cohort study was to implement and evaluate a programme that trains peers to deliver a diabetes self-management support programme for South-Asian adults with Type 2 diabetes and to assess the perceived efficacy of and satisfaction with this programme. We recruited eight South-Asian adults who completed a 20-h peer-leader training programme conducted over five sessions (4 h per session). The programme used multiple instructional methods (quizzes, group brainstorming, skill building, group sharing, role-play and facilitation simulation) and provided communication, facilitation, and behaviour change skills training. To graduate, participants were required to achieve the pre-established competency criteria in four training domains: active listening, empowerment-based facilitation, five-step behavioural goal-setting, and self-efficacy. Participants were given three attempts to pass each competency domain. On the first attempt six (75%), eight (100%), five (63%) and five (63%) participants passed active listening, empowerment-based facilitation, five-step behavioural goal-setting, and self-efficacy, respectively. Those participants who did not pass a competency domain on the first attempt were successful in passing on the second attempt. As a result, all eight participants graduated from the training programme and became peer leaders. Satisfaction ratings for programme length, balance between content and skills development, and preparation for leading support activities were uniformly high. Ratings for the instructional methods ranged between effective and very effective. Findings suggest it is feasible to train and graduate peer leaders with the necessary skills to facilitate a diabetes self-management support intervention. © 2013 The Authors. Diabetic Medicine © 2013 Diabetes UK.
MRXCAT: Realistic numerical phantoms for cardiovascular magnetic resonance
2014-01-01
Background Computer simulations are important for validating novel image acquisition and reconstruction strategies. In cardiovascular magnetic resonance (CMR), numerical simulations need to combine anatomical information and the effects of cardiac and/or respiratory motion. To this end, a framework for realistic CMR simulations is proposed and its use for image reconstruction from undersampled data is demonstrated. Methods The extended Cardiac-Torso (XCAT) anatomical phantom framework with various motion options was used as a basis for the numerical phantoms. Different tissue, dynamic contrast and signal models, multiple receiver coils and noise are simulated. Arbitrary trajectories and undersampled acquisition can be selected. The utility of the framework is demonstrated for accelerated cine and first-pass myocardial perfusion imaging using k-t PCA and k-t SPARSE. Results MRXCAT phantoms allow for realistic simulation of CMR including optional cardiac and respiratory motion. Example reconstructions from simulated undersampled k-t parallel imaging demonstrate the feasibility of simulated acquisition and reconstruction using the presented framework. Myocardial blood flow assessment from simulated myocardial perfusion images highlights the suitability of MRXCAT for quantitative post-processing simulation. Conclusion The proposed MRXCAT phantom framework enables versatile and realistic simulations of CMR including breathhold and free-breathing acquisitions. PMID:25204441
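The general idea of simulating undersampled acquisition from a known numerical phantom can be illustrated with a few lines of NumPy: Fourier-transform the phantom, keep only a subset of k-space lines, add complex noise, and reconstruct by zero-filling. This is only a schematic of that workflow, not of MRXCAT itself or of k-t PCA / k-t SPARSE reconstruction; the acceleration factor, noise level, and disc phantom are arbitrary.

```python
import numpy as np

def simulate_undersampled(phantom, accel=4, noise_std=0.01, rng=np.random.default_rng(0)):
    """Simulate Cartesian undersampling of a 2-D image and a zero-filled reconstruction."""
    kspace = np.fft.fftshift(np.fft.fft2(phantom))
    kspace += noise_std * (rng.standard_normal(kspace.shape)
                           + 1j * rng.standard_normal(kspace.shape))
    mask = np.zeros_like(phantom, dtype=bool)
    mask[::accel, :] = True                                                # every accel-th line
    mask[phantom.shape[0] // 2 - 4: phantom.shape[0] // 2 + 4, :] = True   # fully sampled centre
    zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return kspace * mask, np.abs(zero_filled)

# Toy phantom: a bright disc on a dark background
y, x = np.mgrid[-64:64, -64:64]
phantom = (x**2 + y**2 < 40**2).astype(float)
kspace_us, recon = simulate_undersampled(phantom)
```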
Atanasov, Nicholas A; Sargent, Jennifer L; Parmigiani, John P; Palme, Rupert; Diggs, Helen E
2015-01-01
Excessive environmental vibrations can have deleterious effects on animal health and experimental results, but they remain poorly understood in the animal laboratory setting. The aims of this study were to characterize train-associated vibration in a rodent vivarium and to assess the effects of this vibration on the reproductive success and fecal corticosterone metabolite levels of mice. An instrumented cage, featuring a high-sensitivity microphone and accelerometer, was used to characterize the vibrations and sound in a vivarium that is near an active railroad. The vibrations caused by the passing trains are 3 times larger in amplitude than are the ambient facility vibrations, whereas most of the associated sound was below the audible range for mice. Mice housed in the room closest to the railroad tracks had pregnancy rates that were 50% to 60% lower than those of mice of the same strains but bred in other parts of the facility. To verify the effect of the train vibrations, we used a custom-built electromagnetic shaker to simulate the train-induced vibrations in a controlled environment. Fecal pellets were collected from male and female mice that were exposed to the simulated vibrations and from unexposed control animals. Analysis of the fecal samples revealed that vibrations similar to those produced by a passing train can increase the levels of fecal corticosterone metabolites in female mice. These increases warrant attention to the effects of vibration on mice and, consequently, on reproduction and experimental outcomes. PMID:26632783
Modeling of the static recrystallization for 7055 aluminum alloy by cellular automaton
NASA Astrophysics Data System (ADS)
Zhang, Tao; Lu, Shi-hong; Zhang, Jia-bin; Li, Zheng-fang; Chen, Peng; Gong, Hai; Wu, Yun-xin
2017-09-01
In order to simulate the flow behavior and microstructure evolution during the pass interval period of the multi-pass deformation process, models of static recovery (SR) and static recrystallization (SRX) by the cellular automaton (CA) method for the 7055 aluminum alloy were established. Double-pass hot compression tests were conducted to acquire flow stress and microstructure variation during the pass interval period. With the basis of the material constants obtained from the compression tests, models of the SR, incubation period, nucleation rate and grain growth were fitted by least square method. A model of the grain topology and a statistical computation of the CA results were also introduced. The effects of the pass interval time, temperature, strain, strain rate and initial grain size on the microstructure variation for the SRX of the 7055 aluminum alloy were studied. The results show that a long pass interval time, large strain, high temperature and large strain rate are beneficial for finer grains during the pass interval period. The stable size of the static recrystallized grain is not concerned with the initial grain size, but mainly depends on the strain rate and temperature. The SRX plays a vital role in grain refinement, while the SR has no effect on the variation of microstructure morphology. Using flow stress and microstructure comparisons of the simulated and experimental CA results, the established CA models can accurately predict the flow stress and microstructure evolution during the pass interval period, and provide guidance for the selection of optimized parameters for the multi-pass deformation process.
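As a generic illustration of the cellular-automaton approach (not the authors' SRX model for the 7055 alloy), the sketch below runs a minimal 2-D CA in which recrystallized nuclei appear with a fixed probability in deformed cells and then grow into deformed neighbours each step. The nucleation probability, grid size, and number of steps are arbitrary placeholders.

```python
import numpy as np

def srx_step(grain_id, deformed, p_nucleate=1e-4, rng=np.random.default_rng(1)):
    """One CA step: nucleate new recrystallized grains, then grow existing ones."""
    new_id = grain_id.copy()
    next_label = grain_id.max() + 1
    # Nucleation: a deformed cell becomes the seed of a new strain-free grain
    nucleate = deformed & (rng.random(grain_id.shape) < p_nucleate)
    for i, j in zip(*np.where(nucleate)):
        new_id[i, j] = next_label
        next_label += 1
    # Growth: a deformed cell is consumed by a recrystallized von Neumann neighbour
    recrystallized = new_id > 0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neighbour = np.roll(new_id, (di, dj), axis=(0, 1))
        grow = deformed & ~recrystallized & (neighbour > 0)
        new_id[grow] = neighbour[grow]
        recrystallized = new_id > 0
    return new_id, deformed & ~(new_id > 0)

# Toy run: 200 x 200 fully deformed matrix, label 0 = not yet recrystallized
grain_id = np.zeros((200, 200), dtype=int)
deformed = np.ones_like(grain_id, dtype=bool)
for _ in range(50):
    grain_id, deformed = srx_step(grain_id, deformed)
print("recrystallized fraction:", (grain_id > 0).mean())
```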
The graphical brain: Belief propagation and active inference
Friston, Karl J.; Parr, Thomas; de Vries, Bert
2018-01-01
This paper considers functional integration in the brain from a computational perspective. We ask what sort of neuronal message passing is mandated by active inference—and what implications this has for context-sensitive connectivity at microscopic and macroscopic levels. In particular, we formulate neuronal processing as belief propagation under deep generative models. Crucially, these models can entertain both discrete and continuous states, leading to distinct schemes for belief updating that play out on the same (neuronal) architecture. Technically, we use Forney (normal) factor graphs to elucidate the requisite message passing in terms of its form and scheduling. To accommodate mixed generative models (of discrete and continuous states), one also has to consider link nodes or factors that enable discrete and continuous representations to talk to each other. When mapping the implicit computational architecture onto neuronal connectivity, several interesting features emerge. For example, Bayesian model averaging and comparison, which link discrete and continuous states, may be implemented in thalamocortical loops. These and other considerations speak to a computational connectome that is inherently state dependent and self-organizing in ways that yield to a principled (variational) account. We conclude with simulations of reading that illustrate the implicit neuronal message passing, with a special focus on how discrete (semantic) representations inform, and are informed by, continuous (visual) sampling of the sensorium. Author Summary This paper considers functional integration in the brain from a computational perspective. We ask what sort of neuronal message passing is mandated by active inference—and what implications this has for context-sensitive connectivity at microscopic and macroscopic levels. In particular, we formulate neuronal processing as belief propagation under deep generative models that can entertain both discrete and continuous states. This leads to distinct schemes for belief updating that play out on the same (neuronal) architecture. Technically, we use Forney (normal) factor graphs to characterize the requisite message passing, and link this formal characterization to canonical microcircuits and extrinsic connectivity in the brain. PMID:29417960
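To make the discrete message-passing scheme concrete, here is a minimal sum-product (belief propagation) forward pass over a hidden Markov chain, which is a special case of a Forney factor graph rather than the deep generative models used in the paper. The transition and likelihood matrices are arbitrary illustrative values.

```python
import numpy as np

def forward_messages(prior, transition, likelihoods):
    """Sum-product forward pass on a chain: returns filtered beliefs over hidden states.

    prior        -- shape (S,), initial state distribution
    transition   -- shape (S, S), transition[i, j] = P(s_t = j | s_{t-1} = i)
    likelihoods  -- shape (T, S), likelihoods[t, s] = P(o_t | s_t = s)
    """
    belief = prior * likelihoods[0]
    belief /= belief.sum()
    beliefs = [belief]
    for lik in likelihoods[1:]:
        message = belief @ transition        # message passed along the chain
        belief = message * lik               # combine with the local evidence factor
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

# Two hidden states, three observations
prior = np.array([0.5, 0.5])
transition = np.array([[0.9, 0.1],
                       [0.2, 0.8]])
likelihoods = np.array([[0.7, 0.2],
                        [0.6, 0.3],
                        [0.1, 0.8]])
print(forward_messages(prior, transition, likelihoods))
```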
Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival
2015-07-10
The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetyl salicylic acid (ASA) in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first and second generation metabolites). The first aim was to adapt the semi-physiological model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h(-1)). Finally, the third aim was to determine which analyte (parent drug, first generation or second generation metabolite) was most sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) was the analyte shown to be the most sensitive to the decrease in pharmaceutical quality, with the highest decrease in Cmax and AUC ratio between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
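A much-reduced sketch of the kind of model described (first-order absorption with saturable, Michaelis-Menten first-pass formation of a metabolite) is shown below using `scipy.integrate.solve_ivp`. The one-compartment structure and all parameter values are placeholders; the actual model is the cited semi-physiological NONMEM implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def asa_like_model(t, y, ka, vmax, km, k_met):
    """y = [gut amount, parent drug amount, first-generation metabolite amount]."""
    gut, parent, metab = y
    absorption = ka * gut
    mm_formation = vmax * parent / (km + parent)   # saturable first-pass metabolism
    return [-absorption,
            absorption - mm_formation,
            mm_formation - k_met * metab]

dose = 1000.0  # mg, matching the validation dose mentioned in the abstract
sol = solve_ivp(asa_like_model, (0.0, 24.0), [dose, 0.0, 0.0],
                args=(1.5, 300.0, 50.0, 0.3), dense_output=True, max_step=0.1)
t = np.linspace(0.0, 24.0, 200)
parent, metabolite = sol.sol(t)[1], sol.sol(t)[2]
```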
LED-pumped Alexandrite laser oscillator and amplifier
NASA Astrophysics Data System (ADS)
Pichon, Pierre; Blanchot, Jean-Philippe; Balembois, François; Druon, Frédéric; Georges, Patrick
2018-02-01
In this paper, we report the first LED-pumped transition-metal-doped laser oscillator and amplifier based on an alexandrite crystal (Cr3+:BeAl2O4). A Ce:YAG luminescent concentrator illuminated by blue LEDs is used to reach higher pump powers than with LEDs alone. The 200-mm-long composite luminescent concentrator, involving 2240 LEDs, can deliver up to 268 mJ for a peak irradiance of 8.5 kW/cm2. In oscillator configuration, an LED-pumped alexandrite laser delivering an energy of 2.9 mJ at 748 nm in free-running operation is demonstrated. In the cavity, we measured a double-pass small-signal gain of 1.28, in good agreement with numerical simulations. As an amplifier, the system was demonstrated to boost a CW Ti:sapphire laser by a factor of 4 at 750 nm in 8 passes, with a large tuning range from 710 nm to 800 nm.
Design and evaluation of a GaAs MMIC X-band active RC quadrature power divider
NASA Astrophysics Data System (ADS)
Henkus, J. C.
1991-03-01
The design and evaluation of a GaAs MMIC (Microwave Monolithic Integrated Circuit) X-band active RC Quadrature Power Divider (QPD) is addressed. This QPD can be used as part of a vector modulator. The chosen QPD topology consists of two active first-order RC all-pass networks and was converted into an MMIC design. The design is completely symmetrical except for two key resistors. On-wafer S-parameter measurements were carried out; a special probe head configuration was composed in order to avoid measurement accuracy degradation associated with the reversal of the active output of the QPD. The measured nominal RF behavior of the chips complies with the simulated behavior to a very high degree. The optical, DC, and RF yields are high (97, 83, and 74 percent, respectively). A modification to Takashi's all-pass network was proposed which offers gain/frequency slope control and compensation ability.
Alqahtani, Saeed; Bukhari, Ishfaq; Albassam, Ahmed; Alenazi, Maha
2018-05-28
The intestinal absorption process is a combination of several events that are governed by various factors. Several transport mechanisms are involved in drug absorption through enterocytes via active and/or passive processes. The transported molecules then undergo intestinal metabolism, which together with intestinal transport may affect the systemic availability of drugs. Many studies have provided clear evidence on the significant role of intestinal first-pass metabolism on drug bioavailability and degree of drug-drug interactions (DDIs). Areas covered: This review provides an update on the role of intestinal first-pass metabolism in the oral bioavailability of drugs and prediction of drug-drug interactions. It also provides a comprehensive overview and summary of the latest update in the role of PBPK modeling in prediction of intestinal metabolism and DDIs in humans. Expert opinion: The contribution of intestinal first-pass metabolism in the oral bioavailability of drugs and prediction of DDIs has become more evident over the last few years. Several in vitro, in situ, and in vivo models have been developed to evaluate the role of first-pass metabolism and to predict DDIs. Currently, physiologically based pharmacokinetic modeling is considered the most valuable tool for the prediction of intestinal first-pass metabolism and DDIs.
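The quantitative backbone of this discussion is the standard decomposition of oral bioavailability into the fraction absorbed and the fractions escaping gut-wall and hepatic first-pass extraction, F = Fa · Fg · Fh. The snippet below simply evaluates that relationship with illustrative numbers; it is not a PBPK model and the extraction values are assumptions.

```python
def oral_bioavailability(fa, fg, fh):
    """F = Fa * Fg * Fh: fraction absorbed, fraction escaping gut-wall metabolism,
    and fraction escaping hepatic first-pass extraction."""
    return fa * fg * fh

# Example: complete absorption, 40% gut-wall extraction, 30% hepatic extraction
print(oral_bioavailability(1.0, 0.6, 0.7))  # 0.42, i.e. 42% oral bioavailability
```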
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Richard S.; Carlson, Thomas J.; Welch, Abigail E.
A multifactor study was conducted by Battelle for the US Army Corps of Engineers to assess the significance of the presence of a radio telemetry transmitter on the effects of rapid decompression from simulated hydro turbine passage on depth-acclimated juvenile run-of-the-river Chinook salmon. Study factors were: (1) juvenile Chinook salmon age: subyearling or yearling; (2) radio transmitter present or absent; (3) three transmitter implantation factors: gastric, surgical, and no transmitter; and (4) four acclimation depth factors: 1, 10, 20, and 40-foot submergence equivalent absolute pressure, for a total of 48 unique treatments. Exposed fish were examined for changes in behavior, presence or absence of barotrauma injuries, and immediate or delayed mortality. Logistic models were used to test hypotheses that addressed study objectives. The presence of a radio transmitter was found to significantly increase the risk of barotrauma injury and mortality upon exposure to rapid decompression. Gastric implantation was found to present a higher risk than surgical implantation. Fish were exposed within 48 hours of transmitter implantation, so surgical incisions were not completely healed. The difference in results obtained for gastric and surgical implantation methods may be the result of study design, and the results may have been different if tested fish had completely healed surgical wounds. However, the test did simulate the typical surgical-release time frame for in-river telemetry studies of fish survival, so the results are probably representative for fish passing through a turbine shortly following release into the river. The finding of a significant difference in response to rapid decompression between fish bearing radio transmitters and those without implies that a bias may exist in estimates of turbine passage survival obtained using radio telemetry. However, the rapid decompression (simulated turbine passage) conditions used for the study represented near-worst-case exposure for fish passing through turbines. At this time, insufficient data exist about the distribution of river-run fish entering turbines, and particularly the distribution of fish passing through turbine runners, to extrapolate study findings to the population of fish passing through FCRPS turbines. This is the first rapid-decompression study to include acclimation depth as an experimental factor for physostomous fish. We found that fish acclimated to deeper depths were significantly more vulnerable to barotrauma injury and death. Insufficient information about the distribution of fish entering turbines and their depth acclimation currently exists to extrapolate these findings to the population of fish passing through turbines. However, the risk of barotrauma for turbine-passed fish could be particularly high for subyearling Chinook salmon that migrate downstream at deeper depths late in the early summer portion of the outmigration. Barotrauma injuries led to immediate mortality, delayed mortality, and potential mortality due to increased susceptibility to predation resulting from loss of equilibrium or swim bladder rupture.
2016-01-01
Purpose: Project Strengthening Emergency Medicine, Investing in Learners in Latin America (SEMILLA) created a novel, language and resource appropriate course for the resuscitation of cardiac arrest for Nicaraguan resident physicians. We hypothesized that participation in the Project SEMILLA resuscitation program would significantly improve the physician’s management of simulated code scenarios. Methods: Thirteen Nicaraguan resident physicians were evaluated while managing simulated cardiac arrest scenarios before, immediately, and at 6 months after participating in the Project SEMILLA resuscitation program. This project was completed in 2014 in Leon, Nicaragua. The Cardiac Arrest Simulation Test (CASTest), a validated scoring system, was used to evaluate performance on a standardized simulated cardiac arrest scenario. Mixed effect logistic regression models were constructed to assess outcomes. Results: On the pre-course simulation exam, only 7.7% of subjects passed the test. Immediately post-course, the subjects achieved a 30.8% pass rate and at 6 months after the course, the pass rate was 46.2%. Compared with pre-test scores, the odds of passing the CASTest at 6 months after the course were 21.7 times higher (95% CI 4.2 to 112.8, P<0.001). Statistically significant improvement was also seen on the number of critical items completed (OR=3.75, 95% CI 2.71-5.19), total items completed (OR=4.55, 95% CI 3.4-6.11), and number of “excellent” scores on a Likert scale (OR=2.66, 95% CI 1.85-3.81). Conclusions: Nicaraguan resident physicians demonstrate improved ability to manage simulated cardiac arrest scenarios after participation in the Project SEMILLA resuscitation course and retain these skills. PMID:27378010
Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM
NASA Astrophysics Data System (ADS)
de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl
2002-03-01
We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent, numerical simulation, which includes dust particles, to study the potential role of large-scale, gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10^5-10^6) in reasonable CPU times. The performances of our implementation of the parallel code on an Origin 2000 supercomputer are presented and discussed. They exhibit very good speedup behavior and low load unbalancing. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.
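The message-passing structure of a particle-mesh step can be illustrated with `mpi4py` (used here in place of the PVM/MPI code of the paper): each rank deposits its local particles onto a mesh and the partial meshes are combined with an all-reduce. The 1-D mesh, nearest-grid-point deposit, and particle counts are deliberately simplistic assumptions.

```python
# Run with, e.g.: mpirun -n 4 python pm_sketch.py   (pm_sketch.py is a hypothetical filename)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_mesh = 128
n_local = 10_000                              # particles owned by this rank
rng = np.random.default_rng(rank)
positions = rng.random(n_local)               # particle positions in [0, 1)

# Nearest-grid-point deposit onto this rank's copy of the mesh
local_density = np.zeros(n_mesh)
np.add.at(local_density, (positions * n_mesh).astype(int), 1.0)

# Combine contributions from all ranks (the message-passing step)
global_density = np.empty_like(local_density)
comm.Allreduce(local_density, global_density, op=MPI.SUM)

if rank == 0:
    print("total particles on mesh:", global_density.sum(), "=", size * n_local)
```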
Effective method for detecting regions of given colors and the features of the region surfaces
NASA Astrophysics Data System (ADS)
Gong, Yihong; Zhang, HongJiang
1994-03-01
Color can be used as a very important cue for image recognition. In industrial and commercial areas, color is widely used as a trademark or identifying feature in objects, such as packaged goods, advertising signs, etc. In image database systems, one may retrieve an image of interest by specifying prominent colors and their locations in the image (image retrieval by contents). These facts enable us to detect or identify a target object using colors. However, this task depends mainly on how effectively we can identify a color and detect regions of the given color under possibly non-uniform illumination conditions such as shade, highlight, and strong contrast. In this paper, we present an effective method to detect regions matching given colors, along with the features of the region surfaces. We adopt the HVC color coordinates in the method because of its ability of completely separating the luminant and chromatic components of colors. Three basis functions functionally serving as the low-pass, high-pass, and band-pass filters, respectively, are introduced.
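A simplified version of "find regions matching a given color" can be written with hue thresholding and connected-component labelling. HSV is used here only as a convenient stand-in for the HVC coordinates discussed in the paper, and the tolerance values are arbitrary assumptions.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from scipy import ndimage

def regions_of_color(rgb_image, target_hue, hue_tol=0.05, min_sat=0.3, min_val=0.2):
    """Label connected regions whose hue lies within hue_tol of target_hue (all in [0, 1])."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
    hue_diff = np.abs(hsv[..., 0] - target_hue)
    hue_diff = np.minimum(hue_diff, 1.0 - hue_diff)      # hue wraps around the color circle
    mask = (hue_diff < hue_tol) & (hsv[..., 1] > min_sat) & (hsv[..., 2] > min_val)
    labels, n = ndimage.label(mask)
    return labels, n

# Toy image: a red square on a gray background
img = np.full((100, 100, 3), 128, dtype=np.uint8)
img[30:60, 30:60] = (220, 30, 30)
labels, n = regions_of_color(img, target_hue=0.0)
print(n, "region(s) matching the target color")
```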
NASA Astrophysics Data System (ADS)
Gusyev, M. A.; Toews, M.; Morgenstern, U.; Stewart, M.; White, P.; Daughney, C.; Hadfield, J.
2013-03-01
Here we present a general approach of calibrating transient transport models to tritium concentrations in river waters developed for the MT3DMS/MODFLOW model of the western Lake Taupo catchment, New Zealand. Tritium has a known pulse-shaped input to groundwater systems due to the bomb tritium in the early 1960s and, with its radioactive half-life of 12.32 yr, allows for the determination of the groundwater age. In the transport model, the tritium input (measured in rainfall) passes through the groundwater system, and the simulated tritium concentrations are matched to the measured tritium concentrations in the river and stream outlets for the Waihaha, Whanganui, Whareroa, Kuratau and Omori catchments from 2000-2007. For the Kuratau River, tritium was also measured between 1960 and 1970, which allowed us to fine-tune the transport model for the simulated bomb-peak tritium concentrations. In order to incorporate small surface water features in detail, an 80 m uniform grid cell size was selected in the steady-state MODFLOW model for the model area of 1072 km2. The groundwater flow model was first calibrated to groundwater levels and stream baseflow observations. Then, the transient tritium transport MT3DMS model was matched to the measured tritium concentrations in streams and rivers, which are the natural discharge of the groundwater system. The tritium concentrations in the rivers and streams correspond to the residence time of the water in the groundwater system (groundwater age) and mixing of water with different age. The transport model output showed a good agreement with the measured tritium values. Finally, the tritium-calibrated MT3DMS model is applied to simulate groundwater ages, which are used to obtain groundwater age distributions with mean residence times (MRTs) in streams and rivers for the five catchments. The effect of regional and local hydrogeology on the simulated groundwater ages is investigated by demonstrating groundwater ages at five model cross-sections to better understand MRTs simulated with tritium-calibrated MT3DMS and lumped parameter models.
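The link between tritium input, groundwater age, and measured stream concentrations can be sketched with a simple lumped-parameter convolution: the rainfall tritium history is weighted by an assumed (here exponential) transit-time distribution and decayed with the 12.32 yr half-life quoted above. This is only the conceptual complement to the MT3DMS calibration, not the model used in the study, and the synthetic input series is a placeholder.

```python
import numpy as np

HALF_LIFE = 12.32                       # tritium half-life in years
DECAY = np.log(2.0) / HALF_LIFE

def stream_tritium(input_tu, mean_residence_time, dt=1.0):
    """Convolve a yearly rainfall tritium series (TU) with an exponential
    transit-time distribution and apply radioactive decay."""
    ages = np.arange(len(input_tu)) * dt
    weights = np.exp(-ages / mean_residence_time) / mean_residence_time * dt
    kernel = weights * np.exp(-DECAY * ages)
    # output[t] = sum over age a of input[t - a] * kernel[a]
    full = np.convolve(input_tu, kernel)
    return full[:len(input_tu)]

# Synthetic bomb-peak-like input: low background plus a spike in the early 1960s
years = np.arange(1950, 2008)
rain_tu = 2.0 + 60.0 * np.exp(-0.5 * ((years - 1964) / 2.0) ** 2)
print(stream_tritium(rain_tu, mean_residence_time=25.0)[-8:])  # 2000-2007 values
```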
Beam dynamics simulation of a double pass proton linear accelerator
Hwang, Kilean; Qiang, Ji
2017-04-03
A recirculating superconducting linear accelerator with the advantages of both straight and circular accelerators has been demonstrated with relativistic electron beams. The acceleration concept of a recirculating proton beam was recently proposed and is currently under study. In order to further support the concept, a beam dynamics study on a recirculating proton linear accelerator has to be carried out. In this paper, we study the feasibility of a two-pass recirculating proton linear accelerator through direct numerical beam dynamics design optimization and start-to-end simulation. This study shows that two-pass simultaneous focusing without particle losses is attainable, including fully 3D space-charge effects through the entire accelerator system.
A 3D Reconstruction Strategy of Vehicle Outline Based on Single-Pass Single-Polarization CSAR Data.
Leping Chen; Daoxiang An; Xiaotao Huang; Zhimin Zhou
2017-11-01
In the last few years, interest in circular synthetic aperture radar (CSAR) acquisitions has arisen as a consequence of the potential achievement of 3D reconstructions over 360° azimuth angle variation. In real-world scenarios, full 3D reconstructions of arbitrary targets need multi-pass data, which makes the processing complex, costly, and time-consuming. In this paper, we propose a processing strategy for the 3D reconstruction of vehicles, which avoids using multi-pass data by introducing a priori information about the vehicle's shape. Moreover, the proposed strategy requires only single-pass, single-polarization CSAR data to perform the vehicle's 3D reconstruction, which makes the processing much more economical and efficient. First, an analysis of the distribution of attributed scattering centers from a vehicle facet model is presented. The analysis shows that a smooth and continuous basic outline of the vehicle can be extracted from the peak curve of a noncoherent processing image. Second, the 3D location of the vehicle roofline is inferred from layover with empirical insets of the basic outline. Finally, the basic outline and roofline of the vehicle are used to estimate the vehicle's 3D information and constitute the vehicle's 3D outline. The simulated and measured data processing results prove the correctness and effectiveness of our proposed strategy.
Electron Beam Welding of IN792 DS: Effects of Pass Speed and PWHT on Microstructure and Hardness.
Angella, Giuliano; Barbieri, Giuseppe; Donnini, Riccardo; Montanari, Roberto; Richetta, Maria; Varone, Alessandra
2017-09-05
Electron Beam (EB) welding has been used to realize seams on 2-mm-thick plates of directionally solidified (DS) IN792 superalloy. The first part of this work evidenced the importance of pre-heating the workpiece to avoid the formation of long cracks in the seam. The comparison of different pre-heating temperatures (PHT) and pass speeds (v) allowed the identification of optimal process parameters, namely PHT = 300 °C and v = 2.5 m/min. The microstructural features of the melted zone (MZ), the heat-affected zone (HAZ), and base material (BM) were investigated by optical microscopy (OM), scanning electron microscopy (SEM), energy dispersion spectroscopy (EDS), electron back-scattered diffraction (EBSD), X-ray diffraction (XRD), and micro-hardness tests. In the as-welded condition, the structure of directionally oriented grains was completely lost in MZ. The γ' phase in MZ consisted of small (20-40 nm) round-shaped particles and its total amount depended on both PHT and welding pass speed, whereas in the HAZ it was the same as in the BM. Even though the amount of γ' phase in MZ was lower than that of the as-received material, the nanometric size of the particles induced an increase in hardness. EDS examinations did not show relevant composition changes in the γ' and γ phases. Post-welding heat treatments (PWHT) at 700 and 750 °C for two hours were performed on the best samples. After PWHTs, the amount of the ordered phase increased, and the effect was more pronounced at 750 °C, while the size of γ' particles in MZ remained almost the same. The hardness profiles measured across the joints showed an upward shift, but the peak-valley height was a little lower, indicating more homogeneous features in the different zones.
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II
1984-01-01
A study area near Ribeirao Preto in Sao Paulo state was selected, with a predominance of sugar cane. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites in order to acquire the necessary parameters. Two groups of four channels were selected from 12 channels using JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas, with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that, for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures.
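For reference, the Jeffries-Matusita (JM) distance used as a channel-selection criterion is commonly computed from the Bhattacharyya distance between two Gaussian class signatures; one common convention is J = 2(1 − e^(−B)). The sketch below implements that form and is not the exact selection procedure of the study; the class means and covariances are hypothetical.

```python
import numpy as np

def jm_distance(mean1, cov1, mean2, cov2):
    """Jeffries-Matusita distance between two Gaussian class signatures,
    using J = 2 * (1 - exp(-B)) with B the Bhattacharyya distance."""
    mean1, mean2 = np.asarray(mean1, float), np.asarray(mean2, float)
    cov_avg = 0.5 * (np.asarray(cov1, float) + np.asarray(cov2, float))
    diff = mean1 - mean2
    term1 = 0.125 * diff @ np.linalg.solve(cov_avg, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov_avg)
                         / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 2.0 * (1.0 - np.exp(-(term1 + term2)))

# Two hypothetical 4-channel class signatures
m1, m2 = np.array([40, 55, 60, 30]), np.array([45, 50, 70, 28])
c1 = np.diag([25.0, 30.0, 40.0, 20.0])
c2 = np.diag([20.0, 35.0, 45.0, 22.0])
print(jm_distance(m1, c1, m2, c2))
```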
By-Pass Diode Temperature Tests of a Solar Array Coupon under Space Thermal Environment Conditions
NASA Technical Reports Server (NTRS)
Wright, Kenneth H.; Schneider, Todd A.; Vaughn, Jason A.; Hoang, Bao; Wong, Frankie; Wu, Gordon
2016-01-01
By-Pass diodes are a key design feature of solar arrays and system design must be robust against local heating, especially with implementation of larger solar cells. By-Pass diode testing was performed to aid thermal model development for use in future array designs that utilize larger cell sizes that result in higher string currents. Testing was performed on a 56-cell Advanced Triple Junction solar array coupon provided by SSL. Test conditions were vacuum with cold array backside using discrete by-pass diode current steps of 0.25 A ranging from 0 A to 2.0 A.
CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.
ERIC Educational Resources Information Center
Skrein, Dale
1994-01-01
CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)
NASA Technical Reports Server (NTRS)
Wiehle, S.; Plaschke, F.; Motschmann, U.; Glassmeier, K. H.; Auster, H. U.; Angelopoulos, V.; Mueller, J.; Kriegel, H.; Georgescu, E.; Halekas, J.;
2011-01-01
The spacecraft P1 of the new ARTEMIS (Acceleration, Reconnection, Turbulence, and Electrodynamics of the Moon's Interaction with the Sun) mission passed the lunar wake for the first time on February 13, 2010. We present magnetic field and plasma data of this event and results of 3D hybrid simulations. As the solar wind magnetic field was highly dynamic during the passage, a simulation with stationary solar wind input cannot distinguish whether distortions were caused by these solar wind variations or by the lunar wake; therefore, a dynamic real-time simulation of the flyby has been performed. The input values of this simulation are taken from NASA OMNI data and adapted to the P1 data, resulting in a good agreement between simulation and measurements. Combined with the stationary simulation showing non-transient lunar wake structures, a separation of solar wind and wake effects is achieved. An anisotropy in the magnitude of the plasma bulk flow velocity caused by a non-vanishing magnetic field component parallel to the solar wind flow and perturbations created by counterstreaming ions in the lunar wake are observed in data and simulations. The simulations help to interpret the data granting us the opportunity to examine the entire lunar plasma environment and, thus, extending the possibilities of measurements alone: A comparison of a simulation cross section to theoretical predictions of MHD wave propagation shows that all three basic MHD modes are present in the lunar wake and that their expansion governs the lunar wake refilling process.
NASA Astrophysics Data System (ADS)
Gillies, R. G.; Yau, A. W.; James, H. G.; Hussey, G. C.; McWilliams, K. A.
2014-12-01
The enhanced Polar Outflow Probe (ePOP) Canadian small-satellite was launched in September 2013. Included in this suite of eight scientific instruments is the Radio Receiver Instrument (RRI). The RRI has been used to measure VLF and HF radio waves from various ground and spontaneous ionospheric sources. The first dedicated ground transmission that was detected by RRI was from the Saskatoon Super Dual Auroral Radar Network (SuperDARN) radar on Nov. 7, 2013 at 14 MHz. Several other passes over the Saskatoon SuperDARN radar have been recorded since then. Ground transmissions have also been observed from other radars, such as the SPEAR, HAARP, and SURA ionospheric heaters. However, the focus of this study will be on the results obtained from the SuperDARN passes. An analysis of the signal recorded by the RRI provides estimates of signal power, Doppler shift, polarization, absolute time delay, differential mode delay, and angle of arrival. By comparing these parameters to similar parameters derived from ray tracing simulations, ionospheric electron density structures may be detected and measured. Further analysis of the results from the other ground transmitters and future SuperDARN passes will be used to refine these results.
1990-12-01
nadir radiometer viewing angle. The reference standard was a 25.4 cm x 25.4 cm x 1.0 cm pressed Halon "Spectralon" plate that was backed by a 0.5 cm...against the sphere’s sample port. Light transmitted through the leaf was trapped in the sample chamber and did not pass back into the integrating sphere...leaf layers. The leaves were added to the back of the stack, so leaf #1 was always the first leaf in the stack. Each spectrum was taken in the lower 1
Compact, Two-Sided Structural Cold Plate Configuration
NASA Technical Reports Server (NTRS)
Zaffetti, Mark
2011-01-01
In two-sided structural cold plates, typically there is a structural member, such as a honeycomb panel, that provides the structural strength for the cold plates that cool equipment. The cold plates are located on either side of the structural member and thus need to have the cooling fluid supplied to them. One method of accomplishing this is to route the inlet and outlet tubing to both sides of the structural member. Another method might be to supply the inlet to one side and the outlet to the other. With the latter method, an external feature such as a hose, tube, or manifold must be incorporated to pass the fluid from one side of the structural member to the other. Although this is a more compact design than the first option, since it eliminates the need for a dedicated supply and return line to each side of the structural member, it still poses problems, as these external features can be easily damaged and are now new areas for potential fluid leakage. This invention eliminates the need for an external feature and instead incorporates the feature internally to the structural member. This is accomplished by utilizing a threaded insert that not only connects the cold plate to the structural member, but also allows the cooling fluid to flow through it into the structural member, and then to the cold plate on the opposite side. The insert also employs a cap that acts as a cover to seal the open area needed to install the insert. There are multiple options for location of o-ring style seals, as well as the option to use adhesive for redundant sealing. Another option is to weld the cap to the cold plate after its installation, thus making it an integral part of the structural member. This new configuration allows the fluid to pass from one cold plate to the other without any exposed external features.
Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
The next generation of scalable network simulators employs virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations could be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual time scheduling of VMs in network simulators, based on actual prototyped implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
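To make the idea of virtual-time scheduling concrete, here is a toy sketch (not the authors' scheduler) in which the VM whose virtual clock is furthest behind always runs next, advancing by a fixed quantum; the class name and quantum value are invented for illustration.

```python
import heapq

class VirtualTimeScheduler:
    """Toy illustration: always run the VM whose virtual clock is furthest
    behind, advancing it one quantum at a time, so no VM's virtual time
    races ahead of the others."""

    def __init__(self, vm_ids, quantum=1.0):
        self.quantum = quantum
        # heap of (virtual_time, vm_id); the least-advanced VM runs next
        self.heap = [(0.0, vm) for vm in vm_ids]
        heapq.heapify(self.heap)

    def step(self, run_vm):
        """Pick the least-advanced VM, 'run' it for one quantum via the
        callback run_vm(vm_id, quantum), then reinsert it."""
        vtime, vm = heapq.heappop(self.heap)
        run_vm(vm, self.quantum)
        heapq.heappush(self.heap, (vtime + self.quantum, vm))
        return vm, vtime + self.quantum

# Example: three VMs advanced strictly in virtual-time order
sched = VirtualTimeScheduler(["vm0", "vm1", "vm2"], quantum=0.5)
for _ in range(6):
    print(sched.step(lambda vm, q: None))
```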
Factors Associated with First-Pass Success in Pediatric Intubation in the Emergency Department.
Goto, Tadahiro; Gibo, Koichiro; Hagiwara, Yusuke; Okubo, Masashi; Brown, David F M; Brown, Calvin A; Hasegawa, Kohei
2016-03-01
The objective of this study was to investigate the factors associated with first-pass success in pediatric intubation in the emergency department (ED). We analyzed the data from two multicenter prospective studies of ED intubation in 17 EDs between April 2010 and September 2014. The studies prospectively measured patients' age, sex, principal indication for intubation, methods (e.g., rapid sequence intubation [RSI]), devices, and intubator's level of training and specialty. To evaluate independent predictors of first-pass success, we fit a logistic regression model with generalized estimating equations. In the sensitivity analysis, we repeated the analysis in children <10 years. A total of 293 children aged ≤18 years who underwent ED intubation were eligible for the analysis. The overall first-pass success rate was 60% (95% CI [54%-66%]). In the multivariable model, age ≥10 years (adjusted odds ratio [aOR], 2.45; 95% CI [1.23-4.87]), use of RSI (aOR, 2.17; 95% CI [1.31-3.57]), and intubation attempt by an emergency physician (aOR, 3.21; 95% CI [1.78-5.83]) were significantly associated with a higher chance of first-pass success. Likewise, in the sensitivity analysis, the use of RSI (aOR, 3.05; 95% CI [1.63-5.70]) and intubation attempt by an emergency physician (aOR, 4.08; 95% CI [1.92-8.63]) were significantly associated with a higher chance of first-pass success. Based on two large multicenter prospective studies of ED airway management, we found that older age, use of RSI, and intubation by emergency physicians were the independent predictors of a higher chance of first-pass success in children. Our findings should facilitate investigations to develop optimal airway management strategies in critically ill children in the ED.
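As a hedged sketch of the analysis technique named above (logistic regression with generalized estimating equations to account for clustering of patients within EDs), the following uses statsmodels on synthetic data; the column names and the synthetic data frame are assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data frame, one row per intubation; column names are illustrative.
rng = np.random.default_rng(0)
n = 293
df = pd.DataFrame({
    "first_pass":   rng.integers(0, 2, n),   # 1 = first-pass success
    "age_ge10":     rng.integers(0, 2, n),   # 1 = age >= 10 years
    "rsi":          rng.integers(0, 2, n),   # 1 = rapid sequence intubation used
    "em_physician": rng.integers(0, 2, n),   # 1 = emergency physician intubator
    "ed_id":        rng.integers(1, 18, n),  # clustering variable (17 EDs)
})

# Logistic regression fit with GEE to account for within-ED clustering
model = smf.gee("first_pass ~ age_ge10 + rsi + em_physician",
                groups="ed_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(np.exp(result.params))      # adjusted odds ratios
print(np.exp(result.conf_int()))  # 95% confidence intervals
```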
Unsteady Aerodynamic Force Sensing from Strain Data
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2017-01-01
A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.
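A minimal sketch of the smoothing-and-differentiation step described above, assuming a sampled deflection time history: a zero-phase Butterworth low-pass filter suppresses noise, and a sliding least-squares polynomial fit (Savitzky-Golay) supplies analytic first and second derivatives. This stands in for, but is not, the paper's on-line ARMA parameter estimator; the sample rate and cutoff are assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

fs = 200.0                      # sample rate, Hz (assumed)
dt = 1.0 / fs
t = np.arange(0.0, 5.0, dt)
# Synthetic deflection history with measurement noise
deflection = 0.01 * np.sin(2 * np.pi * 2.0 * t) + 1e-4 * np.random.randn(t.size)

# Zero-phase Butterworth low-pass filter to suppress measurement noise
b, a = butter(4, 10.0 / (fs / 2.0))          # 10 Hz cutoff (assumed)
smoothed = filtfilt(b, a, deflection)

# Local cubic least-squares fits; deriv=1 and deriv=2 give analytic derivatives
velocity = savgol_filter(smoothed, window_length=31, polyorder=3, deriv=1, delta=dt)
acceleration = savgol_filter(smoothed, window_length=31, polyorder=3, deriv=2, delta=dt)
```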
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a Computational-Fluid-Dynamics code on parallel computers to simulate three-dimensional spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and was then converted for use on parallel computers using the conventional message-passing technique, although we have not yet been able to compile the code with the present version of HPF compilers.
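The conventional message-passing technique mentioned above amounts to decomposing the domain across processes and exchanging ghost cells each time step. A minimal mpi4py sketch of a 1-D halo exchange is shown below; the original code was Fortran 77, so this is purely illustrative and the buffer sizes are assumed.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nlocal = 16                                  # interior cells per rank (assumed)
u = np.zeros(nlocal + 2)                     # +2 ghost cells at the ends
u[1:-1] = rank                               # fill the interior with rank-specific data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange ghost cells with neighbours before each explicit time step
comm.Sendrecv(sendbuf=u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
```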
Evidence for a Right-Ear Advantage in Newborn Hearing Screening Results.
Ari-Even Roth, Daphne; Hildesheimer, Minka; Roziner, Ilan; Henkin, Yael
2016-12-06
The aim of the present study was to investigate the effect of ear asymmetry, order of testing, and gender on transient-evoked otoacoustic emission (TEOAE) pass rates and response levels in newborn hearing screening. The screening results of 879 newborns, of whom 387 (study group) passed screening successfully in only one ear in the first TEOAE screening, but passed screening successfully in both ears thereafter, and 492 (control group) who passed screening successfully in both ears in the first TEOAE, were retrospectively examined for pass rates and TEOAE characteristics. Results indicated a right-ear advantage, as manifested by significantly higher pass rates in the right ear (61% and 39% for right and left ears, respectively) in the study group, and in 1.75 dB greater TEOAE response amplitudes in the control group. The right-ear advantage was enhanced when the first tested ear was the right ear (76%). When the left ear was tested first, pass rates were comparable in both ears. The right-ear advantage in pass rates was similar in females versus males, but manifested in 1.5 dB higher response amplitudes in females compared with males, regardless of the tested ear and order of testing in both study and control groups. The study provides further evidence for the functional lateralization of the auditory system at the cochlear level already apparent soon after birth in both males and females. While order of testing plays a significant role in the asymmetry in pass rates, the innate right-ear advantage seems to be a more dominant contributor. © The Author(s) 2016.
Evidence for a Right-Ear Advantage in Newborn Hearing Screening Results
Hildesheimer, Minka; Roziner, Ilan; Henkin, Yael
2016-01-01
The aim of the present study was to investigate the effect of ear asymmetry, order of testing, and gender on transient-evoked otoacoustic emission (TEOAE) pass rates and response levels in newborn hearing screening. The screening results of 879 newborns, of whom 387 (study group) passed screening successfully in only one ear in the first TEOAE screening, but passed screening successfully in both ears thereafter, and 492 (control group) who passed screening successfully in both ears in the first TEOAE, were retrospectively examined for pass rates and TEOAE characteristics. Results indicated a right-ear advantage, as manifested by significantly higher pass rates in the right ear (61% and 39% for right and left ears, respectively) in the study group, and in 1.75 dB greater TEOAE response amplitudes in the control group. The right-ear advantage was enhanced when the first tested ear was the right ear (76%). When the left ear was tested first, pass rates were comparable in both ears. The right-ear advantage in pass rates was similar in females versus males, but manifested in 1.5 dB higher response amplitudes in females compared with males, regardless of the tested ear and order of testing in both study and control groups. The study provides further evidence for the functional lateralization of the auditory system at the cochlear level already apparent soon after birth in both males and females. While order of testing plays a significant role in the asymmetry in pass rates, the innate right-ear advantage seems to be a more dominant contributor. PMID:27927982
An analysis of physician antitrust exemption legislation: adjusting the balance of power.
Hellinger, F J; Young, G J
2001-07-04
Current antitrust law restricts physicians from joining together to collectively negotiate. However, such activities may be approved by state laws under the so-called state action immunity doctrine and by federal legislation under an explicit antitrust exemption. In 1999, Texas became the first state to pass physician antitrust exemption legislation allowing physicians, under certain defined circumstances, to collectively negotiate fees with health plans. Last year, similar legislation was introduced in the US Congress, in 18 state legislatures, and in the District of Columbia. This legislation was passed only in the District of Columbia where its implementation was blocked by the city's financial control board. Nonetheless, legislation permitting physicians to collectively negotiate fees with managed care plans has been introduced in 10 state legislatures this year, and there is continued interest in introducing similar legislation in the US Congress. This analysis examines the basic features of this legislation and its potential impact on the balance of power between physicians and managed care plans.
Ihlefeld, Antje; Litovsky, Ruth Y
2012-01-01
Spatial release from masking refers to a benefit for speech understanding. It occurs when a target talker and a masker talker are spatially separated. In those cases, speech intelligibility for target speech is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least for a situation where there are few viable alternative segregation cues.
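A rough single-band sketch of the stimulus processing described above (a 50-Hz low-pass filtered speech envelope imposed on a noise carrier) is given below; real vocoding is done per frequency band, and the sample rate and test signal here are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def vocode_band(speech, fs, cutoff_hz=50.0, rng=None):
    """Single-band sketch: extract the speech envelope, low-pass it at
    cutoff_hz to remove temporal pitch cues, and impose it on a noise carrier."""
    if rng is None:
        rng = np.random.default_rng(0)
    envelope = np.abs(hilbert(speech))              # temporal envelope
    b, a = butter(4, cutoff_hz / (fs / 2.0))        # 50-Hz low-pass filter
    slow_env = np.clip(filtfilt(b, a, envelope), 0.0, None)
    carrier = rng.standard_normal(speech.size)      # noise carrier
    return slow_env * carrier

# Toy "speech" signal: a modulated tone standing in for a real utterance
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
speech = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = vocode_band(speech, fs)
```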
Simulation of the pentose cycle in lactating rat mammary gland
Haut, Michael J.; London, Jack W.; Garfinkel, David
1974-01-01
A computer model representing the pentose cycle, the tricarboxylic acid cycle and glycolysis in slices of lactating rat mammary glands has been constructed. This model is based primarily on the studies, with radioactive chemicals, of Abraham & Chaikoff (1959) [although some of the discrepant data of Katz & Wals (1972) could be accommodated by changing one enzyme activity]. Data obtained by using [1-14C]-, [6-14C]- and [3,4-14C]-glucose were simulated, as well as data obtained by using unlabelled glucose (for which some new experimental data are presented). Much past work on the pentose cycle has been mainly concerned with the division of glucose flow between the pentose cycle and glycolysis, and has relied on the assumption that the system is in steady state (both labelled and unlabelled). This assumption may not apply to lactating rat mammary glands, since the model shows that the percentage flow through the shunt progressively decreased for the first 2h of a 3h experiment, and we were unable to construct a completely steady-state model. The model allows examination of many quantitative features of the system, especially the amount of material passing through key enzymes, some of which appear to be regulated by NADP+ concentrations as proposed by McLean (1960). Supplementary information for this paper has been deposited as Supplementary Publication SUP 50023 at the British Museum (Lending Division) (formerly the National Lending Library for Science and Technology), Boston Spa, Yorks. LS23 7BQ, U.K., from whom copies can be obtained on the terms indicated in Biochem. J. (1973) 131, 5. PMID:4154746
Meador, M.R.; McIntyre, J.P.; Pollock, K.H.
2003-01-01
Two-pass backpack electrofishing data collected as part of the U.S. Geological Survey's National Water-Quality Assessment Program were analyzed to assess the efficacy of single-pass backpack electrofishing. A two-capture removal model was used to estimate, within 10 river basins across the United States, proportional fish species richness from one-pass electrofishing and probabilities of detection for individual fish species. Mean estimated species richness from first-pass sampling (ps1) ranged from 80.7% to 100% of estimated total species richness for each river basin, based on at least seven samples per basin. However, ps1 values for individual sites ranged from 40% to 100% of estimated total species richness. Additional species unique to the second pass were collected in 50.3% of the samples. Of these, cyprinids and centrarchids were collected most frequently. Proportional fish species richness estimated for the first pass increased significantly with decreasing stream width for 1 of the 10 river basins. When used to calculate probabilities of detection of individual fish species, the removal model failed 48% of the time because the number of individuals of a species was greater in the second pass than in the first pass. Single-pass backpack electrofishing data alone may make it difficult to determine whether characterized fish community structure data are real or spurious. The two-pass removal model can be used to assess the effectiveness of sampling species richness with a single electrofishing pass. However, the two-pass removal model may have limited utility to determine probabilities of detection of individual species and, thus, limit the ability to assess the effectiveness of single-pass sampling to characterize species relative abundances. Multiple-pass (at least three passes) backpack electrofishing at a large number of sites may not be cost-effective as part of a standardized sampling protocol for large-geographic-scale studies. However, multiple-pass electrofishing at some sites may be necessary to better evaluate the adequacy of single-pass electrofishing and to help make meaningful interpretations of fish community structure.
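For reference, a minimal implementation of the standard two-pass removal estimator (Seber-LeCren form, assumed here to correspond to the two-capture removal model used in the study) is shown below; it also exhibits the failure mode noted above when the second-pass catch equals or exceeds the first.

```python
def two_pass_removal(c1, c2):
    """Two-pass removal estimator: estimates capture probability p and
    abundance N from the first- and second-pass catches c1 and c2.
    The model is undefined when c2 >= c1, the failure mode noted above."""
    if c2 >= c1:
        raise ValueError("Removal model undefined: second-pass catch >= first-pass catch")
    p_hat = (c1 - c2) / c1          # estimated capture probability per pass
    n_hat = c1 ** 2 / (c1 - c2)     # estimated abundance
    return p_hat, n_hat

# Example: 40 fish caught on the first pass, 15 on the second
print(two_pass_removal(40, 15))     # p ~ 0.62, N ~ 64
```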
RECURRENT SOLAR JETS INDUCED BY A SATELLITE SPOT AND MOVING MAGNETIC FEATURES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jie; Su, Jiangtao; Yin, Zhiqiang
2015-12-10
Recurrent and homologous jets were observed at the west edge of active region NOAA 11513 at the boundary of a coronal hole. We find two kinds of cancellations between opposite-polarity magnetic fluxes that induce the generation of recurrent jets. First, a satellite spot continuously collides with a pre-existing opposite-polarity magnetic field and causes recurrent solar jets. Second, moving magnetic features, which emerge near the sunspot penumbra, pass through the ambient plasma and eventually collide with the opposite-polarity magnetic field. Among these recurrent jets, a blowout jet that occurred around 21:10 UT is investigated. The rotation of the pre-existing magnetic field and the shear motion of the satellite spot accumulate magnetic energy, which creates the possibility for the jet to evolve from a standard jet into a blowout jet.
Studies of Particle Wake Potentials in Plasmas
NASA Astrophysics Data System (ADS)
Ellis, Ian; Graziani, Frank; Glosli, James; Strozzi, David; Surh, Michael; Richards, David; Decyk, Viktor; Mori, Warren
2011-10-01
Fast Ignition studies require a detailed understanding of electron scattering, stopping, and energy deposition in plasmas with variable values for the number of particles within a Debye sphere. Presently there is disagreement in the literature concerning the proper description of these processes. Developing and validating proper descriptions requires studying the processes using first-principle electrostatic simulations and possibly including magnetic fields. We are using the particle-particle particle-mesh (PPPM) code ddcMD and the particle-in-cell (PIC) code BEPS to perform these simulations. As a starting point in our study, we examine the wake of a particle passing through a plasma in 3D electrostatic simulations performed with ddcMD and with BEPS using various cell sizes. In this poster, we compare the wakes we observe in these simulations with each other and predictions from Vlasov theory. Prepared by LLNL under Contract DE-AC52-07NA27344 and by UCLA under Grant DE-FG52-09NA29552.
Laboratory Demonstration of Axicon-Lens Coronagraph
NASA Astrophysics Data System (ADS)
Choi, Jaeho; Jea, Geonho
2018-01-01
We present the results of laboratory-based experiments on the proposed axicon-lens coronagraph, used in conjunction with a method of noninterferometric quantitative phase imaging for direct imaging of exoplanets. Light passing through tiny holes drilled in a thin metal plate is used to simulate a star and its companions. The light diffracted at the edges of the holes resembles the light from a bright star. These images are evaginated about the optical axis beyond the maximum focal length of the first axicon lens. The evaginated images are then cut off using a motorized iris, which preferentially suppresses the central stellar light. Various separations between the holes, representing different angular distances, are examined. The laboratory results show that the axicon-lens coronagraph can achieve an inner working angle (IWA) smaller than λ/D together with high-contrast direct imaging. The laboratory-based axicon-lens coronagraph imaging supports the earlier symbolic-computation results and has potential for direct imaging of exoplanets and various astrophysical targets. The coronagraph setup is simple to build and durable to operate. Moreover, it can relay the planet images to a broadband spectrometric instrument able to investigate the constituents of the planetary system.
Blocking Mechanism Study of Self-Compacting Concrete Based on Discrete Element Method
NASA Astrophysics Data System (ADS)
Zhang, Xuan; Li, Zhida; Zhang, Zhihua
2017-11-01
In order to study the factors influencing the blocking mechanism of Self-Compacting Concrete (SCC), Roussel's granular blocking model was verified and extended by establishing a discrete element model of SCC. The influence of different parameters on the filling capacity and blocking mechanism of SCC was also investigated. The results showed that it is feasible to simulate the blocking mechanism of SCC using the Discrete Element Method (DEM). The passing ability of pebble aggregate was superior to that of gravel aggregate, and the passing ability of hexahedral particles was greater than that of tetrahedral particles, although the tetrahedral-particle simulations were closer to the actual situation. The flow rate of SCC was another significant factor affecting passing ability: as the flow rate increased, the passing ability increased. A correction coefficient λ for the steel arrangement (channel section shape) and the flow rate γ were introduced into the blocking model; the value of λ was 0.90-0.95 and the maximum casting rate was 7.8 L/min.
Effective resolution concepts for lidar observations
NASA Astrophysics Data System (ADS)
Iarlori, M.; Madonna, F.; Rizi, V.; Trickl, T.; Amodeo, A.
2015-05-01
Since its establishment in 2000, EARLINET (European Aerosol Research Lidar NETwork) has been devoted to providing, through its database, exclusively quantitative aerosol properties, such as aerosol backscatter and aerosol extinction coefficients, the latter only for stations able to retrieve it independently (from Raman or High Spectral Resolution Lidars). As these coefficients are provided as vertical profiles, the EARLINET database must also include details on the range resolution of the submitted data. In fact, the algorithms used in lidar data analysis often alter the spectral content of the data, mainly acting as low-pass filters for the purpose of noise damping. Low-pass filters are described mathematically by Digital Signal Processing (DSP) theory as a convolution sum. As a consequence, each filter's output at a given range (or time) will be a linear combination of several lidar input data points at different ranges (times) before and after the given range (time): a first hint of the loss of resolution of the output signal. The application of filtering processes will also always distort the underlying true profile, whose relevant features, like aerosol layers, will then be affected both in magnitude and in spatial extension. Thus, both the removal of noise and the spatial distortion of the true profile produce a reduction of the range resolution. This paper provides a determination of the effective resolution (ERes) of the vertical profiles of aerosol properties retrieved from lidar data. Particular attention has been paid to assessing the impact of low-pass filtering on the effective range resolution in the retrieval procedure.
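The point about low-pass filters acting as convolution sums can be illustrated with a short sketch: a moving-average filter applied to a synthetic profile both smooths and widens a thin layer, so the effective resolution is coarser than the raw sampling. The raw range resolution, layer width, and filter length below are assumptions.

```python
import numpy as np

# A low-pass filter is a convolution sum: each filtered value mixes information
# from neighbouring range bins, so sharp features are smoothed and widened.
dz = 7.5                                   # raw range resolution, m (assumed)
z = np.arange(0, 3000, dz)
profile = np.exp(-z / 1500.0)              # smooth background
profile[(z > 1200) & (z < 1260)] += 1.0    # thin aerosol layer (assumed width)

window = np.ones(21) / 21.0                # 21-point moving average (low-pass)
filtered = np.convolve(profile, window, mode="same")

# The layer's apparent width grows by roughly the filter length, so the
# effective resolution is coarser than the raw 7.5 m sampling.
```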
Direct Numerical Simulation of A Shaped Hole Film Cooling Flow
NASA Astrophysics Data System (ADS)
Oliver, Todd; Moser, Robert
2015-11-01
The combustor exit temperatures in modern gas turbine engines are generally higher than the melting temperature of the turbine blade material. Film cooling, where cool air is fed through holes in the turbine blades, is one strategy which is used extensively in such engines to reduce heat transfer to the blades and thus reduce their temperature. While these flows have been investigated both numerically and experimentally, many features are not yet well understood. For example, the geometry of the hole is known to have a large impact on downstream cooling performance. However, the details of the flow in the hole, particularly for geometries similar to those used in practice, are generally not well understood, both because it is difficult to experimentally observe the flow inside the hole and because much of the numerical literature has focused on round-hole simulations. In this work, we show preliminary direct numerical simulation results for a film cooling flow passing through a shaped hole into the boundary layer developing on a flat plate. The case has a density ratio of 1.6, a blowing ratio of 2.0, and the Reynolds number (based on momentum thickness) of the incoming boundary layer is approximately 600. We compare the new simulations against both previous experiments and LES.
Reddy, Tyler; Manrique, Santiago; Buyan, Amanda; Hall, Benjamin A; Chetwynd, Alan; Sansom, Mark S P
2014-01-21
Receptor tyrosine kinases are single-pass membrane proteins that form dimers within the membrane. The interactions of their transmembrane domains (TMDs) play a key role in dimerization and signaling. Fibroblast growth factor receptor 3 (FGFR3) is of interest as a G380R mutation in its TMD is the underlying cause of ~99% of the cases of achondroplasia, the most common form of human dwarfism. The structural consequences of this mutation remain uncertain: the mutation shifts the position of the TMD relative to the lipid bilayer but does not alter the association free energy. We have combined coarse-grained and all-atom molecular dynamics simulations to study the dimerization of wild-type, heterodimer, and mutant FGFR3 TMDs. The simulations reveal that the helices pack together in the dimer to form a flexible interface. The primary packing mode is mediated by a Gx3G motif. There is also a secondary dimer interface that is more highly populated in heterodimer and mutant configurations that may feature in the molecular mechanism of pathology. Both coarse-grained and atomistic simulations reveal a significant shift of the G380R mutant dimer TMD relative to the bilayer to allow interactions of the arginine side chain with lipid headgroup phosphates.
Reddy, Tyler; Manrique, Santiago; Buyan, Amanda; Hall, Benjamin A.; Chetwynd, Alan; Sansom, Mark S.P.
2016-01-01
Receptor tyrosine kinases are single-pass membrane proteins which form dimers within the membrane. The interactions of their transmembrane domains (TMDs) play a key role in dimerization and signaling. The fibroblast growth factor receptor 3 (FGFR3) is of interest as a G380R mutation in its TMD is the underlying cause of ~99% of cases of achondroplasia, the most common form of human dwarfism. The structural consequences of this mutation remain uncertain: the mutation shifts the position of the TMD relative to the lipid bilayer but does not alter the association free energy. We have combined coarse-grained and all-atom molecular dynamics simulations to study the dimerization of wild-type, heterodimer, and mutant FGFR3 TMDs. The simulations reveal that the helices pack together in the dimer to form a flexible interface. The primary packing mode is mediated by a Gx3G motif. There is also a secondary dimer interface, which is more highly populated in heterodimer and mutant configurations and may feature in the molecular mechanism of pathology. Both coarse-grained and atomistic simulations reveal a significant shift of the G380R mutant dimer TMD relative to the bilayer so as to enable interactions of the arginine side chain with lipid headgroup phosphates. PMID:24397339
Electrical features of eighteen automated external defibrillators: a systematic evaluation.
Kette, Fulvio; Locatelli, Aldo; Bozzola, Marcella; Zoli, Alberto; Li, Yongqin; Salmoiraghi, Marco; Ristagno, Giuseppe; Andreassi, Aida
2013-11-01
We assessed and compared the electrical parameters (energy, current, and first- and second-phase waveform duration) of eighteen AEDs. Engineering bench tests were performed for a descriptive systematic evaluation of commercially available AEDs. AEDs were tested using an ECG simulator, an impedance simulator, an oscilloscope, and a measuring device detecting delivered energy, peak and average current, and the duration of the first and second phases of the biphasic waveforms. All tests were performed at the engineering facility of the Lombardia Regional Emergency Service (AREU). Large variations in the energy delivered at the first shock were observed. The trend of current highlighted a progressive decline concurrent with increases in impedance. First- and second-phase durations varied substantially among the AEDs using the exponential biphasic waveform, unlike rectilinear-waveform AEDs in which phase duration remained relatively constant. There is large variability in the electrical features of the AEDs tested. Energy is likely not the best indicator for strength dose selection. Current and shock duration should both be considered when assessing the technical features of AEDs. These findings may prompt further investigations to define the optimal current and duration of the shock waves to increase the success rate in the clinical setting. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Front panel engineering with CAD simulation tool
NASA Astrophysics Data System (ADS)
Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe
1999-04-01
The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry, and lighting efficiency creates new challenges for designers. Photometric design is limited by the ability to correctly predict the result of a lighting system, which would save the cost and time of building multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method for lighting systems, developed in the software SPEOS. The main features required of the tool include a CAD interface, to enable fast and efficient transfer between mechanical and light design software, source modeling, a light transfer model, and an optimization tool. The CAD interface is mainly a prototype for data transfer, which is not the subject here. Photometric simulation is achieved efficiently by using measured source encoding and simulation by the Monte Carlo method. Today, the advantages and limitations of the Monte Carlo method are well known. Noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, because of the long calculation time required for each optimization pass that includes a Monte Carlo simulation. The problem was initially defined as an engineering method of study. Experience shows that a good understanding and mastery of light transfer is limited by the complexity of non-sequential propagation; the engineer must call on the help of a simulation and optimization tool. The key requirement for efficient optimization is a quick method for simulating light transfer. Much work has been done in this area and some interesting results can be observed. The Monte Carlo method wastes time calculating results and information that are not required for the needs of the simulation, and low-efficiency transfer systems cost a great deal of computation time. More generally, light transfer simulation can be treated efficiently when the integrated result is composed of elementary sub-results based on quickly computed analytical intersections. Two axes of research thus appear: fast integration, and fast calculation of geometric intersections. The first axis brings some general solutions that are also valid for multi-reflection systems. The second axis requires careful consideration of the intersection calculation. An interesting approach is the subdivision of space into voxels, an adapted method of 3D division of space according to the objects and their locations. Experimental software has been developed to validate the method; the gain is particularly high in complex systems, and an important reduction in calculation time has been achieved.
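The cost argument above, that Monte Carlo noise reduction is slow, especially for low-efficiency transfer systems, can be illustrated with a toy estimate (not the SPEOS algorithm): the standard error of an estimated transfer efficiency falls only as 1/sqrt(N), so low efficiencies demand many rays. The 2% efficiency below is an assumed value.

```python
import numpy as np

# Toy Monte Carlo flux estimate: rays are traced from a source and only a
# small fraction reaches the detector; the standard error of the estimated
# efficiency falls only as 1/sqrt(N).
rng = np.random.default_rng(1)
true_efficiency = 0.02                 # 2% of rays reach the detector (assumed)

for n_rays in (10_000, 100_000, 1_000_000):
    hits = rng.random(n_rays) < true_efficiency
    estimate = hits.mean()
    std_err = np.sqrt(estimate * (1 - estimate) / n_rays)
    print(f"N={n_rays:>9}: efficiency ~ {estimate:.4f} +/- {std_err:.4f}")
```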
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass August 14, 2012... Division (``MBSD'') runs its first processing pass of the day from 2 p.m. to 4 p.m. Eastern Standard Time... MBSD intends to move the time at which it runs its first processing pass of the day (historically...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dieterich, S; Trestrail, E; Holt, R
2015-06-15
Purpose: To assess if the TrueBeam HD120 collimator is delivering small IMRT fields accurately and consistently throughout the course of treatment using the SunNuclear PerFraction software. Methods: 7-field IMRT plans for 8 canine patients who passed IMRT QA using SunNuclear Mapcheck DQA were selected for this study. The animals were setup using CBCT image guidance. The EPID fluence maps were captured for each treatment field and each treatment fraction, with the first fraction EPID data serving as the baseline for comparison. The Sun Nuclear PerFraction Software was used to compare the EPID data for subsequent fractions using a Gamma (3%/3mm) pass rate of 90%. To simulate requirements for SRS, the data was reanalyzed using a Gamma (3%/1mm) pass rate of 90%. Low-dose, low- and high gradient thresholds were used to focus the analysis on clinically relevant parts of the dose distribution. Results: Not all fractions could be analyzed, because during some of the treatment courses the DICOM tags in the EPID images intermittently change from CU to US (unspecified), which would indicate a temporary loss of EPID calibration. This technical issue is still being investigated. For the remaining fractions, the vast majority (7/8 of patients, 95% of fractions, and 96.6% of fields) are passing the less stringent Gamma criteria. The more stringent Gamma criteria caused a drop in pass rate (90% of fractions, 84% of fields). For the patient with the lowest pass rate, wet towel bolus was used. Another patient with low pass rates experienced masseter muscle wasting. Conclusion: EPID dosimetry using the PerFraction software demonstrated that the majority of fields passed a Gamma (3%/3mm) for IMRT treatments delivered with a TrueBeam HD120 MLC. Pass rates dropped for a DTA of 1mm to model SRS tolerances. PerFraction pass rates can flag missing bolus or internal shields. Sanjeev Saini is an employee of Sun Nuclear Corporation. For this study, a pre-release version of PerFRACTION 1.1 software from Sun Nuclear Corporation was used.
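For orientation, a simplified 1-D global gamma analysis is sketched below; it is illustrative only and not the PerFraction algorithm. The profiles, grid spacing, and criteria values are assumptions, but it shows why tightening the distance-to-agreement from 3 mm to 1 mm lowers the pass rate.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dose_crit=0.03, dta_mm=3.0):
    """Simplified 1-D global gamma analysis: for each reference point, find the
    minimum combined dose-difference / distance-to-agreement metric over all
    evaluated points, then report the percentage of points with gamma <= 1."""
    x = np.arange(dose_ref.size) * spacing_mm
    norm = dose_crit * dose_ref.max()            # global dose criterion
    gammas = np.empty(dose_ref.size)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist = (x - xi) / dta_mm
        ddiff = (dose_eval - di) / norm
        gammas[i] = np.sqrt(dist ** 2 + ddiff ** 2).min()
    return 100.0 * np.mean(gammas <= 1.0)

# Example: a 1 mm-shifted profile evaluated with 3%/3 mm and 3%/1 mm criteria
x = np.arange(0, 100, 1.0)
ref = np.exp(-((x - 50) / 15.0) ** 2)
ev = np.exp(-((x - 51) / 15.0) ** 2)
print(gamma_pass_rate(ref, ev, spacing_mm=1.0, dta_mm=3.0))
print(gamma_pass_rate(ref, ev, spacing_mm=1.0, dta_mm=1.0))   # stricter DTA
```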
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
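To illustrate the pyramid idea only (coarser layers carry fewer coefficients, with a final low-pass residual), here is a generic two-band pyramid on a square lattice; it does not use the hexagonal lattice or the seven oriented HOP kernels described above.

```python
import numpy as np

def two_band_pyramid(image, levels=3):
    """Generic low-pass/high-pass pyramid on a square lattice (not the HOP
    transform): each level keeps a high-pass detail layer and passes a 2x2
    block-averaged low-pass image to the next level."""
    layers = []
    current = image.astype(float)
    for _ in range(levels):
        h, w = (current.shape[0] // 2) * 2, (current.shape[1] // 2) * 2
        current = current[:h, :w]
        low = current.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # low-pass
        high = current - np.kron(low, np.ones((2, 2)))                 # detail
        layers.append(high)
        current = low
    layers.append(current)          # final low-pass (blob-like) residual
    return layers

layers = two_band_pyramid(np.random.rand(64, 64))
print([layer.shape for layer in layers])   # coefficient counts shrink per level
```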
Fingeret, Abbey L; Arnell, Tracey; McNelis, John; Statter, Mindy; Dresner, Lisa; Widmann, Warren
We sought to determine whether sequential participation in a multi-institutional mock oral examination affected the likelihood of passing the American Board of Surgery Certifying Examination (ABSCE) on the first attempt. Residents from 3 academic medical centers were able to participate in a regional mock oral examination in the fall and spring of their fourth and fifth postgraduate year from 2011 to 2014. Each candidate's highest composite score across all mock oral attempts was classified as at risk for failure, intermediate, or likely to pass. Factors including United States Medical Licensing Examination steps 1, 2, and 3, number of cases logged, American Board of Surgery In-Training Examination performance, American Board of Surgery Qualifying Examination (ABSQE) performance, number of attempts, and performance in the mock orals were assessed to determine factors predictive of passing the ABSCE. A total of 128 mock oral examinations were administered to 88 (71%) of 124 eligible residents. The overall first-time pass rate for the ABSCE was 82%. There was no difference in pass rates between participants and nonparticipants. Of the participants, 16 (18%) residents were classified as at risk, 47 (53%) as intermediate, and 25 (29%) as likely to pass. The ABSCE pass rate for each group was as follows: 36% for at risk, 84% for intermediate, and 96% for likely pass. The following 4 factors were associated with first-time passing of ABSCE on bivariate analysis: mock orals participation in postgraduate year 4 (p = 0.05), sequential participation in mock orals (p = 0.03), ABSQE performance (p = 0.01), and best performance on mock orals (p = 0.001). In multivariable logistic regression, the following 3 factors remained associated with ABSCE passing: ABSQE performance, odds ratio (OR) = 2.9 (95% CI: 1.3-6.1); mock orals best performance, OR = 1.7 (1.2-2.4); and participation in multiple mock oral examinations, OR = 1.4 (1.1-2.7). Performance on a multi-institutional mock oral examination can identify residents at risk for failure of the ABSCE. Sequential participation in mock oral examinations is associated with an improved ABSCE first-time pass rate. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Bloodgood, Robert A; Short, Jerry G; Jackson, John M; Martindale, James R
2009-05-01
To measure the impact of a change in grading system in the first two years of medical school, from graded (A, B, C, D, F) to pass/fail, on medical students' academic performance, attendance, residency match, satisfaction, and psychological well-being. For both the graded and pass/fail classes, objective data were collected on academic performance in the first- and second-year courses, the clerkships, United States Medical Licensing Examination (USMLE) Steps 1 and 2 Clinical Knowledge (CK), and residency placement. Self-report data were collected using a Web survey (which included the Dupuy General Well-Being Schedule) administered each of the first four semesters of medical school. The study was conducted from 2002 to 2007 at the University of Virginia School of Medicine. The pass/fail class exhibited a significant increase in well-being during each of the first three semesters of medical school relative to the graded class, greater satisfaction with the quality of their medical education during the first four semesters of medical school, and greater satisfaction with their personal lives during the first three semesters of medical school. The graded and pass/fail classes showed no significant differences in performance in first- and second-year courses, grades in clerkships, scores on USMLE Step 1 and Step 2CK, success in residency placement, and attendance at academic activities. A change in grading from letter grades to pass/fail in the first two years of medical school conferred distinct advantages to medical students, in terms of improved psychological well-being and satisfaction, without any reduction in performance in courses or clerkships, USMLE test scores, success in residency placement, or level of attendance.
Influence of carbohydrate supplementation on skill performance during a soccer match simulation.
Russell, Mark; Benton, David; Kingsley, Michael
2012-07-01
This study investigated the influence of carbohydrate supplementation on skill performance throughout exercise that replicates soccer match-play. Experimentation was conducted in a randomised, double-blind and cross-over study design. After familiarization, 15 professional academy soccer players completed a soccer match simulation incorporating passing, dribbling and shooting on two separate occasions. Participants received a 6% carbohydrate-electrolyte solution (CHO) or electrolyte solution (PL). Precision, success rate, ball speed and an overall index (speed-precision-success; SPS) were determined for all skills. Blood samples were taken at rest, immediately before exercise, every 15 min during exercise (first half: 15, 30 and 45 min; second half: 60, 75 and 90 min), and 10 min into the half time (half-time). Carbohydrate supplementation influenced shooting (time×treatment interaction: p<0.05), where CHO attenuated the decline in shot speed and SPS index. Supplementation did not affect passing or dribbling. Blood glucose responses to exercise were influenced by supplementation (time×treatment interaction: p<0.05), where concentrations were higher at 45 min and during half-time in CHO compared with PL. Blood glucose concentrations reduced by 30±1% between half-time and 60 min in CHO. Carbohydrate supplementation attenuated decrements in shooting performance during simulated soccer match-play; however, further research is warranted to optimise carbohydrate supplementation regimes for high-intensity intermittent sports. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Electron holes observed in the Moon Plasma Wake
NASA Astrophysics Data System (ADS)
Hutchinson, I. H.; Malaspina, D.; Zhou, C.
2017-10-01
Electrostatic instabilities are predicted in the magnetized wake of plasma flowing past a non-magnetic absorbing object such as a probe or the moon. Analysis of the data from the Artemis satellites, now orbiting the moon at distances ten moon radii and less, shows very clear evidence of fast-moving isolated solitary potential structures causing bipolar electric field excursions as they pass the satellite's probes. These structures have all the hallmarks of electron holes: BGK solitons typically a few Debye-lengths in size, self-sustaining by a deficit of phase-space density on trapped orbits. Electron holes are now observed to be widespread in space plasmas. They have been observed in PIC simulations of the moon wake to be the non-linear consequence of the predicted electron instabilities. Simulations document hole prevalence, speed, length, and depth; and theory can explain many of these features from kinetic analysis. The solar wind wake is certainly the cause of the overwhelming majority of the holes observed by Artemis, because we observe almost all holes to be in or very near to the wake. We compare theory and simulation of the hole generation, lifetime, and transport mechanisms with observations. Work partially supported by NASA Grant NNX16AG82G.
Tracking Blade Tip Vortices for Numerical Flow Simulations of Hovering Rotorcraft
NASA Technical Reports Server (NTRS)
Kao, David L.
2016-01-01
Blade tip vortices generated by a helicopter rotor blade are a major source of rotor noise and airframe vibration. This occurs when a vortex passes closely by, and interacts with, a rotor blade. The accurate prediction of Blade Vortex Interaction (BVI) continues to be a challenge for Computational Fluid Dynamics (CFD). Though considerable research has been devoted to BVI noise reduction and experimental techniques for measuring the blade tip vortices in a wind tunnel, there are only a handful of post-processing tools available for extracting vortex core lines from CFD simulation data. In order to calculate the vortex core radius, most of these tools require the user to manually select a vortex core to perform the calculation. Furthermore, none of them provide the capability to track the growth of a vortex core, which is a measure of how quickly the vortex diffuses over time. This paper introduces an automated approach for tracking the core growth of a blade tip vortex from CFD simulations of rotorcraft in hover. The proposed approach offers an effective method for the quantification and visualization of blade tip vortices in helicopter rotor wakes. Keywords: vortex core, feature extraction, CFD, numerical flow visualization
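A simplified 2-D sketch of the kind of post-processing described above is given below: the vortex core is located at the vorticity maximum and the core radius estimated as the radius of maximum azimuthally averaged swirl velocity, using a synthetic Lamb-Oseen field. This is not the paper's extraction pipeline, and all numbers are assumed.

```python
import numpy as np

n = 201
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
rc = 0.15                       # Lamb-Oseen length scale (assumed); the radius of
                                # maximum swirl velocity is approximately 1.12*rc
R = np.sqrt(X**2 + Y**2) + 1e-12
Vt = (1.0 / R) * (1.0 - np.exp(-(R / rc) ** 2))    # tangential velocity profile
U, V = -Vt * Y / R, Vt * X / R                     # Cartesian velocity components

dx = x[1] - x[0]
vorticity = np.gradient(V, dx, axis=1) - np.gradient(U, dx, axis=0)
iy, ix = np.unravel_index(np.argmax(vorticity), vorticity.shape)
print("core centre:", x[ix], x[iy])                # ~ (0, 0)

# Azimuthally averaged swirl velocity in radial bins; its peak marks the core radius
r_bins = np.arange(0.0, 0.6, 2 * dx)
vt_mean = [Vt[(R >= r) & (R < r + 2 * dx)].mean() for r in r_bins]
print("estimated core radius:", r_bins[int(np.argmax(vt_mean))])   # near 1.12*rc
```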
NASA Astrophysics Data System (ADS)
Vijayanand, V. D.; Vasudevan, M.; Ganesan, V.; Parameswaran, P.; Laha, K.; Bhaduri, A. K.
2016-06-01
Creep deformation and rupture behavior of single-pass and dual-pass 316LN stainless steel (SS) weld joints fabricated by an autogenous activated tungsten inert gas welding process have been assessed by performing metallography, hardness, and conventional and impression creep tests. The fusion zone of the single-pass joint consisted of columnar zones adjacent to base metals with a central equiaxed zone, which have been modified extensively by the thermal cycle of the second pass in the dual-pass joint. The equiaxed zone in the single-pass joint, as well as in the second pass of the dual-pass joint, displayed the lowest hardness in the joints. In the dual-pass joint, the equiaxed zone of the first pass had hardness comparable to the columnar zone. The hardness variations in the joints influenced the creep deformation. The equiaxed and columnar zone in the first pass of the dual-pass joint was more creep resistant than that of the second pass. Both joints possessed lower creep rupture life than the base metal. However, the creep rupture life of the dual-pass joint was about twofolds more than that of the single-pass joint. Creep failure in the single-pass joint occurred in the central equiaxed fusion zone, whereas creep cavitation that originated in the second pass was blocked at the weld pass interface. The additional interface and strength variation between two passes in the dual-pass joint provides more restraint to creep deformation and crack propagation in the fusion zone, resulting in an increase in the creep rupture life of the dual-pass joint over the single-pass joint. Furthermore, the differences in content, morphology, and distribution of delta ferrite in the fusion zone of the joints favors more creep cavitation resistance in the dual-pass joint over the single-pass joint with the enhancement of creep rupture life.
A probabilistic model of cross-categorization.
Shafto, Patrick; Kemp, Charles; Mansinghka, Vikash; Tenenbaum, Joshua B
2011-07-01
Most natural domains can be represented in multiple ways: we can categorize foods in terms of their nutritional content or social role, animals in terms of their taxonomic groupings or their ecological niches, and musical instruments in terms of their taxonomic categories or social uses. Previous approaches to modeling human categorization have largely ignored the problem of cross-categorization, focusing on learning just a single system of categories that explains all of the features. Cross-categorization presents a difficult problem: how can we infer categories without first knowing which features the categories are meant to explain? We present a novel model that suggests that human cross-categorization is a result of joint inference about multiple systems of categories and the features that they explain. We also formalize two commonly proposed alternative explanations for cross-categorization behavior: a features-first and an objects-first approach. The features-first approach suggests that cross-categorization is a consequence of attentional processes, where features are selected by an attentional mechanism first and categories are derived second. The objects-first approach suggests that cross-categorization is a consequence of repeated, sequential attempts to explain features, where categories are derived first, then features that are poorly explained are recategorized. We present two sets of simulations and experiments testing the models' predictions about human categorization. We find that an approach based on joint inference provides the best fit to human categorization behavior, and we suggest that a full account of human category learning will need to incorporate something akin to these capabilities. Copyright © 2011 Elsevier B.V. All rights reserved.
First Pass Effect: A New Measure for Stroke Thrombectomy Devices.
Zaidat, Osama O; Castonguay, Alicia C; Linfante, Italo; Gupta, Rishi; Martin, Coleman O; Holloway, William E; Mueller-Kronast, Nils; English, Joey D; Dabus, Guilherme; Malisch, Tim W; Marden, Franklin A; Bozorgchami, Hormozd; Xavier, Andrew; Rai, Ansaar T; Froehler, Michael T; Badruddin, Aamir; Nguyen, Thanh N; Taqi, M Asif; Abraham, Michael G; Yoo, Albert J; Janardhan, Vallabh; Shaltoni, Hashem; Novakovic, Roberta; Abou-Chebl, Alex; Chen, Peng R; Britz, Gavin W; Sun, Chung-Huan J; Bansal, Vibhav; Kaushal, Ritesh; Nanda, Ashish; Nogueira, Raul G
2018-03-01
In acute ischemic stroke, fast and complete recanalization of the occluded vessel is associated with improved outcomes. We describe a novel measure for newer generation devices: the first pass effect (FPE). FPE is defined as achieving a complete recanalization with a single thrombectomy device pass. The North American Solitaire Acute Stroke Registry database was used to identify a FPE subgroup. Their baseline features and clinical outcomes were compared with non-FPE patients. Clinical outcome measures included 90-days modified Rankin Scale score, National Institutes of Health Stroke Scale score, mortality, and symptomatic intracranial hemorrhage. Multivariate analyses were performed to determine whether FPE independently resulted in improved outcomes and to identify predictors of FPE. A total of 354 acute ischemic stroke patients underwent thrombectomy in the North American Solitaire Acute Stroke registry. FPE was achieved in 89 out of 354 (25.1%). More middle cerebral artery occlusions (64% versus 52.5%) and fewer internal carotid artery occlusions (10.1% versus 27.7%) were present in the FPE group. Balloon guide catheters were used more frequently with FPE (64.0% versus 34.7%). Median time to revascularization was significantly faster in the FPE group (median 34 versus 60 minutes; P =0.0003). FPE was an independent predictor of good clinical outcome (modified Rankin Scale score ≤2 was seen in 61.3% in FPE versus 35.3% in non-FPE cohort; P =0.013; odds ratio, 1.7; 95% confidence interval, 1.1-2.7). The independent predictors of achieving FPE were use of balloon guide catheters and non-internal carotid artery terminus occlusion. The achievement of complete revascularization from a single Solitaire thrombectomy device pass (FPE) is associated with significantly higher rates of good clinical outcome. The FPE is more frequently associated with the use of balloon guide catheters and less likely to be achieved with internal carotid artery terminus occlusion. © 2018 American Heart Association, Inc.
Emergency Medical Technician Training for Medical Students: A Two-Year Experience.
Blackwell, Thomas H; Halsey, R Maglin; Reinovsky, Jennifer H
2016-01-01
New medical school curricula encourage early clinical experiences along with clinical and biomedical integration. The University of South Carolina School of Medicine Greenville, one of the new expansion schools, was established in 2011 with the first class matriculating in 2012. To promote clinical skills early in the curriculum, emergency medical technician (EMT) training was included and begins in the first semester. Along with the early clinical exposure, the program introduces interprofessional health care teams and provides the opportunity for students to personally see and appreciate the wide variety of environments from which their future patients emanate. This report describes the EMT program and the changes made after the first class, which were designed to integrate EMT training with the biomedical sciences, and assesses the value of these integrative changes using objective criteria. A two-year retrospective study was conducted that involved the first two classes of medical students. Baseline student data and pass rates from the psychomotor skill and written components of the State examination were used to determine if students performed better in the integrated, prolonged course. There were 53 students in the first class and 54 in the second. Of the 51 students in the first class and 53 students in the second class completing the state psychomotor and written examination, 20 (39%) in the first class and 17 (32%) in the second passed on the initial psychomotor skill attempt; however, more students passed within the first three attempts in the second class than in the first class, 51 (96%) versus 45 (88%), respectively. All students passed within five attempts. For the written examination, 50 (98%) students in the first class and 51 (96%) in the second class passed on the first attempt. All students passed by the third attempt. Pass rates on both components of the State examination were not significantly different between classes. Medical students who received their EMT training in a 6-week, non-integrated format performed similarly on the EMT State certification examination to those who received their training in a prolonged, integrated structure.
Supply Chain Simulator: A Scenario-Based Educational Tool to Enhance Student Learning
ERIC Educational Resources Information Center
Siddiqui, Atiq; Khan, Mehmood; Akhtar, Sohail
2008-01-01
Simulation-based educational products are excellent set of illustrative tools that proffer features like visualization of the dynamic behavior of a real system, etc. Such products have great efficacy in education and are known to be one of the first-rate student centered learning methodologies. These products allow students to practice skills such…
Study for the dispersion of double-diffraction spectrometers
NASA Astrophysics Data System (ADS)
Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Huang, Zhanhua; Xu, Mingming; Jin, Guofan
2018-01-01
Double-cascade spectrometers and double-pass spectrometers can be uniformly called double-diffraction spectrometers. In current double-diffraction spectrometer design theory, the differences in the incident angles at the second diffraction are ignored, so there is a significant difference between the theoretical design and the actual result. In this study, based on the geometries of double-diffraction spectrometers, we rigorously derived theoretical formulas for their dispersion. The theoretical model was verified with the ZEMAX simulation software, and the simulation results show good agreement with our theoretical formulas. Based on these conclusions, a double-pass spectrometer was set up and tested, and the experimental results agree with the theoretical model and the simulation.
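A hedged numerical sketch of the geometric point above: the grating equation is applied twice, with the incidence angle of the second diffraction coupled to the exit angle of the first through an assumed fixed fold angle rather than being held constant. The groove density, angles, and wavelength are illustrative values, not the paper's design.

```python
import numpy as np

def exit_angle(theta_i, wavelength, d, m=1):
    """Grating equation d*(sin(theta_i) + sin(theta_m)) = m*lambda."""
    return np.arcsin(m * wavelength / d - np.sin(theta_i))

d = 1.0 / 600e3                      # 600 lines/mm groove spacing, in metres (assumed)
theta_i1 = np.deg2rad(30.0)          # first incidence angle (assumed)
fold = np.deg2rad(15.0)              # assumed geometric coupling between the two passes

def total_exit(wavelength):
    t1 = exit_angle(theta_i1, wavelength, d)
    theta_i2 = t1 + fold             # second incidence depends on the first exit angle
    return exit_angle(theta_i2, wavelength, d)

# Numerical angular dispersion d(theta)/d(lambda) around 550 nm
lam, dlam = 550e-9, 0.1e-9
dispersion = (total_exit(lam + dlam) - total_exit(lam - dlam)) / (2 * dlam)
print(dispersion * 1e-9, "rad per nm")
```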
Using the arthroscopic surgery skill evaluation tool as a pass-fail examination.
Koehler, Ryan J; Nicandri, Gregg T
2013-12-04
Examination of arthroscopic skill requires evaluation tools that are valid and reliable with clear criteria for passing. The Arthroscopic Surgery Skill Evaluation Tool was developed as a video-based assessment of technical skill with criteria for passing established by a panel of experts. The purpose of this study was to test the validity and reliability of the Arthroscopic Surgery Skill Evaluation Tool as a pass-fail examination of arthroscopic skill. Twenty-eight residents and two sports medicine faculty members were recorded performing diagnostic knee arthroscopy on a left and right cadaveric specimen in our arthroscopic skills laboratory. Procedure videos were evaluated with use of the Arthroscopic Surgery Skill Evaluation Tool by two raters blind to subject identity. Subjects were considered to pass the Arthroscopic Surgery Skill Evaluation Tool when they attained scores of ≥ 3 on all eight assessment domains. The raters agreed on a pass-fail rating for fifty-five of sixty videos rated with an interclass correlation coefficient value of 0.83. Ten of thirty participants were assigned passing scores by both raters for both diagnostic arthroscopies performed in the laboratory. Receiver operating characteristic analysis demonstrated that logging more than eighty arthroscopic cases or performing more than thirty-five arthroscopic knee cases was predictive of attaining a passing Arthroscopic Surgery Skill Evaluation Tool score on both procedures performed in the laboratory. The Arthroscopic Surgery Skill Evaluation Tool is valid and reliable as a pass-fail examination of diagnostic arthroscopy of the knee in the simulation laboratory. This study demonstrates that the Arthroscopic Surgery Skill Evaluation Tool may be a useful tool for pass-fail examination of diagnostic arthroscopy of the knee in the simulation laboratory. Further study is necessary to determine whether the Arthroscopic Surgery Skill Evaluation Tool can be used for the assessment of multiple arthroscopic procedures and whether it can be used to evaluate arthroscopic procedures performed in the operating room.
NASA Technical Reports Server (NTRS)
Goodelle, G. S.; Brooks, G. R.; Seaman, C. H.
1981-01-01
The development and implementation of an instrument for spectral measurement of solar simulators used in testing solar cell characteristics is reported. The device was constructed to detect changes in solar simulator behavior and to compare simulator spectral irradiance with the solar AM0 output. It consists of standard solar cells, each equipped with a narrow band-pass filter; when a sufficient number of such cells, carrying filters of different bandpass ratings, is flown on a balloon to sufficient altitude, the entire spectral response of the standard cell can be determined. The short-circuit currents measured during the balloon flights thus yield calibrated cell devices whose current under solar simulator light either does or does not respond as observed under actual AM0 conditions. Improvements to the filtered cells, in terms of finer bandpass filter tuning and measurement of temperature coefficients, are indicated.
Jin, Hang; Yun, Hong; Ma, Jianying; Chen, Zhangwei; Chang, Shufu; Zeng, Mengsu
2016-01-01
To assess magnetic resonance imaging (MRI) features of coronary microembolization in a swine model induced by small-sized microemboli, which may cause microinfarcts invisible to the naked eye. Eleven pigs underwent intracoronary injection of small-sized microspheres (42 µm) and catheter coronary angiography was obtained before and after microembolization. Cardiac MRI and measurement of cardiac troponin T (cTnT) were performed at baseline, 6 hours, and 1 week after microembolization. Postmortem evaluation was performed after completion of the imaging studies. Coronary angiography pre- and post-microembolization revealed normal epicardial coronary arteries. Systolic wall thickening of the microembolized regions decreased significantly from 42.6 ± 2.0% at baseline to 20.3 ± 2.3% at 6 hours and 31.5 ± 2.1% at 1 week after coronary microembolization (p < 0.001 for both). First-pass perfusion defect was visualized at 6 hours but the extent was largely decreased at 1 week. Delayed contrast enhancement MRI (DE-MRI) demonstrated hyperenhancement within the target area at 6 hours but not at 1 week. The microinfarcts on gross specimen stained with nitrobluetetrazolium chloride were invisible to the naked eye and only detectable microscopically. Increased cTnT was observed at 6 hours and 1 week after microembolization. Coronary microembolization induced by a certain load of small-sized microemboli may result in microinfarcts invisible to the naked eye with normal epicardial coronary arteries. MRI features of myocardial impairment secondary to such microembolization include the decline in left ventricular function and myocardial perfusion at cine and first-pass perfusion imaging, and transient hyperenhancement at DE-MRI.
Microscopic motion of particles flowing through a porous medium
NASA Astrophysics Data System (ADS)
Lee, Jysoo; Koplik, Joel
1999-01-01
Stokesian dynamics simulations are used to study the microscopic motion of particles suspended in fluids passing through porous media. Model porous media with fixed spherical particles are constructed, and mobile ones move through this fixed bed under the action of an ambient velocity field. The pore-scale motion of individual suspended particles at pore junctions is first considered. The relative particle flux into the different possible directions exiting a single pore, for two- and three-dimensional model porous media, is found to approximately equal the corresponding fractional channel width or area. Next, the waiting time distribution for particles which are delayed in a junction due to a stagnation point caused by a flow bifurcation is considered. The waiting times are found to be controlled by two-particle interactions, and the distributions take the same form in model porous media as in two-particle systems. A simple theoretical estimate of the waiting time is consistent with the simulations. It is found that perturbing such a slow-moving particle by another nearby one leads to rather complicated behavior. Finally, the stability of geometrically trapped particles is studied. For simple model traps, it is found that particles passing nearby can "relaunch" the trapped particle through its hydrodynamic interaction, although the conditions for relaunching depend sensitively on the details of the trap and its surroundings.
Simulations relevant to the beam instability in the foreshock
NASA Technical Reports Server (NTRS)
Cairns, I. H.; Nishikawa, K.-I.
1989-01-01
The results presently obtained from two-dimensional simulations of the reactive instability for Maxwellian beams and cutoff distributions are noted to be consistent with recent suggestions that electrons backstreaming into earth's foreshock have steep-sided cutoff distributions, which are initially unstable to the reactive instability, and that the back-reaction to the wave growth causes the instability to pass into its kinetic phase. It is demonstrated that the reactive instability is a bunching instability, and that the reactive instability saturates and passes over into the kinetic phase by particle trapping.
Productivity and cost of conventional understory biomass harvesting systems
Douglas E. Miller; Thomas J. Straka; Bryce J. Stokes; William Watson
1987-01-01
Conventional harvesting equipment was tested for removing forest understory biomass (energywood) for use as fuel. Two types of systems were tested--a one-pass system and a two-pass system. In the one-pass system, the energywood and pulpwood were harvested simultaneously. In the two-pass system, the energywood was harvested in a first pass through the stand, and the...
Space Radar Image of Long Valley, California in 3-D
1999-05-01
This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. http://photojournal.jpl.nasa.gov/catalog/PIA01769
Radar range data signal enhancement tracker
NASA Technical Reports Server (NTRS)
1975-01-01
The design, fabrication, and performance characteristics of two digital data signal enhancement filters, which can be inserted between the Space Shuttle navigation sensor outputs and the guidance computer, are described. Commonality of interfaces has been stressed so that the filters may be evaluated through operation with simulated sensors or with actual prototype sensor hardware. The filters provide both smoothed range and range rate outputs. A different conceptual approach is used for each filter. The first filter is based on a combination of a low-pass nonrecursive filter for range and a cascaded simple-average smoother for range rate. The second filter is a tracking filter capable of following transient data of the type encountered during burn periods. A test simulator was also designed which generates typical shuttle navigation sensor data.
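A minimal sketch of the two filtering approaches described (not the original flight code): a nonrecursive (FIR) moving-average low-pass for range with a simple-average smoother for range rate, and an alpha-beta tracker that can follow ramp-like transients. Filter lengths, gains, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fir_lowpass_range(range_samples, taps=8):
    """Nonrecursive (FIR) low-pass: a simple moving average over the last `taps` samples."""
    kernel = np.ones(taps) / taps
    return np.convolve(range_samples, kernel, mode="valid")

def smoothed_range_rate(range_samples, dt, window=8):
    """Simple-average smoother for range rate: average the finite differences over a window."""
    rates = np.diff(range_samples) / dt
    kernel = np.ones(window) / window
    return np.convolve(rates, kernel, mode="valid")

def alpha_beta_tracker(measurements, dt, alpha=0.5, beta=0.1):
    """Tracking filter able to follow transients: classic alpha-beta range/range-rate estimator."""
    x, v = measurements[0], 0.0          # initial range and range-rate estimates
    estimates = []
    for z in measurements:
        x_pred = x + v * dt              # predict ahead one sample
        residual = z - x_pred            # innovation
        x = x_pred + alpha * residual    # correct range
        v = v + (beta / dt) * residual   # correct range rate
        estimates.append((x, v))
    return estimates

# Illustrative use on synthetic noisy range data with a velocity change ("burn"):
dt = 0.1
t = np.arange(0, 20, dt)
truth = 1000.0 - 5.0 * t - 2.0 * np.clip(t - 10, 0, None) ** 2 / 2
meas = truth + np.random.normal(0, 3.0, t.size)
print(fir_lowpass_range(meas)[:3], smoothed_range_rate(meas, dt)[-1], alpha_beta_tracker(meas, dt)[-1])
```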
Effect of an Energy Reservoir on the Atmospheric Propagation of Laser-Plasma Filaments
NASA Astrophysics Data System (ADS)
Eisenmann, Shmuel; Peñano, Joseph; Sprangle, Phillip; Zigler, Arie
2008-04-01
The ability to select and stabilize a single filament during propagation of an ultrashort, high-intensity laser pulse in air makes it possible to examine the longitudinal structure of the plasma channel left in its wake. We present the first detailed measurements and numerical 3-D simulations of the longitudinal plasma density variation in a laser-plasma filament after it passes through an iris that blocks the surrounding energy reservoir. Since no compensation is available from the surrounding background energy, filament propagation is terminated after a few centimeters. For this experiment, simulations indicate that filament propagation is terminated by plasma defocusing and ionization loss, which reduces the pulse power below the effective self-focusing power. With no blockage, a plasma filament length of over a few meters was observed.
Carpenter, Gail A; Gaddam, Sai Chaitanya
2010-04-01
Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/. Copyright 2009 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message-passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI (Message Passing Interface)) combinations will be compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF-based NPB results will be compared with MPI-based NPB results to provide perspective on the performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we will also present NPB (Version 1.0) performance results for the following systems: DEC AlphaServer 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000.
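A minimal sketch (not taken from the NPB codes) of the kind of explicit message passing that HPF's data-distribution directives hide from the programmer, written here with mpi4py; the array sizes and the ghost-cell (halo) exchange pattern are illustrative assumptions.

```python
# Run with something like: mpiexec -n 2 python halo.py  (filename is hypothetical)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a block of a distributed 1-D array plus one ghost cell per side.
local = np.full(10 + 2, float(rank))          # [ghost | owned ... owned | ghost]
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary values with neighbours -- the explicit messaging that an
# HPF compiler would generate automatically from a DISTRIBUTE directive.
comm.Sendrecv(sendbuf=local[1:2],   dest=left,  recvbuf=local[-1:], source=right)
comm.Sendrecv(sendbuf=local[-2:-1], dest=right, recvbuf=local[:1],  source=left)

# A simple stencil update on the owned cells, now that ghost cells are current.
local[1:-1] = 0.5 * (local[:-2] + local[2:])
```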
A cognitive task analysis for dental hygiene.
Cameron, C A; Beemsterboer, P L; Johnson, L A; Mislevy, R J; Steinberg, L S; Breyer, F J
2000-05-01
To be an effective assessment tool, a simulation-based examination must be able to evoke and interpret observable evidence about targeted knowledge, strategies, and skills in a manner that is logical and defensible. Dental Interactive Simulations Corporation's first assessment effort is the development of a scoring algorithm for a simulation-based dental hygiene initial licensure examination. The first phase in developing a scoring system is the completion of a cognitive task analysis (CTA) of the dental hygiene domain. In the first step of the CTA, a specifications map was generated to provide a framework of the tasks and knowledge that are important to the practice of dental hygiene. Using this framework, broad classes of behaviors that would tend to distinguish along the dental hygiene expert-novice continuum were identified. Nine paper-based cases were then designed with the expectation that the solutions of expert, competent, and novice dental hygienists would differ. Interviews were conducted with thirty-one dental hygiene students/practitioners to capture solutions to the paper-based cases. Transcripts of the interviews were analyzed to identify performance features that distinguish among the interviewees on the basis of their expertise. These features were more detailed and empirically grounded than the originating broad classes and better serve to ground the design of a scoring system. The resulting performance features were collapsed into nine major categories: 1) gathering and using information, 2) formulating problems and investigating hypotheses, 3) communication and language, 4) scripting behavior, 5) ethics, 6) patient assessment, 7) treatment planning, 8) treatment, and 9) evaluation. The results of the CTA provide critical information for defining the necessary elements of a simulation-based dental hygiene examination.
NASA Astrophysics Data System (ADS)
Wang, Wei; Bhandari, Sagar; Yi, Wei; Bell, David; Westervelt, Robert; Kaxiras, Efthimios
2012-02-01
Ultra-thin membranes such as graphene[1] are of great importance for basic science and technology applications. Graphene sets the ultimate limit of thinness, demonstrating that a free-standing single atomic layer not only exists but can be extremely stable and strong [2--4]. However, both theory [5, 6] and experiments [3, 7] suggest that the existence of graphene relies on intrinsic ripples that suppress the long-wavelength thermal fluctuations which otherwise spontaneously destroy long range order in a two dimensional system. Here we show direct imaging of the atomic features in graphene including the ripples resolved using monochromatic aberration-corrected transmission electron microscopy (TEM). We compare the images observed in TEM with simulated images based on an accurate first-principles total potential. We show that these atomic scale features can be mapped through accurate first-principles simulations into high resolution TEM contrast. [1] Geim, A. K. & Novoselov, K. S. Nat. Mater. 6, 183-191, (2007). [2] Novoselov, K. S.et al. Science 306, 666-669, (2004). [3] Meyer, J. C. et al. Nature 446, 60-63, (2007). [4] Lee, C., Wei, X. D., Kysar, J. W. & Hone, J. Science 321, 385-388, (2008). [5] Nelson, D. R. & Peliti, L. J Phys-Paris 48, 1085-1092, (1987). [6] Fasolino, A., Los, J. H. & Katsnelson, M. I. Nat. Mater. 6, 858-861, (2007). [7] Meyer, J. C. et al. Solid State Commun. 143, 101-109, (2007).
Simulation-based education with mastery learning improves residents' lumbar puncture skills
Cohen, Elaine R.; Caprio, Timothy; McGaghie, William C.; Simuni, Tanya; Wayne, Diane B.
2012-01-01
Objective: To evaluate the effect of simulation-based mastery learning (SBML) on internal medicine residents' lumbar puncture (LP) skills, assess neurology residents' acquired LP skills from traditional clinical education, and compare the results of SBML to traditional clinical education. Methods: This study was a pretest-posttest design with a comparison group. Fifty-eight postgraduate year (PGY) 1 internal medicine residents received an SBML intervention in LP. Residents completed a baseline skill assessment (pretest) using a 21-item LP checklist. After a 3-hour session featuring deliberate practice and feedback, residents completed a posttest and were expected to meet or exceed a minimum passing score (MPS) set by an expert panel. Simulator-trained residents' pretest and posttest scores were compared to assess the impact of the intervention. Thirty-six PGY2, 3, and 4 neurology residents from 3 medical centers completed the same simulated LP assessment without SBML. SBML posttest scores were compared to neurology residents' baseline scores. Results: PGY1 internal medicine residents improved from a mean of 46.3% to 95.7% after SBML (p < 0.001) and all met the MPS at final posttest. The performance of traditionally trained neurology residents was significantly lower than simulator-trained residents (mean 65.4%, p < 0.001) and only 6% met the MPS. Conclusions: Residents who completed SBML showed significant improvement in LP procedural skills. Few neurology residents were competent to perform a simulated LP despite clinical experience with the procedure. PMID:22675080
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gigley, H.M.
1982-01-01
An artificial intelligence approach to the simulation of neurolinguistically constrained processes in sentence comprehension is developed using control strategies for simulation of cooperative computation in associative networks. The desirability of this control strategy in contrast to ATN and production system strategies is explained. A first pass implementation of HOPE, an artificial intelligence simulation model of sentence comprehension, constrained by studies of aphasic performance, psycholinguistics, neurolinguistics, and linguistic theory is described. Claims that the model could serve as a basis for sentence production simulation and for a model of language acquisition as associative learning are discussed. HOPE is a model that performs in a normal state and includes a lesion simulation facility. HOPE is also a research tool. Its modifiability and use as a tool to investigate hypothesized causes of degradation in comprehension performance by aphasic patients are described. Issues of using behavioral constraints in modelling and obtaining appropriate data for simulated process modelling are discussed. Finally, problems of validation of the simulation results are raised; and issues of how to interpret clinical results to define the evolution of the model are discussed. Conclusions with respect to the feasibility of artificial intelligence simulation process modelling are discussed based on the current state of research.
Higgins, Rana M; Deal, Rebecca A; Rinewalt, Daniel; Hollinger, Edward F; Janssen, Imke; Poirier, Jennifer; Austin, Delores; Rendina, Megan; Francescatti, Amanda; Myers, Jonathan A; Millikan, Keith W; Luu, Minh B
2016-02-01
Determine the utility of mock oral examinations in preparation for the American Board of Surgery certifying examination (ABS CE). Between 2002 and 2012, blinded data were collected on 63 general surgery residents: 4th and 5th-year mock oral examination scores, first-time pass rates on ABS CE, and an online survey. Fifty-seven residents took the 4th-year mock oral examination: 30 (52.6%) passed and 27 (47.4%) failed, with first-time ABS CE pass rates 93.3% and 81.5% (P = .238). Fifty-nine residents took the 5th-year mock oral examination: 28 (47.5%) passed and 31 (52.5%) failed, with first-time ABS CE pass rates 82.1% and 93.5% (P = .240). Thirty-eight responded to the online survey, 77.1% ranked mock oral examinations as very or extremely helpful with ABS CE preparation. Although mock oral examinations and ABS CE passing rates do not directly correlate, residents perceive the mock oral examinations to be helpful. Copyright © 2016 Elsevier Inc. All rights reserved.
First search for extraterrestrial neutrino-induced cascades with IceCube
DOE Office of Scientific and Technical Information (OSTI.GOV)
IceCube Collaboration; Kiryluk, Joanna
2009-05-22
We report on the first search for extraterrestrial neutrino-induced cascades in IceCube. The analyzed data were collected in the year 2007, when 22 detector strings were installed and operated. We will discuss the analysis methods used to reconstruct cascades and to suppress backgrounds. Simulated neutrino signal events with an E^-2 energy spectrum, which pass the background rejection criteria, are reconstructed with a resolution Δ(log E) ≈ 0.27 in the energy range from ≈ 20 TeV to a few PeV. We present the range of the diffuse flux of extraterrestrial neutrinos in the cascade channel in IceCube within which we expect to be able to put a limit.
Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials
Carleton, James B.; D'Amore, Antonio; Feaver, Kristen R.; Rodin, Gregory J.; Sacks, Michael S.
2014-01-01
Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. The present work addresses these issues in two ways. First, using methods of geometric probability we develop theoretical estimates for the mean linear and areal fiber intersection densities for two-dimensional fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of two-dimensional fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of SEM images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. The methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data. PMID:25311685
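A small numerical sketch of the second contribution described above (not the authors' code): generate a planar fiber network with a prescribed orientation distribution and fiber density, then count pairwise fiber intersections to estimate the areal intersection density that the paper's geometric-probability formulas predict. The Gaussian orientation distribution and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fibers(n, length, box, angle_std_deg=25.0):
    """Sample n fiber segments: random centers in a box, orientations ~ N(0, angle_std) about the x-axis."""
    centers = rng.uniform(0, box, size=(n, 2))
    angles = np.deg2rad(rng.normal(0.0, angle_std_deg, size=n))
    half = 0.5 * length * np.column_stack((np.cos(angles), np.sin(angles)))
    return centers - half, centers + half            # endpoints p0, p1 of each fiber

def segments_intersect(p0, p1, q0, q1):
    """Standard orientation test for proper intersection of segments p0-p1 and q0-q1."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q0, q1, p0), cross(q0, q1, p1)
    d3, d4 = cross(p0, p1, q0), cross(p0, p1, q1)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

n_fibers, fiber_len, box = 300, 0.2, 1.0
p0, p1 = sample_fibers(n_fibers, fiber_len, box)

intersections = sum(
    segments_intersect(p0[i], p1[i], p0[j], p1[j])
    for i in range(n_fibers) for j in range(i + 1, n_fibers)
)
areal_density = intersections / box**2               # intersections per unit area
print(f"{intersections} intersections, areal density ≈ {areal_density:.1f} per unit area")
```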
Differential Muon Tomography to Continuously Monitor Changes in the Composition of Subsurface Fluids
NASA Technical Reports Server (NTRS)
Coleman, Max; Kudryavtsev, Vitaly A.; Spooner, Neil J.; Fung, Cora; Gluyas, John
2013-01-01
Muon tomography has been used to seek hidden chambers in Egyptian pyramids and image subsurface features in volcanoes. It seemed likely that it could be used to image injected, supercritical carbon dioxide as it is emplaced in porous geological structures being used for carbon sequestration, and also to check on subsequent leakage. It should work equally well in any other application where there are two fluids of different densities, such as water and oil, or carbon dioxide and heavy oil in oil reservoirs. Continuous monitoring of movement of oil and/or flood fluid during enhanced oil recovery activities for managing injection is important for economic reasons. Checking on leakage for geological carbon storage is essential both for safety and for economic purposes. Current technology (for example, repeat 3D seismic surveys) is expensive and episodic. Muons are generated by high- energy cosmic rays resulting from supernova explosions, and interact with gas molecules in the atmosphere. This innovation has produced a theoretical model of muon attenuation in the thickness of rock above and within a typical sandstone reservoir at a depth of between 1.00 and 1.25 km. Because this first simulation was focused on carbon sequestration, the innovators chose depths sufficient for the pressure there to ensure that the carbon dioxide would be supercritical. This innovation demonstrates for the first time the feasibility of using the natural cosmic-ray muon flux to generate continuous tomographic images of carbon dioxide in a storage site. The muon flux is attenuated to an extent dependent on, amongst other things, the density of the materials through which it passes. The density of supercritical carbon dioxide is only three quarters that of the brine in the reservoir that it displaces. The first realistic simulations indicate that changes as small as 0.4% in the storage site bulk density could be detected (equivalent to 7% of the porosity, in this specific case). The initial muon flux is effectively constant at the surface of the Earth. Sensitivity of the method would be decreased with increasing depth. However, sensitivity can be improved by emplacing a greater array of particle detectors at the base of the reservoir.
Sikkens, Jonne J; Caris, Martine G; Schutte, Tim; Kramer, Mark H H; Tichelaar, Jelle; van Agtmael, Michiel A
2018-05-09
Antimicrobial prescribing behaviour is first established during medical study, but teachers often cite lack of time as an important problem in the implementation of antimicrobial stewardship in the medical curriculum. The use of electronic learning (e-learning) is a potentially time-efficient solution, but its effectiveness in changing long-term prescribing behaviour in medical students is as yet unknown. We performed a prospective controlled intervention study of the long-term effects of a short interactive e-learning course among fourth year medical students in a Dutch university. The e-learning was temporarily implemented as a non-compulsory course during a 6 week period. Six months later, all students underwent an infectious disease-based objective structured clinical examination (OSCE) aimed at simulating postgraduate prescribing. If they passed, each student did the OSCE only once. We created a control group of students from a period when the e-learning was not implemented. Main outcomes were the OSCE pass percentage and knowledge, drug choice and overall scores. We used propensity scores to create equal comparisons. We included 71 students in the intervention group and 285 students in the control group. E-learning participation in the intervention group was 81%. The OSCE pass percentage was 86% in the control group versus 97% in the intervention group (+11%, OR 5.9, 95% CI 1.7-20.0). OSCE overall, knowledge and drug choice grades (1-10) were also significantly higher in the intervention group (differences +0.31, +0.31 and +0.51, respectively). E-learning during a limited period can significantly improve medical students' performance of an antimicrobial therapeutic consultation in a situation simulating clinical practice 6 months later.
Distributed Offline Data Reconstruction in BaBar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulliam, Teela M
The BaBar experiment at SLAC is in its fourth year of running. The data processing system has been continuously evolving to meet the challenges of higher luminosity running and the increasing bulk of data to re-process each year. To meet these goals a two-pass processing architecture has been adopted, where 'rolling calibrations' are quickly calculated on a small fraction of the events in the first pass and the bulk data reconstruction is done in the second. This allows for quick detector feedback in the first pass and allows for the parallelization of the second pass over two or more separate farms. This two-pass system also allows for distribution of processing farms off-site. The first such site has been set up at INFN Padova. The challenges met here were many. The software was ported to a fully Linux-based, commodity hardware system. The raw dataset, 90 TB, was imported from SLAC utilizing a 155 Mbps network link. A system for quality control and export of the processed data back to SLAC was developed. Between SLAC and Padova we are currently running three pass-one farms, with 32 CPUs each, and nine pass-two farms with 64 to 80 CPUs each. The pass-two farms can process between 2 and 4 million events per day. Details about the implementation and performance of the system will be presented.
Community Project for Accelerator Science and Simulation (ComPASS) Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cary, John R.; Cowan, Benjamin M.; Veitzer, S. A.
2016-03-04
Tech-X participated across the full range of ComPASS activities, with efforts in the Energy Frontier primarily through modeling of laser plasma accelerators and dielectric laser acceleration, in the Intensity Frontier primarily through electron cloud modeling, and in Uncertainty Quantification being applied to dielectric laser acceleration. In the following we present the progress and status of our activities for the entire period of the ComPASS project for the different areas of Energy Frontier, Intensity Frontier and Uncertainty Quantification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Panagiotis; /Fermilab; Cary, John
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Chen, Ming-Hung
2015-01-01
This paper proposes a new adaptive filter for wind generators that combines instantaneous reactive power compensation technology and a current prediction controller, and therefore this system is characterized by low harmonic distortion, high power factor, and small DC-link voltage variations during load disturbances. The performance of the system was first simulated using MATLAB/Simulink, and the ability of an adaptive digital low-pass filter to eliminate current harmonics was confirmed in steady and transient states. Subsequently, a digital signal processor was used to implement an active power filter. The experimental results indicate that, for the rated operation of 2 kVA, the system has a total harmonic distortion of current less than 5.0% and a power factor of 1.0 on the utility side. Thus, the transient performance of the adaptive filter is superior to that of the traditional digital low-pass filter, and the filter is more economical because of its short computation time compared with other types of adaptive filters. PMID:26451391
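A minimal sketch of one common way to realize an adaptive harmonic-extraction filter of the general kind described (an LMS adaptive linear combiner locked to the fundamental); it is not the paper's controller, and the line frequency, step size, and signal content are illustrative assumptions.

```python
import numpy as np

# LMS adaptive extraction of the 60 Hz fundamental from a distorted load current.
# The residual (measured minus estimated fundamental) approximates the harmonic
# content that an active power filter would be commanded to cancel.
fs, f0 = 10_000.0, 60.0                      # assumed sample rate and line frequency
t = np.arange(0, 0.2, 1 / fs)
current = (10 * np.sin(2 * np.pi * f0 * t)          # fundamental
           + 2 * np.sin(2 * np.pi * 5 * f0 * t)     # 5th harmonic
           + 1 * np.sin(2 * np.pi * 7 * f0 * t))    # 7th harmonic

w = np.zeros(2)                               # weights for in-phase / quadrature references
mu = 0.01                                     # LMS step size (illustrative)
fundamental_est = np.empty_like(current)
for n, i_meas in enumerate(current):
    x = np.array([np.sin(2 * np.pi * f0 * t[n]),  # reference signals at the fundamental
                  np.cos(2 * np.pi * f0 * t[n])])
    y = w @ x                                 # current estimate of the fundamental
    e = i_meas - y                            # error = harmonics + unmodelled part
    w += 2 * mu * e * x                       # LMS weight update
    fundamental_est[n] = y

harmonic_ref = current - fundamental_est      # compensation reference for the inverter
print("residual RMS after adaptation:", np.sqrt(np.mean(harmonic_ref[-500:] ** 2)))
```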
Computer-aided roll pass design in rolling of airfoil shapes
NASA Technical Reports Server (NTRS)
Akgerman, N.; Lahoti, G. D.; Altan, T.
1980-01-01
This paper describes two computer-aided design (CAD) programs developed for modeling the shape rolling process for airfoil sections. The first program, SHPROL, uses a modular upper-bound method of analysis and predicts the lateral spread, elongation, and roll torque. The second program, ROLPAS, predicts the stresses, roll separating force, the roll torque and the details of metal flow by simulating the rolling process, using the slab method of analysis. ROLPAS is an interactive program; it offers graphic display capabilities and allows the user to interact with the computer via a keyboard, CRT, and a light pen. The accuracy of the computerized models was evaluated by (a) rolling a selected airfoil shape at room temperature from 1018 steel and isothermally at high temperature from Ti-6Al-4V, and (b) comparing the experimental results with computer predictions. The comparisons indicated that the CAD systems, described here, are useful for practical engineering purposes and can be utilized in roll pass design and analysis for airfoil and similar shapes.
NASA Astrophysics Data System (ADS)
Hariyadi, T.; Mulyasari, S.; Mukhidin
2018-02-01
In this paper we have designed and simulated a band-pass filter (BPF) at X-band. The filter is designed for X-band weather radar applications, with a 9500 MHz center frequency and a -3 dB bandwidth of 120 MHz. The filter design was performed using a hairpin microstrip combined with an open stub and a defected ground structure (DGS). The substrate used is Rogers RT5880, with a dielectric constant of 2.2 and a thickness of 1.575 mm. Based on the simulation results, the filter operates over 9.44-9.56 GHz with a pass-band insertion loss of -1.57 dB.
Design and manufacture of super-multilayer optical filters based on PARMS technology
NASA Astrophysics Data System (ADS)
Lü, Shaobo; Wang, Ruisheng; Ma, Jing; Jiang, Chao; Mu, Jiali; Zhao, Shuaifeng; Yin, Xiaojun
2018-04-01
Three multilayer interference optical filters, including a UV band-pass, a VIS dual-band-pass and a notch filter, were designed by using Ta2O5, Nb2O5, Al2O3 and SiO2 as high- and low-index materials. During the design of the coating process, a hybrid optical monitoring and RATE-controlled layer thickness control scheme was adopted. The coating process was simulated by using the optical monitoring system (OMS) Simulator, and the simulation result indicated that the layer thickness can be controlled within an error of less than ±0.1%. The three filters were manufactured on a plasma-assisted reactive magnetic sputtering (PARMS) coating machine. The measurements indicate that for the UV band-pass filter, the peak transmittance is higher than 95% and the blocking density is better than OD6 in the 300-1100 nm region, whereas for the dual-band-pass filter, the center wavelength positioning accuracy of the two passbands is less than ±2 nm, the peak transmittance is higher than 95% and the blocking density is better than OD6 in the 300-950 nm region. Finally, for the notch filter, the minimum transmittance rates are >90% and >94% in the visible and near infrared, respectively, and the blocking density is better than OD5.5 at 808 nm.
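For reference (a standard definition, not specific to this paper), the optical-density blocking values quoted above translate into transmittance limits as follows:

```latex
\mathrm{OD} = -\log_{10} T
\quad\Longrightarrow\quad
\mathrm{OD6} \;\Rightarrow\; T \le 10^{-6},
\qquad
\mathrm{OD5.5} \;\Rightarrow\; T \le 10^{-5.5} \approx 3.2\times10^{-6}.
```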
1986-03-01
pin is along the escape wheel tooth, and is a measure of the distance along the plane of the tooth to its end. [Again, parameter g has a negative... (Reference 2, appendix C gives the details of how these parameters are evaluated.) The parameter f is a measure of the distance between the pallet pin... measures the distance from the pallet pin center to the escape wheel tooth tip along the plane of the tooth. First the parameter f is monitored. If f
Phase behavior of metastable liquid silicon at negative pressure: Ab initio molecular dynamics
NASA Astrophysics Data System (ADS)
Zhao, G.; Yu, Y. J.; Yan, J. L.; Ding, M. C.; Zhao, X. G.; Wang, H. Y.
2016-04-01
Extensive first-principles molecular dynamics simulations are performed to study the phase behavior of metastable liquid Si at negative pressure. Our results show that the high-density liquid (HDL) and HDL-vapor spinodals indeed form a continuous reentrant curve and the liquid-liquid critical point seems to just coincide with its minimum. The line of density maxima also has a strong tendency to pass through this minimum. The phase behavior of metastable liquid Si therefore tends toward a critical-point-free scenario rather than a second-critical-point one based on the SW potential.
Perspective: Optical measurement of feature dimensions and shapes by scatterometry
NASA Astrophysics Data System (ADS)
Diebold, Alain C.; Antonelli, Andy; Keller, Nick
2018-05-01
The use of optical scattering to measure feature shape and dimensions, scatterometry, is now routine during semiconductor manufacturing. Scatterometry iteratively improves an optical model structure using simulations that are compared to experimental data from an ellipsometer. These simulations are done using the rigorous coupled wave analysis for solving Maxwell's equations. In this article, we describe Mueller matrix spectroscopic ellipsometry-based scatterometry. Next, the rigorous coupled wave analysis for Maxwell's equations is presented. Following this, several example measurements are described as they apply to specific process steps in the fabrication of gate-all-around (GAA) transistor structures. First, simulations of measurement sensitivity for the inner spacer etch-back step of horizontal GAA transistor processing are described. Next, the simulated metrology sensitivity for the sacrificial (dummy) amorphous silicon etch-back step of vertical GAA transistor processing is discussed. Finally, we present the application of plasmonically active test structures for improving the sensitivity of the measurement of metal linewidths.
Calculating inspector probability of detection using performance demonstration program pass rates
NASA Astrophysics Data System (ADS)
Cumblidge, Stephen; D'Agostino, Amy
2016-02-01
The United States Nuclear Regulatory Commission (NRC) staff has been working since the 1970s to ensure that nondestructive testing performed on nuclear power plants in the United States will provide reasonable assurance of structural integrity of the nuclear power plant components. One tool used by the NRC has been the development and implementation of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code Section XI Appendix VIII[1] (Appendix VIII) blind testing requirements for ultrasonic procedures, equipment, and personnel. Some concerns have been raised, over the years, by the relatively low pass rates for the Appendix VIII qualification testing. The NRC staff has applied statistical tools and simulations to determine the expected probability of detection (POD) for ultrasonic examinations under ideal conditions based on the pass rates for the Appendix VIII qualification tests for ultrasonic testing personnel. This work was primarily performed to answer three questions. First, given a test design and pass rate, what is the expected overall POD for inspectors? Second, can we calculate the probability of detection for flaws of different sizes using this information? Finally, if a previously qualified inspector fails a requalification test, does this call their earlier inspections into question? The calculations have shown that one can expect good performance from inspectors who have passed Appendix VIII testing in a laboratory-like environment, and the requalification pass rates show that the inspectors have maintained their skills between tests. While these calculations showed that the PODs for the ultrasonic inspections are very good under laboratory conditions, the field inspections are conducted in a very different environment. The NRC staff has initiated a project to systematically analyze the human factors differences between qualification testing and field examinations. This work will be used to evaluate and prioritize potential human factors issues that may degrade performance in the field.
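A minimal sketch of the kind of calculation described (not the NRC staff's actual model): assume each inspector detects flaws independently with per-flaw probability p, model the test as requiring at least k detections out of n flawed grading units, and invert an observed pass rate to estimate p. The test design numbers (n = 10, k = 8) and pass rates are illustrative assumptions, not actual Appendix VIII parameters, and false-call penalties are ignored.

```python
# Back out an implied per-flaw probability of detection (POD) from a test pass rate,
# assuming a pass requires detecting at least k of n flaws, each with independent POD p.
from scipy.stats import binom
from scipy.optimize import brentq

def pass_probability(p, n=10, k=8):
    """P(pass) = P(at least k detections out of n flaws | per-flaw POD p)."""
    return binom.sf(k - 1, n, p)          # survival function: P(X >= k)

def implied_pod(observed_pass_rate, n=10, k=8):
    """Solve pass_probability(p) = observed_pass_rate for p."""
    return brentq(lambda p: pass_probability(p, n, k) - observed_pass_rate, 1e-6, 1 - 1e-6)

# Illustrative pass rates only:
for rate in (0.5, 0.7, 0.9):
    print(f"pass rate {rate:.0%} -> implied per-flaw POD ≈ {implied_pod(rate):.2f}")
```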
Video laryngoscopy in pre-hospital critical care - a quality improvement study.
Rhode, Marianne Grønnebæk; Vandborg, Mads Partridge; Bladt, Vibeke; Rognås, Leif
2016-06-13
Pre-hospital endotracheal intubation is challenging, and repeated endotracheal intubation attempts are associated with increased morbidity and mortality. We investigated whether the introduction of the McGrath MAC video laryngoscope as the primary device for pre-hospital endotracheal intubation could improve the first-pass success rate in our anaesthesiologist-staffed pre-hospital critical care services. We also investigated the incidence of failed pre-hospital endotracheal intubation, the use of airway adjuncts and back-up devices, and problems encountered using the McGrath MAC video laryngoscope. This was a prospective quality improvement study collecting data from all adult pre-hospital endotracheal intubations performed by four anaesthesiologist-staffed pre-hospital critical care teams between December 15th, 2013 and December 15th, 2014. We registered data from 273 consecutive patients. When using the McGrath MAC video laryngoscope the overall pre-hospital endotracheal intubation first-pass success rate was 80.8 %. Following rapid sequence intubation (RSI) it was 88.9 %. This was not significantly different from previously reported first-pass success rates in our system (p = 0.27 and p = 0.41). During the last nine months of the study period the overall first-pass success rate was 80.1 % (p = 0.47), but the post-RSI first-pass success rate improved to 94.4 % (p = 0.048). The overall pre-hospital endotracheal intubation success rate with the McGrath MAC video laryngoscope was 98.9 % (p = 0.17). Gastric content, blood or secretions in the airway resulted in reduced vision when using the McGrath MAC video laryngoscope. In this study of video laryngoscope implementation in a Scandinavian anaesthesiologist-staffed pre-hospital critical care service, the overall pre-hospital endotracheal intubation first-pass success rate did not change. The post-RSI first-pass success rate was significantly higher during the last nine months of our 12-month study compared with our results from before introducing the McGrath MAC video laryngoscope. The implementation of a Standard Operating Procedure and checklist for pre-hospital anaesthesia during the study period may have influenced the first-pass success rate and constitutes a potential confounder. The potential limitations of the McGrath MAC video laryngoscope when there are gastric content, blood and secretions in the airways need to be further investigated before the McGrath MAC video laryngoscope can be recommended as the primary device for all pre-hospital endotracheal intubations.
NASA Astrophysics Data System (ADS)
den, Mitsue; Amo, Hiroyoshi; Sugihara, Kohta; Takei, Toshifumi; Ogawa, Tomoya; Tanaka, Takashi; Watari, Shinichi
We describe a prediction system for the 1-AU arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs). The system is based on modeling the shock propagation with a three-dimensional adaptive mesh refinement (AMR) code. Once a CME is observed by LASCO/SOHO, the ambient solar wind is first obtained by numerical simulation, reproducing the solar wind parameters observed at that time by the ACE spacecraft. We then input the expansion speed and occurrence position of that CME as initial conditions for a CME model, and a 3D simulation of the CME and shock propagation is performed until the shock wave passes 1 AU. Parameter input, simulation execution, and output of the results are all available on the Web, so that users who are not familiar with computers or simulations, or who are not researchers, can use this system to predict the shock passage time. The simulated CME and shock evolution are visualized concurrently with the simulation, and snapshots appear on the Web automatically so that the user can follow the propagation. This system is expected to be useful for space weather forecasters. We describe the system and simulation model in detail.
Imaging Spectrometer on a Chip
NASA Technical Reports Server (NTRS)
Wang, Yu; Pain, Bedabrata; Cunningham, Thomas; Zheng, Xinyu
2007-01-01
A proposed visible-light imaging spectrometer on a chip would be based on the concept of a heterostructure comprising multiple layers of silicon-based photodetectors interspersed with long-wavelength-pass optical filters. In a typical application, this heterostructure would be replicated in each pixel of an image-detecting integrated circuit of the active-pixel-sensor type (see figure). The design of the heterostructure would exploit the fact that within the visible portion of the spectrum, the characteristic depth of penetration of photons increases with wavelength. Proceeding from the front toward the back, each successive long-wavelength-pass filter would have a longer cutoff wavelength, and each successive photodetector would be made thicker to enable it to absorb a greater proportion of incident longer-wavelength photons. Incident light would pass through the first photodetector and encounter the first filter, which would reflect light having wavelengths shorter than its cutoff wavelength and pass light of longer wavelengths. A large portion of the incident and reflected shorter-wavelength light would be absorbed in the first photodetector. The light that had passed through the first photodetector/filter pair of layers would pass through the second photodetector and encounter the second filter, which would reflect light having wavelengths shorter than its cutoff wavelength while passing light of longer wavelengths. Thus, most of the light reflected by the second filter would lie in the wavelength band between the cutoff wavelengths of the first and second filters. Thus, further, most of the light absorbed in the second photodetector would lie in this wavelength band. In a similar manner, each successive photodetector would detect, predominantly, light in a successively longer wavelength band bounded by the shorter cutoff wavelength of the preceding filter and the longer cutoff wavelength of the following filter.
NASA Astrophysics Data System (ADS)
Piantschitsch, Isabell; Vršnak, Bojan; Hanslmeier, Arnold; Lemmerer, Birgit; Veronig, Astrid; Hernandez-Perez, Aaron; Čalogović, Jaša
2018-06-01
We performed 2.5D magnetohydrodynamic (MHD) simulations showing the propagation of fast-mode MHD waves of different initial amplitudes and their interaction with a coronal hole (CH), using our newly developed numerical code. We find that this interaction results in, first, the formation of reflected, traversing, and transmitted waves (collectively, secondary waves) and, second, in the appearance of stationary features at the CH boundary. Moreover, we observe a density depletion that is moving in the opposite direction of the incoming wave. We find a correlation between the initial amplitude of the incoming wave and the amplitudes of the secondary waves as well as the peak values of the stationary features. Additionally, we compare the phase speed of the secondary waves and the lifetime of the stationary features to observations. Both effects obtained in the simulation, the evolution of secondary waves, as well as the formation of stationary fronts at the CH boundary, strongly support the theory that coronal waves are fast-mode MHD waves.
Optical phase conjugation: principles, techniques, and applications
NASA Astrophysics Data System (ADS)
He, Guang S.
2002-05-01
Over the last three decades, optical phase conjugation (OPC) has been one of the major research subjects in the field of nonlinear optics and quantum electronics. OPC usually denotes a special relationship between two coherent optical beams propagating in opposite directions with reversed wave fronts and identical transverse amplitude distributions. The unique feature of a pair of phase-conjugate beams is that the aberration imposed on the forward beam as it passes through an inhomogeneous or disturbing medium can be automatically removed from the backward beam passing through the same disturbing medium. To date there have been three major technical approaches that can efficiently produce the backward phase-conjugate beam. The first approach is based on degenerate (or partially degenerate) four-wave mixing processes, the second on various backward stimulated (Brillouin, Raman, Rayleigh-wing, or Kerr) scattering processes, and the third on one-photon or multi-photon pumped backward stimulated emission (lasing) processes. Among these three different approaches, there is a common physical mechanism that plays the same essential role in generating a backward phase-conjugate beam: the formation of an induced holographic grating and the subsequent wave-front restoration via a backward reading beam. In most experimental studies, certain types of resonance enhancement of the induced refractive-index changes are desirable for obtaining higher grating-refraction efficiency. The momentum of OPC studies has recently grown even stronger because of the expanding range of prospective applications and achievements. OPC-associated techniques can be successfully utilized in many different application areas, such as high-brightness laser oscillator/amplifier systems, cavity-less lasing devices, laser target-aiming systems, aberration correction for coherent-light transmission and reflection through disturbing media, long-distance optical fiber communications with ultra-high bit rates, optical phase locking and coupling systems, and novel optical data storage and processing systems.
Poulsen, Per Rugaard; Eley, John; Langner, Ulrich; Simone, Charles B; Langen, Katja
2018-01-01
To develop and implement a practical repainting method for efficient interplay effect mitigation in proton pencil beam scanning (PBS). A new flexible repainting scheme with spot-adapted numbers of repaintings evenly spread out over the whole breathing cycle (assumed to be 4 seconds) was developed. Twelve fields from 5 thoracic and upper abdominal PBS plans were delivered 3 times using the new repainting scheme to an ion chamber array on a motion stage. One delivery was static and two used 4-second, 3-cm peak-to-peak sinusoidal motion with delivery started at maximum inhalation and maximum exhalation. For comparison, all dose measurements were repeated with no repainting and with 8 repaintings. For each motion experiment, the 3%/3-mm gamma pass rate was calculated using the motion-convolved static dose as the reference. Simulations were first validated with the experiments and then used to extend the study to 0- to 5-cm motion magnitudes, 2- to 6-second motion periods, patient-measured liver tumor motion, and 1- to 6-fraction treatments. The effect of the proposed method was evaluated for the 5 clinical cases using 4-dimensional (4D) dose reconstruction in the planning 4D computed tomography scan. The target homogeneity index, HI = (D2 - D98)/Dmean, of a single-fraction delivery is reported, where D2 and D98 are the doses delivered to 2% and 98% of the target, respectively, and Dmean is the mean dose. The gamma pass rates were 59.6% ± 9.7% with no repainting, 76.5% ± 10.8% with 8 repaintings, and 92.4% ± 3.8% with the new repainting scheme. Simulations reproduced the experimental gamma pass rates with a 1.3% root-mean-square error and demonstrated largely improved gamma pass rates with the new repainting scheme for all investigated motion scenarios. One- and two-fraction deliveries with the new repainting scheme had gamma pass rates similar to those of 3-4 and 6-fraction deliveries with 8 repaintings. The mean HI for the 5 clinical cases was 14.2% with no repainting, 13.7% with 8 repaintings, 12.0% with the new repainting scheme, and 11.6% for the 4D dose without interplay effects. A novel repainting strategy for efficient interplay effect mitigation was proposed, implemented, and shown to outperform conventional repainting in experiments, simulations, and dose reconstructions. This strategy could allow for safe and more optimal clinical delivery of thoracic and abdominal proton PBS and better facilitate hypofractionated and stereotactic treatments. Copyright © 2017 Elsevier Inc. All rights reserved.
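A minimal sketch of a 3%/3-mm gamma evaluation like the one used above (global dose normalization, 1-D profiles, no interpolation; not the authors' analysis code). The profiles and grid spacing are illustrative assumptions.

```python
import numpy as np

def gamma_pass_rate(ref, eva, dx_mm, dose_crit=0.03, dta_mm=3.0):
    """Global 1-D gamma analysis: fraction of reference points with gamma <= 1.
    ref, eva: dose profiles on the same grid with spacing dx_mm (mm).
    dose_crit: dose criterion as a fraction of the reference maximum (3% -> 0.03)."""
    x = np.arange(ref.size) * dx_mm
    dose_tol = dose_crit * ref.max()                 # global dose normalization
    gammas = np.empty(ref.size)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dist2 = ((x - xi) / dta_mm) ** 2             # spatial term for every evaluated point
        dose2 = ((eva - di) / dose_tol) ** 2         # dose-difference term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))   # gamma = minimum combined distance
    return np.mean(gammas <= 1.0)

# Illustrative example: a Gaussian-like profile and a slightly shifted, scaled copy.
x = np.linspace(-50, 50, 201)                        # 0.5 mm grid
reference = 100 * np.exp(-x**2 / (2 * 15**2))
evaluated = 100 * np.exp(-(x - 1.0)**2 / (2 * 15**2)) * 1.02
print(f"3%/3-mm gamma pass rate: {gamma_pass_rate(reference, evaluated, dx_mm=0.5):.1%}")
```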
Monte Carlo simulation of ion-neutral charge exchange collisions and grid erosion in an ion thruster
NASA Technical Reports Server (NTRS)
Peng, Xiaohang; Ruyten, Wilhelmus M.; Keefer, Dennis
1991-01-01
A combined particle-in-cell (PIC)/Monte Carlo simulation model has been developed in which the PIC method is used to simulate the charge exchange collisions. It is noted that a number of features were reproduced correctly by this code, but that its assumption of two-dimensional axisymmetry for a single set of grid apertures precluded the reproduction of the most characteristic feature of actual test data; namely, the concentrated grid erosion at the geometric center of the hexagonal aperture array. The first results of a three-dimensional code, which takes into account the hexagonal symmetry of the grid, are presented. It is shown that, with this code, the experimentally observed erosion patterns are reproduced correctly, demonstrating explicitly the concentration of sputtering between apertures.
Characterizing core-periphery structure of complex network by h-core and fingerprint curve
NASA Astrophysics Data System (ADS)
Li, Simon S.; Ye, Adam Y.; Qi, Eric P.; Stanley, H. Eugene; Ye, Fred Y.
2018-02-01
It is proposed that the core-periphery structure of complex networks can be simulated by h-cores and fingerprint curves. While the features of the core structure are characterized by the h-core, the features of the periphery structure are visualized by a rose or spiral curve serving as a fingerprint curve linked to entire-network parameters. It is suggested that a complex network can be approximated by the h-core and rose curves as a first-order Fourier-like approach, where the core-periphery structure is characterized by five parameters: network h-index, network radius, degree power, network density and average clustering coefficient. The simulation thus resembles a Fourier-like analysis.
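A minimal sketch of one of the five parameters named above, the network h-index (the largest h such that the network has at least h nodes of degree ≥ h), computed here with networkx; the definition follows the usual Hirsch-style convention and the random example graph is an illustrative assumption.

```python
import networkx as nx

def network_h_index(G):
    """Largest h such that G has at least h nodes with degree >= h."""
    degrees = sorted((d for _, d in G.degree()), reverse=True)
    h = 0
    for i, d in enumerate(degrees, start=1):
        if d >= i:
            h = i
        else:
            break
    return h

G = nx.erdos_renyi_graph(200, 0.05, seed=1)     # illustrative random network
h = network_h_index(G)
h_core_nodes = [n for n, d in G.degree() if d >= h]
print(f"network h-index = {h}, h-core size = {len(h_core_nodes)} nodes")
```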
Qualification of security printing features
NASA Astrophysics Data System (ADS)
Simske, Steven J.; Aronoff, Jason S.; Arnabat, Jordi
2006-02-01
This paper describes the statistical and hardware processes involved in qualifying two related printing features for their deployment in product (e.g. document and package) security. The first is a multi-colored tiling feature that can also be combined with microtext to provide additional forms of security protection. The color information is authenticated automatically with a variety of handheld, desktop and production scanners. The microtext is authenticated either following magnification or manually by a field inspector. The second security feature can also be tile-based. It involves the use of two inks that provide the same visual color, but differ in their transparency to infrared (IR) wavelengths. One of the inks is effectively transparent to IR wavelengths, allowing emitted IR light to pass through. The other ink is effectively opaque to IR wavelengths. These inks allow the printing of a seemingly uniform, or spot, color over a (truly) uniform IR emitting ink layer. The combination converts a uniform covert ink and a spot color to a variable data region capable of encoding identification sequences with high density. Also, it allows the extension of variable data printing for security to ostensibly static printed regions, affording greater security protection while meeting branding and marketing specifications.
LMD Based Features for the Automatic Seizure Detection of EEG Signals Using SVM.
Zhang, Tao; Chen, Wanzhong
2017-08-01
Achieving the goal of detecting seizure activity automatically using electroencephalogram (EEG) signals is of great importance and significance for the treatment of epileptic seizures. To realize this aim, a newly developed time-frequency analytical algorithm, namely local mean decomposition (LMD), is employed in the presented study. LMD is able to decompose an arbitrary signal into a series of product functions (PFs). First, the raw EEG signal is decomposed into several PFs, and then the temporal statistical and non-linear features of the first five PFs are calculated. The features of each PF are fed into five classifiers, including back propagation neural network (BPNN), K-nearest neighbor (KNN), linear discriminant analysis (LDA), un-optimized support vector machine (SVM) and SVM optimized by genetic algorithm (GA-SVM), for five classification cases, respectively. The combined features of all PFs and the raw EEG are further passed into the high-performance GA-SVM for the same classification tasks. Experimental results on the international public Bonn epilepsy EEG dataset show that the average classification accuracies of the presented approach are equal to or higher than 98.10% in all five cases, which indicates the effectiveness of the proposed approach for automated seizure detection.
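To make the classification stage concrete, the sketch below shows a scikit-learn pipeline of the same general shape, assuming the product functions (PFs) of each EEG segment have already been obtained from an LMD implementation. The feature set (a few temporal statistics per PF) and the SVM settings are illustrative assumptions; in the study the SVM hyperparameters are tuned by a genetic algorithm.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def pf_features(pfs):
    """Temporal statistics of the first five product functions of one EEG segment.

    `pfs` is assumed to be an (n_pf, n_samples) array produced by an LMD routine.
    """
    feats = []
    for pf in pfs[:5]:
        feats += [np.mean(np.abs(pf)), np.std(pf), skew(pf), kurtosis(pf)]
    return np.asarray(feats)

# Surrogate data stand-in: 200 segments, 5 PFs of 1024 samples each, binary labels.
rng = np.random.default_rng(0)
X = np.array([pf_features(rng.standard_normal((5, 1024))) for _ in range(200)])
y = rng.integers(0, 2, size=200)

# RBF-kernel SVM; with random surrogate data the score is near chance, by design.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```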
NASA Astrophysics Data System (ADS)
Pei, Youbin; Xiang, Nong; Hu, Youjun; Todo, Y.; Li, Guoqiang; Shen, Wei; Xu, Liqing
2017-03-01
Kinetic-MagnetoHydroDynamic hybrid simulations are carried out to investigate fishbone modes excited by fast ions on the Experimental Advanced Superconducting Tokamak. The simulations use a realistic equilibrium reconstructed from experimental data with the constraint of the q = 1 surface location (q is the safety factor). An anisotropic slowing-down distribution is used to model the distribution of the fast ions from neutral beam injection. The resonance condition is used to identify the interaction between the fishbone mode and the fast ions, which shows that the fishbone mode is simultaneously in resonance with the bounce motion of the trapped particles and the transit motion of the passing particles. Both the passing and trapped particles are important in destabilizing the fishbone mode. The simulations show that the mode frequency chirps down as the mode reaches the nonlinear stage, during which there is a substantial flattening of the perpendicular pressure of fast ions, compared with that of the parallel pressure. For passing particles, the resonance remains within the q = 1 surface, while, for trapped particles, the resonant location moves out radially during the nonlinear evolution. In addition, parameter scanning is performed to examine the dependence of the linear frequency and growth rate of fishbones on the pressure and injection velocity of fast ions.
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunz, Josiah; Snopok, Pavel; Berz, Martin
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map-Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
Featured Image: Experimental Simulation of Melting Meteoroids
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-03-01
Ever wonder what experimental astronomy looks like? Some days, it looks like this piece of rock in a wind tunnel (click for a better look!). In this photo, a piece of argillite (a terrestrial rock) is exposed to conditions in a plasma wind tunnel as a team of scientists led by Stefan Loehle (Stuttgart University) simulates what happens to a meteoroid as it hurtles through Earth's atmosphere. With these experiments, the scientists hope to better understand meteoroid ablation (the process by which meteoroids are heated, melt, and evaporate as they pass through our atmosphere) so that we can learn more from the meteorite fragments that make it to the ground. In the scientists' experiment, the rock samples were exposed to plasma flow until they disintegrated, and this process was simultaneously studied via photography, video, high-speed imaging, thermography, and Echelle emission spectroscopy. To find out what the team learned from these experiments, you can check out the original article below. Citation: Stefan Loehle et al 2017 ApJ 837 112. doi:10.3847/1538-4357/aa5cb5
Three-dimensional radar imaging of structures and craters in the Martian polar caps.
Putzig, Nathaniel E; Smith, Isaac B; Perry, Matthew R; Foss, Frederick J; Campbell, Bruce A; Phillips, Roger J; Seu, Roberto
2018-07-01
Over the last decade, observations acquired by the Shallow Radar (SHARAD) sounder on individual passes of the Mars Reconnaissance Orbiter have revealed the internal structure of the Martian polar caps and provided new insights into the formation of the icy layers within and their relationship to climate. However, a complete picture of the cap interiors has been hampered by interfering reflections from off-nadir surface features and signal losses associated with sloping structures and scattering. Foss et al. (2017) addressed these limitations by assembling three-dimensional data volumes of SHARAD observations from thousands of orbital passes over each polar region and applying geometric corrections simultaneously. The radar volumes provide unprecedented views of subsurface features, readily imaging structures previously inferred from time-intensive manual analysis of single-orbit data (e.g., trough-bounding surfaces, a buried chasma, and a basal unit in the north, massive carbon-dioxide ice deposits and discontinuous layered sequences in the south). Our new mapping of the carbon-dioxide deposits yields a volume of 16,500 km³, 11% larger than the prior estimate. In addition, the radar volumes newly reveal other structures, including what appear to be buried impact craters with no surface expression. Our first assessment of 21 apparent craters at the base of the north polar layered deposits suggests a Hesperian age for the substrate, consistent with that of the surrounding plains as determined from statistics of surface cratering rates. Planned mapping of similar features throughout both polar volumes may provide new constraints on the age of the icy layered deposits. The radar volumes also provide new topographic data between the highest latitudes observed by the Mars Orbiter Laser Altimeter and those observed by SHARAD. In general, mapping of features in these radar volumes is placing new constraints on the nature and evolution of the polar deposits and associated climate changes.
Three-dimensional radar imaging of structures and craters in the Martian polar caps
NASA Astrophysics Data System (ADS)
Putzig, Nathaniel E.; Smith, Isaac B.; Perry, Matthew R.; Foss, Frederick J.; Campbell, Bruce A.; Phillips, Roger J.; Seu, Roberto
2018-07-01
Over the last decade, observations acquired by the Shallow Radar (SHARAD) sounder on individual passes of the Mars Reconnaissance Orbiter have revealed the internal structure of the Martian polar caps and provided new insights into the formation of the icy layers within and their relationship to climate. However, a complete picture of the cap interiors has been hampered by interfering reflections from off-nadir surface features and signal losses associated with sloping structures and scattering. Foss et al. (The Leading Edge 36, 43-57, 2017, https://doi.org/10.1190/tle36010043.1) addressed these limitations by assembling three-dimensional data volumes of SHARAD observations from thousands of orbital passes over each polar region and applying geometric corrections simultaneously. The radar volumes provide unprecedented views of subsurface features, readily imaging structures previously inferred from time-intensive manual analysis of single-orbit data (e.g., trough-bounding surfaces, a buried chasma, and a basal unit in the north, massive carbon-dioxide ice deposits and discontinuous layered sequences in the south). Our new mapping of the carbon-dioxide deposits yields a volume of 16,500 km3, 11% larger than the prior estimate. In addition, the radar volumes newly reveal other structures, including what appear to be buried impact craters with no surface expression. Our first assessment of 21 apparent craters at the base of the north polar layered deposits suggests a Hesperian age for the substrate, consistent with that of the surrounding plains as determined from statistics of surface cratering rates. Planned mapping of similar features throughout both polar volumes may provide new constraints on the age of the icy layered deposits. The radar volumes also provide new topographic data between the highest latitudes observed by the Mars Orbiter Laser Altimeter and those observed by SHARAD. In general, mapping of features in these radar volumes is placing new constraints on the nature and evolution of the polar deposits and associated climate changes.
A New Concept of Coronagraph using Axicon Lenses
NASA Astrophysics Data System (ADS)
Choi, Jae Ho
2017-06-01
High-contrast direct imaging of faint objects near bright stars is essential for investigating planetary systems. The goal of such efforts is to find and characterize planets similar to Earth, a challenging task because it requires high angular resolution and high dynamic range simultaneously. A coronagraph that can suppress the bright stellar light (or an active galactic nucleus) during direct detection of astrophysical activity has become one of the essential instruments for imaging exoplanets. In this presentation, a novel concept of a coronagraph using axicon lenses is presented in conjunction with a method of noninterferometric quantitative phase imaging for direct imaging of exoplanets. The essential scheme of the axicon-lens coronagraph is an apodization carried out by excluding evaginated images of the planetary system with a pair of axicon lenses. Laboratory coronagraph imaging is carried out with an axicon-lens coronagraph setup that includes the axicon-lens optics and a phase-contrast imaging unit. A simulated star and its companion are produced by illuminating light through small holes drilled in a thin metal plate; the light diffracted at the edges of the holes resembles the light from a bright star. The images are evaginated about the optical axis by passing through the first axicon lens, and the outer area of the evaginated beam is then cut off by an iris, preferentially suppressing the central light of the bright star. A symbolic calculation is also carried out to verify the scheme of the axicon-lens coronagraph using a symbolic computation program. The simulation results show that the axicon-lens coronagraph can achieve an inner working angle (IWA) smaller than λ/D. The laboratory coronagraph imaging and simulation results support its potential for direct imaging of exoplanets and various astrophysical activities.
Yanamadala, Janakinadh; Noetscher, Gregory M; Rathi, Vishal K; Maliye, Saili; Win, Htay A; Tran, Anh L; Jackson, Xavier J; Htet, Aung T; Kozlov, Mikhail; Nazarian, Ara; Louie, Sara; Makarov, Sergey N
2015-01-01
Simulation of the electromagnetic response of the human body relies heavily upon efficient computational models or phantoms. The first objective of this paper is to present a new platform-independent full-body electromagnetic computational model (computational phantom), the Visible Human Project(®) (VHP)-Female v. 2.0 and to describe its distinct features. The second objective is to report phantom simulation performance metrics using the commercial FEM electromagnetic solver ANSYS HFSS.
Brownian dynamics simulation of protein diffusion in crowded environments
NASA Astrophysics Data System (ADS)
Mereghetti, Paolo; Wade, Rebecca C.
2013-02-01
High macromolecular concentrations are a distinguishing feature of living organisms. Understanding how the high concentration of solutes affects the dynamic properties of biological macromolecules is fundamental for the comprehension of biological processes in living systems. We first describe the development of a Brownian dynamics simulation methodology to investigate the dynamic and structural properties of protein solutions using atomic-detail protein structures. We then discuss insights obtained from applying this approach to simulation of solutions of a range of types of proteins.
Hydrogen production by high temperature water splitting using electron conducting membranes
Balachandran, Uthamalingam; Wang, Shuangyan; Dorris, Stephen E.; Lee, Tae H.
2006-08-08
A device and method for separating water into hydrogen and oxygen is disclosed. A first substantially gas impervious solid electron-conducting membrane for selectively passing protons or hydrogen is provided and spaced from a second substantially gas impervious solid electron-conducting membrane for selectively passing oxygen. When steam is passed between the two membranes at dissociation temperatures the hydrogen from the dissociation of steam selectively and continuously passes through the first membrane and oxygen selectively and continuously passes through the second membrane, thereby continuously driving the dissociation of steam producing hydrogen and oxygen. The oxygen is thereafter reacted with methane to produce syngas which optimally may be reacted in a water gas shift reaction to produce CO2 and H2.
Experimental and Computational Study of Trapped Vortex Combustor Sector Rig With Tri-Pass Diffuser
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Shouse, D. T.; Roquernore, W. M.; Burrus, D. L.; Duncan, B. S.; Ryder, R. C.; Brankovic, A.; Liu, N.-S.; Gallagher, J. R.; Hendricks, J. A.
2004-01-01
The Trapped Vortex Combustor (TVC) potentially offers numerous operational advantages over current production gas turbine engine combustors. These include lower weight, lower pollutant emissions, effective flame stabilization, high combustion efficiency, excellent high altitude relight capability, and operation in the lean burn or RQL modes of combustion. The present work describes the operational principles of the TVC, and extends diffuser velocities toward choked flow and provides system performance data. Performance data include EINOx results for various fuel-air ratios and combustor residence times, combustion efficiency as a function of combustor residence time, and combustor lean blow-out (LBO) performance. Computational fluid dynamics (CFD) simulations using liquid spray droplet evaporation and combustion modeling are performed and related to flow structures observed in photographs of the combustor. The CFD results are used to understand the aerodynamics and combustion features under different fueling conditions. Performance data acquired to date are favorable compared to conventional gas turbine combustors. Further testing over a wider range of fuel-air ratios, fuel flow splits, and pressure ratios is in progress to explore the TVC performance. In addition, alternate configurations for the upstream pressure feed, including bi-pass diffusion schemes, as well as variations on the fuel injection patterns, are currently in test and evaluation phases.
Deep nets vs expert designed features in medical physics: An IMRT QA case study.
Interian, Yannet; Rideout, Vincent; Kearney, Vasant P; Gennatas, Efstathios; Morin, Olivier; Cheung, Joey; Solberg, Timothy; Valdes, Gilmer
2018-03-30
The purpose of this study was to compare the performance of Deep Neural Networks against a technique designed by domain experts in the prediction of gamma passing rates for Intensity Modulated Radiation Therapy Quality Assurance (IMRT QA). A total of 498 IMRT plans across all treatment sites were planned in Eclipse version 11 and delivered using a dynamic sliding window technique on Clinac iX or TrueBeam Linacs. Measurements were performed using a commercial 2D diode array, and passing rates for 3%/3 mm local dose/distance-to-agreement (DTA) were recorded. Separately, fluence maps calculated for each plan were used as inputs to a convolutional neural network (CNN). The CNNs were trained to predict IMRT QA gamma passing rates using TensorFlow and Keras. A set of model architectures, inspired by the convolutional blocks of the VGG-16 ImageNet model, was constructed and implemented. Synthetic data, created by rotating and translating the fluence maps during training, was used to boost the performance of the CNNs. Dropout, batch normalization, and data augmentation were utilized to help train the model. The performance of the CNNs was compared to a generalized Poisson regression model, previously developed for this application, which used 78 expert-designed features. Deep Neural Networks without domain knowledge achieved performance comparable to a baseline system designed by domain experts in the prediction of 3%/3 mm local gamma passing rates. An ensemble of neural nets resulted in a mean absolute error (MAE) of 0.70 ± 0.05, and the domain expert model resulted in an MAE of 0.74 ± 0.06. Convolutional neural networks (CNNs) with transfer learning can predict IMRT QA passing rates by automatically designing features from the fluence maps without human expert supervision. Predictions from CNNs are comparable to a system carefully designed by physicist experts. © 2018 American Association of Physicists in Medicine.
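For readers unfamiliar with this type of model, the sketch below builds a small VGG-style convolutional regressor for per-plan gamma passing rates with tf.keras. The input size, number of blocks, and hyperparameters are illustrative assumptions, not the architectures evaluated in the study; the rotation/translation augmentation described above would be supplied by a data generator or augmentation layers.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_passing_rate_cnn(input_shape=(64, 64, 1)):
    """Small VGG-style CNN that regresses a gamma passing rate from a fluence map."""
    model = tf.keras.Sequential([
        layers.Input(shape=input_shape),
        # Two VGG-like convolutional blocks with batch normalization.
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        # Dense head with dropout; a single linear unit outputs the passing rate (%).
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mae")  # MAE matches the reported metric
    return model

model = build_passing_rate_cnn()
model.summary()
```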
Considerations of a ship defense with a pulsed COIL
NASA Astrophysics Data System (ADS)
Takehisa, K.
2015-10-01
A ship defense system based on a pulsed COIL (Chemical Oxygen-Iodine Laser) has been considered. One of the greatest threats to battleships and carriers in warfare is the supersonic anti-ship cruise missile (ASCM). A supersonic RAM (Rolling Airframe Missile) is usually considered the first countermeasure, and a gun-type CIWS (Close-In Weapon System) is used as the last line of defense. However, since an ASCM can be detected only 30-50 km away due to the radar horizon, a speed-of-light weapon is desirable as the first line of defense, especially if the ASCM flies at >Mach 6. Our previous report explained several advantages of a giant pulse from a chemical oxygen laser (COL) for shooting down supersonic aircraft. Since the first line of defense has a target distance of ~30 km, a COIL is preferable because its beam has high transmissivity in air. Therefore, efficient operation of a giant-pulsed COIL has been investigated with rate-equation simulations. The simulation results indicate that efficient single-pass amplification can be expected. A design example of a giant-pulsed COIL MOPA (master oscillator and power amplifier) system is also shown, in which the output energy can be increased without limit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadayappan, Ponnuswamy
Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. We propose a new approach to the data and work distribution model provided by system software based on the unifying formalism of an abstract file system. The proposed hierarchical data model provides simple, familiar visibility and access to data structures through the file system hierarchy, while providing fault tolerance through selective redundancy. The hierarchical task model features work queues whose form and organization are represented as file system objects. Data and work are both first class entities. By exposing the relationships between data and work to the runtime system, information is available to optimize execution time and provide fault tolerance. The data distribution scheme provides replication (where desirable and possible) for fault tolerance and efficiency, and it is hierarchical to make it possible to take advantage of locality. The user, tools, and applications, including legacy applications, can interface with the data, work queues, and one another through the abstract file model. This runtime environment will provide multiple interfaces to support traditional Message Passing Interface applications, languages developed under DARPA's High Productivity Computing Systems program, as well as other, experimental programming models. We will validate our runtime system with pilot codes on existing platforms and will use simulation to validate for exascale-class platforms. In this final report, we summarize research results from the work done at the Ohio State University towards the larger goals of the project listed above.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eskandari Nasrabad, Afshin; Coalson, Rob D.; Jasnow, David
Polymer-nanoparticle composites are a promising new class of materials for creation of controllable nano-patterned surfaces and nanopores. We use coarse-grained molecular dynamics simulations augmented with analytical theory to study the structural transitions of surface grafted polymer layers (brushes) induced by infiltration of nanoparticles that are attracted to the polymers in the layer. We systematically compare two different polymer brush geometries: one where the polymer chains are grafted to a planar surface and the other where the chains are grafted to the inside of a cylindrical nanochannel. We perform a comprehensive study of the effects of the material parameters such as the polymer chain length, chain grafting density, nanoparticle size, strength of attraction between nanoparticles and polymer monomers, and, in the case of the cylindrically grafted brush, the radius of the cylinder. We find a very general behavioral motif for all geometries and parameter values: the height of the polymer brush is non-monotonic in the nanoparticle concentration in solution. As the nanoparticle concentration increases, the brush height first decreases and after passing through a minimum value begins to increase, resulting in the swelling of the nanoparticle infused brush. These morphological features may be useful for devising tunable “smart” nano-devices whose effective dimensions can be reversibly and precisely adjusted by changing the nanoparticle concentration in solution. The results of approximate Self-Consistent Field Theory (SCFT) calculations, applicable in the regime of strong brush stretching, are compared to the simulation results. The SCFT calculations are found to be qualitatively, even semi-quantitatively, accurate when applied within their intended regime of validity, and provide a useful and efficient tool for modeling such materials.
NASA Technical Reports Server (NTRS)
Nakamura, S.; Scott, J. N.
1993-01-01
A two-dimensional model to solve the compressible Navier-Stokes equations for the flow through the stator and rotor blades of a turbine is developed. The flow domains for the stator and rotor blades are coupled by the Chimera method, which makes grid generation easy and enhances accuracy because the areas of the grid that have high turning of grid lines or high skewness can be eliminated from the computational domain after the grids are generated. The results of flow computations show various important features of unsteady flows, including the acoustic waves interacting with boundary layers, Karman vortex shedding from the trailing edge of the stator blades, pulsating incoming flow to a rotor blade from passing stator blades, and flow separation from both the suction and pressure sides of the rotor blades.
Wave breaking induced surface wakes and jets observed during a bora event
NASA Astrophysics Data System (ADS)
Jiang, Qingfang; Doyle, James D.
2005-09-01
An observational and modeling study of a bora event that occurred during the field phase of the Mesoscale Alpine Programme is presented. Research aircraft in-situ measurements and airborne remote-sensing observations indicate the presence of strong low-level wave breaking and alternating surface wakes and jets along the Croatian coastline over the Adriatic Sea. The observed features are well captured by a high-resolution COAMPS simulation. Analysis of the observations and modeling results indicate that the long-extending wakes above the boundary layer are induced by dissipation associated with the low-level wave breaking, which locally tends to accelerate the boundary layer flow beneath the breaking. Farther downstream of the high peaks, a hydraulic jump occurs in the boundary layer, which creates surface wakes. Downstream of lower-terrain (passes), the boundary layer flow stays strong, resembling supercritical flow.
Adaptive Control for Microgravity Vibration Isolation System
NASA Technical Reports Server (NTRS)
Yang, Bong-Jun; Calise, Anthony J.; Craig, James I.; Whorton, Mark S.
2005-01-01
Most active vibration isolation systems that try to provide a quiescent acceleration environment for space science experiments have utilized linear design methods. In this paper, we address adaptive control augmentation of an existing classical controller that employs a high-gain acceleration feedback together with a low-gain position feedback to center the isolated platform. The control design accounts for parametric and dynamic uncertainties because the hardware of the isolation system is built as a payload-level isolator, and the acceleration sensor exhibits a significant bias. A neural network is incorporated to adaptively compensate for the system uncertainties, and a high-pass filter is introduced to mitigate the effect of the measurement bias. Simulations show that the adaptive control improves the performance of the existing acceleration controller and keeps the deviation level of the isolated platform at that of the existing control system.
Solitary waves in a peridynamic elastic solid
Silling, Stewart A.
2016-06-23
The propagation of large amplitude nonlinear waves in a peridynamic solid is analyzed. With an elastic material model that hardens in compression, sufficiently large wave pulses propagate as solitary waves whose velocity can far exceed the linear wave speed. In spite of their large velocity and amplitude, these waves leave the material they pass through with no net change in velocity and stress. They are nondissipative and nondispersive, and they travel unchanged over large distances. An approximate solution for solitary waves is derived that reproduces the main features of these waves observed in computational simulations. We demonstrate, by numerical studies, that waves interact only weakly with each other when they collide. Finally, we found that wavetrains composed of many non-interacting solitary waves form and propagate under certain boundary and initial conditions.
The evolution of space simulation
NASA Technical Reports Server (NTRS)
Edwards, Arthur A.
1992-01-01
Thirty years have passed since the first large (more than 15 ft diameter) thermal vacuum space simulation chambers were built in this country. Many changes have been made since then, and the industry has learned a great deal as the designs have evolved in that time. I was fortunate to have been part of that beginning, and have participated in many of the changes that have occurred since. While talking with vacuum friends recently, I realized that many of the engineers working in the industry today may not be aware of the evolution of space simulation because they did not experience the changes that brought us today's technology. With that in mind, it seems to be appropriate to take a moment and review some of the events that were a big part of the past thirty years in the thermal vacuum business. Perhaps this review will help to understand a little of the 'why' as well as the 'how' of building and operating large thermal vacuum chambers.
NASA Astrophysics Data System (ADS)
Khotimah, Chusnul; Purnami, Santi Wulan; Prastyo, Dedy Dwi; Chosuvivatwong, Virasakdi; Sriplung, Hutcha
2017-11-01
Support Vector Machines (SVMs) have been widely applied for prediction in many fields. Recently, SVM has also been developed for survival analysis. In this study, the Additive Survival Least Square SVM (A-SURLSSVM) approach is used to analyze a cervical cancer dataset and its performance is compared with the Cox model as a benchmark. The comparison is evaluated based on the prognostic indices produced: concordance index (c-index), log rank, and hazard ratio. A higher prognostic index represents better performance of the corresponding method. This work also applied feature selection to choose important features using a backward elimination technique based on the c-index criterion. The cervical cancer dataset consists of 172 patients. The empirical results show that nine out of the twelve features: age at marriage, age at first menstruation, age, parity, type of treatment, history of family planning, stadium, duration of menstruation, and anemia status are selected as relevant features that affect the survival time of cervical cancer patients. In addition, the performance of the proposed method is evaluated through a simulation study with different numbers of features and censoring percentages. Two out of three performance measures (c-index and hazard ratio) obtained from A-SURLSSVM consistently yield better results than the ones obtained from the Cox model when applied to both the simulated and the cervical cancer data. Moreover, the simulation study showed that A-SURLSSVM performs better when the percentage of censored data is small.
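As a pointer for readers, the concordance index used above can be computed as the fraction of usable patient pairs whose predicted risk ordering agrees with their observed survival ordering. The sketch below uses the lifelines utility for this, with made-up survival times, prognostic scores, and event indicators; it is not the study's evaluation code.

```python
import numpy as np
from lifelines.utils import concordance_index

# Hypothetical data: observed survival times (months), model prognostic scores
# (higher score = longer predicted survival), and event indicators
# (1 = event observed, 0 = censored).
times = np.array([5.0, 12.0, 20.0, 31.0, 44.0, 60.0])
scores = np.array([0.2, 0.9, 1.4, 1.1, 2.3, 3.0])
events = np.array([1, 1, 0, 1, 0, 1])

# c-index of 1.0 means perfect ranking; 0.5 means random ranking.
print("c-index:", concordance_index(times, scores, events))
```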
Microfluidic Gut-liver chip for reproducing the first pass metabolism.
Choe, Aerim; Ha, Sang Keun; Choi, Inwook; Choi, Nakwon; Sung, Jong Hwan
2017-03-01
After oral intake of drugs, drugs go through the first pass metabolism in the gut and the liver, which greatly affects the final outcome of the drugs' efficacy and side effects. The first pass metabolism is a complex process involving the gut and the liver tissue, with transport and reaction occurring simultaneously at various locations, which makes it difficult to reproduce in vitro with conventional cell culture systems. In an effort to tackle this challenge, here we have developed a microfluidic gut-liver chip that can reproduce the dynamics of the first pass metabolism. The microfluidic chip consists of two separate layers for gut epithelial cells (Caco-2) and the liver cells (HepG2), and is designed so that drugs go through sequential absorption in the gut chamber and metabolic reaction in the liver chamber. We fabricated the chip and showed that the two different cell lines can be successfully co-cultured on chip. When the two cells are cultured on chip, changes in the physiological function of Caco-2 and HepG2 cells were noted. The cytochrome P450 metabolic activity of both cells was significantly enhanced, and the absorptive property of Caco-2 cells on chip also changed in response to the presence of flow. Finally, the first pass metabolism of a flavonoid, apigenin, was evaluated as a model compound, and co-culture of gut and liver cells on chip resulted in a metabolic profile that is closer to the reported profile than a monoculture of gut cells. This microfluidic gut-liver chip can potentially be a useful platform to study the complex first pass metabolism of drugs in vitro.
Numerical simulation of humidification and heating during inspiration within an adult nose.
Sommer, F; Kroger, R; Lindemann, J
2012-06-01
The temperature of inhaled air is highly relevant for the humidification process. Narrow anatomical conditions limit the possibilities for in vivo measurements. Numerical simulations offer great potential to examine the function of the human nose. In the present study, the nasal humidification of inhaled air was simulated simultaneously with the temperature distribution during a respiratory cycle. A realistic nose model based on a multislice CT scan was created. The simulation was performed with the software Fluent®. Boundary conditions were based on previous in vivo measurements. Inhaled air had a temperature of 20°C and a relative humidity of 30%. The wall temperature was assumed to vary from 34°C to 30°C with a constant humidity saturation of 100% during the respiratory cycle. A substantial increase in temperature and humidity can be observed after passing the nasal valve area. Areas with high-speed air flow, e.g. the space around the turbinates, show an intensive humidification and heating potential. Inspired air reaches 95% humidity and 28°C within the nasopharynx. The human nose features an enormous humidification and heating capability. Warming and humidification are dependent on each other and show a similar spatial pattern. Concerning the climatisation function, the middle turbinate is of high importance. In contrast to in vivo measurements, numerical simulations can explore the impact of airflow distribution on nasal air conditioning. They are an effective method to investigate nasal pathologies and the impacts of surgical procedures.
Keenan, Kevin G.; Valero-Cuevas, Francisco J.
2008-01-01
Researchers and clinicians routinely rely on interference electromyograms (EMGs) to estimate muscle forces and command signals in the neuromuscular system (e.g., amplitude, timing, and frequency content). The amplitude cancellation intrinsic to interference EMG, however, raises important questions about how to optimize these estimates. For example, what should the length of the epoch (time window) be to average an EMG signal to reliably estimate muscle forces and command signals? Shorter epochs are most practical, and significant reductions in epoch have been reported with high-pass filtering and whitening. Given that this processing attenuates power at frequencies of interest (< 250 Hz), however, it is unclear how it improves the extraction of physiologically-relevant information. We examined the influence of amplitude cancellation and high-pass filtering on the epoch necessary to accurately estimate the “true” average EMG amplitude calculated from a 28 s EMG trace (EMGref) during simulated constant isometric conditions. Monte Carlo iterations of a motor-unit model simulating 28 s of surface EMG produced 245 simulations under 2 conditions: with and without amplitude cancellation. For each simulation, we calculated the epoch necessary to generate average full-wave rectified EMG amplitudes that settled within 5% of EMGref. For the no-cancellation EMG, the necessary epochs were short (e.g., < 100 ms). For the more realistic interference EMG (i.e., cancellation condition), epochs shortened dramatically after using high-pass filter cutoffs above 250 Hz, producing epochs short enough to be practical (i.e., < 500 ms). We conclude that the need to use long epochs to accurately estimate EMG amplitude is likely the result of unavoidable amplitude cancellation, which helps to clarify why high-pass filtering (> 250 Hz) improves EMG estimates. PMID:19081815
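The sketch below illustrates the kind of processing described above: high-pass filtering an interference EMG trace, full-wave rectifying it, and averaging over an epoch. The filter order, cutoff, sampling rate, and surrogate signal are illustrative assumptions rather than the parameters of the reported motor-unit model.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2048.0  # assumed sampling rate, Hz

def average_rectified_emg(emg, epoch_s, highpass_hz=250.0, fs=FS):
    """Mean full-wave-rectified amplitude of the first `epoch_s` seconds of `emg`
    after zero-phase high-pass filtering (4th-order Butterworth)."""
    b, a = butter(4, highpass_hz / (fs / 2.0), btype="highpass")
    filtered = filtfilt(b, a, emg)
    n = int(epoch_s * fs)
    return np.mean(np.abs(filtered[:n]))

# Surrogate 28-s interference EMG (broadband noise stands in for the model output).
rng = np.random.default_rng(0)
emg = rng.standard_normal(int(28 * FS))

ref = average_rectified_emg(emg, epoch_s=28.0)   # reference amplitude over the full trace
est = average_rectified_emg(emg, epoch_s=0.5)    # 500-ms epoch estimate
print(f"relative error of 500-ms epoch: {abs(est - ref) / ref:.1%}")
```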
Feature-space-based FMRI analysis using the optimal linear transformation.
Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S
2010-09-01
The optimal linear transformation (OLT), a feature-space image analysis technique, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches to fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest from the ideal hemodynamic responses, OLT was used to extract features of the fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
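The first step described above, building ideal hemodynamic response time series by convolving a hemodynamic response model with the stimulus timing, is easy to illustrate. The sketch below uses a double-gamma canonical HRF and a made-up block design; the HRF form, TR, and run length are assumptions, not the model used in the paper.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0        # assumed repetition time, s
N_SCANS = 120   # assumed run length in volumes

def canonical_hrf(tr=TR, duration=32.0):
    """Double-gamma canonical HRF sampled at the TR (peak near 5 s, later undershoot)."""
    t = np.arange(0.0, duration, tr)
    peak = gamma.pdf(t, a=6)           # positive response
    undershoot = gamma.pdf(t, a=16)    # delayed undershoot
    hrf = peak - 0.35 * undershoot     # undershoot weight is an arbitrary choice here
    return hrf / hrf.max()

# Hypothetical block design: 20 s of stimulation alternating with 20 s of rest.
stimulus = np.zeros(N_SCANS)
for onset in range(0, N_SCANS, 20):
    stimulus[onset:onset + 10] = 1.0

# Ideal hemodynamic response time series = stimulus timing convolved with the HRF.
ideal_response = np.convolve(stimulus, canonical_hrf())[:N_SCANS]
print(ideal_response.round(2))
```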
Doan, Nhat Trung; van den Bogaard, Simon J A; Dumas, Eve M; Webb, Andrew G; van Buchem, Mark A; Roos, Raymund A C; van der Grond, Jeroen; Reiber, Johan H C; Milles, Julien
2014-03-01
To develop a framework for quantitative detection of between-group textural differences in ultrahigh field T2*-weighted MR images of the brain. MR images were acquired using a three-dimensional (3D) T2*-weighted gradient echo sequence on a 7 Tesla MRI system. The phase images were high-pass filtered to remove phase wraps. Thirteen textural features were computed for both the magnitude and phase images of a region of interest based on 3D Gray-Level Co-occurrence Matrix, and subsequently evaluated to detect between-group differences using a Mann-Whitney U-test. We applied the framework to study textural differences in subcortical structures between premanifest Huntington's disease (HD), manifest HD patients, and controls. In premanifest HD, four phase-based features showed a difference in the caudate nucleus. In manifest HD, 7 magnitude-based features showed a difference in the pallidum, 6 phase-based features in the caudate nucleus, and 10 phase-based features in the putamen. After multiple comparison correction, significant differences were shown in the putamen in manifest HD by two phase-based features (both adjusted P values=0.04). This study provides the first evidence of textural heterogeneity of subcortical structures in HD. Texture analysis of ultrahigh field T2*-weighted MR images can be useful for noninvasive monitoring of neurodegenerative diseases. Copyright © 2013 Wiley Periodicals, Inc.
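For orientation, the sketch below shows a 2-D version of this kind of analysis with scikit-image and SciPy: gray-level co-occurrence features computed for two groups of image patches and compared with a Mann-Whitney U-test. The 2-D GLCM, the two chosen properties, and the random patches are simplifying assumptions; the study itself uses a 3-D GLCM with thirteen features per region of interest.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from skimage.feature import graycomatrix, graycoprops  # named greycomatrix in older scikit-image

def texture_features(patch, levels=32):
    """Contrast and homogeneity from a 2-D gray-level co-occurrence matrix."""
    q = np.digitize(patch, np.linspace(patch.min(), patch.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    return graycoprops(glcm, "contrast").mean(), graycoprops(glcm, "homogeneity").mean()

rng = np.random.default_rng(0)
# Hypothetical ROI patches for two groups (e.g., controls vs. patients).
group_a = [texture_features(rng.normal(size=(32, 32))) for _ in range(20)]
group_b = [texture_features(rng.normal(size=(32, 32)) ** 2) for _ in range(20)]

contrast_a = [f[0] for f in group_a]
contrast_b = [f[0] for f in group_b]
print("Mann-Whitney U-test on contrast:", mannwhitneyu(contrast_a, contrast_b))
```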
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Rafael Alves; Dundovic, Andrej; Sigl, Guenter
2016-05-01
We present the simulation framework CRPropa version 3, designed for efficient development of astrophysical predictions for ultra-high energy particles. Users can assemble modules of the most relevant propagation effects in galactic and extragalactic space, include their own physics modules with new features, and receive on output primary and secondary cosmic messengers including nuclei, neutrinos and photons. Extending the propagation physics contained in the previous CRPropa version, the new version facilitates high-performance computing and comprises new physical features such as an interface for galactic propagation using lensing techniques, an improved photonuclear interaction calculation, and propagation in time-dependent environments to take into account cosmic evolution effects in anisotropy studies and variable sources. First applications using highlighted features are presented as well.
Assessing Mental Health First Aid Skills Using Simulated Patients
Chen, Timothy F.; Moles, Rebekah J.; O’Reilly, Claire
2018-01-01
Objective. To evaluate mental health first aid (MHFA) skills using simulated patients and to compare self-reported confidence in providing MHFA with performance during simulated patient roleplays. Methods. Pharmacy students self-evaluated their confidence in providing MHFA post-training. Two mental health vignettes and an assessment rubric based on the MHFA Action Plan were developed to assess students’ observed MHFA skills during audio-recorded simulated patient roleplays. Results. There were 163 students who completed the MHFA training, of which 88% completed self-evaluations. There were 84% to 98% of students who self-reported that they agreed or strongly agreed they were confident providing MHFA. Postnatal depression (PND) and suicide vignettes were randomly assigned to 36 students. More students participating in the PND roleplay took appropriate actions, compared to those participating in the suicide role-play. However, more students participating in the suicide role play assessed alcohol and/or drug use. Ten (71%) participants in the PND roleplay and six (40%) in the suicide roleplay either avoided using suicide-specific terminology completely or used multiple terms rendering their inquiry unclear. Conclusion. Self-evaluated confidence levels in providing MHFA did not always reflect observed performance. Students had difficulty addressing suicide with only half passing the suicide vignette and many avoiding suicide-specific terminology. This indicates that both self-reported and observed behaviors should be used for post-training assessments. PMID:29606711
The Pattern of Indoor Smoking Restriction Law Transitions, 1970–2009: Laws Are Sticky
Sanders-Jackson, Ashley; Gonzalez, Mariaelena; Zerbe, Brandon; Song, Anna V.
2013-01-01
Objectives. We examined the pattern of the passage of smoking laws across venues (government and private workplaces, restaurants, bars) and by strength (no law to 100% smoke-free). Methods. We conducted transition analyses of local and state smoking restrictions passed between 1970 and 2009, with data from the Americans for Nonsmokers’ Rights Ordinance Database. Results. Each decade, more laws were enacted, from 18 passed in the 1970s to 3172 in the first decade of this century, when 91% of existing state laws were passed. Most laws passed took states and localities from no law to some level of smoking restriction, and most new local (77%; 5148/6648) and state (73%; 115/158) laws passed in the study period did not change strength. Conclusions. Because these laws are “sticky”—once a law has passed, strength of the law and venues covered do not change often—policymakers and advocates should focus on passing strong laws the first time, rather than settling for less comprehensive laws with the hope of improving them in the future. PMID:23763408
Electron Beam Welding of IN792 DS: Effects of Pass Speed and PWHT on Microstructure and Hardness
Angella, Giuliano; Montanari, Roberto; Richetta, Maria; Varone, Alessandra
2017-01-01
Electron Beam (EB) welding has been used to produce seams on 2 mm-thick plates of directionally solidified (DS) IN792 superalloy. The first part of this work evidenced the importance of pre-heating the workpiece to avoid the formation of long cracks in the seam. The comparison of different pre-heating temperatures (PHT) and pass speeds (v) allowed the identification of optimal process parameters, namely PHT = 300 °C and v = 2.5 m/min. The microstructural features of the melted zone (MZ), the heat affected zone (HAZ), and the base material (BM) were investigated by optical microscopy (OM), scanning electron microscopy (SEM), energy dispersion spectroscopy (EDS), electron back-scattered diffraction (EBSD), X-ray diffraction (XRD), and micro-hardness tests. In the as-welded condition, the structure of directionally oriented grains was completely lost in the MZ. The γ’ phase in the MZ consisted of small (20–40 nm) round-shaped particles and its total amount depended on both PHT and welding pass speed, whereas in the HAZ it was the same as in the BM. Even though the amount of γ’ phase in the MZ was lower than that of the as-received material, the nanometric size of the particles induced an increase in hardness. EDS examinations did not show relevant composition changes in the γ’ and γ phases. Post-welding heat treatments (PWHT) at 700 and 750 °C for two hours were performed on the best samples. After the PWHTs, the amount of the ordered phase increased, and the effect was more pronounced at 750 °C, while the size of the γ’ particles in the MZ remained almost the same. The hardness profiles measured across the joints showed an upward shift, but the peak-to-valley height was a little lower, indicating more homogeneous features in the different zones. PMID:28872620
Simulation of a Real-Time Brain Computer Interface for Detecting a Self-Paced Hitting Task.
Hammad, Sofyan H; Kamavuako, Ernest N; Farina, Dario; Jensen, Winnie
2016-12-01
An invasive brain-computer interface (BCI) is a promising neurorehabilitation device for severely disabled patients. Although some systems have been shown to work well in restricted laboratory settings, their utility must be tested in less controlled, real-time environments. Our objective was to investigate whether a specific motor task could be reliably detected from multiunit intracortical signals from freely moving animals in a simulated, real-time setting. Intracortical signals were first obtained from electrodes placed in the primary motor cortex of four rats that were trained to hit a retractable paddle (defined as a "Hit"). In the simulated real-time setting, the signal-to-noise ratio was first increased by wavelet denoising. Action potentials were detected, and features were extracted (spike count, mean absolute value, entropy, and a combination of these features) within pre-defined time windows (200 ms, 300 ms, and 400 ms) to classify the occurrence of a "Hit." We found higher detection accuracy of a "Hit" (73.1%, 73.4%, and 67.9% for the three window sizes, respectively) when the decision was made based on a combination of features rather than on a single feature. However, the effect of window length was not statistically significant (p = 0.5). Our results showed the feasibility of detecting a motor task in real time in a less restricted environment compared to the environments commonly applied within invasive BCI research, and they showed the feasibility of using information extracted from multiunit recordings, thereby avoiding the time-consuming and complex task of extracting and sorting single units. © 2016 International Neuromodulation Society.
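A minimal sketch of the windowed feature extraction described above is given below, assuming spike times have already been detected in the denoised multiunit signal. The sampling rate, window length, entropy binning, and surrogate data are illustrative assumptions, not the recording or classifier setup of the study.

```python
import numpy as np

FS = 24414.0  # assumed sampling rate of the intracortical recording, Hz

def window_features(signal, spike_times, t_start, win_s=0.3, fs=FS):
    """Spike count, mean absolute value, and histogram entropy of one time window."""
    i0, i1 = int(t_start * fs), int((t_start + win_s) * fs)
    window = signal[i0:i1]
    spike_count = np.sum((spike_times >= t_start) & (spike_times < t_start + win_s))
    mav = np.mean(np.abs(window))
    hist, _ = np.histogram(window, bins=32, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    entropy = -np.sum(p * np.log2(p))
    return np.array([spike_count, mav, entropy])

# Surrogate data: 10 s of denoised multiunit signal and random spike times.
rng = np.random.default_rng(0)
signal = rng.standard_normal(int(10 * FS))
spike_times = np.sort(rng.uniform(0, 10, size=400))

features = np.array([window_features(signal, spike_times, t) for t in np.arange(0, 9.7, 0.3)])
print(features.shape)  # one 3-element feature vector per 300-ms window
```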
Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials
Carleton, James B.; D’Amore, Antonio; Feaver, Kristen R.; ...
2014-10-13
Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. This paper addresses these issues in two ways. First, using methods of geometric probability, we develop theoretical estimates for the mean linear and areal fiber intersection densities for 2-D fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of 2-D fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of scanning electron microscope images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. Finally, the methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data.
NASA Astrophysics Data System (ADS)
Hayashi, Keiji; Feng, Xueshang; Xiong, Ming; Jiang, Chaowei
2018-03-01
For realistic magnetohydrodynamics (MHD) simulation of the solar active region (AR), two types of capabilities are required. The first is the capability to calculate the bottom-boundary electric field vector, with which the observed magnetic field can be reconstructed through the induction equation. The second is a proper boundary treatment to limit the size of the sub-Alfvénic simulation region. We developed (1) a practical inversion method to yield the solar-surface electric field vector from the temporal evolution of the three components of magnetic field data maps, and (2) a characteristic-based free boundary treatment for the top and side sub-Alfvénic boundary surfaces. We simulate the temporal evolution of AR 11158 over 16 hr for testing, using Solar Dynamics Observatory/Helioseismic Magnetic Imager vector magnetic field observation data and our time-dependent three-dimensional MHD simulation with these two features. Despite several assumptions in calculating the electric field and compromises for mitigating computational difficulties at the very low beta regime, several features of the AR were reasonably retrieved, such as twisting field structures, energy accumulation comparable to an X-class flare, and sudden changes at the time of the X-flare. The present MHD model can be a first step toward more realistic modeling of AR in the future.
Performance Evaluation of an Actuator Dust Seal for Lunar Operation
NASA Technical Reports Server (NTRS)
Delgado, Irebert R.; Gaier, James R.; Handschuh, Michael; Panko, Scott; Sechkar, Ed
2013-01-01
Exploration of extraterrestrial surfaces (e.g. moon, Mars, asteroid) will require durable space mechanisms that will survive potentially dusty surface conditions in addition to the hard vacuum and extreme temperatures of space. Baseline tests with lunar simulant were recently completed at NASA GRC on a new Low-Temperature Mechanism (LTM) dust seal for space actuator application. Following are top-level findings of the tests completed to date in vacuum using NU-LHT-2M lunar-highlands simulant. A complete set of findings are found in the conclusions section. Tests were run at approximately 10⁻⁷ torr with unidirectional rotational speed of 39 RPM. Initial break-in runs were performed at atmospheric conditions with no simulant. During the break-in runs, the maximum torque observed was 16.7 lbf-in. while the maximum seal outer diameter temperature was 103 °F. Only 0.4 milligrams of NU-LHT-2M simulant passed through the seal-shaft interface in the first 511,000 cycles while under vacuum despite a chip on the secondary sealing surface. Approximately 650,000 of a planned 1,000,000 cycles were completed in vacuum with NU-LHT-2M simulant. Upon test disassembly, NU-LHT-2M was found on the secondary sealing surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, Adam; Merzari, Elia; Sofu, Tanju
2016-08-01
High-fidelity analysis has been utilized in the design of beam target options for an accelerator-driven subcritical system. Designs featuring stacks of plates with square cross section have been investigated for both tungsten and uranium target materials. The presented work includes the first thermal-hydraulic simulations of the full, detailed target geometry. The innovative target cooling manifold design features many regions with complex flow features, including 90° bends and merging jets, which necessitate three-dimensional fluid simulations. These were performed using the commercial computational fluid dynamics code STAR-CCM+. Conjugate heat transfer was modeled between the plates, cladding, manifold structure, and fluid. Steady-state simulations were performed but lacked good residual convergence. Unsteady simulations were then performed, which converged well and demonstrated that flow instability existed in the lower portion of the manifold. It was established that the flow instability had little effect on the peak plate temperatures, which were well below the melting point. The estimated plate surface temperatures and target region pressure were shown to provide sufficient margin to subcooled boiling for standard operating conditions. This demonstrated the safety of both potential target configurations during normal operation.
A biologically inspired controller to solve the coverage problem in robotics.
Rañó, Iñaki; Santos, José A
2017-06-05
The coverage problem consists of computing a path or trajectory for a robot to pass over all the points in some free area and has applications ranging from floor cleaning to demining. Coverage is solved either as a planning problem, which provides theoretical validation of the solution, or through heuristic techniques, which rely on experimental validation. Through a combination of theoretical results and simulations, this paper presents a novel solution to the coverage problem that exploits the chaotic behaviour of a simple biologically inspired motion controller, the Braitenberg vehicle 2b. Although chaos has been used for coverage, our approach has much less restrictive assumptions about the environment and can be implemented using on-board sensors. First, we prove theoretically that this vehicle, a well-known model of animal tropotaxis, behaves as a charge in an electromagnetic field. The motion equations can be reduced to a Hamiltonian system, and therefore the vehicle follows quasi-periodic or chaotic trajectories, which pass arbitrarily close to any point in the work-space, i.e. it solves the coverage problem. Secondly, through a set of extensive simulations, we show that the trajectories cover regions of bounded workspaces, and full coverage is achieved when the perceptual range of the vehicle is short. We compare the performance of this new approach with different types of random motion controllers in the same bounded environments.
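To make the controller concrete, the sketch below integrates a generic textbook formulation of a Braitenberg vehicle 2b (two stimulus sensors with crossed excitatory connections to the wheels) moving near a single point stimulus. The sensor model, gains, and geometry are illustrative assumptions, not the parameters analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

SOURCE = np.array([0.0, 0.0])        # point stimulus (e.g. a light) at the origin
K, BASE = 1.0, 0.2                   # assumed crossed-connection gain and base wheel speed
D, PHI, AXLE = 0.1, np.pi / 4, 0.2   # sensor offset, sensor angle, wheel separation

def stimulus(p):
    """Assumed stimulus field: intensity decays with squared distance to the source."""
    return 1.0 / (1.0 + np.sum((p - SOURCE) ** 2))

def vehicle_2b(t, state):
    """Braitenberg vehicle 2b: each sensor excites the *opposite* wheel."""
    x, y, theta = state
    left_sensor = np.array([x + D * np.cos(theta + PHI), y + D * np.sin(theta + PHI)])
    right_sensor = np.array([x + D * np.cos(theta - PHI), y + D * np.sin(theta - PHI)])
    v_right = BASE + K * stimulus(left_sensor)    # crossed excitatory connections
    v_left = BASE + K * stimulus(right_sensor)
    v = 0.5 * (v_left + v_right)                  # forward speed of the differential drive
    omega = (v_right - v_left) / AXLE             # turning rate
    return [v * np.cos(theta), v * np.sin(theta), omega]

sol = solve_ivp(vehicle_2b, (0.0, 60.0), [3.0, -2.0, 0.0], max_step=0.05)
print("final position:", sol.y[0, -1], sol.y[1, -1])  # the vehicle is drawn toward/around the source
```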
Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process
NASA Astrophysics Data System (ADS)
Schwarm, F.-W.; Ballhausen, R.; Falkner, S.; Schönherr, G.; Pottschmidt, K.; Wolff, M. T.; Becker, P. A.; Fürst, F.; Marcu-Cheatham, D. M.; Hemphill, P. B.; Sokolova-Lapa, E.; Dauser, T.; Klochkov, D.; Ferrigno, C.; Wilms, J.
2017-05-01
Context. Cyclotron resonant scattering features (CRSFs) are formed by scattering of X-ray photons off quantized plasma electrons in the strong magnetic field (of the order of 10¹² G) close to the surface of an accreting X-ray pulsar. Due to the complex scattering cross-sections, the line profiles of CRSFs cannot be described by an analytic expression. Numerical methods, such as Monte Carlo (MC) simulations of the scattering processes, are required in order to predict precise line shapes for a given physical setup, which can be compared to observations to gain information about the underlying physics in these systems. Aims: A versatile simulation code is needed for the generation of synthetic cyclotron lines. Sophisticated geometries should be investigatable by making their simulation possible for the first time. Methods: The simulation utilizes the mean free path tables described in the first paper of this series for the fast interpolation of propagation lengths. The code is parallelized to make the very time-consuming simulations possible on convenient time scales. Furthermore, it can generate responses to monoenergetic photon injections, producing Green's functions, which can be used later to generate spectra for arbitrary continua. Results: We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4. The code has been developed with the main goal of overcoming previous geometrical constraints in MC simulations of CRSFs. By applying this code also to more simple, classic geometries used in previous works, we furthermore address issues of code verification and cross-comparison of various models. The XSPEC model and the Green's function tables are available online (see link in footnote, page 1).
Cornell, A.A.; Dunbar, J.V.; Ruffner, J.H.
1959-09-29
A semi-automatic method is described for the weld joining of pipes and fittings which utilizes the inert gas-shielded consumable electrode electric arc welding technique, comprising laying down the root pass at a first peripheral velocity and thereafter laying down the filler passes over the root pass necessary to complete the weld by revolving the pipes and fittings at a second peripheral velocity different from the first peripheral velocity, maintaining the welding head in a fixed position as to the specific direction of revolution, while the longitudinal axis of the welding head is disposed angularly in the direction of revolution at amounts between twenty minutes and about four degrees from the first position.
NASA Astrophysics Data System (ADS)
Fatimah, Siti; Setiawan, Wawan; Kusnendar, Jajang; Rasim; Junaeti, Enjun; Anggraeni, Ria
2017-05-01
Debriefing of pedagogical competence, through both theory and practice, is a requirement for prospective teachers and is delivered through micro-teaching and the teaching practice program. However, some partner schools have reported that participants of the teaching practice program are not well prepared to implement learning in the classroom because this debriefing is lacking. In line with developments in information technology, it is now possible to provide a briefing on pedagogical competencies for prospective teachers through an application that they can use anytime and anywhere. This study is one answer to the problem of unprepared teaching practice participants. It developed a teaching simulator, an application for learning simulation with animated film, to enhance the professional pedagogical competence of prospective teachers. With this teaching simulator, students, as prospective teachers, can test their own pedagogic competence through learning models with varied student characteristics. The teaching simulator is equipped with features that allow users to explore the quality of the teaching techniques they employ in classroom teaching and learning activities. These features include the choice of approach, the student's character, learning materials, questioning techniques, discussion, and evaluation. The application makes it easy for prospective teachers, or teachers, to develop lessons for classroom practice. The simulation models applied in the application allow users to freely manage a lesson. Development of the teaching simulator application passed through stages that included needs assessment, design, coding, testing, revision, improvement, grading, and packaging. The application is also enriched with real instructional videos as a comparison for the user. Two experts, a media expert and an education expert, stated that the teaching simulator application is feasible for use as an instrument for debriefing students who are prospective participants of the teaching practice program. Use of the application by such students showed significant increases in pedagogic competence. This study was presented at an international seminar and is in the process of publication in internationally reputable journals. The teaching simulator application is in the process of registration for copyright with the Ministry of Justice and Human Rights. Debriefing prospective teachers with the teaching simulator application can improve their mastery of pedagogy, give clear feedback, and allow repetition at any time.
The aerodynamic effects of passing trains to surrounding objects and people
DOT National Transportation Integrated Search
2009-04-01
Two safety issues are raised on the aerodynamic effects of a passing train on its surroundings. First, a high-speed train passing other trains on an adjacent track exerts aerodynamic pressure that can affect the structural integrity of window mount a...
Qi, Miao; Wang, Ting; Yi, Yugen; Gao, Na; Kong, Jun; Wang, Jianzhong
2017-04-01
Feature selection has been regarded as an effective tool to help researchers understand the generating process of data. For mining the synthesis mechanism of microporous AlPOs, this paper proposes a novel feature selection method with joint l2,1-norm and Fisher discrimination constraints (JNFDC). To obtain a more effective feature subset, the proposed method works in two steps. The first step is to rank the features according to sparse and discriminative constraints. The second step is to establish a predictive model with the ranked features and select the most significant features in light of their contribution to improving the predictive accuracy. To the best of our knowledge, JNFDC is the first work that employs sparse representation theory to explore the synthesis mechanism of six kinds of pore rings. Numerical simulations demonstrate that the proposed method can select significant features affecting the specified structural property and improve the predictive accuracy. Moreover, comparison results show that JNFDC obtains better predictive performance than some other state-of-the-art feature selection methods. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
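The two-step procedure, ranking features under sparse and discriminative constraints and then keeping the features that most improve predictive accuracy, can be illustrated with a simplified stand-in. The sketch below ranks features by a plain Fisher score rather than the full joint l2,1-norm/Fisher objective of JNFDC, then grows the feature set greedily by cross-validated accuracy; the data, classifier, and thresholds are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fisher_score(X, y):
    """Simple two-class Fisher score per feature (a stand-in for the joint
    l2,1-norm + Fisher discrimination objective of JNFDC)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

def two_step_selection(X, y, max_features=10):
    ranked = np.argsort(fisher_score(X, y))[::-1]       # step 1: rank features
    best_subset, best_acc = [], 0.0
    for k in range(1, max_features + 1):                # step 2: keep features that
        subset = ranked[:k]                             # improve predictive accuracy
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, subset], y, cv=5).mean()
        if acc > best_acc:
            best_subset, best_acc = list(subset), acc
    return best_subset, best_acc

# Synthetic illustration (the real study predicts pore-ring structure of AlPOs).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)
print(two_step_selection(X, y))
```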
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. A cubic spline interpolation curve is then fitted to the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the curve fitted by the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
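A minimal sketch of the described baseline-estimation steps follows (fiducial points at local extrema, a 1.5 Hz high-pass reference, fiducial amplitudes from the original-minus-filtered difference, and a cubic spline fit). The sampling rate, filter order, and the synthetic ECG are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, argrelextrema
from scipy.interpolate import CubicSpline

FS = 360.0  # Hz, assumed sampling rate

def estimate_baseline(ecg, fs=FS):
    # Fiducial-point positions: local maxima/minima of the signal
    # (the paper locates them per beat via the first-order derivative).
    maxima = argrelextrema(ecg, np.greater, order=int(0.3 * fs))[0]
    minima = argrelextrema(ecg, np.less, order=int(0.3 * fs))[0]
    idx = np.sort(np.concatenate([maxima, minima]))

    # High-pass filter at 1.5 Hz gives a drift-free reference at the fiducials.
    b, a = butter(2, 1.5 / (fs / 2), btype='highpass')
    filtered = filtfilt(b, a, ecg)

    # Fiducial amplitude = original minus filtered, then cubic-spline fit.
    amp = ecg[idx] - filtered[idx]
    spline = CubicSpline(idx, amp)
    return spline(np.arange(len(ecg)))

t = np.arange(0, 10, 1 / FS)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + 0.3 * np.sin(2 * np.pi * 0.15 * t)  # toy ECG + drift
corrected = ecg - estimate_baseline(ecg)
```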
NASA Astrophysics Data System (ADS)
Bernardo, Lawrence Patrick C.; Nadaoka, Kazuo; Nakamura, Takashi; Watanabe, Atsushi
2017-11-01
While widely known for their destructive power, typhoon events can also bring benefit to coral reef ecosystems through typhoon-induced cooling which can mitigate against thermally stressful conditions causing coral bleaching. Sensor deployments in Sekisei Lagoon, Japan's largest coral reef area, during the summer months of 2013, 2014, and 2015 were able to capture local hydrodynamic features of numerous typhoon passages. In particular, typhoons 2015-13 and 2015-15 featured steep drops in near-bottom temperature of 5 °C or more in the north and south sides of Sekisei Lagoon, respectively, indicating local cooling patterns which appeared to depend on the track and intensity of the passing typhoon. This was further investigated using Regional Ocean Modeling System (ROMS) numerical simulations conducted for the summer of 2015. The modeling results showed a cooling trend to the north of the Yaeyama Islands during the passage of typhoon 2015-13, and a cooling trend that moved clockwise from north to south of the islands during the passage of typhoon 2015-15. These local cooling events may have been initiated by the Yaeyama Islands acting as an obstacle to a strong typhoon-generated flow which was modulated and led to prominent cooling of waters on the leeward sides. These lower temperature waters from offshore may then be transported to the shallower inner parts of the lagoon area, which may partly be due to density-driven currents generated by the offshore-inner area temperature difference.
NASA Astrophysics Data System (ADS)
Edmond, J. A.; Hill, S. C.; Xu, H.; Perez, J. D.; Fok, M. C. H.; Goldstein, J.; McComas, D. J.; Valek, P. W.
2017-12-01
The Two Wide-Angle Imaging Neutral-Atom Spectrometers (TWINS) mission obtained energetic neutral atom (ENA) images during a 4 day storm on 7-10 September 2015. The storm has two separate SYM/H minima, so we divide the storm into four intervals: first main phase, first recovery phase, second main phase, and second recovery phase. Simulations with the Comprehensive Inner Magnetosphere-Ionosphere Model (CIMI) are compared and contrasted with the TWINS observations. We find good agreement in most aspects of the storm, e.g., (1) the locations of the ion pressure peaks are most often in the dusk-midnight sector, (2) the pitch angle distributions at the pressure peaks most often display perpendicular anisotropy, and (3) the energy spectra at the pressure peaks have similar maximum energies. There are, however, some exceptions to these general features. We describe and interpret these notable events. We have also examined particle paths determined from the CIMI model simulations to assist in the interpretation of the notable events. In this poster, we focus upon the features of the CIMI simulations with a self-consistent electric field and with the semi-empirical Weimer electric potential in relationship to the TWINS observations.
Regional Myocardial Blood Volume and Flow: First-Pass MR Imaging with Polylysine-Gd-DTPA
Wilke, Norbert; Kroll, Keith; Merkle, Hellmut; Wang, Ying; Ishibashi, Yukata; Xu, Ya; Zhang, Jiani; Jerosch-Herold, Michael; Mühler, Andreas; Stillman, Arthur E.; Bassingthwaighte, James B.; Bache, Robert; Ugurbil, Kamil
2010-01-01
The authors investigated the utility of an intravascular magnetic resonance (MR) contrast agent, poly-L-lysine-gadolinium diethylenetriaminepentaacetic acid (DTPA), for differentiating acutely ischemic from normally perfused myocardium with first-pass MR imaging. Hypoperfused regions, identified with microspheres, on the first-pass images displayed significantly decreased signal intensities compared with normally perfused myocardium (P < .0007). Estimates of regional myocardial blood content, obtained by measuring the ratio of areas under the signal intensity-versus-time curves in tissue regions and the left ventricular chamber, averaged 0.12 mL/g ± 0.04 (n = 35), compared with a value of 0.11 mL/g ± 0.05 measured with radiolabeled albumin in the same tissue regions. To obtain MR estimates of regional myocardial blood flow, in situ calibration curves were used to transform first-pass intensity-time curves into content-time curves for analysis with a multiple-pathway, axially distributed model. Flow estimates, obtained by automated parameter optimization, averaged 1.2 mL/min/g ± 0.5 (n = 29), compared with 1.3 mL/min/g ± 0.3 obtained with tracer microspheres in the same tissue specimens at the same time. The results represent a combination of T1-weighted first-pass imaging, intravascular relaxation agents, and a spatially distributed perfusion model to obtain absolute regional myocardial blood flow and volume. PMID:7766986
Boxenbaum, H
1999-01-01
Assuming complete hepatic substrate metabolism and system linearity, quantitative effects of in vivo competitive inhibition are investigated. Following oral administration of a substrate in the presence of a competitive inhibitor, determination of the inhibition constant (Ki) is possible when plasma concentration-time profiles of both substrate and inhibitor are available. When triazolam is the P450 3A4 substrate and ketoconazole the competitive inhibitor, Ki is approximately 1.2 microg/mL in humans. The effects of competitive inhibition can be divided into two components: first-pass hepatic metabolism and systemic metabolism. For drugs with high hepatic extraction ratios, the impact of competitive inhibition on hepatic first-pass metabolism can be particularly dramatic. For example, human terfenadine hepatic extraction goes from 95% in the absence of a competitive inhibitor to 35% in the presence of one (ketoconazole, 200 mg po Q 12 h dosed to steady state). The fraction escaping first-pass extraction therefore goes from 5% in the absence of the inhibitor to 65% in its presence. The combined effect on first-pass and systemic metabolism produces an approximately 37-fold increase in terfenadine area under the plasma concentration-time curve. Assuming intact drug is active and/or toxic, development of metabolized drugs with extensive first-pass metabolism should be avoided if possible, since inhibition of metabolism may lead to profound increases in exposure.
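The combined effect of reduced first-pass extraction and reduced systemic clearance on oral exposure can be reproduced with simple well-stirred-model arithmetic using the extraction ratios quoted above. Treating hepatic clearance as proportional to the extraction ratio and oral AUC as proportional to (1 - E)/E is a modelling assumption of this sketch, so the result (about 35-fold) only approximates the roughly 37-fold figure reported.

```python
# Well-stirred-model arithmetic for the terfenadine/ketoconazole example.
# Oral AUC ~ Dose * F_H / CL_H with F_H = 1 - E_H and CL_H = Q_H * E_H
# (assumptions of this sketch; the paper's model may differ in detail).
E_no_inhibitor = 0.95     # hepatic extraction ratio without ketoconazole
E_inhibited = 0.35        # hepatic extraction ratio with ketoconazole

def relative_oral_auc(E):
    F_first_pass = 1.0 - E     # fraction escaping first-pass metabolism
    CL_relative = E            # hepatic clearance proportional to E (Q_H cancels)
    return F_first_pass / CL_relative

fold_increase = relative_oral_auc(E_inhibited) / relative_oral_auc(E_no_inhibitor)
print(f"Predicted fold increase in AUC: {fold_increase:.1f}")   # about 35-fold
```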
2016-09-01
Report excerpt (table-of-contents and figure-list residue): the report takes as an example the integration of cryogenic superconductor components, including filters and amplifiers, to improve pulse quality and validate the ...; sections include cryogenic band-pass filters and a bibliography, and figures include a gain plot of a DARPA SURF tunable band-pass filter tuned to 950 MHz and a VSG measurement at -50 dBm.
Rouillard, Andrew D; Hurle, Mark R; Agarwal, Pankaj
2018-05-01
Target selection is the first and pivotal step in drug discovery. An incorrect choice may not manifest itself for many years after hundreds of millions of research dollars have been spent. We collected a set of 332 targets that succeeded or failed in phase III clinical trials, and explored whether Omic features describing the target genes could predict clinical success. We obtained features from the recently published comprehensive resource: Harmonizome. Nineteen features appeared to be significantly correlated with phase III clinical trial outcomes, but only 4 passed validation schemes that used bootstrapping or modified permutation tests to assess feature robustness and generalizability while accounting for target class selection bias. We also used classifiers to perform multivariate feature selection and found that classifiers with a single feature performed as well in cross-validation as classifiers with more features (AUROC = 0.57 and AUPR = 0.81). The two predominantly selected features were mean mRNA expression across tissues and standard deviation of expression across tissues, where successful targets tended to have lower mean expression and higher expression variance than failed targets. This finding supports the conventional wisdom that it is favorable for a target to be present in the tissue(s) affected by a disease and absent from other tissues. Overall, our results suggest that it is feasible to construct a model integrating interpretable target features to inform target selection. We anticipate deeper insights and better models in the future, as researchers can reuse the data we have provided to improve methods for handling sample biases and learn more informative features. Code, documentation, and data for this study have been deposited on GitHub at https://github.com/arouillard/omic-features-successful-targets.
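A minimal sketch of the kind of single-feature, cross-validated evaluation described (AUROC and AUPR for a one-feature classifier) is shown below; the data are synthetic and the modelling choices (logistic regression, 5-fold cross-validation) are assumptions, since the study's exact pipeline and bias corrections are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, average_precision_score

# Synthetic stand-in: one feature (e.g., mean mRNA expression across tissues)
# and a binary phase III outcome; lower expression -> higher success rate.
rng = np.random.default_rng(1)
n = 332
expression = rng.normal(size=n)
outcome = (rng.random(n) < 1 / (1 + np.exp(1.0 * expression))).astype(int)

X = expression.reshape(-1, 1)
probs = cross_val_predict(LogisticRegression(), X, outcome,
                          cv=5, method='predict_proba')[:, 1]

print("AUROC:", round(roc_auc_score(outcome, probs), 2))
print("AUPR :", round(average_precision_score(outcome, probs), 2))
```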
The Odds of Success: Predicting Registered Health Information Administrator Exam Success
Dolezel, Diane; McLeod, Alexander
2017-01-01
The purpose of this study was to craft a predictive model to examine the relationship between grades in specific academic courses, overall grade point average (GPA), on-campus versus online course delivery, and success in passing the Registered Health Information Administrator (RHIA) exam on the first attempt. Because student success in passing the exam on the first attempt is assessed as part of the accreditation process, this study is important to health information management (HIM) programs. Furthermore, passing the exam greatly expands the graduate's job possibilities because the demand for credentialed graduates far exceeds the supply of credentialed graduates. Binary logistic regression was utilized to explore the relationships between the predictor variables and success in passing the RHIA exam on the first attempt. Results indicate that the student's cumulative GPA, specific HIM course grades, and course delivery method were predictive of success. PMID:28566994
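The analysis described, a binary logistic regression relating GPA, course grades, and delivery method to first-attempt pass/fail, can be sketched as follows; the variable names, coding, and synthetic data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration of the described model; none of these data are real.
rng = np.random.default_rng(2)
n = 150
df = pd.DataFrame({
    "gpa": rng.normal(3.2, 0.4, n).clip(2.0, 4.0),
    "him_course_grade": rng.normal(3.0, 0.6, n).clip(0.0, 4.0),
    "online": rng.integers(0, 2, n),          # 1 = online delivery, 0 = on campus
})
logit = -6 + 1.5 * df.gpa + 0.5 * df.him_course_grade - 0.4 * df.online
df["passed_first_attempt"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["gpa", "him_course_grade", "online"]])
result = sm.Logit(df["passed_first_attempt"], X).fit(disp=False)
print(np.exp(result.params))   # odds ratios for each predictor
```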
NASA Astrophysics Data System (ADS)
Lau, Graham Elliot
Sulfur is one of the most ubiquitous elements in the universe and one of those that are crucial for life as we know it. This graduate dissertation presents the culmination of work conducted to better understand biological and geochemical processes related to sulfur cycling at a sulfur-dominated field site in the Canadian High Arctic. This site, situated in a valley called Borup Fiord Pass, provides a unique environment where sulfide-rich fluids emerge from a glacier and form large deposits of ice that become covered in elemental sulfur. The role of biology is compelling and yet challenging to define in each step of sulfur cycling at Borup Fiord Pass, whether one considers the origin of the sulfide (presumed biological sulfate reduction in the subsurface) or one focuses on the processes driving sulfur oxidation and stabilization at the glacier's surface. This dissertation presents results from a field expedition in 2014 as well as detailed mineralogical and spectroscopic analyses of sulfur-rich materials returned from the field. The importance of sulfur and carbonate minerals at this site is considered. Analyses of materials within pyrite alteration features in the valley are also explored. These features appear to represent emplaced subsurface sulfide ores, which have been subsequently leached near the surface, forming gossanous structures. The geochemistry and mineralogy of these features are explored, as is their potential to serve as analogs for the exploration of Mars. The dissertation concludes with some consideration of potential future work as well as a recapitulation of the current state of knowledge of processes at Borup Fiord Pass.
Using Functional Languages and Declarative Programming to analyze ROOT data: LINQtoROOT
NASA Astrophysics Data System (ADS)
Watts, Gordon
2015-05-01
Modern high energy physics analysis is complex. It typically requires multiple passes over different datasets, and is often held together with a series of scripts and programs. For example, one has to first reweight the jet energy spectrum in Monte Carlo to match data before plots of any other jet-related variable can be made. This requires a pass over the Monte Carlo and the data to derive the reweighting, and then another pass over the Monte Carlo to plot the variables the analyser is really interested in. With most modern ROOT-based tools this requires separate analysis loops for each pass, and script files to glue the results of the two analysis loops together. A framework has been developed that uses the functional and declarative features of the C# language and its Language Integrated Query (LINQ) extensions to declare the analysis. The framework uses language tools to convert the analysis into C++ and runs ROOT or PROOF as a backend to get the results. This gives the analyser the full power of an object-oriented programming language to put together the analysis and at the same time the speed of C++ for the analysis loop. The tool allows one to incorporate C++ algorithms written for ROOT by others. A by-product of the design is the ability to cache results between runs, dramatically reducing the cost of adding one more plot, and also to keep a complete record associated with each plot for data preservation reasons. The code is mature enough to have been used in ATLAS analyses. The package is open source and available on the open source site CodePlex.
Iontophoresis (medlineplus.gov/ency/article/007293.htm)
Iontophoresis involves passing a weak electrical current through ...
Apparatus and method to compensate for refraction of radiation
Allen, Gary R.; Moskowitz, Philip E.
1990-01-01
An apparatus to compensate for refraction of radiation passing through a curved wall of an article is provided. The apparatus of a preferred embodiment is particularly advantageous for use in arc tube discharge diagnostics. The apparatus of the preferred embodiment includes means for pre-refracting radiation on a predetermined path by an amount equal and inverse to the refraction which occurs when radiation passes through a first wall of the arc tube such that, when the radiation passes through the first wall of the arc tube and into the cavity thereof, the radiation passes through the cavity approximately on the predetermined path; means for releasably holding the article such that the radiation passes through the cavity thereof; and means for post-refracting radiation emerging from a point of the arc tube opposite its point of entry by an amount equal and inverse to the refraction which occurs when radiation emerges from the arc tube. In one embodiment, the means for pre-refracting radiation includes a first half tube, comprising a longitudinally bisected tube obtained from a tube approximately identical to the arc tube's cylindrical portion, and a first cylindrical lens, the first half tube being mounted with its concave side facing the radiation source and the first cylindrical lens being mounted between the first half tube and the arc tube; the means for post-refracting radiation includes a second half tube, comprising a longitudinally bisected tube obtained from a tube approximately identical to the arc tube's cylindrical portion, and a second cylindrical lens, the second half tube being mounted with its convex side facing the radiation source and the second cylindrical lens being mounted between the arc tube and the second half tube. Methods to compensate for refraction of radiation passing into and out of an arc tube are also provided.
Lazoura, Olga; Ismail, Tevfik F; Pavitt, Christopher; Lindsay, Alistair; Sriharan, Mona; Rubens, Michael; Padley, Simon; Duncan, Alison; Wong, Tom; Nicol, Edward
2016-02-01
Assessment of the left atrial appendage (LAA) for thrombus and anatomy is important prior to atrial fibrillation (AF) ablation and LAA exclusion. The use of cardiovascular CT (CCT) to detect LAA thrombus has been limited by the high incidence of pseudothrombus on single-pass studies. We evaluated the diagnostic accuracy of a two-phase protocol incorporating a limited low-dose delayed contrast-enhanced examination of the LAA, compared with a single-pass study for LAA morphological assessment, and transesophageal echocardiography (TEE) for the exclusion of thrombus. Consecutive patients (n = 122) undergoing left atrial interventions for AF were assessed. All had a two-phase CCT protocol (first-pass scan plus a limited, 60-s delayed scan of the LAA) and TEE. Sensitivity, specificity, diagnostic accuracy, positive (PPV) and negative predictive values (NPV) were calculated for the detection of true thrombus on first-pass and delayed scans, using TEE as the gold standard. Overall, 20/122 (16.4 %) patients had filling defects on the first-pass study. All affected the full delineation of the LAA morphology; 17/20 (85 %) were confirmed as pseudo-filling defects. Three (15 %) were seen on late-pass and confirmed as true thrombi on TEE; a significant improvement in diagnostic performance relative to a single-pass scan (McNemar Chi-square 17, p < 0.001). The sensitivity, specificity, diagnostic accuracy, PPV and NPV were 100, 85.7, 86.1, 15.0 and 100 % respectively for first-pass scans, and 100 % for all parameters for the delayed scans. The median (range) additional radiation dose for the delayed scan was 0.4 (0.2-0.6) mSv. A low-dose delayed scan significantly improves the identification of true LAA anatomy and thrombus in patients undergoing LA intervention.
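The reported accuracy figures follow from the standard 2x2 confusion-table definitions against TEE as the gold standard; the sketch below shows that arithmetic plus the McNemar statistic for the paired protocols. The counts are reconstructed from the figures quoted in the abstract (122 patients, 20 first-pass filling defects, of which 17 were pseudothrombi and 3 true thrombi), so they are an interpretation rather than the study's raw data.

```python
# Diagnostic performance from a 2x2 table against a gold standard (TEE),
# plus a McNemar chi-square for comparing two paired protocols.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, accuracy, ppv, npv

def mcnemar_chi2(b, c):
    """b, c = discordant pairs (protocol A wrong / B right, and vice versa)."""
    return (b - c) ** 2 / (b + c)

# First-pass protocol, counts reconstructed from the abstract:
# 3 true thrombi detected, 17 pseudothrombi called positive, 102 true negatives.
print(diagnostic_metrics(tp=3, fp=17, fn=0, tn=102))
# Delayed protocol: no false positives among the same 122 patients.
print(diagnostic_metrics(tp=3, fp=0, fn=0, tn=119))
print("McNemar chi-square:", mcnemar_chi2(17, 0))
```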
A prototype for the PASS Permanent All Sky Survey
NASA Astrophysics Data System (ADS)
Deeg, H. J.; Alonso, R.; Belmonte, J. A.; Horne, K.; Alsubai, K.; Collier Cameron, A.; Doyle, L. R.
2004-10-01
A prototype system for the Permanent All Sky Survey (PASS) project is presented. PASS is a continuous photometric survey of the entire celestial sphere with a high temporal resolution. Its major objectives are the detection of all giant-planet transits (with periods up to some weeks) across stars up to mag 10.5, and to deliver continuous photometry that is useful for the study of any variable stars. The prototype is based on CCD cameras with short focal length optics on a fixed mount. A small dome to house it at Teide Observatory, Tenerife, is currently being constructed. A placement at the Antarctic Dome C is also being considered. The prototype will be used for a feasibility study of PASS, to define the best observing strategies, and to perform a detailed characterization of the capabilities and scope of the survey. Afterwards, a first partial sky survey will be started with it. That first survey may be able to detect transiting planets during its first few hundred hours of operation. It will also deliver a data set around which software modules dealing with the various scientific objectives of PASS will be developed. The PASS project is still in its early phase, and teams interested in specific scientific objectives, in providing technical expertise, or in participating with their own observations are invited to collaborate.
ACHP | News | Legislation Passes Senate
Legislation passed by the Senate continues historic preservation programs founded by each of the past two First Ladies, including Hillary Clinton. "Bipartisan approval of this legislation by an overwhelming margin reflects the ...
How Perturbing Ocean Floor Disturbs Tsunami Waves
NASA Astrophysics Data System (ADS)
Salaree, A.; Okal, E.
2017-12-01
Bathymetry maps play perhaps the most crucial role in optimal tsunami simulations. Regardless of the simulation method, on one hand it is desirable to include every detailed bathymetry feature in the simulation grids in order to predict tsunami amplitudes as accurately as possible, but on the other hand, large grids result in long simulation times. It is therefore of interest to investigate a "sufficiency" level, if any, for the amount of detail in bathymetry grids needed to reconstruct the most important features in tsunami simulations, as obtained from the actual bathymetry. In this context, we use a spherical harmonics series approach to decompose the bathymetry of the Pacific Ocean into its components down to a resolution of 4 degrees (l=100) and create bathymetry grids by accumulating the resulting terms. We then use these grids to simulate the tsunami behavior from pure thrust events around the Pacific through the MOST algorithm (e.g. Titov & Synolakis, 1995; Titov & Synolakis, 1998). Our preliminary results reveal that one would only need to consider the sum of the first 40 coefficients (equivalent to a resolution of 1000 km) to reproduce the main components of the "real" results. This would result in simpler simulations and potentially allow for more efficient tsunami warning algorithms.
Static Noise Margin Enhancement by Flex-Pass-Gate SRAM
NASA Astrophysics Data System (ADS)
O'Uchi, Shin-Ichi; Masahara, Meishoku; Sakamoto, Kunihiro; Endo, Kazuhiko; Liu, Yungxun; Matsukawa, Takashi; Sekigawa, Toshihiro; Koike, Hanpei; Suzuki, Eiichi
A Flex-Pass-Gate SRAM, i.e., a fin-type field-effect transistor (FinFET) based SRAM, is proposed to enhance the noise margin during both read and write operations. In its cell, the flip-flop is composed of usual three-terminal (3T) FinFETs while the pass gates are composed of four-terminal (4T) FinFETs. The 4T-FinFETs make it possible to adopt dynamic threshold-voltage control in the pass gates. During a write operation, the threshold voltage of the pass gates is lowered to enhance the writing speed and stability. During a read operation, on the other hand, the threshold voltage is raised to enhance the static noise margin. An asymmetric-oxide 4T-FinFET helps manage the leakage current through the pass gate. In this paper, a design strategy for the pass gate with an asymmetric gate oxide is considered, and a TCAD-based Monte Carlo simulation reveals that the Flex-Pass-Gate SRAM based on that design strategy is expected to be effective in half-pitch 32-nm technology for low-standby-power (LSTP) applications, even taking into account the variability in device performance.
An image mosaic method based on corner
NASA Astrophysics Data System (ADS)
Jiang, Zetao; Nie, Heting
2015-08-01
In view of the shortcomings of traditional image mosaicking, this paper describes a new algorithm for image mosaicking based on Harris corners. First, a Harris operator combined with a constructed low-pass smoothing filter based on spline functions and a circular window search is applied to detect image corners, which gives better localisation performance and effectively avoids clustering. Second, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus (RANSAC). Finally, a weighted trigonometric method combined with an interpolation function is used for image fusion. Experiments show that this method can effectively remove splicing ghosting and improve the accuracy of the image mosaic.
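As an illustrative sketch of the pipeline outlined above (corner detection, feature matching, RANSAC outlier rejection, and weighted blending), the snippet below uses standard OpenCV building blocks; it substitutes ORB descriptors at Harris-style corners and a simple averaged blend for the paper's spline-smoothed Harris detector, correlation matching, and trigonometric weighting, so it is not the authors' algorithm.

```python
import cv2
import numpy as np

def mosaic(img1, img2):
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)

    def harris_keypoints(gray):
        # Harris-style corners (stand-in for the paper's spline-smoothed detector).
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=1000, qualityLevel=0.01,
                                      minDistance=7, useHarrisDetector=True)
        return [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in pts]

    kp1, des1 = orb.compute(gray1, harris_keypoints(gray1))
    kp2, des2 = orb.compute(gray2, harris_keypoints(gray2))

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # reject false matches

    h, w = img1.shape[:2]
    warped = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas = np.zeros_like(warped)
    canvas[:h, :w] = img1

    # Simple averaged blend in the overlap region (stand-in for the paper's
    # weighted trigonometric fusion).
    overlap = (canvas.sum(axis=2) > 0) & (warped.sum(axis=2) > 0)
    return np.where(overlap[..., None], canvas // 2 + warped // 2,
                    np.maximum(canvas, warped))
```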
FT-IR spectroscopy study on cutaneous neoplasie
NASA Astrophysics Data System (ADS)
Crupi, V.; De Domenico, D.; Interdonato, S.; Majolino, D.; Maisano, G.; Migliardo, P.; Venuti, V.
2001-05-01
In this work we report a preliminary Fourier transform infrared spectroscopy study of normal and neoplastic human skin samples from patients suffering from two kinds of cancer, namely epithelioma and basalioma. The analyzed skin samples were drawn from different parts of the human body by biopsy. After a complex band deconvolution, made necessary by the complexity of the tissue composition, the analysis of the collected IR spectra within the considered frequency region (900-4000 cm-1) allowed us, first of all, to characterize the presence of the pathologies and to show clearly different spectral features in passing from normal to malignant tissue, in particular within the region (1500-2000 cm-1) typical of the lipid bands.
NASA Astrophysics Data System (ADS)
Xiao, Ze-xin; Chen, Kuan
2008-03-01
The biochemical analyzer is one of the important instruments in clinical diagnosis, and its optical system is a key component. The operation of this optical system can be regarded as three parts: first, transforming polychromatic (compound) light into monochromatic light; second, transforming the monochromatic light signal, which carries the information of the measured sample, into an electrical signal by means of a photoelectric detector; and third, sending the signal to the data processing system through the control system. Generally, there are three types of monochromator: prism, diffraction grating, and narrow-band-pass filter. Of these, the narrow-band-pass filter is widely used in semi-automatic biochemical analyzers. From an analysis of the principle of a biochemical analyzer based on the narrow-band-pass filter, the optical system has three features: first, the optical path is a non-imaging system; second, the system covers a wide spectral region containing visible light and the ultraviolet; and third, it is a small-aperture, small-field monochromatic light system. The design ideas for this optical system are therefore: (1) low transmission loss of luminous energy in the system; (2) efficient coupling of the luminous energy to the detector, mainly by correcting spherical aberration. Practice showed the following image quality criteria: (1) the dispersion circle diameter should equal 125% of the effective pixel width of the receiving device, and 80% of the energy of a point target should fall within the effective pixel width inside this dispersion circle; (2) for MTF evaluation, the MTF value at a spatial frequency of 20 lp/mm should not be lower than 0.6. The optical system must accommodate the wide ultraviolet and visible spectrum, but with defocus optimization the detector image plane can only be matched to most of the visible spectrum, while the image planes for violet and ultraviolet light are displaced considerably. Traditional biochemical analyzer optical designs have not fully considered this point; the authors introduce an effective image-plane compensation measure, which greatly increases the reception efficiency for violet and ultraviolet light.
Spanish validation of the Premorbid Adjustment Scale (PAS-S).
Barajas, Ana; Ochoa, Susana; Baños, Iris; Dolz, Montse; Villalta-Gil, Victoria; Vilaplana, Miriam; Autonell, Jaume; Sánchez, Bernardo; Cervilla, Jorge A; Foix, Alexandrina; Obiols, Jordi E; Haro, Josep Maria; Usall, Judith
2013-02-01
The Premorbid Adjustment Scale (PAS) has been the most widely used scale to quantify premorbid status in schizophrenia, coming to be regarded as the gold standard of retrospective assessment instruments. To examine the psychometric properties of the Spanish version of the PAS (PAS-S). Retrospective study of 140 individuals experiencing a first episode of psychosis (n=77) and individuals who have schizophrenia (n=63), both adult and adolescent patients. Data were collected through a socio-demographic questionnaire and a battery of instruments which includes the following scales: PAS-S, PANSS, LSP, GAF and DAS-sv. The Cronbach's alpha was performed to assess the internal consistency of PAS-S. Pearson's correlations were performed to assess the convergent and discriminant validity. The Cronbach's alpha of the PAS-S scale was 0.85. The correlation between social PAS-S and total PAS-S was 0.85 (p<0.001); while for academic PAS-S and total PAS-S it was 0.53 (p<0.001). Significant correlations were observed between all the scores of each age period evaluated across the PAS-S scale, with a significance value less than 0.001. There was a relationship between negative symptoms and social PAS-S (0.20, p<0.05) and total PAS-S (0.22, p<0.05), but not with academic PAS-S. However, there was a correlation between academic PAS-S and general subscale of the PANSS (0.19, p<0.05). Social PAS-S was related to disability measures (DAS-sv); and academic PAS-S showed discriminant validity with most of the variables of social functioning. PAS-S did not show association with the total LSP scale (discriminant validity). The Spanish version of the Premorbid Adjustment Scale showed appropriate psychometric properties in patients experiencing a first episode of psychosis and who have a chronic evolution of the illness. Moreover, each domain of the PAS-S (social and academic premorbid functioning) showed a differential relationship to other characteristics such as psychotic symptoms, disability or social functioning after onset of illness. Copyright © 2013 Elsevier Inc. All rights reserved.
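Cronbach's alpha, the internal-consistency statistic reported for the PAS-S, can be computed directly from item scores; a minimal sketch follows, with synthetic item data standing in for the actual PAS-S responses.

```python
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_subjects, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic stand-in for PAS-S item scores (140 subjects, correlated items).
rng = np.random.default_rng(3)
trait = rng.normal(size=(140, 1))
items = trait + 0.8 * rng.normal(size=(140, 8))
print(round(cronbach_alpha(items), 2))
```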
Feature-level analysis of a novel smartphone application for smoking cessation.
Heffner, Jaimee L; Vilardaga, Roger; Mercer, Laina D; Kientz, Julie A; Bricker, Jonathan B
2015-01-01
Currently, there are over 400 smoking cessation smartphone apps available, downloaded an estimated 780,000 times per month. No prior studies have examined how individuals engage with specific features of cessation apps and whether use of these features is associated with quitting. Using data from a pilot trial of a novel smoking cessation app, we examined: (i) the 10 most-used app features, and (ii) prospective associations between feature usage and quitting. Participants (n = 76) were from the experimental arm of a randomized, controlled pilot trial of an app for smoking cessation called "SmartQuit," which includes elements of both Acceptance and Commitment Therapy (ACT) and traditional cognitive behavioral therapy (CBT). Utilization data were automatically tracked during the 8-week treatment phase. Thirty-day point prevalence smoking abstinence was assessed at 60-day follow-up. The most-used features - quit plan, tracking, progress, and sharing - were mostly CBT. Only two of the 10 most-used features were prospectively associated with quitting: viewing the quit plan (p = 0.03) and tracking practice of letting urges pass (p = 0.03). Tracking ACT skill practice was used by fewer participants (n = 43) but was associated with cessation (p = 0.01). In this exploratory analysis without control for multiple comparisons, viewing a quit plan (CBT) as well as tracking practice of letting urges pass (ACT) were both appealing to app users and associated with successful quitting. Aside from these features, there was little overlap between a feature's popularity and its prospective association with quitting. Tests of causal associations between feature usage and smoking cessation are now needed.
Special-purpose computing for dense stellar systems
NASA Astrophysics Data System (ADS)
Makino, Junichiro
2007-08-01
I'll describe the current status of the GRAPE-DR project. The GRAPE-DR is the next-generation hardware for N-body simulation. Unlike the previous GRAPE hardware, it is a programmable SIMD machine with a large number of simple processors integrated into a single chip. The GRAPE-DR chip consists of 512 simple processors and operates at a clock speed of 500 MHz, delivering a theoretical peak speed of 512/256 Gflops (single/double precision). As of August 2006, the first prototype board with the sample chip successfully passed the tests we prepared. The full GRAPE-DR system will consist of 4096 chips, reaching a theoretical peak speed of 2 Pflops.
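The quoted peak numbers follow from simple arithmetic, shown below under the assumption that each processing element completes two single-precision (or one double-precision) floating-point operations per cycle.

```latex
\begin{align*}
P_{\text{single}} &= N_{\text{PE}} \times f_{\text{clk}} \times 2\ \tfrac{\text{flop}}{\text{cycle}}
                   = 512 \times 0.5\,\text{GHz} \times 2 = 512\ \text{Gflops},\\
P_{\text{double}} &= N_{\text{PE}} \times f_{\text{clk}} \times 1\ \tfrac{\text{flop}}{\text{cycle}}
                   = 512 \times 0.5\,\text{GHz} = 256\ \text{Gflops},\\
P_{\text{system}} &\approx 4096\ \text{chips} \times 512\ \text{Gflops} \approx 2\ \text{Pflops}.
\end{align*}
```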
Vegetative leaf area is a critical input to models that simulate human and ecosystem exposure to atmospheric pollutants. Leaf area index (LAI) can be measured in the field or numerically simulated, but all contain some inherent uncertainty that is passed to the exposure assessmen...
Development of Pediatric Neurologic Emergency Life Support Course: A Preliminary Report.
Haque, Anwarul; Arif, Fehmina; Abass, Qalab; Ahmed, Khalid
2017-11-01
Acute neurological emergencies (ANEs) in children are common life-threatening illnesses and are associated with high mortality and severe neurological disability in survivors if not recognized early and treated appropriately. We describe our experience of teaching a short, novel course, "Pediatric Neurologic Emergency Life Support," to pediatricians and trainees in a resource-limited country. This course was conducted at 5 academic hospitals from November 2013 to December 2014. It is a hybrid of pediatric advanced life support and emergency neurologic life support. The course is designed to increase knowledge and impart practical training on early recognition and timely, appropriate treatment in the first hour for children with ANEs. Neuroresuscitation and neuroprotective strategies are key components of this course to prevent and treat secondary injuries. Four cases of ANEs (status epilepticus, nontraumatic coma, raised intracranial pressure, and severe traumatic brain injury) were taught as case simulations in a stepped-care, protocolized approach based on best clinical practices, with emphasis on key points of management in the first hour. Eleven courses were conducted during the study period. One hundred ninety-six physicians, including 19 consultants and 171 residents, participated in these courses. The mean (SD) score was 65.15 (13.87%). Seventy percent (132) of participants passed (passing score > 60%). The overall satisfaction rate was 85%. Pediatric Neurologic Emergency Life Support was delivered for the first time as an educational tool to improve the outcome of children with ANEs, with good achievement and a high satisfaction rate among participants. A larger number of courses is required for future validation.
Alteren, Johanne; Nerdal, Lisbeth
2015-01-01
In Norwegian nurse education, students are required to achieve a perfect score in a medication calculation test before undertaking their first practice period during the second semester. Passing the test is a challenge, and students often require several attempts. Adverse events in medication administration can be related to poor mathematical skills. The purpose of this study was to explore the relationship between high school mathematics grade and the number of attempts required to pass the medication calculation test in nurse education. The study used an exploratory design. The participants were 90 students enrolled in a bachelor’s nursing program. They completed a self-report questionnaire, and statistical analysis was performed. The results provided no basis for the conclusion that a statistical relationship existed between high school mathematics grade and number of attempts required to pass the medication calculation test. Regardless of their grades in mathematics, 43% of the students passed the medication calculation test on the first attempt. All of the students who had achieved grade 5 had passed by the third attempt. High grades in mathematics were not crucial to passing the medication calculation test. Nonetheless, the grade may be important in ensuring a pass within fewer attempts. PMID:27417767
NASA Technical Reports Server (NTRS)
Kleinberg, L.
1982-01-01
Circuit uses standard components to overcome a common limitation of JFET amplifiers. Low-noise band-pass amplifier employs JFET and operational amplifier. High gain and band-pass characteristics are achieved with suitable choice of resistances and capacitances. Circuit should find use as low-noise amplifier, for example, as first stage of instrumentation systems.
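The brief does not give component values, so as a purely illustrative sketch the following computes the frequency response of a generic second-order band-pass stage for an assumed center frequency, Q, and mid-band gain; it is not the specific circuit described.

```python
import numpy as np
from scipy import signal

# Generic second-order band-pass: H(s) = G * (w0/Q) s / (s^2 + (w0/Q) s + w0^2).
# Center frequency, Q, and gain are assumed values, not those of the NASA circuit.
f0, Q, G = 1e3, 5.0, 100.0          # 1 kHz center, Q = 5, mid-band gain = 100
w0 = 2 * np.pi * f0

num = [G * w0 / Q, 0.0]
den = [1.0, w0 / Q, w0 ** 2]
w, h = signal.freqs(num, den, worN=np.logspace(1, 6, 500) * 2 * np.pi)

f = w / (2 * np.pi)
gain_db = 20 * np.log10(np.abs(h))
print(f"Peak gain: {gain_db.max():.1f} dB near {f[gain_db.argmax()]:.0f} Hz")
```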
Liu, Yang; Glass, Nancy L; Glover, Chris D; Power, Robert W; Watcha, Mehernoor F
2013-12-01
Ultrasound-guided regional anesthesia (UGRA) skills are traditionally obtained by supervised performance on patients, but practice on phantom models improves success. Currently available models are expensive or use perishable products, for example, olive-in-chicken breasts (OCB). We constructed 2 inexpensive phantom (transparent and opaque) models with readily available nonperishable products and compared the process of learning UGRA skills by novice practitioners on these models with the OCB model. Three experts first established criteria for a satisfactory completion of the simulated UGRA task in the 3 models. Thirty-six novice trainees (<20 previous UGRA experience) were randomly assigned to perform a UGRA task on 1 of 3 models-the transparent, opaque, and OCB models, where the hyperechoic target was identified, a needle was advanced to it under ultrasound guidance, fluid was injected, and images were saved. We recorded the errors during task completion, number of attempts and needle passes, and the time for target identification and needle placement until the predetermined benchmark of 3 consecutive successful UGRA simulations was accomplished. The number of errors, needle passes, and time for task completion per attempt progressively decreased in all 3 groups. However, failure to identify the target and to visualize the needle on the ultrasound image occurred more frequently with the OCB model. The time to complete simulator training was shortest with the transparent model, owing to shorter target identification times. However, trainees were less likely to agree strongly that this model was realistic for teaching UGRA skills. Training on inexpensive synthetic simulation models with no perishable products permits learning of UGRA skills by novices. The OCB model has disadvantages of containing potentially infective material, requires refrigeration, cannot be used after multiple needle punctures, and is associated with more failures during simulated UGRA. Direct visualization of the target in the transparent model allows the trainee to focus on needle insertion skills, but the opaque model may be more realistic for learning target identification skills required when UGRA is performed on real patients in the operating room.
WarpIV: In situ visualization and analysis of ion accelerator simulations
Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...
2016-05-09
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
Progressive Fracture of Composite Structures
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
2001-01-01
This report includes the results of research in which the COmposite Durability STRuctural ANalysis (CODSTRAN) computational simulation capabilities were augmented and applied to various structures for demonstration of the new features and verification. The first chapter of this report provides an introduction to the computational simulation or virtual laboratory approach for the assessment of damage and fracture progression characteristics in composite structures. The second chapter outlines the details of the overall methodology used, including the failure criteria and the incremental/iterative loading procedure with the definitions of damage, fracture, and equilibrium states. The subsequent chapters each contain an augmented feature of the code and/or demonstration examples. All but one of the presented examples contain laminated composite structures with various fiber/matrix constituents. For each structure simulated, damage initiation and progression mechanisms are identified and the structural damage tolerance is quantified at various degradation stages. Many chapters contain the simulation of defective and defect-free structures to evaluate the effects of existing defects on structural durability.
White, James A P; Bond, Ian P; Jagger, Daryll C
2011-01-01
This study investigated how ribbed design features, including palatal rugae, may be used to significantly improve the structural performance of a maxillary denture under load. A computer-aided design model of a generic maxillary denture, incorporating various rib features, was created and imported into a finite element analysis program. The denture and ribbed features were assigned the material properties of standard denture acrylic resin, and load was applied in two different ways: the first simulating a three-point flexural bend of the posterior section and the second simulating loading of the entire palatal region. To investigate the combined use of ribbing and reinforcement, the same simulations were repeated with the ribbed features having a Young modulus two orders of magnitude greater than denture acrylic resin. For a prescribed load, total displacements of tracking nodes were compared to those of a control denture (without ribbing) to assess relative denture rigidity. When subjected to flexural loading, an increase in rib depth was seen to result in a reduction of both the transverse displacement of the last molar and vertical displacement at the centerline. However, ribbed features assigned the material properties of denture acrylic resin require a depth that may impose on speech and bolus propulsion before significant improvements are observed. The use of ribbed features, when made from a significantly stiffer material (eg, fiber-reinforced polymer) and designed to mimic palatal rugae, offer an acceptable method of providing significant improvements in rigidity to a maxillary denture under flexural load.
Experimental Investigation of Superradiance in a Tapered Free-Electron Laser Amplifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hidaka, Y.; She, Y.; Murphy, J.B.
2011-03-28
We report experimental studies of the effect of undulator tapering on superradiance in a single-pass high-gain free-electron laser (FEL) amplifier. The experiments were performed at the Source Development Laboratory (SDL) of the National Synchrotron Light Source (NSLS). Efficiency was nearly tripled with tapering. Both the temporal and spectral properties of the superradiant FEL along the uniform and tapered undulator were experimentally characterized using frequency-resolved optical gating (FROG) images. Numerical studies predicted pulse broadening and spectral cleaning by undulator tapering. Pulse broadening was experimentally verified. However, spectral cleanliness degraded with tapering. We have performed first experiments with a tapered undulator and a short seed laser pulse. Pulse broadening with tapering expected from simulations was experimentally confirmed. However, the experimentally obtained spectra degraded with tapering, whereas the simulations predicted improvement. A further numerical study is under way to resolve this issue.
A discontinuous Galerkin method for gravity-driven viscous fingering instabilities in porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scovazzi, G.; Gerstenberger, A.; Collis, S. S.
2013-01-01
We present a new approach to the simulation of gravity-driven viscous fingering instabilities in porous media flow. These instabilities play a very important role during carbon sequestration processes in brine aquifers. Our approach is based on a nonlinear implementation of the discontinuous Galerkin method, and possesses a number of key features. First, the method developed is inherently high order, and is therefore well suited to study unstable flow mechanisms. Secondly, it maintains high-order accuracy on completely unstructured meshes. The combination of these two features makes it a very appealing strategy in simulating the challenging flow patterns and very complex geometries of actual reservoirs and aquifers. This article includes an extensive set of verification studies on the stability and accuracy of the method, and also features a number of computations with unstructured grids and non-standard geometries.
What will happen to retirement income for 401(k) participants after the market decline?
VanDerhei, Jack
2010-04-01
This paper uses administrative data from millions of 401(k) participants dating back to 1996, as well as several simulation models, to determine 401(k) plans' susceptibility to several alleged limitations and their potential for significant retirement wealth accumulation for employees working for employers who have chosen to sponsor these plans. What will happen to 401(k) participants after the 2008 market decline will be largely determined by the extent to which the features of automatic enrollment, automatic escalation of contributions, and automatic investment are allowed to play out. Simulation results suggest that the first two features will significantly improve retirement wealth for the lowest-income quartiles going forward, and the third feature (primarily target-date funds) suggests that a large percentage of those on the verge of retirement would benefit significantly from a reduction of equity concentrations to a more age-appropriate level.
Learning From Where Students Look While Observing Simulated Physical Phenomena
NASA Astrophysics Data System (ADS)
Demaree, Dedra
2005-04-01
The Physics Education Research (PER) Group at the Ohio State University (OSU) has developed Virtual Reality (VR) programs for teaching introductory physics concepts. Winter 2005, the PER group worked with OSU's cognitive science eye-tracking lab to probe what features students look at while using our VR programs. We see distinct differences in the features students fixate on depending upon whether or not they have formally studied the related physics. Students who first make predictions seem to fixate more on the relevant features of the simulation than those who do not, regardless of their level of education. It is known that students sometimes perform an experiment and report results consistent with their misconceptions but inconsistent with the experimental outcome. We see direct evidence of one student holding onto misconceptions despite fixating frequently on the information needed to understand the correct answer. Future studies using these technologies may prove valuable for tackling difficult questions regarding student learning.
Ritter, E Matthew; Taylor, Zachary A; Wolf, Kathryn R; Franklin, Brenton R; Placek, Sarah B; Korndorffer, James R; Gardner, Aimee K
2018-01-01
The fundamentals of endoscopic surgery (FES) program has considerable validity evidence for its use in measuring the knowledge, skills, and abilities required for competency in endoscopy. Beginning in 2018, the American Board of Surgery will require all candidates to have taken and passed the written and performance exams in the FES program. Recent work has shown that the current ACGME/ABS required case volume may not be enough to ensure trainees pass the FES skills exam. The aim of this study was to investigate the feasibility of a simulation-based mastery-learning curriculum delivered on a novel physical simulation platform to prepare trainees to pass the FES manual skills exam. The newly developed endoscopy training system (ETS) was used as the training platform. Seventeen PGY 1 (10) and PGY 2 (7) general surgery residents completed a pre-training assessment consisting of all 5 FES tasks on the GI Mentor II. Subjects then trained to previously determined expert performance benchmarks on each of 5 ETS tasks. Once training benchmarks were reached for all tasks, a post-training assessment was performed with all 5 FES tasks. Two subjects were lost to follow-up and never returned for training or post-training assessment. One additional subject failed to complete any portion of the curriculum, but did return for post-training assessment. The group had minimal endoscopy experience (median 0, range 0-67) and minimal prior simulation experience. Three trainees (17.6%) achieved a passing score on the pre-training FES assessment. Training consisted of an average of 48 ± 26 repetitions on the ETS platform distributed over 5.1 ± 2 training sessions. Seventy-one percent achieved proficiency on all 5 ETS tasks. There was dramatic improvement demonstrated on the mean post-training FES assessment when compared to pre-training (74.0 ± 8 vs. 50.4 ± 16, p < 0.0001, effect size = 2.4). The number of ETS tasks trained to proficiency correlated moderately with the score on the post-training assessment (r = 0.57, p = 0.028). Fourteen (100%) subjects who trained to proficiency on at least one ETS task passed the post-training FES manual skills exam. This simulation-based mastery learning curriculum using the ETS is feasible for training novices and allows for the acquisition of the technical skills required to pass the FES manual skills exam. This curriculum should be strongly considered by programs wishing to ensure that trainees are prepared for the FES exam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartmann, A.; Frenkel, J.; Hopf, R.
Amyloidosis is a systemic disease frequently involving the myocardium and leading to functional disturbances of the heart. Amyloidosis can mimic other cardiac diseases. A conclusive clinical diagnosis of cardiac involvement can only be made by a combination of different diagnostic methods. In 7 patients with myocardial amyloidosis we used a combined first-pass and static scintigraphy with technetium-99m-pyrophosphate. There was only insignificant myocardial uptake of the tracer. The first-pass studies however revealed reduced systolic function in 4/7 patients and impaired diastolic function in 6/7 patients. Therefore, although cardiac amyloid could not be demonstrated in the static scintigraphy due to amyloid fibril amount and composition, myocardial functional abnormalities were seen in the first-pass study.
Recuperated atmospheric SOFC/gas turbine hybrid cycle
Lundberg, Wayne
2010-05-04
A method of operating an atmospheric-pressure solid oxide fuel cell generator (6) in combination with a gas turbine comprising a compressor (1) and expander (2), where an inlet oxidant (20) is passed through the compressor (1) and exits as a first stream (60) and a second stream (62), the first stream passing through a flow control valve (56) to control flow and then through a heat exchanger (54), followed by mixing with the second stream (62), where the mixed streams are passed through a combustor (8) and expander (2) and the first heat exchanger for temperature control before entry into the solid oxide fuel cell generator (6), which generator (6) is also supplied with fuel (40).
Recuperated atmosphere SOFC/gas turbine hybrid cycle
Lundberg, Wayne
2010-08-24
A method of operating an atmospheric-pressure solid oxide fuel cell generator (6) in combination with a gas turbine comprising a compressor (1) and expander (2), where an inlet oxidant (20) is passed through the compressor (1) and exits as a first stream (60) and a second stream (62), the first stream passing through a flow control valve (56) to control flow and then through a heat exchanger (54), followed by mixing with the second stream (62), where the mixed streams are passed through a combustor (8) and expander (2) and the first heat exchanger for temperature control before entry into the solid oxide fuel cell generator (6), which generator (6) is also supplied with fuel (40).
Dubin-Johnson syndrome
Dubin-Johnson syndrome (DJS) is a disorder passed down through ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, S.; Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049; Beeson, S.
Self-induced gaseous plasma is evaluated as active opening switch medium for pulsed high power microwave radiation. The self-induced plasma switch is investigated for N₂ and Ar environments under pressure conditions ranging from 25 to 700 Torr. A multi-pass TE₁₁₁ resonator is used to significantly reduce the delay time inherently associated with plasma generation. The plasma forms under the pulsed excitation of a 4 MW magnetron inside the central dielectric tube of the resonator, which isolates the inner atmospheric gas from the outer vacuum environment. The path from the power source to the load is designed such that the pulse passes through the plasma twice with a 35 ns delay between these two passes. In the first pass, initial plasma density is generated, while the second affects the transition to a highly reflective state with as much as 30 dB attenuation. Experimental data revealed that virtually zero delay time may be achieved for N₂ at 25 Torr. A two-dimensional fluid model was developed to study the plasma formation times for comparison with experimental data. The delay time predicted from this model agrees well with the experimental values in the lower pressure regime (error < 25%); however, due to filamentary plasma formation at higher pressures, simulated delay times may be underestimated by as much as 50%.
Li, Jinjian; Dridi, Mahjoub; El-Moudni, Abdellah
2016-01-01
The problem of simultaneously reducing traffic delays and fuel consumption in a network of intersections without traffic lights is solved by a cooperative traffic control algorithm, where the cooperation is based on Vehicle-to-Infrastructure (V2I) connectivity. The solution involves two main steps. The first step concerns the itinerary, i.e., which intersections each vehicle passes through to reach its destination from its starting point. Based on the principle of minimal travel distance, each vehicle chooses its itinerary dynamically according to the traffic loads at the adjacent intersections. The second step comprises the proposed cooperative procedures that allow vehicles to pass through each intersection rapidly and economically: on one hand, using the real-time information sent by vehicles via V2I at the edge of the communication zone, each intersection applies Dynamic Programming (DP) to cooperatively optimize the vehicle passing sequence with minimal traffic delays, so that vehicles may rapidly pass the intersection under the relevant safety constraints; on the other hand, after receiving this sequence, each vehicle finds the optimal speed profile with minimal fuel consumption by an exhaustive search. The simulation results reveal that the proposed algorithm can significantly reduce both travel delays and fuel consumption under different traffic volumes, compared with previously published methods. PMID:27999333
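As a rough illustration of the passing-sequence optimization described in the abstract above, the sketch below brute-forces the order in which vehicles cross a single intersection so as to minimize total delay under a minimum-headway safety constraint. It is a toy stand-in, not the authors' dynamic-programming formulation; the vehicle IDs, arrival times, and the 2 s headway are hypothetical.

```python
from itertools import permutations

def best_passing_sequence(arrivals, headway=2.0):
    """Brute-force stand-in for the paper's DP: find the crossing order that
    minimizes total delay at one intersection.

    arrivals: dict vehicle_id -> earliest possible arrival time at the stop line (s)
    headway:  minimum time separation between consecutive crossings (s)
    """
    best_order, best_delay = None, float("inf")
    for order in permutations(arrivals):
        t_prev, total_delay = float("-inf"), 0.0
        for v in order:
            # A vehicle crosses at its earliest arrival time or one headway after
            # the previous vehicle, whichever comes later.
            t_cross = max(arrivals[v], t_prev + headway)
            total_delay += t_cross - arrivals[v]
            t_prev = t_cross
        if total_delay < best_delay:
            best_order, best_delay = order, total_delay
    return best_order, best_delay

# Hypothetical example: three vehicles approaching the intersection.
print(best_passing_sequence({"A": 0.0, "B": 0.5, "C": 3.0}))
```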
Dietl, Charles A; Russell, John C
2016-01-01
The purpose of this article is to review the literature on current technology for surgical education and to evaluate the effect of technological advances on the Accreditation Council of Graduate Medical Education (ACGME) Core Competencies, American Board of Surgery In-Training Examination (ABSITE) scores, and American Board of Surgery (ABS) certification. A literature search was obtained from MEDLINE via PubMed.gov, ScienceDirect.com, and Google Scholar on all peer-reviewed studies published since 2003 using the following search queries: technology for surgical education, simulation-based surgical training, simulation-based nontechnical skills (NTS) training, ACGME Core Competencies, ABSITE scores, and ABS pass rate. Our initial search list included the following: 648 on technology for surgical education, 413 on simulation-based surgical training, 51 on simulation-based NTS training, 78 on ABSITE scores, and 33 on ABS pass rate. Further, 42 articles on technological advances for surgical education met inclusion criteria based on their effect on ACGME Core Competencies, ABSITE scores, and ABS certification. Systematic review showed that 33 of 42 and 26 of 42 publications on technological advances for surgical education showed objective improvements regarding patient care and medical knowledge, respectively, whereas only 2 of 42 publications showed improved ABSITE scores, but none showed improved ABS pass rates. Improvements in the other ACGME core competencies were documented in 14 studies, 9 of which were on simulation-based NTS training. Most of the studies on technological advances for surgical education have shown a positive effect on patient care and medical knowledge. However, the effect of simulation-based surgical training and simulation-based NTS training on ABSITE scores and ABS certification has not been assessed. Studies on technological advances in surgical education and simulation-based NTS training showing quantitative evidence that surgery residency program objectives are achieved are still needed. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Longtao; Wong, Sun; Wang, Tao; Huffman, George J.
2018-01-01
Simulation of moist convective processes is critical for accurately representing the interaction among tropical wave activities, atmospheric water vapor transport, and clouds associated with the Indian monsoon Intraseasonal Oscillation (ISO). In this study, we apply the Weather Research and Forecasting (WRF) model to simulate the Indian monsoon ISO with three different treatments of moist convective processes: (1) the Betts-Miller-Janjić (BMJ) adjustment cumulus scheme without explicit simulation of moist convective processes; (2) the New Simplified Arakawa-Schubert (NSAS) mass-flux scheme with simplified moist convective processes; and (3) explicit simulation of moist convective processes at convection-permitting scale (Nest). Results show that the BMJ experiment is unable to properly reproduce the equatorial Rossby wave activities and the corresponding phase relationship between moisture advection and dynamical convergence during the ISO. These features associated with the ISO are approximately captured in the NSAS experiment. The simulation with resolved moist convective processes significantly improves the representation of the ISO evolution and agrees well with the observations. This study features the first attempt to investigate the Indian monsoon at convection-permitting scale.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2010-01-01
A brief introduction of the temporal filter based partially resolved numerical simulation/very large eddy simulation approach (PRNS/VLES) and its distinct features are presented. A nonlinear dynamic subscale model and its advantages over the linear subscale eddy viscosity model are described. In addition, a guideline for conducting a PRNS/VLES simulation is provided. Results are presented for three turbulent internal flows. The first one is the turbulent pipe flow at low and high Reynolds numbers to illustrate the basic features of PRNS/VLES; the second one is the swirling turbulent flow in a LM6000 single injector to further demonstrate the differences in the calculated flow fields resulting from the nonlinear model versus the pure eddy viscosity model; the third one is a more complex turbulent flow generated in a single-element lean direct injection (LDI) combustor, the calculated result has demonstrated that the current PRNS/VLES approach is capable of capturing the dynamically important, unsteady turbulent structures while using a relatively coarse grid.
NASA Astrophysics Data System (ADS)
Jung, Y. K.; Udalski, A.; Bond, I. A.; Yee, J. C.; Gould, A.; Han, C.; Albrow, M. D.; Lee, C.-U.; Kim, S.-L.; Hwang, K.-H.; Chung, S.-J.; Ryu, Y.-H.; Shin, I.-G.; Zhu, W.; Cha, S.-M.; Kim, D.-J.; Lee, Y.; Park, B.-G.; Kim, H.-W.; Pogge, R. W.; KMTNet Collaboration; Skowron, J.; Szymański, M. K.; Poleski, R.; Mróz, P.; Kozłowski, S.; Pietrukowicz, P.; Soszyński, I.; Ulaczyk, K.; Pawlak, M.; OGLE Collaboration; Abe, F.; Bennett, D. P.; Barry, R.; Sumi, T.; Asakura, Y.; Bhattacharya, A.; Donachie, M.; Fukui, A.; Hirao, Y.; Itow, Y.; Koshimoto, N.; Li, M. C. A.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Nagakane, M.; Rattenbury, N. J.; Evans, P.; Sharan, A.; Sullivan, D. J.; Suzuki, D.; Tristram, P. J.; Yamada, T.; Yamada, T.; Yonehara, A.; MOA Collaboration
2017-06-01
We report the analysis of the first resolved caustic-crossing binary-source microlensing event OGLE-2016-BLG-1003. The event is densely covered by round-the-clock observations of three surveys. The light curve is characterized by two nested caustic-crossing features, which is unusual for typical caustic-crossing perturbations. From the modeling of the light curve, we find that the anomaly is produced by a binary source passing over a caustic formed by a binary lens. The result proves the importance of high-cadence and continuous observations, and the capability of second-generation microlensing experiments to identify such complex perturbations that are previously unknown. However, the result also raises the issues of the limitations of current analysis techniques for understanding lens systems beyond two masses and of determining the appropriate multiband observing strategy of survey experiments.
Two-electrode low supply voltage electrocardiogram signal amplifier.
Dobrev, D
2004-03-01
Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation, including telemedicine applications. Low-voltage and low-power design tendencies prevail. Modern battery cell voltages in the range of 3-3.6 V require appropriate circuit solutions. A two-electrode biopotential amplifier design is presented, with a high common-mode rejection ratio (CMRR), high input voltage tolerance and standard first-order high-pass characteristic. Most of these features are due to a high-gain first stage design. The circuit makes use of passive components of popular values and tolerances. Powered by a single 3 V source, the amplifier tolerates +/- 1 V common mode voltage, +/- 50 microA common mode current and 2 V input DC voltage, and its worst-case CMRR is 60 dB. The amplifier is intended for use in various applications, such as Holter-type monitors, defibrillators, ECG monitors, biotelemetry devices etc.
Mid-Infrared Spectroscopy of Persistent Leonid Trains
NASA Technical Reports Server (NTRS)
Russell, Ray W.; Rossano, George S.; Chatelain, Mark A.; Lynch, David K.; Tessensohn, Ted K.; Abendroth, Eric; Kim, Daryl; Jenniskens, Peter; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
The first infrared spectroscopy in the 3-13 micron region has been obtained of several persistent Leonid meteor trains with two different instrument types, one at a desert ground-based site and the other on-board a high-flying aircraft. The spectra exhibit common structures assigned to enhanced emissions of warm CH4, CO2, CO and H2O which may originate from heated trace air compounds or materials created in the wake of the meteor. This is the first time that any of these molecules has been observed in the spectra of persistent trains. Hence, the mid-IR observations offer a new perspective on the physical processes that occur in the path of the meteor at some time after the meteor itself has passed by. Continuum emission is observed also, but its origin has not yet been established. No 10 micron dust emission feature has been observed.
Kim, Bum-Keun; Cho, Ah-Ra; Park, Dong-June
2016-09-01
We analyzed the physical properties and digestibility of apigenin-loaded emulsions as they passed through a simulated digestion model. As the emulsion passed through the simulated stages of digestion, the particle size and zeta potential of all the samples changed, except for the soybean oil-Tween 80 emulsion, in which zeta potential remained constant through all stages, indicating that soybean oil-Tween 80 emulsions may have an effect on stability during all stages of digestion. Fluorescence microscopy was used to observe the morphology of the emulsions at each step. The in vivo pharmacokinetics revealed that apigenin-loaded soybean oil-Tween 80 emulsions had a higher oral bioavailability than did the orally administered apigenin suspensions. These results suggest that W/O/W multiple emulsions formulated with soybean oil and Tween 80 have great potential as targeted delivery systems for apigenin, and may enhance in vitro and in vivo bioavailability when they pass through the digestive tract. Copyright © 2016 Elsevier Ltd. All rights reserved.
Design and Analysis of a Micromachined LC Low Pass Filter For 2.4GHz Application
NASA Astrophysics Data System (ADS)
Saroj, Samruddhi R.; Rathee, Vishal R.; Pande, Rajesh S.
2018-02-01
This paper reports the design and analysis of a passive low-pass filter with a cut-off frequency of 2.4 GHz using MEMS (Micro Electro-Mechanical Systems) technology. The passive components, suspended spiral inductors and a metal-insulator-metal (MIM) capacitor, are arranged in a T network to implement the LC low-pass filter design. This design employs a simple suspension approach that reduces parasitic losses and eliminates the performance-degrading effects of integrating an off-chip inductor; the filter circuit is proposed to be developed on a low-cost silicon substrate using RF-MEMS components. The filter occupies only a 2.1 mm x 0.66 mm die area and is designed using a micro-strip transmission line placed on a silicon substrate. The design is implemented in High Frequency Structure Simulator (HFSS) software and a fabrication flow is proposed for its implementation. The simulated results show that the design has an insertion loss of -4.98 dB and a return loss of -2.60 dB.
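As a back-of-the-envelope companion to the T-network description above, the sketch below sizes an ideal third-order Butterworth low-pass T-network (series L, shunt C, series L) for a 2.4 GHz cut-off. The 50 Ω system impedance and the Butterworth prototype values are assumptions for illustration and are not taken from the paper.

```python
import math

def butterworth_lowpass_T(fc_hz, r_ohm=50.0):
    """Ideal 3rd-order Butterworth low-pass T-network (series L, shunt C, series L).

    Prototype element values for a 3rd-order Butterworth filter: g1 = g3 = 1, g2 = 2.
    Returns (L_series, C_shunt, L_series) in henries and farads.
    """
    w_c = 2 * math.pi * fc_hz
    g1, g2, g3 = 1.0, 2.0, 1.0
    L1 = g1 * r_ohm / w_c          # first series inductor
    C2 = g2 / (r_ohm * w_c)        # shunt capacitor
    L3 = g3 * r_ohm / w_c          # second series inductor
    return L1, C2, L3

L1, C2, L3 = butterworth_lowpass_T(2.4e9)
print(f"L = {L1 * 1e9:.2f} nH, C = {C2 * 1e12:.2f} pF, L = {L3 * 1e9:.2f} nH")
# roughly 3.3 nH / 2.7 pF / 3.3 nH for an assumed 50-ohm, 2.4 GHz design
```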
NASA Astrophysics Data System (ADS)
De Marco, Rossana; Marcucci, Maria Federica; Brienza, Daniele; Bruno, Roberto; Consolini, Giuseppe; Perrone, Denise; Valentini, Francesco; Servidio, Sergio; Stabile, Sara; Pezzi, Oreste; Sorriso-Valvo, Luca; Lavraud, Benoit; De Keyser, Johan; Retinò, Alessandro; Fazakerley, Andrew; Wicks, Robert; Vaivads, Andris; Salatti, Mario; Veltri, Pierluigi
2017-04-01
Turbulence Heating ObserveR (THOR) is the first mission devoted to studying the energization, acceleration and heating of turbulent space plasmas, and is designed to perform field and particle measurements at kinetic scales in different near-Earth regions and in the solar wind. Solar Orbiter (SolO), together with Solar Probe Plus, will provide the first comprehensive remote and in situ measurements which are critical to establish the fundamental physical links between the Sun's dynamic atmosphere and the turbulent solar wind. The fundamental process of turbulent dissipation is mediated by physical mechanisms that occur at a variety of temporal and spatial scales, and most efficiently at kinetic scales. Hybrid Vlasov-Maxwell simulations of solar-wind turbulence show that kinetic effects manifest as particle beams, the production of temperature anisotropies and ring-like modulations, and the preferential heating of heavy ions. We use a numerical code able to reproduce the response of a typical electrostatic analyzer of top-hat type starting from velocity distribution functions (VDFs) generated by Hybrid Vlasov-Maxwell (HVM) numerical simulations. Here, we show how optimized particle measurements by top-hat analysers can capture the kinetic features injected by turbulence in the VDFs.
Size effects on plasticity and fatigue microstructure evolution in FCC single crystals
NASA Astrophysics Data System (ADS)
El-Awady, Jaafar Abbas
In aircraft structures and engines, fatigue damage is manifest in the progressive emergence of distributed surface cracks near locations of high stress concentrations. At the present time, reliable methods for prediction of fatigue crack initiation are not available, because the phenomenon starts at the atomic scale. Initiation of fatigue cracks is associated with the formation of persistent slip bands (PSBs), which start at certain critical conditions inside metals with specific microstructure dimensions. The main objective of this research is to develop predictive computational capabilities for plasticity and fatigue damage evolution in finite volumes. To that end, a dislocation dynamics model that incorporates the influence of free and internal interfaces on dislocation motion is presented. The model is based on a self-consistent formulation of 3-D Parametric Dislocation Dynamics (PDD) with the Boundary Element Method (BEM) to describe dislocation motion, and hence microscopic plastic flow, in finite volumes. The developed computer models are benchmarked by detailed comparisons with the experimental data, developed at the Wright-Patterson Air Force Lab (WP-AFRL), by three-dimensional large scale simulations of compression loading on micro-scale samples of FCC single crystals. These simulation results provide an understanding of plastic deformation of micron-size single crystals. The plastic flow characteristics as well as the stress-strain behavior of simulated micropillars are shown to be in general agreement with experimental observations. New size scaling aspects of plastic flow and work-hardening are identified through the use of these simulations. The flow strength versus the diameter of the micropillar follows a power law with an exponent equal to -0.69. A stronger correlation is observed between the flow strength and the average length of activated dislocation sources. This relationship is again a power law, with an exponent -0.85. Simulation results with and without the activation of cross-slip are compared. Discontinuous hardening is observed when cross-slip is included. Experimentally-observed size effects on plastic flow and work-hardening are consistent with a "weakest-link activation mechanism". In addition, the variations and periodicity of dislocation activation are analyzed using the Fast Fourier Transform (FFT). We then present models of localized plastic deformation inside persistent slip band channels. We investigate the interaction between screw dislocations as they pass one another inside channel walls in copper. The model shows the mechanisms of dislocation bowing, dipole formation and binding, and dipole destruction as screw dislocations pass one another. The mechanism of "dipole passing" is assessed and interpreted in terms of the fatigue saturation stress. We also present results for the effects of the wall dipole structure on the dipole passing mechanism. The edge dislocation dipolar walls are also seen to affect the passing stress. It is shown that the passing stress in the middle of the channel is reduced by 11 to 23% depending on the initial configuration of the screw dislocations with respect to one another. Finally, large-scale simulations of the expansion of edge dipoles from the channel walls indicate that the screw dislocations in the PSB channels may not meet "symmetrically", i.e., precisely in the center of the channel, but slightly to one or the other side. For this configuration the passing stress is lowered, which is in agreement with experimental observations.
Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
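The discrete-event half of the direct-execution idea can be illustrated with a toy event-queue model in which each send becomes a timestamped delivery and each receive advances the receiver's simulated clock. This is a conceptual sketch only, not LAPSE; the fixed latency, trace format, and two-pass replay (which ignores senders blocking on their own receives, a simplification a real simulator does not make) are assumptions for illustration.

```python
import heapq

LATENCY = 5.0  # assumed fixed network latency, in simulated time units

def simulate(traces):
    """traces: {rank: [('compute', dt) | ('send', dst_rank) | ('recv', src_rank), ...]}"""
    clock = {rank: 0.0 for rank in traces}
    events = []        # (delivery_time, dst, src)
    pending = {}       # (dst, src) -> delivery times, in order
    # Pass 1: schedule message deliveries from each sender's local timeline.
    # (Simplification: sender timelines ignore blocking on their own receives.)
    for rank, ops in traces.items():
        t = 0.0
        for op, arg in ops:
            if op == 'compute':
                t += arg
            elif op == 'send':
                heapq.heappush(events, (t + LATENCY, arg, rank))
    while events:
        t_arrive, dst, src = heapq.heappop(events)
        pending.setdefault((dst, src), []).append(t_arrive)
    # Pass 2: replay each rank, blocking receives on the matching delivery time.
    for rank, ops in traces.items():
        for op, arg in ops:
            if op == 'compute':
                clock[rank] += arg
            elif op == 'recv':
                t_arrive = pending[(rank, arg)].pop(0)
                clock[rank] = max(clock[rank], t_arrive)
    return clock

print(simulate({0: [('compute', 10), ('send', 1)],
                1: [('compute', 2), ('recv', 0), ('compute', 3)]}))
# rank 1 finishes at max(2, 10 + 5) + 3 = 18 simulated time units
```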
Does a surgical simulator improve resident operative performance of laparoscopic tubal ligation?
Banks, Erika H; Chudnoff, Scott; Karmin, Ira; Wang, Cuiling; Pardanani, Setul
2007-11-01
The purpose of this study was to assess whether a surgical skills simulator laboratory improves resident knowledge and operative performance of laparoscopic tubal ligation. Twenty postgraduate year 1 residents were assigned randomly to either a surgical simulator laboratory on laparoscopic tubal ligation together with apprenticeship teaching in the operating room or to apprenticeship teaching alone. Tests that were given before and after the training assessed basic knowledge. Attending physicians who were blinded to resident randomization status evaluated postgraduate year 1 performance on a laparoscopic tubal ligation in the operating room with 3 validated tools: a task-specific checklist, global rating scale, and pass/fail grade. Postgraduate year 1 residents who were assigned randomly to the surgical simulator laboratory performed significantly better than control subjects on all 3 surgical assessment tools (the checklist, the global score, and the pass/fail analysis) and scored significantly better on the knowledge posttest (all P < .0005). Compared with apprenticeship teaching alone, a surgical simulator laboratory on laparoscopic tubal ligation improved resident knowledge and performance in the operating room.
Featured Image: The Birth of Spiral Arms
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-01-01
In this figure, the top panels show three spiral galaxies in the Virgo cluster, imaged with the Sloan Digital Sky Survey. The bottom panels provide a comparison with three morphologically similar galaxies generated in simulations. The simulations run by Marcin Semczuk, Ewa Łokas, and Andrés del Pino (Nicolaus Copernicus Astronomical Center, Poland) were designed to examine how the spiral arms of galaxies like the Milky Way may have formed. In particular, the group explored the possibility that so-called grand-design spiral arms are caused by tidal effects as a Milky-Way-like galaxy orbits a cluster of galaxies. The authors show that the gravitational potential of the cluster can trigger the formation of two spiral arms each time the galaxy passes through the pericenter of its orbit around the cluster. Check out the original paper below for more information! Citation: Marcin Semczuk et al 2017 ApJ 834 7. doi:10.3847/1538-4357/834/1/7
Chavis, Pamella Ivey
Relationships between self-esteem, locus of control (LOC), and first-time passage of National Council Licensure Examination for Registered Nurses (NCLEX-RN®) were examined at baccalaureate nursing programs at two historically black colleges and universities. Shortages continue to exceed demands for RNs prepared at the baccalaureate level. Inconsistent pass rates on the NCLEX-RN for graduates of historically black colleges and universities impede the supply of RNs. Surveys and archival data were used to examine characteristics of the sample and explore relationships among variables. All participants (N = 90) reported high self-esteem and internal LOC. Models suggested that all those with high self-esteem and internal LOC would pass the NCLEX-RN; only 85 percent passed the first time. Statistical analysis revealed a lack of statistical significance between self-esteem, LOC, and first-time passage. Variables not included in the study may have affected first-time passage.
The clumpy absorber in the high-mass X-ray binary Vela X-1
Grinberg, V.; Hell, N.; El Mellah, I.; ...
2017-12-15
Bright and eclipsing, the high-mass X-ray binary Vela X-1 offers a unique opportunity to study accretion onto a neutron star from clumpy winds of O/B stars and to disentangle the complex accretion geometry of these systems. In Chandra-HETGS spectroscopy at orbital phase ~0.25, when our line of sight towards the source does not pass through the large-scale accretion structure such as the accretion wake, we observe changes in overall spectral shape on timescales of a few kiloseconds. This spectral variability is, at least in part, caused by changes in overall absorption and we show that such strongly variable absorption cannot be caused by unperturbed clumpy winds of O/B stars. We detect line features from high and low ionization species of silicon, magnesium, and neon whose strengths and presence depend on the overall level of absorption. Finally, these features imply a co-existence of cool and hot gas phases in the system, which we interpret as a highly variable, structured accretion flow close to the compact object such as has been recently seen in simulations of wind accretion in high-mass X-ray binaries.
NASA Astrophysics Data System (ADS)
Guo, C.; Tong, X.; Liu, S.; Liu, S.; Lu, X.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Determining the attitude of the satellite at the time of imaging and then establishing the mathematical relationship between image points and ground points is essential in high-resolution remote sensing image mapping. The star tracker is insensitive to high-frequency attitude variation because of measurement noise and satellite jitter, but the low-frequency attitude motion can be determined with high accuracy. The gyro, as a short-term reference for the satellite's attitude, is sensitive to high-frequency attitude change, but because of gyro drift and integration error, its attitude determination error increases with time. Based on the opposite noise frequency characteristics of the two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method for star sensors and gyros based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). In this study, the principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired from the attitude kinematics equation. An attitude information fusion method is then introduced, which applies high-pass filtering and low-pass filtering to the gyro and star tracker, respectively. Second, the CF-based attitude fusion data are introduced as the observed values of the UKF system in the measurement update. The accuracy and effectiveness of the method are validated using simulated sensor attitude data. The obtained results indicate that the proposed method can suppress the gyro drift and the measurement noise of the attitude sensors, improving the accuracy of the attitude determination significantly compared with the simulated on-orbit attitude and with the attitude estimation results of a UKF defined by the same simulation parameters.
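A single-axis complementary filter conveys the fusion idea in the abstract above: the gyro path is trusted at short time scales (high-pass) and the star-tracker path at long time scales (low-pass). The scalar-angle formulation, the blending time constant, and the synthetic signals below are illustrative assumptions; the paper itself works with quaternions and feeds the fused attitude into a UKF.

```python
import numpy as np

def complementary_filter(gyro_rate, star_angle, dt, tau=10.0):
    """Single-axis complementary filter.

    gyro_rate:  angular-rate samples (rad/s); good at high frequency, drifts at DC
    star_angle: star-tracker angle samples (rad); accurate at low frequency but noisy
    tau:        crossover time constant (s); alpha near 1 trusts the gyro at short
                time scales and the star tracker at long ones
    """
    alpha = tau / (tau + dt)
    fused = np.empty(len(gyro_rate))
    fused[0] = star_angle[0]
    for k in range(1, len(gyro_rate)):
        gyro_prediction = fused[k - 1] + gyro_rate[k] * dt              # high-pass path
        fused[k] = alpha * gyro_prediction + (1 - alpha) * star_angle[k]  # low-pass path
    return fused

# Synthetic check: constant 0.01 rad/s slew, biased noisy gyro, noisy star tracker.
dt, n = 0.1, 2000
t = np.arange(n) * dt
truth = 0.01 * t
gyro = 0.01 + 0.002 + 0.001 * np.random.randn(n)   # true rate + bias + noise
star = truth + 0.01 * np.random.randn(n)           # true angle + noise
est = complementary_filter(gyro, star, dt)
print(f"RMS attitude error: {np.sqrt(np.mean((est - truth) ** 2)):.4f} rad")
```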
Wide-bandwidth high-resolution search for extraterrestrial intelligence
NASA Technical Reports Server (NTRS)
Horowitz, Paul
1995-01-01
Research was accomplished during the third year of the grant on: BETA architecture, an FFT array, a feature extractor, the Pentium array and workstation, and a radio astronomy spectrometer. The BETA (this SETI project) system architecture has been evolving generally in the direction of greater robustness against terrestrial interference. The new design adds a powerful state-memory feature, multiple simultaneous thresholds, and the ability to integrate multiple spectra in a flexible state-machine architecture. The FFT array is reported with regards to its hardware verification, array production, and control. The feature extractor is responsible for maintaining a moving baseline, recognizing large spectral peaks, following the progress of previously identified interesting spectral regions, and blocking signals from regions previously identified as containing interference. The Pentium array consists of 21 Pentium-based PC motherboards, each with 16 MByte of RAM and an Ethernet interface. Each motherboard receives and processes the data from a feature extractor/correlator board set, passing on the results of a first analysis to the central Unix workstation (through which each is also booted). The radio astronomy spectrometer is a technological spinoff from SETI work. It is proposed to be a combined spectrometer and power-accumulator, for use at Arecibo Observatory to search for neutral hydrogen emission from condensations of neutral hydrogen at high redshift (z = 5).
Su, Hang; Yin, Zhaozheng; Huh, Seungil; Kanade, Takeo
2013-10-01
Phase-contrast microscopy is one of the most common and convenient imaging modalities to observe long-term multi-cellular processes, which generates images by the interference of light passing through transparent specimens and background medium with different retarded phases. Despite many years of study, computer-aided phase contrast microscopy analysis of cell behavior is challenged by image quality and artifacts caused by phase contrast optics. Addressing the unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, which is a fundamental task for various cell behavior analyses. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, which we call phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches. Copyright © 2013 Elsevier B.V. All rights reserved.
Analysis of the Optimum Gain of a High-Pass L-Matching Network for Rectennas
Jordana, Josep; Robert, Francesc-Josep; Berenguer, Jordi
2017-01-01
Rectennas, which mainly consist of an antenna, matching network, and rectifier, are used to harvest radiofrequency energy in order to power tiny sensor nodes, e.g., the nodes of the Internet of Things. This paper demonstrates for the first time, the existence of an optimum voltage gain for high-pass L-matching networks used in rectennas by deriving an analytical expression. The optimum gain is that which leads to maximum power efficiency of the rectenna. Here, apart from the L-matching network, a Schottky single-diode rectifier was used for the rectenna, which was optimized at 868 MHz for a power range from −30 dBm to −10 dBm. As the theoretical expression depends on parameters not very well-known a priori, an accurate search of the optimum gain for each power level was performed via simulations. Experimental results show remarkable power efficiencies ranging from 16% at −30 dBm to 55% at −10 dBm, which are for almost all the tested power levels the highest published in the literature for similar designs. PMID:28757592
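For a lossless high-pass L-match (series capacitor on the antenna side, shunt inductor at the rectifier input), the component values and the ideal voltage gain follow from the impedance step-up ratio, as sketched below. The 50 Ω antenna, the purely resistive 1 kΩ rectifier input, and the lossless assumption are illustrative; the paper's point is precisely that real losses make an intermediate, optimum gain exist.

```python
import math

def highpass_L_match(r_ant, r_in, f_hz):
    """Lossless high-pass L-matching network: series C (antenna side), shunt L (load side).

    Steps r_ant up to a larger, purely resistive rectifier input resistance r_in
    at frequency f_hz. Returns (C_series, L_shunt, ideal_voltage_gain).
    """
    if r_in <= r_ant:
        raise ValueError("expects r_in > r_ant for a step-up match")
    w = 2 * math.pi * f_hz
    q = math.sqrt(r_in / r_ant - 1)      # loaded Q of the match
    c_series = 1 / (w * q * r_ant)       # series capacitor: |X_C| = q * r_ant
    l_shunt = r_in / (w * q)             # shunt inductor:   |X_L| = r_in / q
    gain = math.sqrt(r_in / r_ant)       # ideal lossless voltage gain, sqrt(1 + q^2)
    return c_series, l_shunt, gain

# Hypothetical numbers: 50-ohm antenna, 1-kohm rectifier input, 868 MHz.
c, l, g = highpass_L_match(50.0, 1000.0, 868e6)
print(f"C = {c * 1e12:.2f} pF, L = {l * 1e9:.1f} nH, ideal gain = {g:.2f}")
```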
Noise reduction for model counterrotation propeller at cruise by reducing aft-propeller diameter
NASA Technical Reports Server (NTRS)
Dittmar, James H.; Stang, David B.
1987-01-01
The forward propeller of a model counterrotation propeller was tested with its original aft propeller and with a reduced diameter aft propeller. Noise reductions with the reduced diameter aft propeller were measured at simulated cruise conditions. Reductions were as large as 7.5 dB for the aft-propeller passing tone and 15 dB in the harmonics at specific angles. The interaction tones, mostly the first, were reduced probably because the reduced-diameter aft-propeller blades no longer interacted with the forward propeller tip vortex. The total noise (sum of primary and interaction noise) at each harmonic was significantly reduced. The chief noise reduction at each harmonic came from reduced aft-propeller-alone noise, with the interaction tones contributing little to the totals at cruise. Total cruise noise reductions were as much as 3 dB at given angles for the blade passing tone and 10 dB for some of the harmonics. These reductions would measurably improve the fuselage interior noise levels and represent a definite cruise noise benefit from using a reduced diameter aft propeller.
Analysis of the Optimum Gain of a High-Pass L-Matching Network for Rectennas.
Gasulla, Manel; Jordana, Josep; Robert, Francesc-Josep; Berenguer, Jordi
2017-07-25
Rectennas, which mainly consist of an antenna, matching network, and rectifier, are used to harvest radiofrequency energy in order to power tiny sensor nodes, e.g., the nodes of the Internet of Things. This paper demonstrates for the first time, the existence of an optimum voltage gain for high-pass L-matching networks used in rectennas by deriving an analytical expression. The optimum gain is that which leads to maximum power efficiency of the rectenna. Here, apart from the L-matching network, a Schottky single-diode rectifier was used for the rectenna, which was optimized at 868 MHz for a power range from -30 dBm to -10 dBm. As the theoretical expression depends on parameters not very well-known a priori, an accurate search of the optimum gain for each power level was performed via simulations. Experimental results show remarkable power efficiencies ranging from 16% at -30 dBm to 55% at -10 dBm, which are for almost all the tested power levels the highest published in the literature for similar designs.
NASA Astrophysics Data System (ADS)
Seeger, Manuel; Taguas, Encarnación; Brings, Christine; Wirtz, Stefan; Rodrigo Comino, Jesus; Albert, Enrique; Ries, Johannes B.
2016-04-01
Sediment connectivity is understood as the interaction of sediment sources, the sinks and the pathways which connect them. During the last decade, the research on connectivity has increased, as it is crucial to understand the relation between the sediments observed at a certain point and the processes leading them to that location. Thus, the knowledge of the biogeophysical features involved in sediment connectivity in an area of interest is essential to understand its functioning and to design treatments allowing its management, e.g. to reduce soil erosion. The structural connectivity is given by landscape elements which enable the production, transport and deposition of sediments, whereas the functional connectivity is understood here as variable processes that lead the sediments through a catchment. Therefore, 2 different levels of connectivity have been considered, which superpose each other according to the catchment's conditions. We studied the different connectivity features in a catchment almost completely covered by an olive grove. It is located south of Córdoba (Spain), close to the city of Puente Genil. The olive plantation type is of low productivity. The soil management was no tillage for the last 9 years. The farmer allowed weeds to grow in the lanes, although herbicide treatments and tractor passes were usually applied at the end of spring. Firstly, a detailed mapping of geomorphodynamic features was carried out. We identified spatially distributed areas of increased sheet-wash and crusting, but also areas where rill erosion has led to a high density of rills and small gullies. Especially within these areas, rock outcrops up to several m² were mapped, indicating (former) intense erosion processes. In addition, field measurements with different methodologies were applied on infiltration (single ring infiltrometers, rainfall simulations), soil permeability (Guelph permeameter), interrill erosion (rainfall simulator) and concentrated flow (rill experiment). The measurements were conducted at representative areas identified in advance by the preceding mapping. Preliminary results show that the rills are highly effective in producing sediments, but also in quickly connecting the different sources with the catchment's outlet. However, they also act as a disconnecting feature with respect to the areas of observation, as they may lead the runoff (and the transported sediments) outside the catchment. On the other hand, the experiments showed that the evidently degraded areas produce only very delayed runoff, and thus also sediments, whereas the areas with stable deep soils show evidence of fast runoff and erosive responses. The preliminary results of the combination of mapping and experimental techniques demonstrate the different levels at which functional and structural connectivity have to be evaluated. The latter may be, as a geomorphological feature, the result of former process distributions, whereas the directly observable (functional) connectivity may shift in time due to internal feedbacks, such as the result of soil degradation.
Fouche, Pieter F; Stein, Christopher; Simpson, Paul; Carlson, Jestin N; Zverinova, Kristina M; Doi, Suhail A
2018-01-29
Endotracheal intubation (ETI) is a critical procedure performed by both air medical and ground based emergency medical services (EMS). Previous work has suggested that ETI success rates are greater for air medical providers. However, air medical providers may have greater airway experience, enhanced airway education, and access to alternative ETI options such as rapid sequence intubation (RSI). We sought to analyze the impact of the type of EMS on RSI success. A systematic literature search of Medline, Embase, and the Cochrane Library was conducted and eligibility, data extraction, and assessment of risk of bias were assessed independently by two reviewers. A bias-adjusted meta-analysis using a quality-effects model was conducted for the primary outcomes of overall intubation success and first-pass intubation success. Forty-nine studies were included in the meta-analysis. There was no difference in the overall success between flight and ground based EMS; 97% (95% CI 96-98) vs. 98% (95% CI 91-100), and no difference in first-pass success for flight compared to ground based RSI; 82% (95% CI 73-89) vs. 82% (95% CI 70-93). Compared to flight non-physicians, flight physicians have higher overall success 99% (95% CI 98-100) vs. 96% (95% CI 94-97) and first-pass success 89% (95% CI 77-98) vs. 71% (95% CI 57-84). Ground-based physicians and non-physicians have a similar overall success 98% (95% CI 88-100) vs. 98% (95% CI 95-100), but no analysis for physician ground first pass was possible. Both overall and first-pass success of RSI did not differ between flight and road based EMS. Flight physicians have a higher overall and first-pass success compared to flight non-physicians and all ground based EMS, but no such differences are seen for ground EMS. Our results suggest that ground EMS can use RSI with similar outcomes compared to their flight counterparts.
Method and apparatus for measuring birefringent particles
Bishop, James K.; Guay, Christopher K.
2006-04-18
A method and apparatus for measuring birefringent particles is provided comprising a source lamp, a grating, a first polarizer having a first transmission axis, a sample cell and a second polarizer having a second polarization axis. The second polarizer has a second polarization axis that is set to be perpendicular to the first polarization axis, and thereby blocks linearly polarized light with the orientation of the beam of light passing through the first polarizer. The beam of light passing through the second polarizer is measured using a detector.
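The crossed-polarizer measurement principle can be illustrated with Jones calculus: a birefringent particle between crossed polarizers transmits an intensity proportional to sin²(2φ)·sin²(δ/2), so a non-birefringent sample transmits nothing. The retardance and fast-axis angle in the sketch below are hypothetical values, and the sketch is an idealization rather than a model of the patented apparatus.

```python
import numpy as np

def polarizer(theta):
    """Jones matrix of a linear polarizer with transmission axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def retarder(delta, phi):
    """Jones matrix of a birefringent element: retardance delta, fast axis at phi."""
    c, s = np.cos(phi), np.sin(phi)
    rot = np.array([[c, -s], [s, c]])
    ret = np.array([[np.exp(-1j * delta / 2), 0], [0, np.exp(1j * delta / 2)]])
    return rot @ ret @ rot.T

# Crossed polarizers: first polarizer at 0 degrees, analyzer at 90 degrees.
e_in = polarizer(0.0) @ np.array([1.0, 0.0])
delta, phi = np.pi / 2, np.pi / 6                # hypothetical sample parameters
e_out = polarizer(np.pi / 2) @ retarder(delta, phi) @ e_in
intensity = np.sum(np.abs(e_out) ** 2)
print(intensity)                                            # numerical result
print(np.sin(2 * phi) ** 2 * np.sin(delta / 2) ** 2)        # analytic sin^2(2*phi) sin^2(delta/2)
```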
Broadband unidirectional cloaks based on flat metasurface focusing lenses
NASA Astrophysics Data System (ADS)
Li, Yongfeng; Zhang, Jieqiu; Qu, Shaobo; Wang, Jiafu; Pang, Yongqiang; Xu, Zhuo; Zhang, Anxue
2015-08-01
Bandwidth extension and thickness reduction are now the two key issues of cloaks. In this paper, we propose to achieve broadband, thin uni-directional electromagnetic (EM) cloaks using metasurfaces. To this end, a wideband flat focusing lens is firstly devised based on high-efficiency transmissive metasurfaces. Due to the nearly dispersionless parabolic phase profile along the metasurface in the operating band, incident plane waves can be focused efficiently after passing through the metasurface. Broadband unidirectional EM cloaks were then designed by combining two identical flat lenses. Upon illumination, the incident plane waves are firstly focused by one lens and then are restored by the other lens, avoiding the cloaked region. Both simulation and experiment results verify the broadband unidirectional cloak. The broad bandwidth and small thickness of such cloaks have potential applications in achieving invisibility for electrically large objects.
Gravitational waves from vacuum first-order phase transitions: From the envelope to the lattice
NASA Astrophysics Data System (ADS)
Cutting, Daniel; Hindmarsh, Mark; Weir, David J.
2018-06-01
We conduct large scale numerical simulations of gravitational wave production at a first-order vacuum phase transition. We find a power law for the gravitational wave power spectrum at high wave number which falls off as k-1.5 rather than the k-1 produced by the envelope approximation. The peak of the power spectrum is shifted to slightly lower wave numbers from that of the envelope approximation. The envelope approximation reproduces our results for the peak power less well, agreeing only to within an order of magnitude. After the bubbles finish colliding, the scalar field oscillates around the true vacuum. An additional feature is produced in the UV of the gravitational wave power spectrum, and this continues to grow linearly until the end of our simulation. The additional feature peaks at a length scale close to the bubble wall thickness and is shown to have a negligible contribution to the energy in gravitational waves, providing the scalar field mass is much smaller than the Planck mass.
Amplitude image processing by diffractive optics.
Cagigal, Manuel P; Valle, Pedro J; Canales, V F
2016-02-22
In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since it allows operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images to detect two stars one Airy ring apart. We also verify by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
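Numerically, the distinction the authors exploit is the order of operations: filter the complex field amplitude first, detect (square the modulus) afterwards. The sketch below applies a Laplacian in the Fourier domain to a complex amplitude before detection; the two-Gaussian test field is a hypothetical stand-in for the simulated astronomical images, and the discrete transfer function is an idealization of the DOE.

```python
import numpy as np

def laplacian_filter_amplitude(field):
    """Apply a Laplacian filter to a complex field amplitude in the Fourier domain,
    then detect (squared modulus), i.e. filtering happens before detection."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    transfer = -(2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)   # Fourier transform of the Laplacian
    filtered = np.fft.ifft2(np.fft.fft2(field) * transfer)
    return np.abs(filtered) ** 2                          # intensity after amplitude filtering

# Hypothetical field: two close point-like sources with Gaussian amplitude profiles.
n = 256
y, x = np.mgrid[:n, :n]
field = (np.exp(-((x - 126) ** 2 + (y - 128) ** 2) / 8.0)
         + np.exp(-((x - 132) ** 2 + (y - 128) ** 2) / 8.0)).astype(complex)
image = laplacian_filter_amplitude(field)
print(image.shape, image.max())
```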
Ion Move Brownian Dynamics (IMBD)--simulations of ion transport.
Kurczynska, Monika; Kotulska, Malgorzata
2014-01-01
Comparison of computed characteristics with physiological measurements of ion transport through transmembrane proteins could be a useful method to assess the quality of protein structures. Simulations of ion transport should be detailed but also time-efficient. The most accurate method could be Molecular Dynamics (MD), which is very time-consuming and hence is not used for this purpose. The model which includes ion-ion interactions and reduces the simulation time by excluding water, protein and lipid molecules is Brownian Dynamics (BD). In this paper a new computer program for BD simulation of ion transport is presented. We evaluate two methods for calculating the pore accessibility (round and irregular shape) and two representations of ion sizes (van der Waals diameter and one voxel). Ion Move Brownian Dynamics (IMBD) was tested with two nanopores: alpha-hemolysin and the potassium channel KcsA. In both cases an ion passed through the pore in less than 32 ns of simulation. Although two types of ions were in solution (potassium and chloride), only ions which agreed with the selectivity properties of the channels passed through the pores. IMBD is a new tool for ion transport modelling, which can be used in simulations of wide and narrow pores.
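The elementary move in any Brownian-dynamics ion simulation is an overdamped Langevin step: drift proportional to the force times D/kT plus a Gaussian displacement of variance 2DΔt. The sketch below shows that update for a few point ions in an external field with pairwise Coulomb forces; the diffusion coefficient, field strength, time step, and bare Coulomb interaction are illustrative assumptions, and none of IMBD's pore geometry, accessibility tests, or ion-size representations are included.

```python
import numpy as np

KT = 4.11e-21          # thermal energy at ~298 K (J)
D = 1.9e-9             # assumed ion diffusion coefficient (m^2/s)
DT = 1e-12             # time step (s)
KE = 8.99e9            # Coulomb constant (N m^2 / C^2)
E_FIELD = np.array([0.0, 0.0, 1e7])   # assumed external field along the pore axis (V/m)

def bd_step(pos, charges, rng):
    """One Brownian-dynamics step: x += (D/kT) * F * dt + sqrt(2 D dt) * N(0, 1)."""
    n = len(pos)
    forces = charges[:, None] * E_FIELD                  # external field force, F = qE
    for i in range(n):                                   # pairwise Coulomb forces
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d = np.linalg.norm(r)
            f = KE * charges[i] * charges[j] * r / d ** 3
            forces[i] += f
            forces[j] -= f
    drift = (D / KT) * forces * DT
    noise = np.sqrt(2 * D * DT) * rng.standard_normal(pos.shape)
    return pos + drift + noise

rng = np.random.default_rng(0)
q_e = 1.602e-19
pos = rng.uniform(-1e-9, 1e-9, size=(4, 3))              # two K+ and two Cl- ions (m)
charges = np.array([q_e, q_e, -q_e, -q_e])
for _ in range(100):
    pos = bd_step(pos, charges, rng)
print(pos)
```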
2006-10-13
As Cassini watches the rings pass in front of bright red giant star Aldebaran, the star light fluctuates, providing information about the concentrations of ring particles within the various radial features in the rings
2006-10-11
As Cassini watches the rings pass in front of bright red giant star Aldebaran, the star light fluctuates, providing information about the concentrations of ring particles within the various radial features in the rings
Engel, Pierre; Almas, Mariana Ferreira; De Bruin, Marieke Louise; Starzyk, Kathryn; Blackburn, Stella; Dreyer, Nancy Ann
2017-04-01
To describe and characterize the first cohort of Post-Authorization Safety Study (PASS) protocols reviewed under the recent European pharmacovigilance legislation. A systematic approach was used to compile all publicly available information on PASS protocols and assessments submitted from July 2012 to July 2015 from Pharmacovigilance Risk Assessment Committee (PRAC) minutes, European Medicines Agency (EMA) and European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP) webpages. During the study period, 189 different PASS protocols were submitted to the PRAC, half of which were entered in the ENCePP electronic register of post-authorization studies (EU-PAS) by July 2015. Those protocols were assessed during 353 PRAC reviews. The EMA published only 31% of the PRAC feedback, of which the main concerns were study design (37%) and feasibility (30%). Among the 189 PASS, slightly more than half involved primary data capture (58%). PASS assessing drug utilization mainly leveraged secondary data sources (58%). The majority of the PASS did not include a comparator (65%) and 35% of PASS also evaluated clinical effectiveness endpoints. To the best of our knowledge, this is the first comprehensive review of three years of PASS protocols submitted under the new pharmacovigilance legislation. Our results show that both EMA and PASS sponsors could respectively increase the availability of protocol assessments and documents in the EU-PAS. Protocol content review and the high number of PRAC comments related to methodological issues and feasibility concerns should raise awareness among PASS stakeholders to design more thoughtful studies according to pharmacoepidemiological principles and existing guidelines. © 2016 The British Pharmacological Society.
Laboratory simulation of rocket-borne D-region blunt probe flows
NASA Technical Reports Server (NTRS)
Kaplan, L. B.
1977-01-01
The flow of weakly ionized plasmas that is similar to the flow that occurs over rocket-borne blunt probes as they pass through the lower ionosphere has been simulated in a scaled laboratory environment, and electron collection D region blunt probe theories have been evaluated.
NASA Astrophysics Data System (ADS)
Chou, Y. C.
2018-04-01
The asymmetry in the two-layered ring structure of helicases and the random thermal fluctuations of the helicase and DNA molecules are considered as the bases for the generation of the force required for translocation of the ring-shaped helicase on DNA. The helicase comprises a channel at its center with two unequal ends, through which strands of DNA can pass. The random collisions between the portion of the DNA strand in the central channel and the wall of the channel generate an impulsive force toward the small end. This impulsive force is the starting point for the helicase to translocate along the DNA with the small end in front. Such a physical mechanism may serve as a complement to the chemomechanical mechanism of the translocation of helicase on DNA. When the helicase arrives at the junction of ssDNA and dsDNA (a fork), the collision between the helicase and the closest base pair may produce a sufficient impulsive force to break the weak hydrogen bond of the base pair. Thus, the helicase may advance and repeat the process of unwinding the dsDNA strand. This mechanism was tested in a macroscopic simulation system where the helicase was simulated using a truncated-cone structure and DNA was simulated with bead chains. Many features of translocation and unwinding such as translocation on ssDNA and dsDNA, unwinding of dsDNA, rewinding, strand switching, and Holliday junction resolution were reproduced.
NASA Astrophysics Data System (ADS)
Pasquato, Mario; Chung, Chul
2016-05-01
Context. Machine-learning (ML) solves problems by learning patterns from data with limited or no human guidance. In astronomy, ML is mainly applied to large observational datasets, e.g. for morphological galaxy classification. Aims: We apply ML to gravitational N-body simulations of star clusters that are either formed by merging two progenitors or evolved in isolation, planning to later identify globular clusters (GCs) that may have a history of merging from observational data. Methods: We create mock-observations from simulated GCs, from which we measure a set of parameters (also called features in the machine-learning field). After carrying out dimensionality reduction on the feature space, the resulting datapoints are fed into various classification algorithms. Using repeated random subsampling validation, we check whether the groups identified by the algorithms correspond to the underlying physical distinction between mergers and monolithically evolved simulations. Results: The three algorithms we considered (C5.0 trees, k-nearest neighbour, and support-vector machines) all achieve a test misclassification rate of about 10% without parameter tuning, with support-vector machines slightly outperforming the others. The first principal component of feature space correlates with cluster concentration. If we exclude it from the regression, the performance of the algorithms is only slightly reduced.
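The pipeline described in the abstract (dimensionality reduction followed by kNN, SVM, and tree classifiers validated with repeated random subsampling) maps directly onto standard tooling, as in the hedged scikit-learn sketch below. The feature matrix and merger/monolithic labels are random placeholders for the mock-observation measurements, and scikit-learn's decision tree stands in for C5.0, which is a separate package.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: rows = simulated clusters, columns = measured features,
# labels = 1 for merger, 0 for monolithic evolution (random here, purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

# Repeated random subsampling validation, as described in the paper.
cv = ShuffleSplit(n_splits=50, test_size=0.25, random_state=0)

for name, clf in [("kNN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("decision tree (stand-in for C5.0)", DecisionTreeClassifier())]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=5), clf)
    scores = cross_val_score(pipe, X, y, cv=cv)
    print(f"{name}: misclassification rate = {1 - scores.mean():.2f}")
```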
Burt, Jenni; Abel, Gary; Barclay, Matt; Evans, Robert; Benson, John; Gurnell, Mark
2016-10-11
To investigate the association between student performance in undergraduate objective structured clinical examinations (OSCEs) and the examination schedule to which they were assigned to undertake these examinations. Analysis of routinely collected data. One UK medical school. 2331 OSCEs of 3 different types (obstetrics OSCE, paediatrics OSCE and simulated clinical encounter examination OSCE) between 2009 and 2013. Students were not quarantined between examinations. (1) Pass rates by day examination started, (2) pass rates by day station undertaken and (3) mean scores by day examination started. We found no evidence that pass rates differed according to the day on which the examination was started by a candidate in any of the examinations considered (p>0.1 for all). There was evidence (p=0.013) that students were more likely to pass individual stations on the second day of the paediatrics OSCE (OR 1.27, 95% CI 1.05 to 1.54). In the cases of the simulated clinical encounter examination and the obstetrics and gynaecology OSCEs, there was no (p=0.42) or very weak evidence (p=0.099), respectively, of any such variation in the probability of passing individual stations according to the day they were attempted. There was no evidence that mean scores varied by day apart from the paediatric OSCE, where slightly higher scores were achieved on the second day of the examination. There is little evidence that different examination schedules have a consistent effect on pass rates or mean scores: students starting the examinations later were not consistently more or less likely to pass or score more highly than those starting earlier. The practice of quarantining students to prevent communication with (and subsequent unfair advantage for) subsequent examination cohorts is unlikely to be required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
NASA Astrophysics Data System (ADS)
Yang, Hongxin; Su, Fulin
2018-01-01
We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the number of erroneous matching pairs. After that, the target centroid is detected by regular moments. Finally, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
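A minimal OpenCV sketch of the same ingredients (SURF interest points, feature matching between consecutive frames, and centroid detection from regular image moments) is given below. It is not the authors' algorithm: the ratio-test matcher is a simpler stand-in for their bilateral registering model, the ISAR frame file names are hypothetical, and SURF requires an opencv-contrib build (ORB is a patent-free drop-in).

```python
import cv2

# Two consecutive ISAR magnitude images (hypothetical file names, 8-bit greyscale assumed).
frame1 = cv2.imread("isar_frame_000.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("isar_frame_001.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # needs opencv-contrib-python
kp1, des1 = surf.detectAndCompute(frame1, None)
kp2, des2 = surf.detectAndCompute(frame2, None)

# Ratio-test matching: a simpler stand-in for the paper's bilateral registering model.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

# Target centroid from regular (raw) image moments of the first frame.
mom = cv2.moments(frame1)
centroid = (mom["m10"] / mom["m00"], mom["m01"] / mom["m00"])
print(f"{len(good)} matched feature points, centroid at {centroid}")
```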
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
Employment of Adaptive Learning Techniques for the Discrimination of Acoustic Emissions.
1984-12-01
List-of-figures excerpt (page 4-13): Figure 23, the result of low-time pass filtering the cepstrum of the waveform in Figure 15, using only the first 2048 samples as input (a = 0.9985); Figure 25, the result of low-time pass filtering the cepstrum of the waveform in Figure 17, using only the first 2048 samples as input (a = 0.9985).
Daman, Ernest L.; McCallister, Robert A.
1979-01-01
A heat exchanger is provided having first and second fluid chambers for passing primary and secondary fluids. The chambers are spaced apart and have heat pipes extending from inside one chamber to inside the other chamber. A third chamber is provided for passing a purge fluid, and the heat pipe portion between the first and second chambers lies within the third chamber.
Academic Performance and Pass Rates: Comparison of Three First-Year Life Science Courses
ERIC Educational Resources Information Center
Downs, C. T.
2009-01-01
First year students' academic performance in three Life Science courses (Botany, Zoology and Bioscience) was compared. Pass rates, as well as the means and distributions of final marks were analysed. Of the three components (coursework, practical and theory examinations) contributing to the final mark of each course, students performed best in the…
NASA Astrophysics Data System (ADS)
Pishevar, M. R.; Mohandesi, J. Aghazadeh; Omidvar, H.; Safarkhanian, M. A.
2015-10-01
Friction stir welding is suitable for joining series 5000 alloys because no fusion welding problems arise for the alloys in this process. The present study examined the effects of double-pass welding and tool rotational and travel speeds for the second-pass welding on the mechanical and microstructural properties of friction stir lap welding of AA5456 (AlMg5)-H321 (5 mm thickness) and AA5456 (AlMg5)-O (2.5 mm thickness). The first pass of all specimens was performed at a rotational speed of 650 rpm and a travel speed of 50 mm/min. The second pass was performed at rotational speeds of 250, 450, and 650 rpm and travel speeds of 25, 50, and 75 mm/min. The results showed that the second pass changed the grain sizes in the center of the nugget zone compared with the first pass. It was observed that the size of the hooking defect of the double-pass-welded specimens was larger than that for the single-pass-welded specimen. The size of the hooking defect was found to be a function of the rotational and travel speeds. The optimal joint tensile shear properties were achieved at a rotational speed of 250 rpm and a travel speed of 75 mm/min.
NASA Technical Reports Server (NTRS)
Studer, P. A. (Inventor)
1982-01-01
A linear magnetic motor/generator is disclosed which uses magnetic flux to provide mechanical motion or electrical energy. The linear magnetic motor/generator includes an axially movable actuator mechanism. A permanent magnet mechanism defines a first magnetic flux path which passes through a first end portion of the actuator mechanism. Another permanent magnet mechanism defines a second magnetic flux path which passes through a second end portion of the actuator mechanism. A drive coil defines a third magnetic flux path passing through a third central portion of the actuator mechanism. The drive coil selectively adds magnetic flux to and subtracts magnetic flux from the magnetic flux flowing in the first and second magnetic flux paths.
NASA Technical Reports Server (NTRS)
Vinci, Samuel J.
2012-01-01
This report is the third part of a three-part final report of research performed under an NRA Cooperative Agreement contract. The first part was published as NASA/CR-2012-217415. The second part was published as NASA/CR-2012-217416. The study of the very high lift low-pressure turbine airfoil L1A in the presence of unsteady wakes was performed computationally and compared against experimental results. The experiments were conducted in a low speed wind tunnel under high (4.9%) and then low (0.6%) freestream turbulence intensity for Reynolds numbers equal to 25,000 and 50,000. The experimental and computational data have shown that in cases without wakes, the boundary layer separated without reattachment. The CFD was done with LES and URANS utilizing the finite-volume code ANSYS Fluent (ANSYS, Inc.) under the same freestream turbulence and Reynolds number conditions as the experiment but only at a rod to blade spacing of 1. With wakes, separation was largely suppressed, particularly if the wake passing frequency was sufficiently high. This was validated in the 3D CFD efforts by comparison against the experimental results for the pressure coefficients and velocity profiles, which were reasonable for all cases examined. The 2D CFD efforts failed to capture the three-dimensionality effects of the wake and thus were less consistent with the experimental data. The effect of the freestream turbulence intensity levels also showed a little more consistency with the experimental data at higher intensities when compared with the low intensity cases. Additional cases with higher wake passing frequencies, which were not run experimentally, were simulated. The results showed that an initial 25% increase from the experimental wake passing frequency greatly reduced the size of the separation bubble, nearly completely suppressing it.
A high-efficiency low-voltage class-E PA for IoT applications in sub-1 GHz frequency range
NASA Astrophysics Data System (ADS)
Zhou, Chenyi; Lu, Zhenghao; Gu, Jiangmin; Yu, Xiaopeng
2017-10-01
We propose a complete and iterative integrated-circuit and electro-magnetic (EM) co-design methodology and procedure for a low-voltage sub-1 GHz class-E PA. The presented class-E PA consists of the on-chip power transistor, the on-chip gate driving circuits, the off-chip tunable LC load network, and the off-chip LC ladder low-pass filter. The design methodology includes an explicit design-equation-based analysis and numerical derivation of circuit component values, output-power-targeted transistor sizing and low-pass filter design, and power-efficiency-oriented design optimization. The proposed design procedure includes power-efficiency-oriented LC network tuning and a detailed circuit/EM co-simulation plan at the integrated-circuit, package, and PCB levels to ensure an accurate simulation-to-measurement match and first-pass design success. The proposed PA is targeted to achieve more than 15 dBm output power delivery and 40% power efficiency in the 433 MHz frequency band with a 1.5 V low-voltage supply. The LC load network is designed to be off-chip for the purpose of easy tuning and optimization. The same circuit can be extended to all sub-1 GHz applications with the same tuning and optimization of the load network at different frequencies. The amplifier is implemented in 0.13 μm CMOS technology with a core area occupation of 400 μm by 300 μm. Measurement results showed that it delivered 16.42 dBm at the antenna with an efficiency of 40.6%. A harmonic suppression of 44 dBc is achieved, making it suitable for massive deployment of IoT devices. Project supported by the National Natural Science Foundation of China (No. 61574125) and the Industry Innovation Project of Suzhou City of China (No. SYG201641).
Aerothermodynamic Analyses of Towed Ballutes
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Buck, Greg; Moss, James N.; Nielsen, Eric; Berger, Karen; Jones, William T.; Rudavsky, Rena
2006-01-01
A ballute (balloon-parachute) is an inflatable, aerodynamic drag device for application to planetary entry vehicles. Two challenging aspects of aerothermal simulation of towed ballutes are considered. The first challenge, simulation of a complete system including inflatable tethers and a trailing toroidal ballute, is addressed using the unstructured-grid, Navier-Stokes solver FUN3D. Auxiliary simulations of a semi-infinite cylinder using the rarefied flow, Direct Simulation Monte Carlo solver, DSV2, provide additional insight into limiting behavior of the aerothermal environment around tethers directly exposed to the free stream. Simulations reveal pressures higher than stagnation and corresponding large heating rates on the tether as it emerges from the spacecraft base flow and passes through the spacecraft bow shock. The footprint of the tether shock on the toroidal ballute is also subject to heating amplification. Design options to accommodate or reduce these environments are discussed. The second challenge addresses time-accurate simulation to detect the onset of unsteady flow interactions as a function of geometry and Reynolds number. Video of unsteady interactions measured in the Langley Aerothermodynamic Laboratory 20-Inch Mach 6 Air Tunnel and CFD simulations using the structured grid, Navier-Stokes solver LAURA are compared for flow over a rigid spacecraft-sting-toroid system. The experimental data provides qualitative information on the amplitude and onset of unsteady motion which is captured in the numerical simulations. The presence of severe unsteady fluid - structure interactions is undesirable and numerical simulation must be able to predict the onset of such motion.
2006-10-09
As Cassini watches the rings pass in front of the bright red giant star Aldebaran, the starlight fluctuates, providing information about the concentrations of ring particles within the various radial features in the rings.
Park, So-Yeon; Kim, Il Han; Ye, Sung-Joon; Carlson, Joel; Park, Jong Min
2014-11-01
Texture analysis on fluence maps was performed to evaluate the degree of modulation for volumetric modulated arc therapy (VMAT) plans. A total of six textural features including angular second moment, inverse difference moment, contrast, variance, correlation, and entropy were calculated for fluence maps generated from 20 prostate and 20 head and neck VMAT plans. For each of the textural features, particular displacement distances (d) of 1, 5, and 10 were adopted. To investigate the deliverability of each VMAT plan, gamma passing rates of pretreatment quality assurance, and differences in modulating parameters such as multileaf collimator (MLC) positions, gantry angles, and monitor units at each control point between VMAT plans and dynamic log files registered by the Linac control system during delivery were acquired. Furthermore, differences between the original VMAT plan and the plan reconstructed from the dynamic log files were also investigated. To test the performance of the textural features as indicators for the modulation degree of VMAT plans, Spearman's rank correlation coefficients (rs) with the plan deliverability were calculated. For comparison purposes, conventional modulation indices for VMAT including the modulation complexity score for VMAT, leaf travel modulation complexity score, and modulation index supporting station parameter optimized radiation therapy (MISPORT) were calculated, and their correlations were analyzed in the same way. There was no particular textural feature which always showed superior correlations with every type of plan deliverability. Considering the results comprehensively, contrast (d = 1) and variance (d = 1) generally showed considerable correlations with every type of plan deliverability. These textural features always showed higher correlations to the plan deliverability than did the conventional modulation indices, except in the case of modulating parameter differences. The rs values of contrast to the global gamma passing rates with criteria of 2%/2 mm, 2%/1 mm, and 1%/2 mm were 0.536, 0.473, and 0.718, respectively. The respective values for variance were 0.551, 0.481, and 0.688. In the case of local gamma passing rates, the rs values of contrast were 0.547, 0.578, and 0.620, respectively, and those of variance were 0.519, 0.527, and 0.569. All of the rs values in those cases were statistically significant (p < 0.003). In the cases of global and local gamma passing rates, MISPORT showed the highest correlations among the conventional modulation indices. For global passing rates, rs values of MISPORT were -0.420, -0.330, and -0.632, respectively, and those for local passing rates were -0.455, -0.490 and -0.502. The values of rs of contrast, variance, and MISPORT with the MLC errors were -0.863, -0.828, and 0.795, respectively, all with statistical significances (p < 0.001). The correlations with statistical significances between variance and dose-volumetric differences were observed more frequently than the others. The contrast (d = 1) and variance (d = 1) calculated from fluence maps of VMAT plans showed considerable correlations with the plan deliverability, indicating their potential use as indicators for assessing the degree of modulation of VMAT plans. Both contrast and variance consistently showed better performance than the conventional modulation indices for VMAT.
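For readers unfamiliar with these grey-level co-occurrence matrix (GLCM) features, the sketch below computes them for a mock fluence map with scikit-image at the displacement distances used in the study. The quantisation to 64 grey levels, the angle choice, and the particular GLCM variance and entropy definitions are assumptions, not details taken from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # named 'greycomatrix' in scikit-image < 0.19

rng = np.random.default_rng(1)
fluence = rng.random((128, 128))                        # mock fluence map in [0, 1]
levels = 64
img = np.round(fluence * (levels - 1)).astype(np.uint8)

for d in (1, 5, 10):                                    # displacement distances from the paper
    glcm = graycomatrix(img, distances=[d],
                        angles=[0, np.pi / 2],          # angle choice is an assumption
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    idm = graycoprops(glcm, "homogeneity").mean()       # homogeneity = inverse difference moment
    asm = graycoprops(glcm, "ASM").mean()
    corr = graycoprops(glcm, "correlation").mean()
    p = glcm.mean(axis=(2, 3))                          # average normalized GLCM over angles
    i, j = np.indices(p.shape)
    variance = np.sum(p * (i - np.sum(i * p)) ** 2)     # one common GLCM variance definition
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    print(f"d={d}: contrast={contrast:.3f} IDM={idm:.3f} ASM={asm:.4f} "
          f"corr={corr:.3f} var={variance:.2f} entropy={entropy:.2f}")
```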
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couch, R; Wang, P
2003-07-31
In this quarter, an FEM simulation has been performed to compare the shape of the deformed slab after the 8th reduction pass with the experimental metrology data provided by Alcoa Technical Center (ATC). Also, a bug in the thermal contact algorithm used in parallel processing has been identified and corrected for consistent thermal solutions between the rollers and the slab. The overall shape of the slab at the end of the 8th pass is shown in Figure 1. Comparison of the sectional views at the center plane along the length of the slab for both experiment and simulation shows that the curvature at the slab mouth at the centerline is slightly higher than the experimental result, as shown in Figure 2. We are currently focusing on tuning the parameter values used in the simulation, and a more complete parametric study for validation is underway. Also, unexpected fracture occurred along the surface of the slab in the 9th pass, as shown in Figure 3. We believe this is due to previously noted inadequacies in the fracture model at low strain rates and high stress triaxiality. We are expecting to receive a modified fracture model based on additional experiments shortly from Alcoa.
Ocean barrier layers' effect on tropical cyclone intensification.
Balaguru, Karthik; Chang, Ping; Saravanan, R; Leung, L Ruby; Xu, Zhao; Li, Mingkui; Hsieh, Jen-Shan
2012-09-04
Improving a tropical cyclone's forecast and mitigating its destructive potential requires knowledge of various environmental factors that influence the cyclone's path and intensity. Herein, using a combination of observations and model simulations, we systematically demonstrate that tropical cyclone intensification is significantly affected by salinity-induced barrier layers, which are "quasi-permanent" features in the upper tropical oceans. When tropical cyclones pass over regions with barrier layers, the increased stratification and stability within the layer reduce storm-induced vertical mixing and sea surface temperature cooling. This causes an increase in enthalpy flux from the ocean to the atmosphere and, consequently, an intensification of tropical cyclones. On average, the tropical cyclone intensification rate is nearly 50% higher over regions with barrier layers, compared to regions without. Our finding, which underscores the importance of observing not only the upper-ocean thermal structure but also the salinity structure in deep tropical barrier layer regions, may be a key to more skillful predictions of tropical cyclone intensities through improved ocean state estimates and simulations of barrier layer processes. As the hydrological cycle responds to global warming, any associated changes in the barrier layer distribution must be considered in projecting future tropical cyclone activity.
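As background, a barrier layer is usually quantified as the isothermal layer depth minus the density-based mixed layer depth computed from a temperature and salinity profile. The sketch below illustrates one common convention on a mock profile; the 0.2 degC threshold, the linear equation of state, and the profile itself are assumptions, not taken from this study.

```python
import numpy as np

z = np.arange(0.0, 200.0, 5.0)                           # depth (m), mock profile
T = 29.0 - 3.0 / (1.0 + np.exp(-(z - 90.0) / 10.0))      # degC: thermocline near 90 m
S = 33.0 + 1.2 / (1.0 + np.exp(-(z - 40.0) / 10.0))      # psu: fresh lens above a halocline near 40 m

alpha, beta = 3.0e-4, 7.6e-4                             # thermal/haline expansion coefficients (assumed)
sigma = -alpha * (T - T[0]) + beta * (S - S[0])          # density anomaly relative to the surface

dT = 0.2                                                 # temperature criterion (assumed)
ild = z[np.argmax(T < T[0] - dT)]                        # isothermal layer depth
mld = z[np.argmax(sigma > alpha * dT)]                   # mixed layer depth: density rise equivalent to dT
blt = ild - mld                                          # barrier layer thickness
print(f"ILD = {ild:.0f} m, MLD = {mld:.0f} m, BLT = {blt:.0f} m")
```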
Dielectrophoretic behaviours of microdroplet sandwiched between LN substrates
Chen, Lipin; Li, Shaobei; Fan, Bolin; Yan, Wenbo; Wang, Donghui; Shi, Lihong; Chen, Hongjian; Ban, Dechao; Sun, Shihao
2016-01-01
We demonstrate a sandwich configuration for microfluidic manipulation in a LiNbO3 platform based on the photovoltaic effect, and the behaviours of a dielectric microdroplet under this sandwich configuration are investigated. It is found that the microdroplet can be generated in the form of a liquid bridge inside the LiNbO3-based sandwich structure under the governing dielectrophoretic force, and the dynamic process of microdroplet generation highly depends on the substrate combinations. Dynamic features found for different combinations are explained by the different electrostatic field distributions based on the finite-element simulation results. Moreover, the electrostatic field required for microdroplet generation is estimated through the meniscus evolution and is found to be in good agreement with the simulated electrostatic field inside the sandwich gap. Several kinds of microdroplet manipulations are attempted in this work. We suggest that the local dielectrophoretic force acting on the microdroplet depends on the distribution of the accumulated irradiation dosage. Without using any additional pumping or jetting actuator, the microdroplet can be step-moved, deformed or patterned by the inconsecutive dot-irradiation scheme, as well as elastically stretched out and back or smoothly guided in a designed pass by the consecutive line-irradiation scheme. PMID:27383027
Simulation of TanDEM-X interferograms for urban change detection
NASA Astrophysics Data System (ADS)
Welte, Amelie; Hammer, Horst; Thiele, Antje; Hinz, Stefan
2017-10-01
Damage detection after natural disasters is one of the remote sensing tasks in which Synthetic Aperture Radar (SAR) sensors play an important role. Since SAR is an active sensor, it can record images at all times of day and in all weather conditions, making it ideally suited for this task. While with the newer generation of SAR satellites such as TerraSAR-X or COSMO-SkyMed, amplitude change detection has become possible even for urban areas, interferometric phase change detection has not been published widely. This is mainly because of the long revisit times of common SAR sensors leading to temporal decorrelation. This situation has changed dramatically with the advent of the TanDEM-X constellation, which can create single-pass interferograms from space at very high resolutions, avoiding temporal decorrelation almost completely. In this paper the basic structures that are present for any building in InSAR phases, i.e. layover, shadow, and roof areas, are examined. Approaches for their extraction from TanDEM-X interferograms are developed using simulated SAR interferograms. The extracted features of the building signature will in the future be used for urban change detection in real TanDEM-X High Resolution Spotlight interferograms.
The Dark Side of Saturn's Gravity
NASA Astrophysics Data System (ADS)
Iess, L.; Racioppa, P.; Durante, D.; Mariani, M., Jr.; Anabtawi, A.; Armstrong, J. W.; Gomez Casajus, L.; Tortora, P.; Zannoni, M.
2017-12-01
On July 19, 2017 the Cassini spacecraft successfully completed its sixth and last pericenter pass devoted to the investigation of Saturn's interior structure and rings. During each pass the spacecraft was tracked for about 24 hours by the antennas of NASA's Deep Space Network and ESA's ESTRACK network, providing high quality measurements of the spacecraft range rate. We report on a preliminary estimate of Saturn's gravity field and ring mass inferred from range rate observables, and discuss the surprising features of our findings.
Welding-Induced Microstructure Evolution of a Cu-Bearing High-Strength Blast-Resistant Steel
NASA Astrophysics Data System (ADS)
Caron, Jeremy L.; Babu, Sudarsanam Suresh; Lippold, John C.
2011-12-01
A new high strength, high toughness steel containing Cu for precipitation strengthening was recently developed for naval, blast-resistant structural applications. This steel, known as BlastAlloy160 (BA-160), is of nominal composition Fe-0.05C-3.65Cu-6.5Ni-1.84Cr-0.6Mo-0.1V (wt pct). The evident solidification substructure of an autogenous gas tungsten arc (GTA) weld suggested fcc austenite as the primary solidification phase. The heat-affected zone (HAZ) hardness ranged from a minimum of 353 HV in the coarse-grained HAZ (CGHAZ) to a maximum of 448 HV in the intercritical HAZ (ICHAZ). After postweld heat treatment (PWHT) of the spot weld, hardness increases were observed in the fusion zone (FZ), CGHAZ, and fine-grained HAZ (FGHAZ) regions. Phase transformation and metallographic analyses of simulated single-pass HAZ regions revealed lath martensite to be the only austenitic transformation product in the HAZ. Single-pass HAZ simulations revealed a similar hardness profile for low heat-input (LHI) and high heat-input (HHI) conditions, with higher hardness values being measured for the LHI samples. The measured hardness values were in good agreement with those from the GTA weld. Single-pass HAZ regions exhibited higher Charpy V-notch impact toughness than the BM at both test temperatures of 293 K and 223 K (20 °C and -50 °C). Hardness increases were observed for multipass HAZ simulations employing an initial CGHAZ simulation.
Avalanche for shape and feature-based virtual screening with 3D alignment
NASA Astrophysics Data System (ADS)
Diller, David J.; Connell, Nancy D.; Welsh, William J.
2015-11-01
This report introduces a new ligand-based virtual screening tool called Avalanche that incorporates both shape- and feature-based comparison with three-dimensional (3D) alignment between the query molecule and test compounds residing in a chemical database. Avalanche proceeds in two steps. The first step is an extremely rapid shape/feature based comparison which is used to narrow the focus from potentially millions or billions of candidate molecules and conformations to a more manageable number that are then passed to the second step. The second step is a detailed yet still rapid 3D alignment of the remaining candidate conformations to the query conformation. Using the 3D alignment, these remaining candidate conformations are scored, re-ranked and presented to the user as the top hits for further visualization and evaluation. To provide further insight into the method, the results from two prospective virtual screens are presented which show the ability of Avalanche to identify hits from chemical databases that would likely be missed by common substructure-based or fingerprint-based search methods. The Avalanche method is extended to enable patent landscaping, i.e., structural refinements to improve the patentability of hits for deployment in drug discovery campaigns.
Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric
2015-01-01
Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy as it has already been successfully tested to measure WM capacity in complex environments with air traffic controllers (ATC), pilots, or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environments is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data was processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked with a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient at separating the two levels of load, increasing the observed effect size in prefrontal areas involved in WM. In addition, the use of a Kalman filter increased the performance of the classification of WM levels based on brain signal. The results suggest that the Kalman filter is a suitable approach for real-time improvement of near infrared spectroscopy signal in ecological situations and the development of BCI.
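The core filtering idea can be illustrated with a minimal scalar Kalman filter applied to a noisy, slowly varying signal. This is only a sketch with a random-walk state model rather than the authors' Boynton-model-based formulation, and the noise covariances below are assumed values (in the study they were estimated from a calibration experiment).

```python
import numpy as np

def kalman_smooth(y, q=1e-4, r=1e-1):
    """One-pass Kalman filter: state x_k = x_{k-1} + w_k, measurement y_k = x_k + v_k."""
    x, p = y[0], 1.0                  # initial state estimate and error covariance
    out = np.empty_like(y, dtype=float)
    for k, yk in enumerate(y):
        p = p + q                     # predict (random-walk state model)
        kgain = p / (p + r)           # Kalman gain
        x = x + kgain * (yk - x)      # update with the new sample
        p = (1.0 - kgain) * p
        out[k] = x
    return out

fs = 10.0                             # Hz, typical fNIRS sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
hemo = 0.5 * np.sin(2 * np.pi * t / 30)                       # slow hemodynamic-like trend (mock)
raw = hemo + 0.2 * np.random.default_rng(0).normal(size=t.size)
filtered = kalman_smooth(raw)          # on-line estimate of the underlying hemodynamic signal
```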
Park, Si-Woon; Choi, Eun Seok; Lim, Mun Hee; Kim, Eun Joo; Hwang, Sung Il; Choi, Kyung-In; Yoo, Hyun-Chul; Lee, Kuem Ju; Jung, Hi-Eun
2011-03-01
To find an association between cognitive-perceptual problems of older drivers and unsafe driving performance during simulated automobile driving in a virtual environment. Cross-sectional study. A driver evaluation clinic in a rehabilitation hospital. Fifty-five drivers aged 65 years or older and 48 drivers in their late twenties to early forties. All participants underwent evaluation of cognitive-perceptual function and driving performance, and the results were compared between older and younger drivers. The association between cognitive-perceptual function and driving performance was analyzed. Cognitive-perceptual function was evaluated with the Cognitive Perceptual Assessment for Driving (CPAD), a computer-based assessment tool consisting of depth perception, sustained attention, divided attention, the Stroop test, the digit span test, field dependency, and trail-making test A and B. Driving performance was evaluated with use of a virtual reality-based driving simulator. During simulated driving, car crashes were recorded, and an occupational therapist observed unsafe performances in controlling speed, braking, steering, vehicle positioning, making lane changes, and making turns. Thirty-five older drivers did not pass the CPAD test, whereas all of the younger drivers passed the test. When using the driving simulator, a significantly greater number of older drivers experienced car crashes and demonstrated unsafe performance in controlling speed, steering, and making lane changes. CPAD results were associated with car crashes, steering, vehicle positioning, and making lane changes. Older drivers who did not pass the CPAD test are 4 times more likely to experience a car crash, 3.5 times more likely to make errors in steering, 2.8 times more likely to make errors in vehicle positioning, and 6.5 times more likely to make errors in lane changes than are drivers who passed the CPAD test. Unsafe driving performance and car crashes during simulated driving were more prevalent in older drivers than in younger drivers. Unsafe performance in steering, vehicle positioning, making lane changes, and car crashes were associated with cognitive-perceptual dysfunction. Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Sheffield, Sterling W; Simha, Michelle; Jahn, Kelly N; Gifford, René H
2016-01-01
The primary purpose of this study was to examine the effect of acoustic bandwidth on bimodal benefit for speech recognition in normal-hearing children with a cochlear implant (CI) simulation in one ear and low-pass filtered stimuli in the contralateral ear. The effect of acoustic bandwidth on bimodal benefit in children was compared with the pattern of adults with normal hearing. Our hypothesis was that children would require a wider acoustic bandwidth than adults to (1) derive bimodal benefit, and (2) obtain asymptotic bimodal benefit. Nineteen children (6 to 12 years) and 10 adults with normal hearing participated in the study. Speech recognition was assessed via recorded sentences presented in a 20-talker babble. The AzBio female-talker sentences were used for the adults and the pediatric AzBio sentences (BabyBio) were used for the children. A CI simulation was presented to the right ear and low-pass filtered stimuli were presented to the left ear with the following cutoff frequencies: 250, 500, 750, 1000, and 1500 Hz. The primary findings were (1) adults achieved higher performance than children when presented with only low-pass filtered acoustic stimuli, (2) adults and children performed similarly in all the simulated CI and bimodal conditions, (3) children gained significant bimodal benefit with the addition of low-pass filtered speech at 250 Hz, and (4) unlike previous studies completed with adult bimodal patients, adults and children with normal hearing gained additional significant bimodal benefit with cutoff frequencies up to 1500 Hz with most of the additional benefit gained with energy below 750 Hz. Acoustic bandwidth effects on simulated bimodal benefit were similar in children and adults with normal hearing. Should the current results generalize to children with CIs, these results suggest pediatric CI recipients may derive significant benefit from minimal acoustic hearing (<250 Hz) in the nonimplanted ear and increasing benefit with broader bandwidth. Knowledge of the effect of acoustic bandwidth on bimodal benefit in children may help direct clinical decisions regarding a second CI, continued bimodal hearing, and even optimizing acoustic amplification for the nonimplanted ear.
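Generating the low-pass filtered stimuli for the nonimplanted ear amounts to applying a low-pass filter at each cutoff frequency listed above. A hedged scipy sketch follows; the Butterworth filter order, the zero-phase filtering, and the noise stand-in for a recorded sentence are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
speech = np.random.default_rng(0).normal(size=fs * 2)    # stand-in for a recorded sentence

for cutoff in (250, 500, 750, 1000, 1500):               # cutoff frequencies from the study (Hz)
    sos = butter(4, cutoff, btype="low", fs=fs, output="sos")
    lowpassed = sosfiltfilt(sos, speech)                 # zero-phase low-pass filtering
    # `lowpassed` would be presented to the left (non-implant) ear in the simulation
```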
Spacecraft control center automation using the generic inferential executor (GENIE)
NASA Technical Reports Server (NTRS)
Hartley, Jonathan; Luczak, Ed; Stump, Doug
1996-01-01
The increasing requirement to dramatically reduce the cost of mission operations led to increased emphasis on automation technology. The expert system technology used at the Goddard Space Flight Center (MD) is currently being applied to the automation of spacecraft control center activities. The generic inferential executor (GENIE) is a tool which allows pass automation applications to be constructed. The pass script templates constructed encode the tasks necessary to mimic flight operations team interactions with the spacecraft during a pass. These templates can be configured with data specific to a particular pass. Animated graphical displays illustrate the progress during the pass. The first GENIE application automates passes of the solar, anomalous and magnetospheric particle explorer (SAMPEX) spacecraft.
NASA Astrophysics Data System (ADS)
Siddique, Waseem; El-Gabry, Lamyaa; Shevchuk, Igor V.; Hushmandi, Narmin B.; Fransson, Torsten H.
2012-05-01
Two-pass channels are used for internal cooling in a number of engineering systems, e.g., gas turbines. Fluid travelling through the curved path experiences pressure and centrifugal forces that result in pressure-driven secondary motion. This motion helps in moving the cold high-momentum fluid from the channel core to the side walls and plays a significant role in the heat transfer in the channel bend and outlet pass. The present study investigates, using Computational Fluid Dynamics (CFD), the flow structure, heat transfer enhancement, and pressure drop in a smooth channel with varying aspect ratio at different divider-to-tip wall distances. Numerical simulations are performed in a two-pass smooth channel with aspect ratio Win/H = 1:3 at the inlet pass and Wout/H = 1:1 at the outlet pass for a variety of divider-to-tip wall distances. The results show that with a decrease in aspect ratio of the inlet pass of the channel, the pressure loss decreases. The divider-to-tip wall distance (Wel) influences not only the pressure drop but also the heat transfer enhancement at the bend and outlet pass. With an increase in the divider-to-tip wall distance, the areas of enhanced heat transfer shift from the side walls of the outlet pass towards the inlet pass. To compromise between heat transfer and pressure drop in the channel, Wel/H = 0.88 is found to be optimum for the channel under study.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass September 26, 2012. I... FICC proposes to move the time at which its Mortgage-Backed Securities Division (``MBSD'') runs its... processing passes. MBSD currently runs its first processing pass of the day (historically referred to as the...
Minor loop dependence of the magnetic forces and stiffness in a PM-HTS levitation system
NASA Astrophysics Data System (ADS)
Yang, Yong; Li, Chengshan
2017-12-01
Based upon the method of current vector potential and the critical state model of Bean, the vertical and lateral forces with different sizes of minor loop are simulated in two typical cooling conditions when a rectangular permanent magnet (PM) above a cylindrical high temperature superconductor (HTS) moves vertically and horizontally. The different values of average magnetic stiffness are calculated for various sizes of minor loop ranging from 0.1 to 2 mm. The magnetic stiffness with zero traverse is obtained by using the method of linear extrapolation. The simulation results show that the extreme values of the forces decrease with increasing size of minor loop. The magnetic hysteresis of the force curves also becomes smaller as the size of minor loop increases. This means that the vertical and lateral forces are significantly influenced by the size of minor loop because the forces intensely depend on the moving history of the PM. The vertical stiffness at every vertical position when the PM vertically descends to 1 mm is larger than that as the PM vertically ascends to 30 mm. When the PM moves laterally, the lateral stiffness as the PM passes through any horizontal position for the first time is almost equal to the value as it passes through the same position for the second time in zero-field cooling (ZFC); however, the lateral stiffness in field cooling (FC) and the cross stiffness in ZFC and FC are significantly affected by the moving history of the PM.
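The stiffness-extrapolation step can be sketched independently of the electromagnetic simulation: average stiffness is computed from the force change over minor loops of several sizes and then extrapolated linearly to zero traverse. In the sketch below the force response is a hypothetical mock function, not the current-vector-potential model.

```python
import numpy as np

def force_after_minor_loop(z0, dz):
    """Hypothetical hysteretic force response (N) after a minor loop of size dz (mm); mock data only."""
    return 12.0 - 1.8 * dz + 0.15 * dz ** 2

z0 = 10.0                                            # working height (mm), assumed
loop_sizes = np.array([0.1, 0.25, 0.5, 1.0, 2.0])    # minor-loop sizes (mm), within the paper's 0.1-2 mm range
stiffness = np.array([-(force_after_minor_loop(z0, dz) - force_after_minor_loop(z0, 0.0)) / dz
                      for dz in loop_sizes])         # average stiffness over each minor loop (N/mm)

slope, intercept = np.polyfit(loop_sizes, stiffness, 1)   # linear fit of stiffness vs. loop size
print(f"stiffness extrapolated to zero traverse: {intercept:.2f} N/mm")
```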
Theoretical studies of solar lasers and converters
NASA Technical Reports Server (NTRS)
Heinbockel, John H.
1990-01-01
The research described consisted of developing and refining the continuous flow laser model program, including the creation of a working model. The mathematical development of a two-pass amplifier for an iodine laser is summarized. A computer program for the amplifier's simulation is included with output from the simulation model.
Swan-Ganz - right heart catheterization
... this page: //medlineplus.gov/ency/article/003870.htm. Swan-Ganz catheterization is the passing of a thin ...
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author
Multiple pass and multiple layer friction stir welding and material enhancement processes
Feng, Zhili [Knoxville, TN; David, Stan A [Knoxville, TN; Frederick, David Alan [Harriman, TN
2010-07-27
Processes for friction stir welding, typically for comparatively thick plate materials using multiple passes and multiple layers of a friction stir welding tool. In some embodiments a first portion of a fabrication preform and a second portion of the fabrication preform are placed adjacent to each other to form a joint, and there may be a groove adjacent the joint. The joint is welded and then, where a groove exists, a filler may be disposed in the groove, and the seams between the filler and the first and second portions of the fabrication preform may be friction stir welded. In some embodiments two portions of a fabrication preform are abutted to form a joint, where the joint may, for example, be a lap joint, a bevel joint or a butt joint. In some embodiments a plurality of passes of a friction stir welding tool may be used, with some passes welding from one side of a fabrication preform and other passes welding from the other side of the fabrication preform.
NOVA: A new multi-level logic simulator
NASA Technical Reports Server (NTRS)
Miles, L.; Prins, P.; Cameron, K.; Shovic, J.
1990-01-01
A new logic simulator that was developed at the NASA Space Engineering Research Center for VLSI Design was described. The simulator is multi-level, being able to simulate from the switch level through the functional model level. NOVA is currently in the Beta test phase and was used to simulate chips designed for the NASA Space Station and the Explorer missions. A new algorithm was devised to simulate bi-directional pass transistors and a preliminary version of the algorithm is presented. The usage of functional models in NOVA is also described and performance figures are presented.
The amplitude effects of sedimentary basins on through-passing surface waves
NASA Astrophysics Data System (ADS)
Feng, L.; Ritzwoller, M. H.; Pasyanos, M.
2016-12-01
Understanding the effect of sedimentary basins on through-passing surface waves is essential in many aspects of seismology, including the estimation of the magnitude of natural and anthropogenic events, the study of the attenuation properties of Earth's interior, and the analysis of ground motion as part of seismic hazard assessment. In particular, knowledge of the physical causes of amplitude variations is important in the application of the Ms:mb discriminant of nuclear monitoring. Our work addresses two principal questions, both in the period range between 10 s and 20 s. The first question is: In what respects can surface wave propagation through 3D structures be simulated as 2D membrane waves? This question is motivated by our belief that surface wave amplitude effects down-stream from sedimentary basins result predominantly from elastic focusing and defocusing, which we understand as analogous to the effect of a lens. To the extent that this understanding is correct, 2D membrane waves will approximately capture the amplitude effects of focusing and defocusing. We address this question by applying the 3D simulation code SW4 (a node-based finite-difference code for 3D seismic wave simulation) and the 2D code SPECFEM2D (a spectral element code for 2D seismic wave simulation). Our results show that for surface waves propagating downstream from 3D sedimentary basins, amplitude effects are mostly caused by elastic focusing and defocusing which is modeled accurately as a 2D effect. However, if the epicentral distance is small, higher modes may contaminate the fundamental mode, which may result in large errors in the 2D membrane wave approximation. The second question is: Are observations of amplitude variations across East Asia following North Korean nuclear tests consistent with simulations of amplitude variations caused by elastic focusing/defocusing through a crustal reference model of China (Shen et al., A seismic reference model for the crust and uppermost mantle beneath China from surface wave dispersion, Geophys. J. Int., 206(2), 2015)? We simulate surface wave propagation across Eastern Asia with SES3D (a spectral element code for 3D seismic wave simulation) and observe significant amplitude variations caused by focusing and defocusing with a magnitude that is consistent with the observations.
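The 2D membrane-wave picture invoked here can be illustrated with a toy scalar-wave finite-difference simulation in which a low-velocity circular "basin" focuses and defocuses a through-passing wavefront. The grid, velocities, and Ricker source below are arbitrary assumptions, and the sketch is not the SW4, SPECFEM2D, or SES3D setup used in the study.

```python
import numpy as np

nx = nz = 300
dx = 1.0e3                                   # 1 km grid spacing
c = np.full((nz, nx), 3.0e3)                 # 3 km/s background phase velocity
zz, xx = np.mgrid[0:nz, 0:nx]
c[(xx - 150) ** 2 + (zz - 150) ** 2 < 30 ** 2] = 2.0e3   # slow circular "basin"

dt = 0.4 * dx / c.max()                      # CFL-stable time step
f0, t0 = 0.1, 12.0                           # 10 s Ricker source, roughly the period band of interest
u_prev = np.zeros((nz, nx))
u = np.zeros((nz, nx))
for it in range(600):                        # about 80 s of propagation (fixed boundaries)
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4.0 * u[1:-1, 1:-1]) / dx ** 2
    u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
    t = it * dt
    arg = (np.pi * f0 * (t - t0)) ** 2
    u_next[150, 60] += (1.0 - 2.0 * arg) * np.exp(-arg)   # Ricker wavelet source west of the basin
    u_prev, u = u, u_next
# Sampling |u| along the column x = 240 (downstream of the basin) shows the
# focused/defocused amplitude pattern behind the low-velocity anomaly.
```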
Aaslund, Mona Kristin; Moe-Nilssen, Rolf; Gjelsvik, Bente Bassøe; Bogen, Bård; Næss, Halvor; Hofstad, Håkon; Skouen, Jan Sture
2017-12-01
To investigate to what degree stroke severity, disability, and physical function in the first week post-stroke are associated with preferred walking speed (PWS) at 6 months. Longitudinal observational study. Participants were recruited from a stroke unit and tested within the first week (baseline) and at 6 months post-stroke. Outcome measures were the National Institutes of Health Stroke Scale (NIHSS), the Barthel Index (BI), the modified Rankin Scale (mRS), PWS, the Postural Assessment Scale for Stroke (PASS), and the Trunk Impairment Scale modified-Norwegian version. Multiple regression models were used to explore which variables best predict PWS at 6 months, and Receiver Operating Characteristic (ROC) curves were used to determine the cutoffs. A total of 132 participants post-stroke were included and subdivided into two groups based on the ability to produce a PWS at baseline. For the participants that could produce a PWS at baseline (WSB group), PASS, PWS, and age at baseline predicted PWS at 6 months with an explained variance of 0.77. For the participants that could not produce a PWS at baseline (NoWSB group), only PASS predicted PWS at 6 months with an explained variance of 0.49. For the WSB group, cutoffs at baseline for walking faster than 0.8 m/s at 6 months were 30.5 points on the PASS, a PWS of 0.75 m/s, and an age of 73.5 years. For the NoWSB group, the cutoff for PASS was 20.5 points. PASS, PWS, and age in the first week predicted PWS at 6 months post-stroke for participants with the best walking ability, and PASS alone predicted PWS at 6 months post-stroke for participants with the poorest walking ability.
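The two analysis steps described above (multiple linear regression on baseline predictors and ROC-based cutoff selection) can be sketched as follows with simulated data. The simulated relationships and the use of the Youden index to pick a cutoff are assumptions for illustration, not choices reported in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 132
pass_score = rng.uniform(5, 36, n)                                        # baseline PASS (0-36)
pws_base = np.clip(0.03 * pass_score + rng.normal(0, 0.15, n), 0, None)   # baseline speed (m/s)
age = rng.uniform(45, 90, n)
pws_6m = np.clip(0.02 * pass_score + 0.6 * pws_base - 0.004 * age
                 + rng.normal(0, 0.1, n) + 0.4, 0, None)                  # outcome: PWS at 6 months (m/s)

X = np.column_stack([pass_score, pws_base, age])
model = LinearRegression().fit(X, pws_6m)                                 # multiple linear regression
print("explained variance R^2 =", round(model.score(X, pws_6m), 2))

# ROC analysis: which baseline PASS score best separates walkers above 0.8 m/s at 6 months?
fast = (pws_6m > 0.8).astype(int)
fpr, tpr, thresholds = roc_curve(fast, pass_score)
cutoff = thresholds[np.argmax(tpr - fpr)]                                 # Youden-index cutoff
print("PASS cutoff ~", round(cutoff, 1))
```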
A survey of aerobraking orbital transfer vehicle design concepts
NASA Technical Reports Server (NTRS)
Park, Chul
1987-01-01
The five existing design concepts of the aerobraking orbital transfer vehicle (namely, the raked sphere-cone designs, conical lifting-brake, raked elliptic-cone, lifting-body, and ballute) are reviewed and critiqued. Historical backgrounds, and the geometrical, aerothermal, and operational features of these designs are reviewed first. Then, the technological requirements for the vehicle (namely, navigation, aerodynamic stability and control, afterbody flow impingement, nonequilibrium radiation, convective heat-transfer rates, mission abort and multiple atmospheric passes, transportation and construction, and the payload-to-vehicle weight requirements) are delineated by summarizing the recent advancements made on these issues. Each of the five designs are critiqued and rated on these issues. The highest and the lowest ratings are given to the raked sphere-cone and the ballute design, respectively.
Marta, C
1991-03-01
"The essay analyzes the immigration policy pursued by Sweden from 1966, the year in which the first commission was established for this particular area, till 1985, the year in which an important reform was passed regarding the reception of refugees. The focal point of this immigration policy is singled out halfway in the 70's when the assimilation model is repudiated in favor of multiculturalism. One of the main features of this policy is the priority role accorded to research. The author examines the main fields of interest pursued by researchers, in particular the more recent studies on the ethnic-cultural dimension of the immigration phenomenon." (SUMMARY IN ENG AND FRE) excerpt
Face pose tracking using the four-point algorithm
NASA Astrophysics Data System (ADS)
Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen
2017-06-01
In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
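The pose-from-four-points step can be sketched with OpenCV's solvePnP, matching four facial landmarks (e.g., nose tip, chin, and the two outer eye corners from a dlib 68-point detector) to a generic 3D face model. The 3D model coordinates, the camera intrinsics, and the example 2D detections below are illustrative assumptions rather than values from the paper.

```python
import numpy as np
import cv2

# Generic 3D reference points in an arbitrary face-centred frame (mm, assumed values).
model_pts = np.array([[0.0, 0.0, 0.0],        # nose tip
                      [0.0, -63.6, -12.5],    # chin
                      [-43.3, 32.7, -26.0],   # left eye outer corner
                      [43.3, 32.7, -26.0]],   # right eye outer corner
                     dtype=np.float64)

# Corresponding 2D detections in the image (pixels); normally taken from dlib landmarks.
image_pts = np.array([[320.0, 240.0], [318.0, 330.0],
                      [255.0, 200.0], [385.0, 202.0]], dtype=np.float64)

w, h = 640, 480
camera = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)  # rough intrinsics
dist = np.zeros(4)                             # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, camera, dist,
                              flags=cv2.SOLVEPNP_AP3P)   # solver that uses exactly four points
R, _ = cv2.Rodrigues(rvec)                     # rotation matrix, from which yaw/pitch/roll follow
print("rotation vector:", rvec.ravel(), "translation:", tvec.ravel())
```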
Aging in Biometrics: An Experimental Analysis on On-Line Signature
Galbally, Javier; Martinez-Diaz, Marcos; Fierrez, Julian
2013-01-01
The first consistent and reproducible evaluation of the effect of aging on dynamic signature is reported. Experiments are carried out on a database generated from two previous datasets which were acquired, under very similar conditions, in 6 sessions distributed in a 15-month time span. Three different systems, representing the current most popular approaches in signature recognition, are used in the experiments, proving the degradation suffered by this trait with the passing of time. Several template update strategies are also studied as possible measures to reduce the impact of aging on the system’s performance. Different results regarding the way in which signatures tend to change with time, and their most and least stable features, are also given. PMID:23894557
Gun-shot injuries in UK military casualties - Features associated with wound severity.
Penn-Barwell, Jowan G; Sargeant, Ian D
2016-05-01
Surgical treatment of high-energy gun-shot wounds (GSWs) to the extremities is challenging. Recent surgical doctrine states that wound tracts from high-energy GSWs should be laid open; however, the experience from previous conflicts suggests that some of these injuries can be managed more conservatively. The aim of this study is to firstly characterise the GSW injuries sustained by UK forces, and secondly test the hypothesis that the likely severity of GSWs can be predicted by features of the wound. The UK Military trauma registry was searched for cases injured by GSW in the five years between 01 January 2009 and 31 December 2013: only UK personnel were included. Clinical notes and radiographs were then reviewed. Features associated with energy transfer in extremity wounds in survivors were further examined with number of wound debridements used as a surrogate marker of wound severity. There were 450 cases who met the inclusion criteria. 96 (21%) were fatally injured, with 354 (79%) surviving their injuries. Casualties in the fatality group had a median New Injury Severity Score (NISS) of 75 (IQR 75-75), while the median NISS of the survivors was 12 (IQR 4-48) with 10 survivors having a NISS of 75. In survivors the limbs were most commonly injured (56%). 'Through and through' wounds, where the bullet passes intact through the body, were strongly associated with less requirement for debridement (p<0.0001). When a bullet fragmented there was a significant association with a requirement for a greater number of wound debridements (p=0.0002), as there was if a bullet fractured a bone (p=0.0006). More complex wounds, as indicated by the requirement for repeated debridements, are associated with injuries where the bullet does not pass straight through the body, or where a bone is fractured. Gunshot wounds should be assessed according to the likely energy transferred; extremity wounds without features of high energy transfer do not require extensive exploration. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Simulation of synthetic gecko arrays shearing on rough surfaces
Gillies, Andrew G.; Fearing, Ronald S.
2014-01-01
To better understand the role of surface roughness and tip geometry in the adhesion of gecko synthetic adhesives, a model is developed that attempts to uncover the relationship between surface feature size and the adhesive terminal feature shape. This model is the first to predict the adhesive behaviour of a plurality of hairs acting in shear on simulated rough surfaces using analytically derived contact models. The models showed that the nanoscale geometry of the tip shape alters the macroscale adhesion of the array of fibres by nearly an order of magnitude, and that on sinusoidal surfaces with amplitudes much larger than the nanoscale features, spatula-shaped features can increase adhesive forces by 2.5 times on smooth surfaces and 10 times on rough surfaces. Interestingly, the summation of the fibres acting in concert shows behaviour much more complex than could be predicted with the pull-off model of a single fibre. Both the Johnson–Kendall–Roberts and Kendall peel models can explain the experimentally observed frictional adhesion effect previously described in the literature. Similar to experimental results recently reported on the macroscale features of the gecko adhesive system, adhesion drops dramatically when surface roughness exceeds the size and spacing of the adhesive fibrillar features. PMID:24694893
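As a point of reference for the single-fibre pull-off baseline that the abstract says the array behaviour exceeds in complexity, the sketch below naively sums the JKR pull-off force of individual spherical-tipped fibres over the fraction of fibres engaged on a rough surface. This is not the paper's shear model; all parameter values and the contact-fraction rule are assumptions.

# Naive baseline: array adhesion as (number of engaged fibres) x (JKR pull-off per fibre).
import numpy as np

def jkr_pull_off(R, W):
    """JKR pull-off force of a sphere of radius R with work of adhesion W."""
    return 1.5 * np.pi * R * W

R_tip = 100e-9          # fibre tip radius (m), assumed
W_adh = 30e-3           # work of adhesion (J/m^2), assumed
n_fibres = 1e5          # fibres in the simulated patch, assumed

# Sinusoidal surface: assume only fibres near the crests engage once the
# roughness amplitude exceeds the range the fibres can comply with.
amplitude = 2e-6        # surface amplitude (m), assumed
engage_range = 0.5e-6   # height range over which fibres can still reach (m), assumed
contact_fraction = min(1.0, engage_range / amplitude)

array_adhesion = n_fibres * contact_fraction * jkr_pull_off(R_tip, W_adh)
print(f"naive array adhesion estimate: {array_adhesion * 1e3:.2f} mN")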
Stream capture to form Red Pass, northern Soda Mountains, California
Miller, David; Mahan, Shannon
2014-01-01
Red Pass, a narrow cut through the Soda Mountains important for prehistoric and early historic travelers, is quite young geologically. Its history of downcutting to capture streams west of the Soda Mountains, thereby draining much of eastern Fort Irwin, is told by the contrast in alluvial fan sediments on either side of the pass. Old alluvial fan deposits (>500 ka) were shed westward off an intact ridge of the Soda Mountains but by middle Pleistocene time, intermediate-age alluvial fan deposits (~100 ka) were laid down by streams flowing east through the pass into Silurian Valley. The pass was probably formed by stream capture driven by high levels of groundwater on the west side. This is evidenced by widespread wetland deposits west of the Soda Mountains. Sapping and spring discharge into Silurian Valley over millennia formed a low divide in the mountains that eventually was overtopped and incised by a stream. Lessons include the importance of groundwater levels for stream capture and the relatively youthful appearance of this ~100-200 ka feature in the slowly changing Mojave Desert landscape.
Helping Students with Difficult First Year Subjects through the PASS Program
ERIC Educational Resources Information Center
Sultan, Fauziah K. P. D.; Narayansany, Kannaki S.; Kee, Hooi Ling; Kuan, Chin Hoay; Palaniappa Manickam, M. Kamala; Tee, Meng Yew
2013-01-01
The purpose of this action research was to find out if participants of a pilot PASS program found it to be helpful. The program was implemented for the first time in an institute of higher learning in Malaysia. An action research design guided the study, with surveys, documents, and reflections as primary data sources. The findings were largely…
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential for low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on first and second derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
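A minimal sketch of the kind of derivative features described above: the first difference of an N-sample spike costs N-1 subtractions and the second difference N-2, giving the quoted 2N-3 operations. Which derivative samples are kept as the feature vector is an assumption here, not taken from the paper.

# Derivative-based spike features: 2N-3 subtractions per spike of length N.
import numpy as np

def derivative_features(spike):
    """spike: 1-D array of N waveform samples."""
    d1 = np.diff(spike, n=1)          # first derivative, length N-1
    d2 = np.diff(spike, n=2)          # second derivative, length N-2
    # Example feature vector (assumed): extrema of each derivative.
    return np.array([d1.max(), d1.min(), d2.max(), d2.min()])

# Usage on a toy spike waveform (N = 48 samples).
t = np.linspace(0, 1, 48)
spike = np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2)
print(derivative_features(spike))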
Experimental study of the spray characteristics of a research airblast atomizer
NASA Technical Reports Server (NTRS)
Acosta, W. A.
1985-01-01
Airblast atomization was studied using a specially designed atomizer in which the liquid first impinges on a splash plate, then is directed radially outward and is atomized by the air passing through two concentric, vaned swirlers that swirl the air in opposite directions. The effects of flow conditions, air mass velocity (mass flow rate per unit area), and liquid-to-air ratio on the mean drop size were studied. Seven different ethanol solutions were used to simulate changes in fuel physical properties. The range of atomizing air velocities was from 30 to 80 m/s. The mean drop diameter was measured at ambient temperature (295 K) and atmospheric pressure.
Experimental determination of satellite bolted joints thermal resistance
NASA Technical Reports Server (NTRS)
Mantelli, Marcia Barbosa Henriques; Basto, Jose Edson
1990-01-01
The thermal resistance of the bolted joints of the first Brazilian satellite (SCD 01) was determined experimentally. These joints, used to connect the satellite structural panels, are reproduced in an experimental apparatus, keeping, as much as possible, the actual dimensions and materials. A controlled amount of heat is forced to pass through the joint and the difference in temperature between the panels is measured. The tests are conducted in a vacuum chamber with liquid-nitrogen-cooled walls, which simulates the space environment. Experimental procedures are used to minimize heat losses, which are carefully calculated. Important observations are made about the behavior of the joint thermal resistance with the variation of the mean temperature.
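The measured quantity can be summarized in a few lines: the joint resistance is the panel-to-panel temperature difference divided by the heat that actually crosses the joint after the calculated losses are subtracted. The sketch below uses illustrative numbers, not values from the experiment.

# Joint thermal resistance from a measured temperature drop and net heat flow.
def joint_thermal_resistance(t_hot, t_cold, q_supplied, q_losses):
    """Return R_joint in K/W."""
    q_through = q_supplied - q_losses   # heat that actually passes the joint (W)
    return (t_hot - t_cold) / q_through

# Example (assumed values): 10 W supplied, 1.5 W estimated losses, 12 K temperature drop.
print(joint_thermal_resistance(t_hot=305.0, t_cold=293.0,
                               q_supplied=10.0, q_losses=1.5), "K/W")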
Internal validation of the RapidHIT® ID system.
Wiley, Rachel; Sage, Kelly; LaRue, Bobby; Budowle, Bruce
2017-11-01
Traditionally, forensic DNA analysis has required highly skilled forensic geneticists in a dedicated laboratory to generate short tandem repeat (STR) profiles. STR profiles are routinely used either to associate or exclude potential donors of forensic biological evidence. The typing of forensic reference samples has become more demanding, especially with the requirement in some jurisdictions to obtain DNA profiles from arrestees. The Rapid DNA (RDNA) platform, the RapidHIT® ID (IntegenX®, Pleasanton, CA), is a fully automated system capable of processing reference samples in approximately 90 min with minimal human intervention. Thus, the RapidHIT ID instrument can be deployed to non-laboratory environments (e.g., booking stations) and run by trained non-laboratory personnel such as law enforcement officers. In order to implement the RapidHIT ID platform, validation studies are needed to define the performance and limitations of the system. Internal validation studies were undertaken with four early-production RapidHIT ID units. Reliable and concordant STR profiles were obtained from reference buccal swabs. Throughout the study, no contamination was observed. The overall first-pass success rate with an "expert-like system" was 72%, which is comparable to another commercially available RDNA platform. The system's second-pass success rate (involving manual interpretation of first-pass inconclusive results) increased to 90%. Inhibitors (i.e., coffee, smoking tobacco, and chewing tobacco) did not appear to affect typing by the instrument system; however, substrate (i.e., swab type) did impact typing success. Additionally, one desirable feature not available with other Rapid systems is that, in the event of a failed run, a swab can be recovered and subsequently re-analyzed in a new sample cartridge. Therefore, additional sampling or swab consumption should rarely be necessary. The RapidHIT ID system is a robust and reliable tool capable of generating complete STR profiles within the forensic DNA typing laboratory or, with proper training, in decentralized environments by non-laboratory personnel. Copyright © 2017 Elsevier B.V. All rights reserved.
Optical microfiber-loaded surface plasmonic TE-pass polarizer
NASA Astrophysics Data System (ADS)
Ma, Youqiao; Farrell, Gerald; Semenova, Yuliya; Li, Binghui; Yuan, Jinhui; Sang, Xinzhu; Yan, Binbin; Yu, Chongxiu; Guo, Tuan; Wu, Qiang
2016-04-01
We propose a novel optical microfiber-loaded plasmonic TE-pass polarizer consisting of an optical microfiber placed on top of a silver substrate and demonstrate its performance both numerically by using the finite element method (FEM) and experimentally. The simulation results show that the loss in the fundamental TE mode is relatively low while at the same time the fundamental TM mode suffers from a large metal dissipation loss induced by excitation of the microfiber-loaded surface plasmonic mode. The microfiber was fabricated using the standard microheater brushing-tapering technique. The measured extinction ratio over the range of the C-band wavelengths is greater than 20 dB for the polarizer with a microfiber diameter of 4 μm, which agrees well with the simulation results.
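For reference, the reported extinction ratio follows directly from the measured TE and TM output powers as ER = 10·log10(P_TE/P_TM). A short sketch with illustrative power values (not from the experiment):

# Extinction ratio of a TE-pass polarizer from measured output powers.
import math

def extinction_ratio_db(p_te_out, p_tm_out):
    """Ratio of transmitted TE power to transmitted TM power, in dB."""
    return 10.0 * math.log10(p_te_out / p_tm_out)

# Example: TE output 0.50 mW, TM output 0.004 mW  ->  about 21 dB
print(f"{extinction_ratio_db(0.50, 0.004):.1f} dB")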
Effects of Fetch on Turbulent Flow and Pollutant Dispersion Within a Cubical Canopy
NASA Astrophysics Data System (ADS)
Michioka, Takenobu; Takimoto, Hiroshi; Ono, Hiroki; Sato, Ayumu
2018-03-01
The effects of fetch on turbulent flow and pollutant dispersion within a canopy formed by regularly-spaced cubical objects are investigated using large-eddy simulation. Six tracer gases are simultaneously released from a ground-level continuous pollutant line source placed parallel to the spanwise axis at the first, second, third, fifth, seventh and tenth rows. Beyond the seventh row, the standard deviations of the fluctuations in the velocity components and the Reynolds shear stresses reach nearly equivalent states. Low-frequency turbulent flow is generated near the bottom surface around the first row and develops as the fetch increases. The turbulent flow eventually passes through the canopy at a near-constant interval. The mean concentration within the canopy reaches a near-constant value beyond the seventh row. In the first and second rows, narrow coherent structures frequently affect the pollutant escape from the top of the canopy. These structures increase in width as the fetch increases, and they mainly affect the removal of pollutants from the canopy.
Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle
2009-10-19
Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
NASA Astrophysics Data System (ADS)
Gires, Auguste; Abbes, Jean-Baptiste; da Silva Rocha Paz, Igor; Tchiguirinskaia, Ioulia; Schertzer, Daniel
2018-03-01
In this paper we suggest innovatively using scaling laws, and more specifically Universal Multifractals (UM), to analyse simulated surface runoff and compare the retrieved scaling features with those of the rainfall. The methodology is tested on a 3 km² semi-urbanised study area with a steep slope, located in the Paris region along the Bièvre River. First, Multi-Hydro, a fully distributed model, is validated on this catchment for four rainfall events measured with the help of a C-band radar. The uncertainty associated with small-scale unmeasured rainfall, i.e. occurring below the 1 km × 1 km × 5 min observation scale, is quantified with the help of stochastic downscaled rainfall fields. It is rather significant for simulated flow and more limited for overland water depth for these rainfall events. Overland depth is found to exhibit a scaling behaviour over small scales (10 m-80 m), which can be related to fractal features of the sewer network. No direct and obvious dependency between the overland depth multifractal features (quality of the scaling and UM parameters) and those of the rainfall was found.
Users guide to E859 phoswich analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costales, J.B.
1992-11-30
In this memo the authors describe the analysis path used to transform the phoswich data from raw data banks into cross sections suitable for publication. The primary purpose of this memo is not to document each analysis step in great detail but rather to point the reader to the fortran code used and to point out the essential features of the analysis path. A flow chart which summarizes the various steps performed to massage the data from beginning to end is given. In general, each step corresponds to a fortran program which was written to perform that particular task. The automation of the data analysis has been kept purposefully minimal in order to ensure the highest quality of the final product. However, tools have been developed which ease the non-automated steps. There are two major parallel routes for the data analysis: data reduction and acceptance determination using detailed GEANT Monte Carlo simulations. In this memo, the authors will first describe the data reduction up to the point where PHAD banks (Pass 1-like banks) are created. They will then describe the steps taken in the GEANT Monte Carlo route. Note that a detailed memo describing the methodology of the acceptance corrections has already been written; therefore the discussion of the acceptance determination will be kept to a minimum and the reader will be referred to the other memo for further details. Finally, they will describe the cross section formation process and how final spectra are extracted.
NASA Astrophysics Data System (ADS)
Abnar, B.; Kazeminezhad, M.; Kokabi, A. H.
2014-08-01
Friction stir welding (FSW) was used to join 3003-H18 non-heat-treatable aluminum alloy plates with the addition of copper powder. The copper powder was first added to the gap (0.1 and 0.2 mm) between the two plates and then the FSW was performed. The specimens were joined at rotational speeds of 800, 1000, and 1200 rpm and traveling speeds of 70 and 100 mm/min. The effects of rotational speed, a second FSW pass, and the direction of the second pass on copper particle distribution and on the formation of Al-Cu intermetallic compounds in the stir zone were also studied. The second pass of FSW was carried out in two ways: in line with the first pass direction (2F), and in the reverse direction of the first pass (FB). The microstructure, mechanical properties, and type of intermetallic compounds formed were investigated. With high copper powder compaction into the gap, large clusters formed in the stir zone, whereas low powder compaction gave fine clusters and a sound copper particle distribution. The copper particle distribution and the amount of Al-Cu intermetallic compounds in the stir zone increased with increasing rotational speed and with application of the second pass. Al2Cu and AlCu intermetallic phases were formed in the stir zone and consequently the hardness was significantly increased. The copper particles and in situ intermetallic compounds were symmetrically distributed on both the advancing and retreating sides of the weld zone after FB passes; thus, a wider area was reinforced by the intermetallic compounds. The tensile test specimens tended to fracture at the coarse copper aggregations at low rotational speeds, while at high rotational speeds the fracture locations were in the HAZ and TMAZ.
2013-07-11
…in Fig. 3) is simulated. Each atom interacts with its neighboring atoms through a potential energy surface (PES), such as the simple Lennard-Jones (LJ) potential, which dictates the atomic interaction forces. The main point of this section is … the potential energy surface (PES) that governs individual atomic interaction forces. In contrast to existing rotational energy models, we found …
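Since the fragment above centres on the Lennard-Jones PES, a short sketch of the LJ pair potential and the interatomic force derived from it may help; the epsilon and sigma values are illustrative reduced units, not taken from the report.

# Lennard-Jones pair potential and the corresponding force.
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]"""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """F(r) = -dV/dr = 24*eps/r * [2*(sigma/r)^12 - (sigma/r)^6]"""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon / r * (2.0 * sr6 ** 2 - sr6)

r = np.linspace(0.9, 3.0, 5)
print(lj_potential(r))
print(lj_force(r))   # force crosses zero at r = 2^(1/6)*sigma, the potential minimum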
NASA Technical Reports Server (NTRS)
Houck, J. A.
1979-01-01
The development of a mission simulator for use in the Terminal Configured Vehicle (TCV) program is outlined. The broad objectives of the TCV program are to evaluate new concepts in airborne systems and in operational flight procedures. These evaluations are directed toward improving terminal area capacity and efficiency, improving approach and landing capability in adverse weather, and reducing noise impact in the terminal area. The design features and operating principles of the two major components of the TCV Mission Simulator, the TCV Aft Flight Deck Simulation and the Terminal Area Air Traffic Model Simulation, are described, along with their merger to form the Mission Simulator. The first research study conducted in the Mission Simulator is presented along with some preliminary results.
A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data
He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei
2017-01-01
This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0 mode wave package. The parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions. PMID:28902148
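A minimal sketch of the general idea, not the paper's exact model: a response surface fitted to finite element data maps crack length to a damage-sensitive feature, and a simple grid-based Bayesian update corrects a bias term using a few measurements from the target structure. The surface coefficients, noise level, and measurements below are all assumed.

# Baseline response surface plus grid Bayesian update of a bias parameter.
import numpy as np

# Baseline response surface from FE simulations (assumed quadratic fit):
# feature = c0 + c1*a + c2*a^2, with crack length a in mm.
c0, c1, c2 = 0.95, -0.018, 1.5e-4

def baseline_feature(a, bias=0.0):
    return c0 + c1 * a + c2 * a ** 2 + bias

# A few measured (crack length, feature) pairs from the target structure (assumed).
a_meas = np.array([5.0, 10.0, 15.0])
f_meas = np.array([0.88, 0.80, 0.73])
sigma = 0.02                       # assumed measurement noise standard deviation

# Grid posterior over the model bias, flat prior.
bias_grid = np.linspace(-0.1, 0.1, 401)
log_like = np.array([
    -0.5 * np.sum((f_meas - baseline_feature(a_meas, b)) ** 2) / sigma ** 2
    for b in bias_grid
])
post = np.exp(log_like - log_like.max())
post /= post.sum()
bias_hat = float(np.sum(bias_grid * post))
print("updated bias estimate:", round(bias_hat, 4))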
Walking simulator for evaluation of ophthalmic devices
NASA Astrophysics Data System (ADS)
Barabas, James; Woods, Russell L.; Peli, Eli
2005-03-01
Simulating mobility tasks in a virtual environment reduces risk for research subjects and allows for improved experimental control and measurement. We are currently using a simulated shopping mall environment (where subjects walk on a treadmill in front of a large projected video display) to evaluate a number of ophthalmic devices developed at the Schepens Eye Research Institute for people with vision impairment, particularly visual field defects. We have conducted experiments to study subjects' perception of "safe passing distance" when walking towards stationary obstacles. The subjects' binary responses about potential collisions are analyzed by fitting a psychometric function, which gives an estimate of the perceived safe passing distance and of the variability of subject responses. The system also supports simulations of visual field defects using head and eye tracking, enabling a better understanding of the impact of visual field loss. The technical infrastructure for our simulated walking environment includes a custom eye and head tracking system, a gait feedback system to adjust treadmill speed, and a handheld 3-D pointing device. Images are generated by a graphics workstation, which contains a model with photographs of storefronts from an actual shopping mall, where concurrent validation experiments are being conducted.
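A minimal sketch of the psychometric-function fit described above: a logistic function is fitted to binary collision judgements by maximum likelihood, and its 50% point is taken as the perceived safe passing distance. The response data below are illustrative, not from the study.

# Maximum-likelihood fit of a logistic psychometric function to binary responses.
import numpy as np
from scipy.optimize import minimize

# Lateral offset of the obstacle (m) and the subject's "would collide" responses (assumed).
offset = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
collide = np.array([1,   1,   1,   0,   1,   0,   0,   0])

def neg_log_lik(params):
    mu, slope = params            # mu = perceived safe passing distance
    p = 1.0 / (1.0 + np.exp(slope * (offset - mu)))   # P(report collision)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(collide * np.log(p) + (1 - collide) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.4, 10.0], method="Nelder-Mead")
mu_hat, slope_hat = fit.x
print(f"perceived safe passing distance ~ {mu_hat:.2f} m, slope = {slope_hat:.1f}")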
3D Simulation Modeling of the Tooth Wear Process
Dai, Ning; Hu, Jian; Liu, Hao
2015-01-01
Severe tooth wear is the most common non-caries dental disease, and it can seriously affect oral health. Studying the tooth wear process is time-consuming and difficult, and technological tools are frequently lacking. This paper presents a novel method of digital simulation modeling that represents a new way to study tooth wear. First, a feature extraction algorithm is used to obtain anatomical feature points of the tooth without attrition. Second, after the alignment of non-attrition areas, the initial homogeneous surface is generated by means of the RBF (Radial Basic Function) implicit surface and then deformed to the final homogeneous by the contraction and bounding algorithm. Finally, the method of bilinear interpolation based on Laplacian coordinates between tooth with attrition and without attrition is used to inversely reconstruct the sequence of changes of the 3D tooth morphology during gradual tooth wear process. This method can also be used to generate a process simulation of nonlinear tooth wear by means of fitting an attrition curve to the statistical data of attrition index in a certain region. The effectiveness and efficiency of the attrition simulation algorithm are verified through experimental simulation. PMID:26241942
Validation and Simulation of ARES I Scale Model Acoustic Test -1- Pathfinder Development
NASA Technical Reports Server (NTRS)
Putnam, G. C.
2011-01-01
The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity measurements useful for validation, including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. To take advantage of these data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. In this first of a series of papers, results from ASMAT simulations with the rocket in a held-down configuration and without water suppression are compared to acoustic data collected from similar live-fire tests to assess the accuracy of the simulations. Detailed evaluations of the mesh features, mesh length scales relative to acoustic signals, Courant-Friedrichs-Lewy numbers, and spatial residual sources have been performed to support this assessment. The acoustic comparisons show good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured, including the plume shock structure, the igniter pulse transient, and the ignition overpressure. Finally, acoustic propagation patterns illustrated a previously unconsidered issue of tower placement in line with the high-intensity overpressure propagation path.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizell, D.; Carter, S.
In 1987, ISI's parallel distributed computing research group implemented a prototype sequential simulation system designed for high-level simulation of candidate Strategic Defense Initiative architectures. A main design goal was to produce a simulation system that could incorporate non-trivial, executable representations of battle-management computations on each platform that were capable of controlling the actions of that platform throughout the simulation. The term BMA (battle manager abstraction) was used to refer to these simulated battle-management computations. In the authors' first version of the simulator, the BMAs were C++ programs that the authors wrote and manually inserted into the system. Since then, they have designed and implemented KMAC, a high-level language for writing BMAs. The KMAC preprocessor, built using the Unix tools lex and yacc, translates KMAC source programs into C++ programs and passes them on to the C++ compiler. The KMAC preprocessor was incorporated into, and operates under the control of, the simulator's interactive user interface. After the KMAC preprocessor has translated a program into C++, the user interface system invokes the C++ compiler and incorporates the resulting object code into the simulator load module for execution as part of a simulation run. This report describes the KMAC language and its preprocessor. Section 2 provides background material on the design of the simulation system that is necessary for understanding some of the parts of KMAC and some of the reasons it is structured the way it is. Section 3 describes the syntax and semantics of the language, and Section 4 discusses the design of the preprocessor.
Yudkowsky, Rachel; Luciano, Cristian; Banerjee, Pat; Schwartz, Alan; Alaraj, Ali; Lemole, G Michael; Charbel, Fady; Smith, Kelly; Rizzi, Silvio; Byrne, Richard; Bendok, Bernard; Frim, David
2013-02-01
Ventriculostomy is a neurosurgical procedure for providing therapeutic cerebrospinal fluid drainage. Complications may arise during repeated attempts at placing the catheter in the ventricle. We studied the impact of simulation-based practice with a library of virtual brains on neurosurgery residents' performance in simulated and live surgical ventriculostomies. Using computed tomographic scans of actual patients, we developed a library of 15 virtual brains for the ImmersiveTouch system, a head- and hand-tracked augmented reality and haptic simulator. The virtual brains represent a range of anatomies including normal, shifted, and compressed ventricles. Neurosurgery residents participated in individual simulator practice on the library of brains including visualizing the 3-dimensional location of the catheter within the brain immediately after each insertion. Performance of participants on novel brains in the simulator and during actual surgery before and after intervention was analyzed using generalized linear mixed models. Simulator cannulation success rates increased after intervention, and live procedure outcomes showed improvement in the rate of successful cannulation on the first pass. However, the incidence of deeper, contralateral (simulator) and third-ventricle (live) placements increased after intervention. Residents reported that simulations were realistic and helpful in improving procedural skills such as aiming the probe, sensing the pressure change when entering the ventricle, and estimating how far the catheter should be advanced within the ventricle. Simulator practice with a library of virtual brains representing a range of anatomies and difficulty levels may improve performance, potentially decreasing complications due to inexpert technique.
HEP - A semaphore-synchronized multiprocessor with central control. [Heterogeneous Element Processor
NASA Technical Reports Server (NTRS)
Gilliland, M. C.; Smith, B. J.; Calvert, W.
1976-01-01
The paper describes the design concept of the Heterogeneous Element Processor (HEP), a system tailored to the special needs of scientific simulation. To achieve the high-speed computation required by simulation, HEP features a hierarchy of processes executing in parallel on a number of processors, with synchronization being largely accomplished by hardware. A full-empty-reserve scheme of synchronization is realized by zero-one-valued hardware semaphores. A typical system has, besides the control computer and the scheduler, an algebraic module, a memory module, a first-in first-out (FIFO) module, an integrator module, and an I/O module. The architecture of the scheduler and the algebraic module is examined in detail.
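A software emulation can make the full-empty semantics concrete: a read blocks until the cell is full and marks it empty, while a write blocks until the cell is empty and marks it full. HEP implements this in hardware; the Python threading sketch below is only an illustration of the semantics, not of the HEP design itself.

# Software emulation of a full/empty synchronized memory cell.
import threading

class FullEmptyCell:
    def __init__(self):
        self._value = None
        self._full = False
        self._cond = threading.Condition()

    def write(self, value):          # blocks while the cell is full
        with self._cond:
            while self._full:
                self._cond.wait()
            self._value, self._full = value, True
            self._cond.notify_all()

    def read(self):                  # blocks while the cell is empty
        with self._cond:
            while not self._full:
                self._cond.wait()
            self._full = False
            self._cond.notify_all()
            return self._value

cell = FullEmptyCell()
producer = threading.Thread(target=lambda: [cell.write(i) for i in range(3)])
consumer = threading.Thread(target=lambda: print([cell.read() for _ in range(3)]))
consumer.start(); producer.start(); producer.join(); consumer.join()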